This paper aims to develop a new method based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to solve Multiple Attribute Decision Making (MADM) problems for Interval Vague Sets (IVSs). A TOPSIS algorithm is constructed on the basis of the concepts of the relative-closeness coefficient computed from the correlation coefficient of IVSs. This novel method also identifies the positive and negative ideal solutions using the correlation coefficient of IVSs. A numerical illustration explains the proposed algorithms and comparisons are made with various existing methods.
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION (mathsjournal)
In numerical analysis, interpolation is a way of estimating the unknown values of a function for any given value of the argument within the range of the known arguments; it is essentially a means of estimating unknown data from related known data. The main goal of this research is to construct a central difference interpolation method derived from the combination of Gauss's third formula, Gauss's backward formula and Gauss's forward formula. We also present graphical comparisons of all the existing interpolation formulas with our proposed central difference interpolation method. By this comparison and graphical presentation, the new method gives the best result, with the lowest error, among the existing interpolation formulas.
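The paper's specific combined formula is not reproduced in this abstract, so as a rough stand-in the sketch below implements Stirling's central difference formula, which is the classical average of Gauss's forward and backward formulas, on equally spaced nodes; the function name and the sample data are illustrative only.

```python
import numpy as np

def stirling_interpolate(xs, ys, x):
    """Central-difference interpolation at x using Stirling's formula, which
    averages Gauss's forward and backward formulas. xs must be equally spaced
    and len(xs) should be odd so a true central node exists."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    n = len(xs)
    h = xs[1] - xs[0]
    c = n // 2                                   # index of the central node x0
    p = (x - xs[c]) / h

    # forward-difference table: diff[k][i] holds the k-th difference starting at node i
    diff = [ys.copy()]
    for _ in range(1, n):
        diff.append(np.diff(diff[-1]))

    result = ys[c]
    fact = 1.0
    for order in range(1, n):
        fact *= order
        k = order // 2
        if order % 2:                            # odd term: mean of the two central differences
            coef = p
            for j in range(1, k + 1):
                coef *= p * p - j * j
            if c - k - 1 < 0 or c - k >= len(diff[order]):
                break
            result += coef / fact * 0.5 * (diff[order][c - k - 1] + diff[order][c - k])
        else:                                    # even term: single central difference
            coef = p * p
            for j in range(1, k):
                coef *= p * p - j * j
            if c - k < 0 or c - k >= len(diff[order]):
                break
            result += coef / fact * diff[order][c - k]
    return result

# quick check on f(x) = x^2, where the result should be exact
print(stirling_interpolate([1, 2, 3, 4, 5], [1, 4, 9, 16, 25], 2.5))   # 6.25
```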
Discrete mathematics is a collection of branches of mathematics that deal with discrete elements using algebra and arithmetic. It is a tool for improving reasoning and problem-solving capabilities. It involves distinct values; that is, between any two points there is only a countable number of points.
Solving Assignment Problem Using L-R Fuzzy Numbers (ijcoa)
In this paper we present a new method for solving the assignment problem with L-R fuzzy parameters. The method finds the minimal cost that reaches optimality, compared with the existing methods available in the literature. Numerical examples show that the fuzzy assignment ranking method offers an effective way of handling the fuzzy assignment problem.
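The abstract does not give the L-R ranking function itself, so the sketch below uses a simple centroid-style ranking of triangular fuzzy costs (a special case of L-R numbers) followed by SciPy's Hungarian-style solver; the cost matrix and the ranking function are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def rank_triangular(tfn):
    """Rank a triangular fuzzy number (a, b, c) by its centroid (a + b + c) / 3.
    Triangular numbers are a special case of L-R fuzzy numbers; the paper's own
    L-R ranking function could be substituted here."""
    a, b, c = tfn
    return (a + b + c) / 3.0

# Hypothetical 3x3 fuzzy cost matrix: cost[i][j] = (lower, modal, upper)
fuzzy_costs = [
    [(8, 10, 12), (3, 5, 7),  (9, 11, 13)],
    [(2, 4, 6),   (7, 9, 11), (5, 6, 7)],
    [(6, 8, 10),  (1, 2, 3),  (4, 5, 6)],
]

crisp = np.array([[rank_triangular(c) for c in row] for row in fuzzy_costs])
rows, cols = linear_sum_assignment(crisp)   # optimal assignment on the defuzzified costs
print(list(zip(rows, cols)), "total ranked cost:", crisp[rows, cols].sum())
```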
The unifying purpose of this paper is to introduce the basic ideas and methods of dynamic programming. It sets out the basic elements of a recursive optimization problem, describes Bellman's Principle of Optimality and the Bellman equation, and presents three methods for solving the Bellman equation, with examples.
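As a concrete illustration of the Bellman equation, here is a minimal value iteration sketch on a hypothetical three-state deterministic MDP; the reward and transition tables are made up for the example.

```python
import numpy as np

# A tiny deterministic MDP: states 0..2, actions 0..1, reward and next-state tables.
# Value iteration solves the Bellman equation V(s) = max_a [ r(s, a) + gamma * V(s') ].
rewards    = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 0.5]])
next_state = np.array([[1, 2], [2, 0], [0, 1]])
gamma = 0.9

V = np.zeros(3)
for _ in range(500):                        # iterate until (approximate) convergence
    V_new = np.max(rewards + gamma * V[next_state], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = np.argmax(rewards + gamma * V[next_state], axis=1)
print("V* =", V.round(3), "greedy policy =", policy)
```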
Influence over the Dimensionality Reduction and Clustering for Air Quality Me... (IJAEMSJORNAL)
The current trend in industry is to analyze large data sets and apply data mining and machine learning techniques to identify patterns. The challenge with huge data sets, however, is the high dimensionality associated with them. In data analytics applications, large amounts of data can sometimes degrade performance; moreover, most data mining algorithms operate column-wise, so too many columns restrict performance and slow execution. Dimensionality reduction is therefore an important step in data analysis. It converts high-dimensional data into a much lower dimension such that the maximum variance is explained within the first few dimensions. This paper focuses on multivariate statistical and artificial neural network techniques for data reduction; each method has a different rationale for preserving the relationships between input parameters during analysis. Principal Component Analysis, a multivariate technique, and the Self-Organising Map, a neural network technique, are presented, and a hierarchical clustering approach is applied to the reduced data set. A case study of air quality measurement is used to evaluate the performance of the proposed techniques.
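A minimal sketch of the pipeline described above (standardize, PCA keeping roughly 95% of the variance, then hierarchical clustering on the reduced data), using scikit-learn and SciPy on randomly generated stand-in data rather than the paper's air-quality measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical air-quality matrix: rows = hourly samples, columns = pollutant sensors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))

X_std = StandardScaler().fit_transform(X)          # PCA is scale-sensitive
pca = PCA(n_components=0.95)                       # keep components explaining 95% of the variance
X_reduced = pca.fit_transform(X_std)
print("dimensions kept:", X_reduced.shape[1],
      "explained variance:", pca.explained_variance_ratio_.sum().round(3))

# Agglomerative (hierarchical) clustering on the reduced data
Z = linkage(X_reduced, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")    # cut the dendrogram into 4 clusters
print("cluster sizes:", np.bincount(labels)[1:])
```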
The Comprehensive Guide on Branches of Mathematics (Stat Analytica)
Are you struggling to keep track of all the branches of mathematics? If so, this presentation will help you learn them, covering the basic branches of mathematics through to the advanced ones.
This presentation summarizes the main content of Farrelly, C. M. (2017), "Extensions of Morse-Smale Regression with Application to Actuarial Science", arXiv preprint arXiv:1708.05712.
The paper was accepted in December 2017 by the Casualty Actuarial Society.
One of the areas of discrete mathematics is graph theory. From a pure mathematics viewpoint, graph theory studies the pairwise relationships between objects, which are represented as vertices, and it is frequently applied to analysing relationships between objects. Applying this mathematical tool to the evaluation of forensic evidence is therefore a natural extension, and the literature reveals several limited forensic applications of graph theory. The current paper describes a broader application of graph theory to the problem of evaluating relationships in a forensic investigation. The process takes standard graph theory and identifies entities in the investigation as vertices, with the connections between the various entities as edges. Those entities can be suspects, victims, computer systems, or any entity relevant to the investigation; regardless of its nature, every entity is represented as a vertex, and the relationships between entities are represented as edges connecting the vertices. This allows the events in question to be modelled mathematically and facilitates analysis of the data.
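A minimal sketch of the modelling step described above: investigation entities become vertices and observed relationships become edges of an undirected graph, after which simple measures such as vertex degree or shared neighbours can be read off. The entity names are hypothetical.

```python
from collections import defaultdict

# Entities (suspects, victims, devices, ...) are vertices; observed relationships are edges.
edges = [("suspect_A", "victim_1"), ("suspect_A", "laptop_X"),
         ("laptop_X", "victim_1"), ("suspect_B", "laptop_X")]

adjacency = defaultdict(set)
for u, v in edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

# Simple analyses: degree (how connected an entity is) and shared neighbours of two entities.
degree = {v: len(nbrs) for v, nbrs in adjacency.items()}
shared = adjacency["suspect_A"] & adjacency["suspect_B"]
print(degree)
print("entities linking suspect_A and suspect_B:", shared)
```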
Panel data analysis: a survey on model based clustering of time series - stats... (Stats Statswork)
The clustering technique in statistical analysis is used to determine subsets of the data, as clusters, using a specified distance measure. However, this technique cannot be applied easily to longitudinal or time-series data. This blog discusses some of the methods used for modeling longitudinal or panel data with cluster analysis, as explained in Schmatter (2011).
This presentation is on a recommender system for question paper prediction using machine learning techniques. We carried out a literature survey and implemented the system using the same techniques.
A Novel Clustering Method for Similarity Measuring in Text Documents (IJMER)
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
A Survey on Unsupervised Graph-based Word Sense Disambiguation (Elena-Oana Tabaranu)
This paper presents comparative evaluations of graph-based word sense disambiguation techniques using several measures of word semantic similarity and several ranking algorithms. Unsupervised word sense disambiguation has received a lot of attention lately because of its fast execution time and its ability to make the most of a small input corpus. Recent state-of-the-art graph-based systems have tried to close the gap between the supervised and the unsupervised approaches.
A NEW STUDY OF TRAPEZOIDAL, SIMPSON'S 1/3 AND SIMPSON'S 3/8 RULES OF NUMERICAL... (mathsjournal)
The main goal of this research is to give a complete account of numerical integration, including the Newton-Cotes formulas, and to compare the performance and accuracy of the Trapezoidal, Simpson's 1/3 and Simpson's 3/8 rules. To verify accuracy, we compare the rules by examining which produces the smallest error values. The software package MATLAB R2013a is used to determine the best method, the results are compared, and graphical comparisons of the methods are included. It is then concluded that, among the methods considered, Simpson's 1/3 rule is more effective and accurate for solving a definite integral, subject to the condition that the number of subdivisions is even.
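For reference, a short comparison of the three composite rules on a test integral; the integrand, interval and subdivision count below are chosen for illustration, not taken from the paper.

```python
import numpy as np

def trapezoidal(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson_13(f, a, b, n):          # n must be even
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

def simpson_38(f, a, b, n):          # n must be a multiple of 3
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    interior, thirds = y[1:-1].sum(), y[3:-1:3].sum()
    return 3 * h / 8 * (y[0] + y[-1] + 3 * (interior - thirds) + 2 * thirds)

f = np.exp
exact = np.e - 1                      # integral of e^x over [0, 1]
for rule in (trapezoidal, simpson_13, simpson_38):
    approx = rule(f, 0.0, 1.0, 12)    # 12 subintervals satisfies both parity conditions
    print(rule.__name__, approx, "error:", abs(approx - exact))
```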
A hybrid naïve Bayes based on similarity measure to optimize the mixed-data c... (TELKOMNIKA JOURNAL)
In this paper, a hybrid method is introduced to improve the classification performance of naïve Bayes (NB) for mixed datasets and multi-class problems. The proposed method relies on a similarity measure which is applied to the portions of the data that are not correctly classified by NB. Since the data contain multi-valued short text with rare words that limit NB performance, we employ an adapted classifier based on similarities (CSBS) to overcome the NB limitations and include the rare words in the computation; this is achieved by transforming the formula from a product of the probabilities of the categorical variable to a sum weighted by the numerical variable. The proposed algorithm has been tested on card payment transaction data that contain the transaction label (a multi-valued short text) and the transaction amount. Based on K-fold cross validation, the evaluation results confirm that the proposed method achieves better precision, recall and F-score than the NB and CSBS classifiers separately. Moreover, converting a product form to a sum gives rare words more influence on the text classification, which is another advantage of the proposed method.
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ... (ijaia)
Objects or structures that are regular take uniform dimensions. Based on the concepts of regular models, our previous research developed a regular ontology that models learning structures in a multiagent system for uniform pre-assessments in a learning environment. This regular ontology led to a classified-rules learning algorithm that predicts the actual number of rules needed for inductive learning processes and decision making in a multiagent system. However, not all processes or models are regular. This paper therefore presents a system of polynomial equations that can estimate and predict the required number of rules of a non-regular ontology model, given some defined parameters.
An Application of Pattern matching for Motif Identification (CSCJournals)
Pattern matching is one of the central and most widely studied problems in theoretical computer science. Solutions to the problem play an important role in many areas of science and information processing, and their performance has a great impact on many applications including database querying, text processing and DNA sequence analysis. In general, pattern matching algorithms are characterized by the shift value, the direction of the sliding window and the order in which comparisons are made; their performance can be enhanced to a great extent by a larger shift value and fewer comparisons needed to obtain that shift. In this paper we propose an algorithm for finding a motif in a DNA sequence. The algorithm is based on preprocessing the pattern string (motif) and, in the event of a mismatch between the pattern and the DNA sequence, considering the four consecutive nucleotides of the DNA that immediately follow the aligned pattern window. Theoretically, we find that the proposed algorithm works efficiently for motif identification.
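The paper's four-nucleotide lookahead is not detailed in the abstract, so the sketch below shows the simpler single-character variant of the same idea (a Sunday/Quick-Search style bad-character shift driven by the symbol just past the window); extending the table to 4-grams would give the larger shifts the paper describes.

```python
def build_shift_table(pattern, alphabet="ACGT"):
    """Sunday-style bad-character table: how far the window may jump when the
    character just past the window is c."""
    m = len(pattern)
    shift = {c: m + 1 for c in alphabet}
    for i, c in enumerate(pattern):
        shift[c] = m - i
    return shift

def find_motif(text, pattern):
    """Return all start positions of `pattern` (the motif) in `text` (the DNA sequence).
    The shift is driven by the character immediately following the window; the paper's
    variant looks at the next four nucleotides, which yields larger jumps."""
    m, n = len(pattern), len(text)
    shift = build_shift_table(pattern)
    positions, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            positions.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return positions

print(find_motif("ACGTACGTTAGCACGT", "ACGT"))   # -> [0, 4, 12]
```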
In the present day, a huge amount of data is generated every minute and transferred frequently. Although the data is sometimes static, it is most commonly dynamic and transactional, and newly generated data is constantly added to the existing data. To discover knowledge from this incremental data, one approach is to run the algorithm repeatedly on the modified data sets, which is time consuming. Proper analysis of such datasets also requires the construction of an efficient classifier model, whose objective is to classify unlabeled data into appropriate classes. The paper proposes a dimension reduction algorithm that can be applied in a dynamic environment to generate a reduced attribute set as a dynamic reduct, together with an optimization algorithm that uses the reduct to build the corresponding classification system. The method analyzes new data as it becomes available and modifies the reduct accordingly to fit the entire dataset, from which interesting optimal classification rule sets are generated. The concepts of the discernibility relation, attribute dependency and attribute significance from Rough Set Theory are integrated for the generation of the dynamic reduct set, and optimal classification rules are selected using the PSO method, which not only reduces the complexity but also helps to achieve higher accuracy of the decision system. The proposed method has been applied to benchmark datasets collected from the UCI repository; the dynamic reduct is computed and optimal classification rules are generated from it. Experimental results show the efficiency of the proposed method.
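As a small illustration of the Rough Set Theory notion of attribute dependency used above, the sketch below computes the dependency degree gamma(P, D) of a decision attribute on a set of condition attributes for a toy decision table; the table is hypothetical, and reduct search and PSO rule selection are not shown.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices by their values on the given attribute positions."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, cond_attrs, dec_attr):
    """Rough-set dependency degree gamma(P, D) = |POS_P(D)| / |U|: the fraction of
    objects whose P-equivalence class lies inside a single decision class."""
    dec_of = [row[dec_attr] for row in rows]
    pos = 0
    for block in partition(rows, cond_attrs):
        if len({dec_of[i] for i in block}) == 1:   # block is consistent with the decision
            pos += len(block)
    return pos / len(rows)

# Hypothetical decision table: two condition attributes and a decision in the last column.
table = [(1, 0, "yes"), (1, 0, "yes"), (0, 1, "no"), (0, 1, "yes"), (1, 1, "no")]
print(dependency(table, [0, 1], 2))   # 0.6: the (0, 1)-block is inconsistent
```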
The Analytic Hierarchy Process (AHP) has been a useful methodology for multi-criteria decision making, with substantial applications in recent years. The weakness of the traditional AHP method, however, lies in its use of subjective, judgement-based assessment and a standardized scale for creating the pairwise comparison matrix. This paper proposes a Condorcet voting theory based AHP method for solving multi-criteria decision making problems, in which AHP is combined with Condorcet preferential voting followed by a quantitative ratio method for framing the comparison matrix, instead of the standard importance scale of the traditional AHP approach. The consistency ratio (CR) is calculated for both approaches to determine and compare their consistency. The results reveal the Condorcet-AHP method to be superior, generating a lower consistency ratio and a more accurate ranking of the criteria for solving MCDM problems.
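A minimal sketch of the consistency ratio computation referred to above, using the principal eigenvalue of a pairwise comparison matrix and Saaty's Random Index values; the example matrix is hypothetical.

```python
import numpy as np

# Saaty's Random Index values for matrices of order 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1) for a pairwise comparison matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

# Hypothetical 3x3 reciprocal pairwise comparison matrix (nearly consistent)
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
print("CR =", round(consistency_ratio(A), 4))   # below 0.1 is conventionally acceptable
```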
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING (csandit)
This paper presents a relaxation labeling technique with newly defined compatibility measures for solving a general non-rigid point matching problem. A point matching method using relaxation labeling already exists in the literature; however, its compatibility coefficients always take a binary value, zero or one, depending on whether a point and a neighboring point have corresponding points. Our approach generalizes this relaxation labeling approach: the compatibility coefficients take n discrete values which measure the correlation between edges, and we use a log-polar diagram to compute the correlations. Through simulations, we show that this topology preserving relaxation method improves the matching performance significantly compared to other state-of-the-art algorithms such as shape context, thin plate spline robust point matching, robust point matching by preserving local neighborhood structures, and coherent point drift.
Regression and classification techniques play an essential role in many data mining tasks and have broad applications. However, most state-of-the-art regression and classification techniques are often unable to adequately model the interactions among predictor variables in highly heterogeneous datasets. New techniques that can effectively model such complex and heterogeneous structures are needed to significantly improve prediction accuracy.
In this dissertation, we propose a novel type of accurate and interpretable regression and classification models, named Pattern Aided Regression (PXR) and Pattern Aided Classification (PXC) respectively. Both PXR and PXC rely on identifying regions in the data space where a given baseline model has large modeling errors, characterizing such regions using patterns, and learning specialized models for those regions. Each PXR/PXC model contains several pairs of contrast patterns and local models, where a local model is applied only to data instances matching its associated pattern. We also propose a class of classification and regression techniques called Contrast Pattern Aided Regression (CPXR) and Contrast Pattern Aided Classification (CPXC) to build accurate and interpretable PXR and PXC models.
We have conducted a set of comprehensive performance studies to evaluate CPXR and CPXC. The results show that CPXR and CPXC outperform state-of-the-art regression and classification algorithms, often by significant margins, and that they are especially effective for heterogeneous and high-dimensional datasets. Besides being new types of models, PXR and PXC models can also provide insights into data heterogeneity and diverse predictor-response relationships.
We have also adapted CPXC to handle the classification of imbalanced datasets, introducing a new algorithm called Contrast Pattern Aided Classification for Imbalanced Datasets (CPXCim). In CPXCim, we apply a weighting method to boost minority instances as well as a new filtering method to prune patterns with imbalanced matching datasets.
Finally, we applied our techniques to three real applications, two in the healthcare domain and one in the soil mechanics domain. PXR and PXC models are significantly more accurate than other learning algorithms in those three applications.
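A toy illustration of the PXR idea on synthetic data: fit a baseline model, observe where its residuals are large, and attach a specialized local model to a pattern covering that region. The hand-written threshold pattern below stands in for the mined contrast patterns of CPXR; it is not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(400, 2))
# heterogeneous response: a different relationship holds in the region x0 > 1
y = 2 * X[:, 0] + np.where(X[:, 0] > 1, 5 * X[:, 1], 0.0) + rng.normal(0, 0.1, 400)

baseline = LinearRegression().fit(X, y)
residual = np.abs(y - baseline.predict(X))

# A single hand-written "pattern" standing in for a mined contrast pattern:
# the condition x0 > 1 covers the instances where the baseline errs most.
pattern = X[:, 0] > 1
print("mean |residual| inside pattern :", residual[pattern].mean().round(3))
print("mean |residual| outside pattern:", residual[~pattern].mean().round(3))

local = LinearRegression().fit(X[pattern], y[pattern])   # specialized local model

def predict(X_new):
    X_new = np.asarray(X_new, dtype=float)
    out = baseline.predict(X_new)
    match = X_new[:, 0] > 1              # apply the local model only to matching instances
    if match.any():
        out[match] = local.predict(X_new[match])
    return out

rmse = lambda pred: np.sqrt(np.mean((y - pred) ** 2))
print("baseline RMSE:", rmse(baseline.predict(X)).round(3),
      "| pattern-aided RMSE:", rmse(predict(X)).round(3))
```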
Similar to A STRATEGIC TOPSIS ALGORITHM WITH CORRELATION COEFFICIENT OF INTERVAL VAGUE SETS
SPSS: An Effective Tool to Compute Learning Outcomes in Academics (ijcoa)
OBJECTIVES: To determine how SPSS can be used as a tool to evaluate course learning outcomes and analyze student performance with the help of the KS test, histograms and skewness, and to show how it contributes to facilitating deep learning among students through the achievement of a normal distribution of grades. METHODS: A comparative analysis is carried out using the course specification, syllabus, assessment method (the final exam question paper is taken as the tool) and result statistics for the course Statistical Programming (217 CSM), a third-year (level 5) course in the BCS curriculum of the Department of Computer Science, College of Computer Science, King Khalid University. Teaching strategies are compared for two years, 2013 and 2014. The research also draws inferences about the relevance of applying NCAAA standards in meeting the learning outcomes of any module in the department. RESULTS: Comparison of the question papers shows that students are now motivated towards deep learning, in terms of understanding, solving and reasoning based questions, in contrast to the shallow, memorization-based learning of the past. This also improves the learning domains (knowledge, cognitive, interpersonal and communication skills) more effectively than before. The grade distribution for 2014 is normal with a well-defined curve compared with 2013, with a corresponding difference in standard deviation. Teacher-centered learning leads to surface learning; after implementation of the NCAAA standards there is more focus on learner-centered teaching, and learning assessments are designed so that they meet the learning outcomes successfully. CONCLUSIONS: The research contributes to developing the personality of students, producing qualified graduates with excellent communication skills who are logically and technically capable of sharing their knowledge nationally and internationally with confidence on any platform. It also opens the door for researchers to evaluate performance with the help of SPSS in academics or wherever such data is gathered.
BugLoc: Bug Localization in Multi Threaded Application via Graph Mining Approach (ijcoa)
Detection of software bugs, their occurrences, their repudiation and their root causes is a very difficult process in large multi-threaded applications. A software developer or organization must identify bugs in their applications and remove or overcome them so that the application is protected from malfunctioning. Many compilers and integrated development environments effectively identify errors and bugs while an application is running or compiling, but they fail to detect the actual cause of the bugs in the running application; the developer has to reframe or recreate the package as a new one without bugs, which is time consuming and wastes effort in the software development life cycle. Graph mining techniques can potentially be used to detect software bugs, but they bring their own problems: managing large graph data, processing nodes with links, and processing subgraphs. This paper presents a novel algorithm named BugLoc that is capable of detecting bugs in multi-threaded software applications. BugLoc uses an object template to store graph data, which reduces graph management complexity, uses a substring analysis method to detect frequent subgraphs, and then analyses the frequent subgraphs to detect the exact location of the software bugs. The experimental results show that the algorithm is efficient, accurate and scalable for large graph datasets.
The detection of moving objects is important in many applications, such as vehicle identification in a traffic monitoring system or human detection in criminal investigation. In this paper we identify a vehicle in a video sequence and briefly explain the detection of moving vehicles in video. We introduce a new algorithm, BGS, for identifying vehicles in a video sequence. First, we differentiate the foreground from the background in the frames by learning the background. Then the image is divided into many small non-overlapping frames. Candidate vehicle parts can be found in the frames if there is a change in gray level between the current image and the background. The extracted background subtraction result is used in subsequent analysis to detect a vehicle and classify moving vehicles.
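The BGS algorithm itself is not spelled out in the abstract; as a generic stand-in, the sketch below uses OpenCV's MOG2 background subtractor and contour filtering to box vehicle-sized moving regions. The input file name and the area threshold are assumptions.

```python
import cv2

# MOG2 learns a per-pixel background model; moving vehicles appear as foreground blobs.
cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                         # foreground mask for this frame
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,          # remove small noise blobs
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 800:                       # keep only vehicle-sized regions
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("vehicles", frame)
    if cv2.waitKey(30) & 0xFF == 27:                       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```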
Analysis of Women Harassment in Villages Using CETD Matrix Modal (ijcoa)
It is commonly understood that misbehavior is intended to upset. The law says that repeated intentional misbehavior towards women is an offence. The main aim of this paper is to find something interesting that will make us reflect on what is being done for women's rights and gender equality. To analyse this problem, we adopt the CETD matrix in this paper.
Fuzzy Chromatic Number of Line Graph using α-Cuts (ijcoa)
In this paper we introduce the chromatic number of a line graph using α-cuts. The concept of the chromatic number of fuzzy graphs was introduced by Muñoz et al. and later by Eslahchi and Onagh. The fuzzy chromatic numbers of complete graphs (Kn), cycle graphs (Cn), star graphs (Sn), wheel graphs (Wn) and line graphs are found, and the results are summarized.
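A small sketch of the α-cut idea: threshold the fuzzy edge memberships at a chosen α to obtain a crisp graph and colour it (a greedy colouring gives an upper bound on the chromatic number at that level). The example fuzzy graph is hypothetical.

```python
def alpha_cut_edges(fuzzy_edges, alpha):
    """Keep only edges whose membership value is at least alpha (the crisp α-cut graph)."""
    return [(u, v) for (u, v), mu in fuzzy_edges.items() if mu >= alpha]

def greedy_chromatic_number(vertices, edges):
    """Greedy upper bound on the chromatic number of a crisp graph."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colour = {}
    for v in vertices:
        used = {colour[w] for w in adj[v] if w in colour}
        colour[v] = next(c for c in range(len(vertices)) if c not in used)
    return max(colour.values()) + 1 if colour else 0

# Hypothetical fuzzy graph: edge -> membership degree
vertices = ["a", "b", "c", "d"]
fuzzy_edges = {("a", "b"): 0.9, ("b", "c"): 0.6, ("c", "d"): 0.4,
               ("a", "d"): 0.2, ("a", "c"): 0.7}

for alpha in (0.2, 0.5, 0.8):
    k = greedy_chromatic_number(vertices, alpha_cut_edges(fuzzy_edges, alpha))
    print(f"alpha={alpha}: chromatic number (greedy bound) = {k}")
```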
A Study on the Exposures of Rag-Pickers Using Induced Neutrosophic Cognitive... (ijcoa)
In this paper, using a new fuzzy bimodel called Induced Neutrosophic Cognitive Relational Maps (INCRM), we analyse the socio-economic problems faced by rag-pickers. Based on the study, conclusions and some remedial measures are stated.
Dual Trapezoidal Fuzzy Number and its Applications (ijcoa)
In this paper we introduce the convergence of the α-cut. We determine the point at which the α-cut converges to the fuzzy number, illustrate it with an example using a dual trapezoidal fuzzy number, and work through some mensuration problems with the approximated values.
Developing and Porting multi-grid solver for CFD application on Intel-MIC pla... (ijcoa)
This paper presents an implementation of the one-dimensional Burgers equation using an implicit method on the Intel Xeon Phi coprocessor. In particular, we used the MAGMA MIC library, an open-source high-performance library for solving systems of non-linear equations. For high-performance computation we consider offload mode as the primary mode of operation for the Intel Xeon Phi coprocessor. The results obtained from the implicit scheme are compared with the exact values and are found to be approximate and reliable. The result table shows that the proposed scheme achieves higher performance on the Intel MIC platform.
Adin and Roichman [1] introduced the concept of permutation graphs, and Peter Keevash, Po-Shen Loh and Benny Sudakov [2] identified some permutation graphs with the maximum number of edges. Ryuhei Uehara and Gabriel Valiente discussed the linear structure of bipartite permutation graphs and the longest path problem [3]. If i and j belong to a permutation on p symbols {1, 2, ..., p} and i is less than j, then there is an edge between i and j in the permutation graph whenever i appears after j in the permutation sequence, so the line of i crosses the line of j in the permutation diagram. Hence there is a one-to-one correspondence between crossings of lines in the permutation and the edges of the corresponding permutation graph. In this paper we find the conditions for a permutation to realize a double star and give an algorithm to determine the saturation index of the permutation. AMS Subject Classification (2010): 05C35, 05C69, 20B30.
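A short sketch of the construction described above: for a permutation of {1, ..., p}, an edge {i, j} with i < j is present exactly when i appears after j, i.e. when the pair is an inversion.

```python
from itertools import combinations

def permutation_graph_edges(perm):
    """Edges of the permutation graph: {i, j} with i < j is an edge exactly when
    i appears after j in the permutation, i.e. their lines cross in the diagram."""
    pos = {v: k for k, v in enumerate(perm)}       # position of each symbol in the sequence
    return [(i, j) for i, j in combinations(sorted(perm), 2) if pos[i] > pos[j]]

perm = [3, 1, 4, 2]                     # permutation of {1, 2, 3, 4}
print(permutation_graph_edges(perm))    # [(1, 3), (2, 3), (2, 4)]
```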
Classification and Prediction of Heart Disease from Diabetes Patients using H... (ijcoa)
Diabetes is a multi-factorial, chronic, severe disease in humans. An abnormal level of glucose in the body leads to heart attack, kidney disease, renal failure, hyperglycemia and also cancer in organs such as the liver and pancreas. Many studies have shown that several types of heart disease are possible in diabetic patients with high blood sugar, and many approaches have been proposed to diagnose both diabetes and heart disease. Many people with diabetes also develop heart disease, known as diabetic cardiomyopathy, and detecting its earliest manifestations requires certain processes. The objective of this study is to examine the association between heart disease and diabetes. The relationship between diabetes and cardiovascular disease is examined by taking into account age, sex and the associated diabetic and cardiovascular risk factors. The data are collected from patients with diabetes; features are selected by ant colony optimization, and the selected features are given to a hybrid PSO-LIBSVM to classify the data as normal or abnormal. The performance is evaluated using standard metrics, demonstrating the efficiency of this classifier for the detection of diabetic cardiomyopathy.
Format Preserving Encryption for Small Domain (ijcoa)
Cryptography is important for communicating secured information that is vulnerable to distortion. The main goal of this paper is to encrypt small, arbitrary-length data without any change in length or data type. We propose a flexible arbitrary-length small-domain block cipher (FPESD) based on the AES algorithm, in which the resulting ciphertext has the same format as the input plaintext. We use the Galois finite field GF(2^8) and a format-preserving key to implement FPE; for decryption, the format-preserving key is used along with the ciphertext and the secret key.
Intuitionistic Double Layered Fuzzy Graph and its Cartesian Product Vertex De... (ijcoa)
The intuitionistic double layered fuzzy graph gives a 3-D structural view of a fuzzy graph. Finding the Cartesian product of two intuitionistic double layered fuzzy graphs is a challenging problem. In this paper, under certain conditions, a simple method is given for finding the vertex degrees of the Cartesian product of two IDLFGs without constructing the Cartesian product structure.
Clustering and Classification in Support of Climatology to mine Weather Data ... (ijcoa)
Knowledge of the climate data of a region is essential for business, society, agriculture, pollution and energy applications. Climate is not fixed; fluctuations can be seen from year to year. Data mining applications help meteorological scientists to make accurate weather forecasts and decisions, and they provide better performance and reliability than other methods; data mining techniques applied to weather data are efficient compared with the mathematical models used previously. Various data mining techniques have been applied to climate data to support weather forecasting, climate science, agriculture, vegetation, water resources and tourism. The aim of this paper is to provide a review of the various data mining techniques applied to weather data sets in support of weather prediction and climate analysis.
Test Suite Reduction Based on Fault Detection with Cost Optimization (ijcoa)
Test suite reduction is an optimization technique for identifying a minimally sized subset of test cases subject to enforced constraints. Its main purpose is to reduce the growing number of test cases, which otherwise increases the time and cost of execution. Fault detection is the method of identifying the faults that affect the outcome of the system either logically or syntactically. This paper focuses on reducing the test suite so that it retains a high fault identification rate while incurring a low test execution cost. The proposed approach includes a new parameter, fault detection effectiveness, to quantify the fault detection rate of a test suite; an algorithm for test suite reduction based on the priority of requirements; and a low-cost framework to schedule the execution of test cases within a minimum budget. The resulting test suite has high fault detection effectiveness and provides maximum coverage of the requirements at minimum execution cost.
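The paper's own reduction algorithm is requirement-priority based; as a simplified stand-in, the sketch below greedily picks the test with the best still-undetected-faults-per-cost ratio until all detectable faults are covered. The fault and cost tables are hypothetical.

```python
def reduce_suite(test_faults, test_cost):
    """Greedy cost-aware reduction: repeatedly pick the test covering the most
    still-undetected faults per unit cost until every detectable fault is covered."""
    uncovered = set().union(*test_faults.values())
    selected = []
    while uncovered:
        best = max(test_faults,
                   key=lambda t: len(test_faults[t] & uncovered) / test_cost[t])
        gain = test_faults[best] & uncovered
        if not gain:
            break                        # remaining faults are not detectable by any test
        selected.append(best)
        uncovered -= gain
    return selected

# Hypothetical suite: which faults each test detects, and its execution cost
test_faults = {"t1": {"f1", "f2"}, "t2": {"f2", "f3", "f4"},
               "t3": {"f4"},       "t4": {"f1", "f5"}}
test_cost   = {"t1": 2.0, "t2": 2.0, "t3": 1.0, "t4": 2.5}
print(reduce_suite(test_faults, test_cost))   # ['t2', 't4'] covers all five faults
```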
As e-commerce gains popularity, various customer reviews of objects are now accessible on the Internet. These reviews are frequently disorganized, leading to difficulties in knowledge discovery and object assessment. This article proposes an object feature ranking framework that automatically recognizes the important features of an object from online customer reviews. The important object features are recognized based on two observations: 1) they are usually commented on extensively by customers, and 2) customer opinions on the important features significantly influence their overall assessment of the object. Specifically, given the customer reviews of an item, we first extract the object features with a shallow dependency parser and determine customer opinions on these features with an opinion classifier. We then develop a probabilistic object feature ranking algorithm that identifies the importance of aspects by simultaneously considering feature frequency and the influence that customer opinion on each feature has on the overall opinion. The experimental results on three popular products demonstrate the effectiveness of our approach.
Effectual Data Integrity Checking Technique Based on Number Theory (ijcoa)
Cloud computing makes data truly mobile: a client can simply access a chosen cloud with any internet-accessible device. The adoption and dispersion of cloud computing are threatened by unresolved security issues that affect both the cloud provider and the cloud user, and the integrity of data stored in the cloud is one of the challenges to be addressed before this novel storage model is applied widely. This paper analyses the efficiency issues and security shortcomings of an existing scheme and proposes an amended data integrity scheme for cloud archives using improved RSA and number-theory-based concepts. The scheme for protecting the integrity of guest virtual machines can be agreed upon by both the cloud and the customer and can be incorporated in the service level agreement (SLA). Based on theoretical analysis, we demonstrate that the proposed scheme provides a provably safe and highly efficient data integrity inspection measure.
Micro-Neuro-Sensor Recording of STN Neurons of the Human Brain (ijcoa)
What happens to the neurons of the human brain when they are damaged? They become inactive. Damage to subthalamic nucleus (STN) neurons of the human brain causes larger involuntary movements, which are associated with Parkinson's disease (PD). Deep brain stimulation (DBS) of the bilateral subthalamic nuclei is an efficient rehabilitation technique in subjects with advanced idiopathic Parkinson's disease, and accurate targeting of STN neurons and placement of microelectrodes (neurosensors) are of paramount importance for optimal results after STN-DBS. In this study, microminiaturized electrode recordings (MER) of STN neurons were detected in a mean of 3.5 ± 1.1 channels on the right hemisphere and 3.6 ± 1.04 on the left hemisphere; the final channel selected was most commonly central (42.3%), followed by anterior (33.7%). When a high current is delivered to the STN or GPi neurons of the basal ganglia (a component of the human brain), it causes their inhibition and improves symptoms. It is now known that there is a significant change in the firing pattern and a reorganization of the entire basal ganglia circuit with DBS. MER of STN neurons has identified a specific high-frequency, irregular, larger-amplitude firing pattern seen only in disease states, which is hence used to detect the neurons of the subthalamic nucleus during functional surgery. Microelectrode recording is very useful for confirming the correct trajectory, but it has to be considered together with the effects of macrostimulation.
EDM: Finding Answers to Reduce Dropout Rates in Schools (ijcoa)
The focus of this paper is to get a bird's eye view of the various factors that could be analyzed to present schools and governments timely indicators of school dropouts. With the help of EDM, one could have access to enormous amounts of data. The crux of the matter would be to identify the data in line with plans and ideas that would eventually lead to the formulation of policies that enhance the teaching and learning process.
Analysis of Effort Estimation Model in Traditional and Agile (USING METRICS ... (ijcoa)
Agile software development has been gaining popularity and replacing traditional methods of developing software. However, estimating size and effort in agile software development still remains a challenge. Measurement practices are even more important in agile methods than in traditional ones, because the lack of appropriate and effective measurement practices increases project risk. This paper discusses traditional and agile effort estimation models and analyses how metrics are used in the estimation process. The paper also suggests using object points and use case points to improve the accuracy of effort estimation in agile software development.
Multi-Biometric Authentication through Hybrid Cryptographic System (ijcoa)
In most real-time scenarios, authentication is required to enable a person to access a private database of any type, and researchers have started using biometric traits to verify a person's identity. The available biometric traits include the face, iris, palm print, hand geometry, fingerprint and ear. However, applications that use a single biometric trait often have to contend with noisy data, restricted degrees of freedom, non-universality of the biometric trait and intolerable error rates. Multi-biometric systems mitigate these drawbacks by providing multiple verifications of the same identity; biometric fusion is the use of multiple biometric inputs or processing methods to improve performance. In this paper, a novel combination of multi-biometric fusion, symmetric cryptography and asymmetric cryptography is proposed. A fused biometric image is encrypted using the Advanced Encryption Standard, whose secret key is in turn encrypted using elliptic curve cryptography, considered one of the most efficient asymmetric cryptographic algorithms. Since symmetric cryptographic algorithms involve a key exchange mechanism, the secret key is secured using ECC. Hence, the proposed system is expected to be more secure for storing the biometric traits of an individual.
What are greenhouse gases and how many gases are there that affect the Earth? (moosaasad1975)
What are greenhouse gases, how do they affect the Earth and its environment, what is the future of the environment and the Earth, and how are the weather and the climate affected?
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1" circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R_1/2 ∼ 50-200 pc, stellar masses of M⋆ ∼ 10^7-10^8 M⊙, and star-formation rates of SFR ∼ 0.1-1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Seminar on U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
Nutraceutical market, scope and growth: Herbal drug technology (Lokesh Patil)
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods such as functional foods, drinks and dietary supplements that provide health benefits beyond basic nutrition, is growing significantly. Rising healthcare costs, an ageing population and increasing demand for natural and preventative health solutions are driving this rapid growth, and innovations in product formulation and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment in a number of categories, including vitamins, minerals, probiotics and herbal supplements.
Cancer cell metabolism: special reference to the lactate pathway (AADYARAJPANDEY1)
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose, and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to "burn" the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis, Krebs cycle, oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
In Cancer Cells:
Unlike healthy cells that "burn" the entire sugar molecule to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis, and frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per glucose molecule instead of the roughly 36 ATP that healthy cells gain. As a result, cancer cells need to use far more sugar molecules to get enough energy to survive.
Introduction to the Warburg phenomenon:
WARBURG EFFECT: Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 – 1 August 1970) was awarded the Nobel Prize in Physiology in 1931 for his "discovery of the nature and mode of action of the respiratory enzyme".
WARBURG EFFECT: The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has come from various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space as they occur in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, the response to treatments and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization and tumor metastasis in exceptional detail. The webinar also gives an overview of how IVM is being utilized in drug development, offering a view into the intricate interactions between drugs or nanoparticles and tissues in vivo and allowing the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
A STRATEGIC TOPSIS ALGORITHM WITH CORRELATION COEFFICIENT OF INTERVAL VAGUE SETS
International Journal of Computing Algorithm, Vol 2(2), December 2013
ISSN (Print): 2278-2397
Website: www.ijcoa.com
A Strategic Topsis Algorithm with Correlation Coefficient of Interval Vague Sets
P. John Robinson, E. C. Henry Amirtharaj
PG & Research Department of Mathematics, Bishop Heber College, Tiruchirappalli
Email: robijohnsharon@gmail.com
Abstract
This paper aims to develop a new method based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to solve Multiple Attribute Decision Making (MADM) problems for Interval Vague Sets (IVSs). A TOPSIS algorithm is constructed on the basis of the relative-closeness coefficient computed from the correlation coefficient of IVSs. This novel method also identifies the positive and negative ideal solutions using the correlation coefficient of IVSs. A numerical illustration explains the proposed algorithms, and comparisons are made with various existing methods.
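For orientation, here is a minimal sketch of the classical crisp TOPSIS ranking that the paper builds on; in the proposed method the Euclidean distances to the ideal solutions are replaced by correlation coefficients of interval vague sets (whose formula is not given in this excerpt), but the relative-closeness ranking step is analogous. The decision matrix and weights below are illustrative.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Classical crisp TOPSIS ranking. The paper replaces the Euclidean distances
    below with correlation coefficients of interval vague sets, while keeping the
    same relative-closeness-coefficient ranking scheme."""
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    R = X / np.linalg.norm(X, axis=0)             # vector-normalize each attribute column
    V = R * w                                     # weighted normalized decision matrix
    ideal_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
    ideal_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal_pos, axis=1)
    d_neg = np.linalg.norm(V - ideal_neg, axis=1)
    closeness = d_neg / (d_pos + d_neg)           # relative-closeness coefficient
    return closeness, np.argsort(-closeness)      # higher closeness = better alternative

# Hypothetical MADM instance: 4 alternatives, 3 attributes (first two benefit, last cost)
X = [[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 6]]
cc, ranking = topsis(X, weights=[0.4, 0.35, 0.25], benefit=np.array([True, True, False]))
print("closeness:", cc.round(3), "ranking (best first):", ranking)
```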