This document summarizes a student project that analyzed speech data to predict schizophrenia using machine learning. The students collected speech data from individuals with schizophrenia and healthy controls over two days, then tested logistic regression, naive Bayes, random forest, decision tree, and OneR algorithms on the data. Logistic regression performed best, predicting schizophrenia from emotion features with over 80% accuracy. The small dataset was a challenge; future work could involve implementing support vector machines and obtaining a larger dataset.
SENSE DISAMBIGUATION TECHNIQUE FOR PROVIDING MORE ACCURATE RESULTS IN WEB SEARCH (ijwscjournal)
As the web grows exponentially, it becomes very difficult to provide relevant information to information seekers. While searching for information on the web, users can easily get lost in rich hypertext, and existing techniques return results that are not up to the mark. This paper focuses on a technique that helps offer more accurate results, especially in the case of homographs. A homograph is a word that shares the same written form as another but has a different meaning. The following sections describe how word senses can play an important role in offering accurate search results. By adopting this technique, users receive only relevant pages at the top of the search results.
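As a loose illustration of the sense-based idea described above (not the paper's actual algorithm), a homograph can be resolved by the overlap between a query's terms and a small sense inventory, in the style of the Lesk algorithm; the `SENSES` table here is entirely invented:

```python
# Hypothetical sketch of overlap-based sense disambiguation (a simplified
# Lesk-style method); the mini sense inventory below is invented for illustration.

SENSES = {
    "bass": {
        "fish": {"fish", "water", "river", "catch", "fishing"},
        "music": {"music", "sound", "guitar", "low", "frequency", "notes"},
    }
}

def disambiguate(word, query_terms):
    """Pick the sense whose gloss words overlap most with the query terms."""
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(gloss & set(query_terms))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bass", ["river", "fishing", "tips"]))  # fish
print(disambiguate("bass", ["guitar", "low", "notes"]))    # music
```

A search engine could use the chosen sense to rank pages whose content matches that sense higher than pages about the other meaning.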
A survey on phrase structure learning methods for text classification (ijnlc)
Text classification is the task of automatically assigning text to one of several predefined categories. The problem has been widely studied in communities such as natural language processing, data mining, and information retrieval. Text classification is an important constituent of many information management tasks, including topic identification, spam filtering, email routing, language identification, genre classification, and readability assessment. Its performance improves notably when phrase patterns are used: phrase patterns help capture non-local behaviour and thus improve the classification task. Phrase structure extraction is the first step toward phrase pattern identification. This survey carries out a detailed study of phrase structure learning methods, which will enable future work in several NLP tasks that use syntactic information from phrase structure, such as grammar checking, question answering, information extraction, machine translation, and text classification. The paper also provides different levels of classification and a detailed comparison of the phrase structure learning methods.
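As a toy illustration of phrase structure extraction (a far simpler scheme than the methods the survey covers), noun-phrase chunks can be read off POS-tagged tokens with a greedy pass; the tag set and example sentence are assumptions:

```python
# Hypothetical sketch of phrase extraction: greedy noun-phrase chunking over
# (word, POS) pairs, collecting DET/ADJ modifiers followed by NOUN runs.

def np_chunks(tagged):
    chunks, current = [], []

    def flush():
        # Emit the current span only if it actually contains a noun.
        if any(t == "NOUN" for _, t in current):
            chunks.append(" ".join(w for w, _ in current))
        current.clear()

    for word, tag in tagged:
        if tag == "NOUN":
            current.append((word, tag))
        elif tag in ("DET", "ADJ"):
            if any(t == "NOUN" for _, t in current):
                flush()  # a determiner/adjective after a noun starts a new phrase
            current.append((word, tag))
        else:
            flush()
    flush()
    return chunks

sent = [("the", "DET"), ("quick", "ADJ"), ("classifier", "NOUN"),
        ("labels", "VERB"), ("spam", "NOUN"), ("mail", "NOUN")]
print(np_chunks(sent))  # ['the quick classifier', 'spam mail']
```

Extracted chunks like these could then feed the phrase pattern identification step the survey describes.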
Parameters Optimization for Improving ASR Performance in Adverse Real World N... (Waqas Tariq)
Existing research shows that many techniques and methodologies are available for performing every step of an Automatic Speech Recognition (ASR) system, but performance (minimization of Word Error Rate, WER, and maximization of Word Accuracy Rate, WAR) does not depend only on the technique applied. Rather, it depends mainly on the category of noise, the noise level, and the sizes of the window, frame, and frame overlap considered in existing methods. The main aim of the work presented in this paper is to vary parameters such as window size, frame size, and frame overlap percentage, observe the performance of the algorithms for various noise categories at different levels, and train the system across all parameter sizes and categories of real-world noisy environments to improve speech recognition performance. The paper presents the results of Signal-to-Noise Ratio (SNR) and accuracy tests under these variable parameter sizes. Because it is hard to evaluate the test results and choose parameter sizes for optimal ASR performance by inspection, the study further suggests feasible, optimum parameter sizes using a Fuzzy Inference System (FIS) for enhancing accuracy in adverse real-world noisy environmental conditions. This work supports discriminative training of ubiquitous ASR systems for better Human Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
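The framing parameters the paper varies can be illustrated with a small sketch (the signal and parameter values are made up for demonstration): splitting a signal into frames for a given frame size and overlap percentage:

```python
# Illustrative sketch of variable framing: split a signal into frames given a
# frame size and an overlap percentage, the two parameters the paper tunes.

def frame_signal(signal, frame_size, overlap_pct):
    """Return overlapping frames; higher overlap_pct means a smaller hop step."""
    step = max(1, int(frame_size * (1 - overlap_pct / 100)))
    return [signal[i:i + frame_size]
            for i in range(0, len(signal) - frame_size + 1, step)]

signal = list(range(10))  # stand-in for audio samples
print(frame_signal(signal, frame_size=4, overlap_pct=50))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Sweeping `frame_size` and `overlap_pct` over a grid, as the paper does, changes how much context each frame carries and how many frames the recognizer must process.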
Suggestion Generation for Specific Erroneous Part in a Sentence using Deep Le... (ijtsrd)
Natural Language Generation (NLG) is one of the major fields of Natural Language Processing (NLP): NLG can generate natural language from a machine representation. Generating suggestions for a sentence is especially difficult for Indian languages, largely because they are morphologically rich and their word order is roughly the reverse of English. Using a deep learning approach with Long Short-Term Memory (LSTM) layers, a possible set of corrections for the erroneous part of a sentence can be generated. To effectively generate a set of sentences with the same meaning as the original using a Deep Learning (DL) approach, a model must be trained on this task, which requires thousands of example inputs and outputs. Veena S Nair | Amina Beevi A, "Suggestion Generation for Specific Erroneous Part in a Sentence using Deep Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23842.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/23842/suggestion-generation-for-specific-erroneous-part-in-a-sentence-using-deep-learning/veena-s-nair
Importance of the neutral category in fuzzy clustering of sentiments (ijfls)
Social media is said to influence public discourse and communication in society, and it is increasingly used in the political context. Social network sites such as Facebook, Twitter, and other microblogging services give the public an opportunity to voice opinions on issues of interest. Twitter is an ideal platform for users to spread not only information in general but also political opinions, whereas Facebook provides the capability for direct dialogue. Many studies have shown that stakeholders need to collect, monitor, analyze, summarize, and visualize these social media views. Some authors have tended to categorize such comments as either positive or negative, ignoring the neutral category. In this paper, we demonstrate the importance of the neutral category in clustering sentiments from social media, and we then demonstrate the use of fuzzy clustering for this kind of task.
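A minimal sketch of how fuzzy clustering with an explicit neutral cluster might look on 1-D sentiment scores (plain fuzzy c-means; the scores, initial centers, and fuzzifier m=2 are assumptions, not the paper's setup):

```python
# Hypothetical fuzzy c-means sketch on 1-D sentiment scores with three
# clusters: negative, neutral, positive. All data values are invented.

def fuzzy_cmeans(xs, centers, m=2.0, iters=50):
    U = []
    for _ in range(iters):
        # Membership update: each point gets a degree of membership per cluster.
        U = []
        for x in xs:
            d = [abs(x - c) + 1e-9 for c in centers]  # avoid division by zero
            row = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                             for j in range(len(centers)))
                   for i in range(len(centers))]
            U.append(row)
        # Center update: membership-weighted mean of the points.
        centers = [sum(U[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(U[k][i] ** m for k in range(len(xs)))
                   for i in range(len(centers))]
    return centers, U

scores = [-0.9, -0.7, -0.05, 0.0, 0.1, 0.8, 0.95]
centers, U = fuzzy_cmeans(scores, centers=[-1.0, 0.0, 1.0])
print([round(c, 2) for c in centers])  # roughly negative / neutral / positive
```

Unlike a hard positive/negative split, each comment keeps graded memberships in all three clusters, so borderline comments are not forced into an extreme category.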
A scalable, lexicon based technique for sentiment analysis (ijfcstjournal)
The rapid increase in the volume of sentiment-rich social media on the web has generated growing interest among researchers in sentiment analysis and opinion mining. With so much social media available, sentiment analysis is now considered a big data task, and conventional sentiment analysis approaches fail to handle the vast amount of sentiment data efficiently. The main focus of this research was to find a technique that can efficiently perform sentiment analysis on big data sets: one that categorizes text as positive, negative, or neutral quickly and accurately. In the research, sentiment analysis was performed on a large data set of tweets using Hadoop, and the performance of the technique was measured in terms of speed and accuracy. The experimental results show that the technique handles big sentiment data sets very efficiently.
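The lexicon-based classification step at the heart of such a technique can be sketched as follows (a toy lexicon and toy tweets, not the paper's actual Hadoop job; in a real deployment this function would run as the map step over a distributed tweet collection):

```python
# Hypothetical lexicon-based sentiment scoring; the lexicon and tweets are
# invented. Each tweet is scored by summing polarity values of its words.

LEXICON = {"good": 1, "great": 1, "love": 1, "bad": -1, "awful": -1, "hate": -1}

def classify(tweet):
    """Sum word polarities and map the total to a three-way label."""
    score = sum(LEXICON.get(w, 0) for w in tweet.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = ["I love this great phone", "awful battery I hate it", "it arrived today"]
print([classify(t) for t in tweets])  # ['positive', 'negative', 'neutral']
```

Because each tweet is scored independently, the step parallelizes trivially, which is what makes the lexicon approach attractive at big-data scale.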
Text mining is a new and exciting research area that tries to solve the information overload problem using techniques from machine learning, natural language processing (NLP), data mining, information retrieval (IR), and knowledge management. It involves pre-processing document collections (information extraction, term extraction, text categorization) and storing intermediate representations; techniques such as clustering, distribution analysis, association rules, and visualisation are then used to analyse these intermediate representations and present the results.
The article presents part-of-speech (POS) tagging for Nepali text using three artificial neural network techniques. A novel algorithm for POS tagging is introduced: features are extracted from the marginal probabilities of a Hidden Markov Model and supplied, as an input vector for each word, to three different ANN architectures, namely a Radial Basis Function (RBF) network, a General Regression Neural Network (GRNN), and a feed-forward neural network. Two annotated tagged sets are constructed for training and testing. Results from all three techniques are compared on both sets; the GRNN-based POS tagging technique is found to be the best, producing accuracies of 100% on the training set and 98.32% on the testing set.
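The GRNN idea behind the best-performing tagger can be sketched as a kernel-weighted vote over training examples (here with invented 1-D features standing in for the HMM marginal probabilities, and a made-up bandwidth):

```python
# Hypothetical GRNN-style classifier: a Gaussian-kernel-weighted vote over
# training feature vectors. Features and tags below are invented.

import math

def grnn_predict(x, train_x, train_y, sigma=0.1):
    """Weight each training example by a Gaussian kernel, then vote by tag."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    scores = {}
    for w, tag in zip(weights, train_y):
        scores[tag] = scores.get(tag, 0.0) + w
    return max(scores, key=scores.get)

train_x = [0.05, 0.10, 0.80, 0.90]          # stand-in feature per word
train_y = ["NOUN", "NOUN", "VERB", "VERB"]  # gold POS tags
print(grnn_predict(0.12, train_x, train_y))  # NOUN
```

A GRNN needs no iterative training, which may partly explain the perfect training-set accuracy the article reports: every training point contributes directly at prediction time.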
Our project is about guessing the correct missing word in a given sentence. To find or guess the missing word, we have two main methods: statistical language modeling and neural language models. Statistical language modeling depends on the frequency of the relations between words, and here we use a Markov chain. Neural language models use artificial neural networks and deep learning; here we use BERT, the state of the art in language modeling, provided by Google.
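The statistical (Markov chain) half of the approach can be sketched with bigram counts over a toy corpus (the corpus and candidate words are invented):

```python
# Hypothetical bigram fill-in-the-blank: pick the candidate word that most
# often follows the preceding word in a toy corpus.

from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def fill_blank(prev_word, candidates):
    """Choose the candidate that most often follows prev_word in the corpus."""
    return max(candidates, key=lambda w: bigrams[(prev_word, w)])

print(fill_blank("the", ["sat", "cat", "on"]))  # cat
```

BERT improves on this by conditioning on both the left and right context of the blank rather than only the single preceding word.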
THE EFFECTS OF THE LDA TOPIC MODEL ON SENTIMENT CLASSIFICATION (ijscai)
Online reviews are feedback on a product and play a key role in improving it to cater to consumers, but categorizing reviews manually is time consuming and labor intensive. Recurrent neural networks in deep learning can process time-series data, and long short-term memory networks handle long sequences well; this has good experimental support in natural language processing, machine translation, speech recognition, and language modeling. The quality of the extracted data features affects the classification results produced by the model. The LDA topic model adds prior knowledge when classifying the data, so that the characteristics of the data can be extracted efficiently; applied to the classifier, this can improve accuracy and efficiency. Bidirectional long short-term memory networks are variants and extensions of recurrent neural networks. The deep learning framework Keras, using TensorFlow as the backend, builds a convenient bidirectional LSTM model, which provides strong technical support for the experiment. Using the LDA topic model to extract the keywords needed to train the neural network, and to strengthen the internal relationships between words, can improve the learning efficiency of the model. Under the same experimental environment, the results are better than those obtained with traditional word-frequency features.
SEMI-SUPERVISED BOOTSTRAPPING APPROACH FOR NAMED ENTITY RECOGNITION (kevig)
The aim of Named Entity Recognition (NER) is to identify references to named entities in unstructured documents and to classify them into predefined semantic categories. NER often benefits from added background knowledge in the form of gazetteers; however, such a collection does not handle name variants and cannot resolve the ambiguities involved in identifying entities in context and associating them with predefined categories. We present a semi-supervised NER approach that starts by identifying named entities with a small set of training data. From the identified named entities, word and context features are used to define a pattern; the pattern for each named entity category is used as a seed pattern to identify named entities in the test set. Pattern scoring and tuple value scoring enable the generation of new patterns for identifying the named entity categories. We have evaluated the proposed system for English with tagged (IEER) and untagged (CoNLL 2003) named entity corpora, and for Tamil with documents from the FIRE corpus, and obtain an average F-measure of 75% for both languages.
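The bootstrapping loop (seed entities yield context patterns, which harvest new entities, which yield new patterns) can be sketched roughly as follows; the texts and seed are invented, and the matching is far simpler than the paper's pattern and tuple scoring:

```python
# Toy bootstrapping sketch for NER: alternate between learning context
# patterns from known entities and applying patterns to find new entities.

texts = ["mayor of Paris said", "mayor of Oslo said",
         "flew to Oslo today", "flew to Lima today"]
entities = {"Paris"}            # small seed set
patterns = set()

for _ in range(2):              # two bootstrapping rounds
    # Learn patterns: replace each known entity with a slot marker.
    for t in texts:
        for e in entities:
            if e in t:
                patterns.add(t.replace(e, "<X>"))
    # Apply patterns: any text matching a pattern yields a new entity.
    for t in texts:
        for p in patterns:
            pre, _, post = p.partition("<X>")
            if pre and t.startswith(pre) and t.endswith(post):
                entities.add(t[len(pre):len(t) - len(post)])

print(sorted(entities))  # ['Lima', 'Oslo', 'Paris']
```

The paper's pattern and tuple scores exist precisely to keep this loop from drifting: without them, one bad pattern can flood the entity set with noise.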
Sentiment analysis is an important current research area, and the demand for sentiment analysis and classification is growing day by day. This paper presents a novel method to classify Urdu documents, as no prior work had been reported on sentiment classification for Urdu text. We treat the problem as determining whether a review or sentence is positive, negative, or neutral, using two machine learning methods, Naive Bayes and Support Vector Machines (SVM). First the documents are preprocessed and sentiment features are extracted; then the polarity is calculated, judged, and classified using the machine learning methods.
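The Naive Bayes half of such a pipeline can be sketched as a minimal bag-of-words classifier with Laplace smoothing; the labelled reviews are invented, and shown in English rather than Urdu for readability:

```python
# Hypothetical Naive Bayes sentiment sketch: word counts per class plus
# Laplace smoothing. The tiny training set is invented for illustration.

import math
from collections import Counter, defaultdict

train = [("this film is wonderful", "pos"), ("a wonderful happy story", "pos"),
         ("boring and terrible film", "neg"), ("terrible acting sadly", "neg")]

counts = defaultdict(Counter)
for text, label in train:
    counts[label].update(text.split())
vocab = {w for c in counts.values() for w in c}

def predict(text):
    """Return the class with the highest smoothed log-probability."""
    best, best_lp = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        lp = math.log(0.5)  # uniform class prior (both classes equally likely)
        for w in text.split():
            lp += math.log((c[w] + 1) / (total + len(vocab)))  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(predict("wonderful film"))  # pos
```

The same extracted features could instead be fed to an SVM, as the paper's second classifier; Naive Bayes is shown here only because it fits in a few lines.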
Chunker Based Sentiment Analysis and Tense Classification for Nepali Text (kevig)
The article presents sentiment analysis (SA) and tense classification for Nepali using a skip-gram model for word-to-vector encoding. The SA experiment on positive-negative classification is carried out in two ways. In the first experiment, a vector representation of each sentence is generated with the skip-gram model and classified with a Multi-Layer Perceptron (MLP); an F1 score of 0.6486 is achieved for positive-negative classification with an overall accuracy of 68%. In the second experiment, verb chunks are extracted with a Nepali parser and the same experiment is carried out on the verb chunks: an F1 score of 0.6779 is observed with an overall accuracy of 85%. Hence chunker-based sentiment analysis proves better than sentence-based sentiment analysis. The paper also proposes using a skip-gram model to identify the tenses of Nepali sentences and verbs. In the third experiment, vector representations are again generated with the skip-gram model and classified with an MLP, and verb chunks yield a very low overall accuracy of 53%. The fourth experiment, tense classification using whole sentences, improves this to an overall accuracy of 89%, with past tenses identified and classified more accurately than other tenses. Hence sentence-based tense classification proves better than verb-chunk-based tense classification.
We propose a model for carrying out deep learning based multimodal sentiment analysis. The MOUD dataset is taken for experimentation purposes. We developed two parallel text based and audio basedmodels and further, fused these heterogeneous feature maps taken from intermediate layers to complete thearchitecture. Performance measures–Accuracy, precision, recall and F1-score–are observed to outperformthe existing models.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
APPROXIMATE ANALYTICAL SOLUTION OF NON-LINEAR BOUSSINESQ EQUATION FOR THE UNS...mathsjournal
For one dimensional homogeneous, isotropic aquifer, without accretion the governing Boussinesq
equation under Dupuit assumptions is a nonlinear partial differential equation. In the present paper
approximate analytical solution of nonlinear Boussinesq equation is obtained using Homotopy
perturbation transform method(HPTM). The solution is compared with the exact solution. The
comparison shows that the HPTM is efficient, accurate and reliable. The analysis of two important aquifer
parameters namely viz. specific yield and hydraulic conductivity is studied to see the effects on the height
of water table. The results resemble well with the physical phenomena.
FEATURE SELECTION AND CLASSIFICATION APPROACH FOR SENTIMENT ANALYSISmlaij
Sentiment analysis and Opinion mining has emerged as a popular and efficient technique for information retrieval and web data analysis. The exponential growth of the user generated content has opened new horizons for research in the field of sentiment analysis. This paper proposes a model for sentiment analysis of movie reviews using a combination of natural language processing and machine learning approaches. Firstly, different data pre-processing schemes are applied on the dataset. Secondly, the behaviour of twoclassifiers, Naive Bayes and SVM, is investigated in combination with different feature selection schemes to
obtain the results for sentiment analysis. Thirdly, the proposed model for sentiment analysis is extended to
obtain the results for higher order n-grams.
A Review on Pattern Recognition with Offline Signature Classification and Tec...IJSRD
Pattern recognition gets renowned and come in touch with us since 1960’s and it get lots of attention from all over the world by their outstanding characteristics. In this paper Pattern recognition was presented, including idea, system, application and incorporation. In the meantime, ten descriptions and also more than ten techniques, for example- recognition was compressed. At the end, grouping of PR and structure and its associated fields and also application ranges were presented at point of interest. PR has researcher’s attention attracted in the last few decades as an approach of machine learning because of its areas of extensive spread application. The application area contains business, speech recognition, data mining, communications, military intelligence, automations, medicine, Bioinformatics, document classification and various others. In this review paper many Pattern Recognition approaches have been reviewed and also their pros/cons, application particular paradigm has been presented. The signature Recognition and check framework is used to aware and confirm individual's written hand signature. Presently transcribed mark is a standout amongst the most commonly acknowledged individual traits for personal attributes. Signature verification gives approval in money related and business exchange. Signature confirmation finds its uses in the field of net keeping money, travel permit check framework, gives validation to a hopefuls in broad daylight examination from their marks, charge cards, bank checks.
Sentiment classification is an ongoing field and interesting area of research because of its application in various fields collecting review from people about products and social and political events through the web. Currently, Sentiment Analysis concentrates for subjective statements or on subjectivity and overlook objective statements which carry sentiment(s). During the sentiment classification more challenging problem are faced due to the ambiguous sense of words, negation words and intensifier. Due to its importance the correct sense of target word is extracted and determined for which the similarity arise in WordNet Glosses. This paper presents a survey covering the techniques and methods in sentiment analysis and challenges appear in the field.
The sarcasm detection with the method of logistic regressionEditorIJAERD
The prediction analysis is approach which may predict future possibilities. This research work is based on the
sarcasm detection from the text data. In the previous time SVM classification is applied for the sarcasm detection. The SVM
classifier classifies data based on the hyper plane which give low accuracy. To improve accuracy for sarcasm detection
logistic regression is applied during this work. The existing and proposed techniques are implemented in python and results
are analysed in terms of accuracy, execution time. The proposed approach has high accuracy and low execution time as
compared to SVM classifier for sarcasm detection.
Sentiment classification aims to detect information such as opinions, explicit , implicit feelings expressed
in text. The most existing approaches are able to detect either explicit expressions or implicit expressions of
sentiments in the text separately. In this proposed framework it will detect both Implicit and Explicit
expressions available in the meeting transcripts. It will classify the Positive, Negative, Neutral words and
also identify the topic of the particular meeting transcripts by using fuzzy logic. This paper aims to add
some additional features for improving the classification method. The quality of the sentiment classification
is improved using proposed fuzzy logic framework .In this fuzzy logic it includes the features like Fuzzy
rules and Fuzzy C-means algorithm.The quality of the output is evaluated using the parameters such as
precision, recall, f-measure. Here Fuzzy C-means Clustering technique measured in terms of Purity and
Entropy. The data set was validated using 10-fold cross validation method and observed 95% confidence
interval between the accuracy values .Finally, the proposed fuzzy logic method produced more than 85 %
accurate results and error rate is very less compared to existing sentiment classification techniques.
A hybrid composite features based sentence level sentiment analyzerIAESIJAI
Current lexica and machine learning based sentiment analysis approaches
still suffer from a two-fold limitation. First, manual lexicon construction and
machine training is time consuming and error-prone. Second, the
prediction’s accuracy entails sentences and their corresponding training text
should fall under the same domain. In this article, we experimentally
evaluate four sentiment classifiers, namely support vector machines (SVMs),
Naive Bayes (NB), logistic regression (LR) and random forest (RF). We
quantify the quality of each of these models using three real-world datasets
that comprise 50,000 movie reviews, 10,662 sentences, and 300 generic
movie reviews. Specifically, we study the impact of a variety of natural
language processing (NLP) pipelines on the quality of the predicted
sentiment orientations. Additionally, we measure the impact of incorporating
lexical semantic knowledge captured by WordNet on expanding original
words in sentences. Findings demonstrate that the utilizing different NLP
pipelines and semantic relationships impacts the quality of the sentiment
analyzers. In particular, results indicate that coupling lemmatization and
knowledge-based n-gram features proved to produce higher accuracy results.
With this coupling, the accuracy of the SVM classifier has improved to
90.43%, while it was 86.83%, 90.11%, 86.20%, respectively using the three
other classifiers.
Sentimental analysis is a context based mining of text, which extracts and identify subjective information from a text or sentence provided. Here the main concept is extracting the sentiment of the text using machine learning techniques such as LSTM Long short term memory . This text classification method analyses the incoming text and determines whether the underlined emotion is positive or negative along with probability associated with that positive or negative statements. Probability depicts the strength of a positive or negative statement, if the probability is close to zero, it implies that the sentiment is strongly negative and if probability is close to1, it means that the statement is strongly positive. Here a web application is created to deploy this model using a Python based micro framework called flask. Many other methods, such as RNN and CNN, are inefficient when compared to LSTM. Dirash A R | Dr. S K Manju Bargavi "LSTM Based Sentiment Analysis" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4 , June 2021, URL: https://www.ijtsrd.compapers/ijtsrd42345.pdf Paper URL: https://www.ijtsrd.comcomputer-science/data-processing/42345/lstm-based-sentiment-analysis/dirash-a-r
Effective modelling of human expressive states from voice by adaptively tunin...IAESIJAI
This paper aims to develop efficient speech-expressive models using the adaptively tuning neuro-fuzzy inference system (ANFIS). The developed models differentiate a high-arousal happiness state from a low-arousal sadness state from the benchmark Berlin (EMODB) database. The proposed low-cost flexible developed algorithms are self-tunable and can address several vivid real-world issues such as home tutoring, banking, and finance sectors, criminal investigations, psychological studies, call centers, cognitive and biomedical sciences. The work develops the proposed structures by formulating several novel feature vectors comprising both time and frequency information. The features considered are pitch (F0), the standard deviation of pitch (SDF0), autocorrelation coefficient (AC), log-energy (E), jitter, shimmer, harmonic to noise ratio (HNR), spectral centroid (SC), spectral roll-off (SR), spectral flux (SF), and zero-crossing rate (ZCR). to alleviate the issues of the curse of dimensionality associated with the frame-level extraction, the features are extracted at the utterance level. Several performance parameters have been computed to validate the individual time and frequency models. Further, the ANFIS models are tested for their efficacy in a combinational platform. The chosen features are complementary and the augmented vectors have indeed shown improved performance with more available information as revealed by our results.
Similar to Intelligent Systems - Predictive Analytics Project (20)
Intelligent Systems - Predictive Analytics Project
Fall 2016, Department of Computer and Information Science, IUPUI
Prediction Of Schizophrenia from Speech Analysis of individuals
Priyanka Ahire Shreya Chakrabarti Yash Agrawal
Abstract – Schizophrenia is a mental disorder involving a breakdown in the relation between thought, emotion, and behavior, leading to faulty perception, inappropriate actions and feelings, withdrawal from reality and personal relationships into fantasy and delusion, and a sense of mental fragmentation. Schizophrenia cannot be cured, but treatment may help manage it, and it can last a lifetime. The objective of this project is to analyze a schizophrenic speech dataset and determine the features from which it is easiest to conclude that a patient is schizophrenic. Several methods are implemented and their results compared; logistic regression proves the best fit for this situation.
Keywords – Logistic Regression, Best fit, Random
Forest, OneR, Gaussian Naïve Bayes, Decision Tree
I. INTRODUCTION
Schizophrenia is a mental disorder. People convey meaning by what they say as well as how they say it: tone, word choice, and the length of a phrase are all crucial cues to understanding what is going on in someone's mind. When a psychiatrist or psychologist examines a person, they listen for these signals to get a sense of the person's wellbeing, drawing on past experience to guide their judgment. [2]
A similar approach is applied here using machine learning, specifically different classification algorithms.
This project presents an analysis of a schizophrenic dataset using logistic regression. Logistic regression is an appropriate regression analysis to conduct when the dependent variable is binary (dichotomous). Like all regression analyses, logistic regression is a predictive analysis: it is used to describe data and the relationship between a dependent variable and one or more interval- or ratio-scale independent variables. [3]
Analyzing the schizophrenic dataset is complicated by its limited size. The dataset consists of speech data from schizophrenic and healthy individuals collected over a period of two days; the main challenge in the analysis was that the dataset provided was not large enough. The results from the logistic regression classification are compared with the Random Forest, Decision Tree, and OneR algorithm results.
II. LITERATURE REVIEW
Analysis of speech datasets is an important research area in the field of speech classification, and the research poses extreme challenges. There are several popular theories and models for speech classification, such as Motor theory [2], the TRACE model [4,5], the Cohort model [6], and the Fuzzy logical model [4].
Motor Theory – The Motor theory was proposed by Liberman and Cooper [2] in the 1950s and developed further by Liberman et al. [1,2]. In this theory, listeners are said to interpret speech sounds in terms of the motoric gestures they would use to make those same sounds.
TRACE Model – The TRACE model [5] is a connectionist network with an input layer and three processing layers: pseudo-spectra (feature), phoneme, and word. There are three types of connection in the TRACE model. The first type is feedforward excitatory connections from input to features, features to phonemes, and phonemes to words. The second type is lateral inhibitory connections at the feature, phoneme, and word layers. The last type is top-down feedback excitatory connections from words to phonemes.
Cohort Model – The original Cohort model was proposed in 1984 by Wilson et al. [6]. The core idea at the heart of the Cohort model is that human speech comprehension is achieved by processing incoming speech continuously as it is heard. At all times, the system computes the best interpretation of the currently available input, combining information in the speech signal with prior semantic and syntactic context.
Fuzzy Logic Model – The fuzzy logical theory of speech perception was developed by Massaro [4]. He proposes that people remember speech sounds in a probabilistic, or graded, way. The theory suggests that people remember descriptions of the perceptual units of language, called prototypes, within which various features may combine. However, features are not just binary: there is a fuzzy value corresponding to how likely it is that a sound belongs to a particular speech category. Thus, when perceiving a speech signal, our decision about what we actually hear is based on the relative goodness of the match between the stimulus information and the values of particular prototypes. The final decision is based on multiple features or sources of information, even visual information.
Signal Modelling – In 2001, Karnjanadecha [22] proposed signal modeling for high-performance, robust isolated word recognition. In this model, an HMM was used for classification. The recognition accuracy of this experiment was 97.9% for speaker-independent isolated alphabet recognition. When Gaussian noise (15 dB) was added, or under telephone-speech simulation, the recognition rates were 95.8% and 89.6%, respectively.
Time-Extended Features Model – In 2004, Ibrahim [23] presented a technique to overcome the confusion problem by means of time-extended features. He expanded the duration of the consonants to gain a high characteristic difference between confusable pairs in the E-set letters. A continuous-density HMM was used as the classifier. The best recognition rate was only 88.72%; moreover, the author did not test on any noisy speech.
CNN – In 2015, Palaz et al. used a CNN for continuous speech recognition from the raw speech signal [17]. They extended the CNN-based approach to the large-vocabulary speech recognition problem and compared it against the conventional ANN-based approach on the Wall Street Journal corpus. They also showed that the CNN-based method achieves better performance than the conventional ANN-based method, as many parameters and features learned from raw speech by the CNN-based approach could generalize across different databases.
Pretrained Deep Neural Network Models – In 2009, Mohamed et al. tried using pre-trained deep neural networks as part of a hybrid monophone DNN-HMM model on TIMIT, a small-scale speech task [25], and in 2012, Mohamed et al. were the first to succeed with pre-trained DNN-HMMs for acoustic modeling with varying network depths [26,27]. In 2013, Bocchieri and Tuske succeeded in using DNNs for speech recognition on large-vocabulary speech tasks [28,29].
Sound Event Classification Model – In 2011, Jonathan et al. developed a model for sound event classification in mismatched conditions [24]. In this model, they developed a nonlinear feature extraction method which first maps the spectrogram into a higher-dimensional space by quantizing the dynamic range into different regions, and then extracts the central moments of the partitioned monochrome intensity distributions as the sound feature.
III. METHODOLOGY
Random Forest: Random forest is an ensemble learning technique for classification and regression that works by building a large number of decision trees at training time and yielding the class that is the mode of the individual trees' classes (or their mean prediction, for regression) [21].
Decision Tree: Decision trees are a non-parametric supervised learning method used for classification and regression. The main aim of a decision tree is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data [20].
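As an illustrative sketch (with invented toy data, not the project's actual code), the rule-learning idea behind a one-level decision tree (a "decision stump") can be shown by scanning thresholds on a single feature and keeping the split with the fewest misclassifications:

```python
# Hypothetical sketch of a one-level decision tree ("decision stump"):
# scan candidate thresholds on one numeric feature and keep the split
# that misclassifies the fewest training examples.

def fit_stump(xs, ys):
    """xs: feature values; ys: binary labels (0 = healthy, 1 = schizophrenic)."""
    best = None  # (errors, threshold, label_if_below, label_if_above)
    for t in sorted(set(xs)):
        for below, above in ((0, 1), (1, 0)):
            preds = [below if x <= t else above for x in xs]
            errors = sum(p != y for p, y in zip(preds, ys))
            if best is None or errors < best[0]:
                best = (errors, t, below, above)
    return best

def predict_stump(stump, x):
    _, t, below, above = stump
    return below if x <= t else above

# Toy data: higher values of this made-up feature tend to mean class 1.
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
stump = fit_stump(xs, ys)
```

A full decision tree applies this split search recursively to each resulting subset; a random forest would train many such trees on random subsamples and take a majority vote.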
OneR: OneR, short for "One Rule", is a simple, yet
accurate, classification algorithm that generates one
rule for each predictor in the data, then selects the rule
with the smallest total error as its "one rule". To create
a rule for a predictor, we construct a frequency table
for each predictor against the target. It has been shown
that OneR produces rules only slightly less accurate
than state-of-the-art classification algorithms while
producing rules that are simple for humans to
interpret.[10]
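The frequency-table construction described above can be sketched as follows (the rows are invented toy data, not the project's dataset):

```python
# Illustrative OneR sketch: for each predictor, build a frequency table
# mapping each predictor value to its most frequent class, then keep the
# predictor whose single rule makes the fewest errors on the training data.
from collections import Counter, defaultdict

def one_r(rows, target):
    """rows: list of dicts; target: name of the class attribute."""
    best_attr, best_rule, best_errors = None, None, None
    for attr in rows[0]:
        if attr == target:
            continue
        table = defaultdict(Counter)
        for row in rows:
            table[row[attr]][row[target]] += 1
        rule = {v: counts.most_common(1)[0][0] for v, counts in table.items()}
        errors = sum(rule[row[attr]] != row[target] for row in rows)
        if best_errors is None or errors < best_errors:
            best_attr, best_rule, best_errors = attr, rule, errors
    return best_attr, best_rule

# Toy rows: "emotion" separates the classes perfectly, "pronoun" does not.
rows = [
    {"emotion": "flat", "pronoun": "I", "group": 1},
    {"emotion": "flat", "pronoun": "we", "group": 1},
    {"emotion": "varied", "pronoun": "I", "group": 0},
    {"emotion": "varied", "pronoun": "we", "group": 0},
]
attr, rule = one_r(rows, "group")
```

On these rows the selected predictor is "emotion", since its one rule classifies every training example correctly.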
Naïve Bayes Classifier: Since Naive Bayes classifiers can handle both binary and multiclass classification problems, one is also applied here. The Naive Bayes classifier is based on Bayesian theory, a simple and effective probability classification method. It is a supervised classification technique: for each class value, it estimates the probability that a given instance belongs to that class [6].
The feature items in one class are assumed to be
independent of other attribute values called class
conditional independence [7]. Naive Bayes classifier
needs only a small amount of training data to estimate the parameters for classification. The classifier is stated as

P(A|B) = P(B|A) * P(A) / P(B)
where P(A) is the prior or marginal probability of A, P(A|B) is the conditional probability of A given B (the posterior probability), P(B|A) is the conditional probability of B given A, and P(B) is the prior or marginal probability of B, which acts as a normalizing constant. The probability value of the winning class dominates over that of the others [8].
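A small numeric sketch of the rule above, with invented prior and likelihood values purely for illustration:

```python
# Bayes' rule P(A|B) = P(B|A) * P(A) / P(B) with made-up numbers:
# A = "schizophrenic", B = "a particular speech cue is present".
p_a = 0.3             # prior P(A)
p_b_given_a = 0.8     # likelihood P(B|A)
p_b_given_not_a = 0.2 # likelihood P(B|not A)

# Normalizing constant P(B) via the law of total probability.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # 0.38
posterior = p_b_given_a * p_a / p_b                    # P(A|B) ≈ 0.632
```

The Naive Bayes classifier multiplies one such likelihood term per attribute (the class-conditional independence assumption) and picks the class with the largest posterior.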
SVM: The SVM is a very useful classification technique. It performs classification by constructing hyperplanes in a multidimensional space that separate different class labels, based on statistical learning theory [7][8]. Though the SVM is inherently a binary nonlinear classifier, it can be extended to multiclass classification. There are two major strategies for multiclass classification, namely One-against-All [7] and One-against-One, or pairwise, classification [9]. The conventional way is to decompose the M-class problem into a series of two-class problems and construct several binary classifiers. In this work, the One-against-One method is considered, in which there is one binary SVM for each pair of classes to separate members of one class from members of the other. This method allows training the whole system, with a maximum number of different samples for each class, within limited computer memory [12].
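The One-against-One scheme can be sketched with trivial stand-in classifiers; here each per-pair "classifier" is just a midpoint threshold on a single made-up 1-D feature, standing in for a trained pairwise SVM:

```python
# One-against-One sketch: one binary classifier per pair of classes
# (M*(M-1)/2 in total), final label chosen by majority vote.
from itertools import combinations
from collections import Counter

train = {0: [1.0, 1.2], 1: [5.0, 5.2], 2: [9.0, 9.2]}  # class -> samples

def fit_pair(a, b):
    """Return a toy classifier separating classes a and b at their mean midpoint."""
    mid = (sum(train[a]) / len(train[a]) + sum(train[b]) / len(train[b])) / 2
    lo, hi = (a, b) if sum(train[a]) < sum(train[b]) else (b, a)
    return lambda x: lo if x < mid else hi

pairs = [fit_pair(a, b) for a, b in combinations(train, 2)]  # 3 classifiers

def predict(x):
    votes = Counter(clf(x) for clf in pairs)
    return votes.most_common(1)[0][0]
```

For M = 3 classes this builds 3 pairwise classifiers; a real One-against-One system would train one SVM in place of each threshold.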
Logistic Regression: Logistic regression was first
proposed in the 1940s as an alternative technique to
overcome limitations of ordinary least squares (OLS)
regression in handling dichotomous outcomes.[16]
Logistic regression measures the relationship between a categorical dependent variable and one or more independent variables. In logistic regression, the dependent variable is binary or dichotomous, i.e. it only contains data coded as 1 (TRUE, success, Schizophrenic, etc.) or 0 (FALSE, failure, Healthy, etc.). The goal of logistic regression is to find the best-fitting (yet biologically reasonable) model to describe the relationship between the dichotomous characteristic of interest (the dependent, response, or outcome variable) and a set of independent (predictor or explanatory) variables. Logistic regression generates the coefficients (and their standard errors and significance levels) of a formula to predict
a logit transformation of the probability of presence of the characteristic of interest: [30]

logit(p) = b0 + b1*X1 + b2*X2 + ... + bk*Xk

where p is the probability of presence of the characteristic of interest. The logit transformation is defined as the logged odds:

odds = p / (1 - p)

and

logit(p) = ln(p / (1 - p))
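A quick numeric check of the transformation above, with an illustrative probability value:

```python
# The logit (logged odds) and its inverse, the logistic (sigmoid) function.
import math

def logit(p):
    """logit(p) = ln(p / (1 - p))."""
    return math.log(p / (1 - p))

def inv_logit(z):
    """Inverse logit, mapping a linear predictor back to a probability."""
    return 1 / (1 + math.exp(-z))

p = 0.8
odds = p / (1 - p)     # 4.0: the event is 4x more likely to occur than not
z = logit(p)           # ln(4) ≈ 1.386
p_back = inv_logit(z)  # recovers 0.8
```

The fitted linear combination b0 + b1*X1 + ... lives on the logit scale; applying the inverse logit turns it back into a predicted probability of the characteristic of interest.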
IV. IMPLEMENTATION
1. Dataset: The dataset was collected by the Department of Psychology. They collected speech samples from schizophrenic individuals and healthy individuals over a period of two days. All values in the dataset are in percentage form.
The dataset consists of two files:
1.1 Full: This file contains all the data from subjects across the two days. The data has been collected from 15 individuals; some are schizophrenic and the rest are healthy.
1.2 Individual: This file contains speech data from
subjects at individual times. This data is collected
across 15 individuals recorded at different times of the
day over the period of 2 days.
The dataset consists of 88 attributes in total. The group attribute decides whether the person is Schizophrenic or Healthy (1 – Schizophrenic, 0 – Healthy). Data was recorded from an individual only if they spoke more than 50 words at a particular time.
2. Logistic Regression is applied to the dataset, whose structure is shown in Fig. 1. Logistic regression is the best fit for the dataset, as the data already has a binary classification in the form of healthy versus schizophrenic individuals (0 – Healthy, 1 – Schizophrenic).
Fig. 1 Structure of data
The dataset is divided into four data frames. The features are:
a. Cognitive Processes
b. Pronoun
c. Emotions
d. Social
Fig.2 Distribution of dataset in different features
The logistic regression is performed on each of the
data frames predicting how likely a person with a
particular emotion is to develop schizophrenia.
3. As all the attributes in the dataset are independent of each other, a Naïve Bayes classifier is also implemented and its results tested.
4. A training set and a testing set are created from the dataset.
a. Training Set – the portion of the data on which the model is built; the model's parameters are fitted to these examples.
b. Testing Set – the held-out portion of the data on which the model is applied, to check whether it works correctly and yields the expected results.
A model is created from the training set, its results are computed, and the model is then applied to the testing data to check whether it generalizes correctly.
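The train/test workflow above can be sketched end to end. The data below is synthetic (the project's dataset is not public), and a minimal one-feature logistic regression fitted by plain gradient descent stands in for whatever solver was actually used:

```python
# Hedged sketch of the train/test workflow with a minimal logistic
# regression (one feature plus bias) fitted by gradient descent.
import math
import random

random.seed(0)
# Synthetic stand-in data: the feature tends to be higher for class 1.
data = [(random.gauss(0.3, 0.1), 0) for _ in range(40)] + \
       [(random.gauss(0.7, 0.1), 1) for _ in range(40)]
random.shuffle(data)
split = int(0.75 * len(data))      # 75% training set, 25% testing set
train, test = data[:split], data[split:]

w, b = 0.0, 0.0
for _ in range(2000):              # gradient descent on the log-loss
    gw = gb = 0.0
    for x, y in train:
        p = 1 / (1 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += (p - y)
    w -= 0.5 * gw / len(train)
    b -= 0.5 * gb / len(train)

# Apply the fitted model to the held-out testing set.
correct = sum((1 / (1 + math.exp(-(w * x + b))) >= 0.5) == (y == 1)
              for x, y in test)
accuracy = correct / len(test)
```

Because the two synthetic classes are well separated, the test accuracy lands well above chance, mirroring the check-on-held-out-data step described above.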
V. RESULTS AND DISCUSSION
1. Results from Logistic Regression:
1.1 Results of Logistic Regression on Emotions
Data frame:
Fig.3 Result on emotion data frame.
1.2. Result of Logistic Regression on Pronouns
Data Frame:
Fig. 4 Result on Pronouns Data Frame
1.3. Result of Logistic Regression on Social Data
Frame:
Fig.5 Result on Social Data Frame
1.4. Result of Logistic Regression on Cognitive
Data Frame:
Fig. 6 Result on Cognitive Data Frame
2. Results from Gaussian Naïve Bayes:
Fig. 7 Results from Naïve Bayes approach
3. Results from Random Decision Forests:
Fig. 8 Result after running data on Random Decision Forest.
4. Results from Random Tree:
Fig. 9 Result after running data on Random Tree.
5. Results from OneR algorithm:
Fig. 10 Result after running data on OneR algorithm.
VI. CONTRIBUTION
This is collaborative work between Shreya and Priyanka. Shreya worked on the implementation of the different models and the collection of results, sought feedback from the professor after the final presentation, and created the presentation. Priyanka collected the datasets from the professor, generated the training and testing sets, gathered the information from the presentation and the literature survey, and constructed the final report. Yash made no contribution to this project.
VII. CONCLUSION
The best-suited algorithm for the given dataset is a regression model, as the dataset is already divided into a binary format (0 – Healthy, 1 – Schizophrenic). Tree-based algorithms (random decision forests) are best used when the dependent variable is continuous, and rule-based algorithms are best suited when there is a set of IF-THEN rules for classification. Emotions is the best feature set observed, as it gives the desired accuracy (>= 80%) among the features.
VIII. FUTURE SCOPE
The future scope is to implement a Support Vector Machine and compare it against Logistic Regression, and to apply regularization to improve the logistic regression model. A larger dataset is expected in the coming days, from which more reliable results can be obtained.
IX. REFERENCES
1. Liberman, A.M., Cooper, F.S., Shankweiler, D.P.,
Studdert-Kennedy, M.: Perception of speech
code. Psychol. Rev.74,431–461 (1967)
2. Liberman, A.M., Mattingly, I.G.: The motor
theory of speech perception revised. Cognition21,
1–36 (1985)
3. Cole, R., Fanty, M.: ISOLET (Isolated Letter Speech Recognition). Department of Computer Science and Engineering, September 12 (1994)
4. Massaro, D.W.: Testing between the TRACE
Model and the Fuzzy Logical Model of Speech
perception. Cognitive Psychology, pp.398–421
(1989)
5. McClelland, J.L., Elman, J.L.: The TRACE
model of speech perception. Cognitive
Psychology (1986)
6. Wilson, W., Marslen, M.: Functional parallelism in spoken word recognition. Cognition 25, 71–102 (1984)
6. Economou K., Lymberopoulos D., 1999. A New
Perspective in Learning Pattern Generation for
Teaching Neural Networks, Volume 12, Issue 4-
5, 767-775.
7. V.N. Vapnik., Statistical Learning Theory, J.
Wiley, N.Y., 1998.
8. N. Cristianini, J. Shawe-Taylor., An introduction
to Support Vector Machines, Cambridge
University Press, Cambridge, U.K., 2000.
9. Ulrich H.-G. Kreßel: Pairwise Classification and Support Vector Machines. In: Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, MA, pp. 255–268, 1999.
10. http://www.saedsayad.com/oner.html
11. Sunny, S., David Peter, S., Poulose Jacob, K.: Performance of Different Classifiers in Speech Recognition.
12. C.W. Hsu, C.J. Lin, A Comparison of Methods for
Multi-class Support Vector Machines. IEEE
Transactions on Neural Networks, 13(2), pp. 415–
425, 2002.
13. Logistic regression, Newsom, Data analysis 2,
Fall 2015.
14. http://scikit-learn.org/stable/modules/tree.html
15. https://en.wikipedia.org/wiki/Random_forest
16. Logistic Regression, Chao-Ying Joanne Peng, Indiana University-Bloomington
17. Palaz, D., Magimai, M., Collobert, R.:
Convolutional neural networks-based continuous
speech recognition using raw speech signal. In:
ICASSP (2015)
18. Loizou, P.C., Spanias, A.S.: High-performance
alphabet recognition. IEEE Trans. Speech Audio
Proc.4, 430–445 (1996)
19. Cole, R., Fanty, M., Muthusamy, Y.,
Gopalakrishnan M.: Speaker-independent
recognition of spoken english letters. In:
International Joint Conference on Neural
Networks (IJCNN), pp. 45–51 (1990)
20. Cole, R., Fanty, M.,: Spoken letter recognition. In:
Presented at the Proceedings of the conference on
advances in neural information processing
systems Denver, Colorado, United States (1990)
21. Fanty, M., Cole, R.: Spoken Letter Recognition.
In: Presented at theProceedings of the conference
on advances in neural information processing
systems Denver, Colorado, United States (1990)
22. Karnjanadecha, M., Zahorian, S.A.: Signal
modeling for high-performance robust isolated
word recognition. IEEE Trans. Speech Audio
Proc.9, 647–654 (2001)
23. Ibrahim, M.D., Ahmad, A.M., Smaon, D.F.,
Salam M.S.H.: Improved E-set recognition
performance using time-expanded features. In:
Presented at the second national conference on
computer graphics and multimedia
(CoGRAMM), Selangor, Malaysia(2004)
24. Jonathan, D., Da, T.H., Haizhou, L.: Spectrogram
Image feature for sound event classification in
mismatched conditions. In: IEEE Signal
Processing letters, pp. 130–133 (2011 )
25. Mohamed, A.R., Dahl, G.E., Hinton, G.E.: Deep
belief networks for phone recognition. In: NIPS
workshop on deep learning for speech recognition
and related applications (2009)
26. Mohamed, A., Dahl, G., Hinton, G.: Acoustic modeling using deep belief networks. IEEE Trans. Audio, Speech, & Language Proc. (2012)
27. Mohamed, A., Hinton, G., Penn, G.: Understanding how deep belief networks perform acoustic modelling. In: Proc. ICASSP (2012)
28. Bocchieri, E., Dimitriadis, D.: Investigating deep neural network based transforms of robust audio features for LVCSR. In: ICASSP (2013)
29. Tuske, Z., Golik, P., Schluter, R., Ney, H.: Acoustic modeling with deep neural networks using raw time signal for LVCSR. In: Interspeech (2014)
30. https://www.medcalc.org/manual/logistic_regression.php