Nearly 70% of people report concern about the propagation of fake news. This paper aims to detect fake news in online articles using semantic features and various machine learning techniques. In this research, we compared recurrent neural networks against Naive Bayes and random forest classifiers using five groups of linguistic features. Evaluated on the "Real or Fake" dataset from kaggle.com, the best-performing model achieved an accuracy of 95.66% using bigram features with the random forest classifier. The fact that bigrams outperform unigrams, trigrams, and quadgrams shows that word pairs, rather than single words or longer phrases, best indicate the authenticity of news.
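As an illustration of the bigram features this abstract credits, the sketch below counts adjacent word pairs in a text. This is not the paper's code, only a minimal stand-in for what a standard toolkit (e.g. scikit-learn's CountVectorizer with ngram_range=(2, 2)) would produce before the counts are fed to a random forest classifier.

```python
from collections import Counter

def bigram_features(text):
    """Count adjacent word pairs (bigrams) in a lowercased, whitespace-split text."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

# The resulting counts form the feature vector handed to the classifier.
features = bigram_features("Breaking news breaking news today")
# features[("breaking", "news")] == 2
```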
Recently, fake news has been causing many problems for society, and many researchers have been working on identifying it. Most fake news detection systems rely on linguistic features of the news, but they have difficulty detecting highly ambiguous fake news, which can be identified only after its meaning and the latest related information are understood. In this paper, to resolve this problem, we present a new Korean fake news detection system using a fact DB that is built and updated through direct human judgement after collecting established facts. Our system receives a proposition and searches for semantically related articles in the fact DB, verifying whether the given proposition is true by comparing it with those related articles. To achieve this, we utilize a deep learning model, Bidirectional Multi-Perspective Matching for Natural Language Sentences (BiMPM), which has demonstrated good performance on the sentence matching task. However, BiMPM has some limitations: the longer the input sentence, the lower its performance, and it has difficulty making an accurate judgement when an unlearned word or an unlearned relation between words appears. To overcome these limitations, we propose a new matching technique that exploits article abstraction as well as an entity matching set in addition to BiMPM. Our experiments show that the system improves overall fake news detection performance.
Prasanth. K | Praveen. N | Vijay. S | Auxilia Osvin Nancy. V, "Fake News Detection using Machine Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30014.pdf
Paper Url : https://www.ijtsrd.com/engineering/information-technology/30014/fake-news-detection-using-machine-learning/prasanth-k
Hoax analyzer for Indonesian news using RNNs with fastText and GloVe embeddings (BEEI journal)
Misinformation has become a seemingly innocuous yet potentially harmful problem ever since the development of the internet. Numerous efforts have been made to prevent the consumption of misinformation, including the use of artificial intelligence (AI), mainly natural language processing (NLP). Unfortunately, most NLP work targets English, since English is a high-resource language. By contrast, Indonesian is considered a low-resource language, so the amount of effort devoted to reducing the consumption of misinformation is low compared to English-based NLP. This experiment compares fastText and GloVe embeddings across four deep neural network (DNN) models: long short-term memory (LSTM), bidirectional LSTM (BI-LSTM), gated recurrent unit (GRU) and bidirectional GRU (BI-GRU), in terms of metric scores when classifying news into three classes: fake, valid, and satire. The results show that fastText embeddings outperform GloVe embeddings in supervised text classification, with BI-GRU + fastText yielding the best result.
News Reliability Evaluation using Latent Semantic Analysis (TELKOMNIKA JOURNAL)
The rapid rise and widespread reach of 'fake news' has severe implications for society today. Much effort has been directed in recent years towards developing methods to verify news reliability on the Internet. In this paper, an automated news reliability evaluation system is proposed. The system utilizes several Natural Language Processing (NLP) techniques, such as Term Frequency-Inverse Document Frequency (TF-IDF), phrase detection and cosine similarity, in tandem with Latent Semantic Analysis (LSA). A collection of 9203 labelled articles from both reliable and unreliable sources was assembled, and a random train-test split was applied to create the training and testing datasets. The final results show 81.87% precision and 86.95% recall, with an accuracy of 73.33%.
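To make that pipeline concrete, here is a minimal, self-contained sketch of TF-IDF weighting and cosine similarity over tokenized documents. It is illustrative only: the paper's system additionally applies phrase detection and LSA, which are omitted here, and the toy documents are invented.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF vectors (as term->weight dicts) for tokenized documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    return [{t: c / len(doc) * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = tf_idf([["fake", "news"], ["real", "news"], ["fake", "claim"]])
# documents sharing the distinctive term "fake" get a positive similarity
```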
WARRANTS GENERATIONS USING A LANGUAGE MODEL AND A MULTI-AGENT SYSTEM (ijnlc)
Each argument begins with a conclusion, which is followed by one or more premises supporting the
conclusion. The warrant is a critical component of Toulmin's argument model; it explains why the premises
support the claim. Despite its critical role in establishing the claim's veracity, it is frequently omitted or left
implicit, leaving readers to infer. We consider the problem of producing more diverse and high-quality
warrants in response to a claim and evidence. To begin, we employ BART [1] as a conditional sequence-to-sequence language model to guide the output generation process. On the ARCT dataset [2], we fine-tune
the BART model. Second, we propose the Multi-Agent Network for Warrant Generation as a model for
producing more diverse and high-quality warrants by combining Reinforcement Learning (RL) and
Generative Adversarial Networks (GAN) with the mechanism of mutual awareness of agents. In terms of
warrant generation, our model generates a greater variety of warrants than other baseline models. The
experimental results validate the effectiveness of our proposed hybrid model for generating warrants.
Evolving Swings (topics) from Social Streams using Probability Model (IJERA)
Evolving swings (topics) from social streams is receiving renewed interest, motivated by the growth of social media and social streams. Non-conventional approaches that include text, images, URLs and videos can be appropriate. The focus is on evolving topics via the social aspects of the networks and the mention links between users, which are generated intentionally or unintentionally through replies, mentions and retweets. A probability model of the mentioning behaviour is proposed, and the model detects the evolving topic from the anomalies measured. Several experiments show that the mention-anomaly-based approach detects evolving swings as early as text-anomaly-based approaches.
Neural Network Based Context Sensitive Sentiment Analysis (IJCATR)
Social media communication is evolving rapidly these days. Social networking sites have grown quickly in recent years, providing a platform for people all over the world to connect and share their interests. The conversations and posts available on social media are unstructured in nature, so sentiment analysis is a challenging task on this platform. Such analyses are mostly performed with classical machine learning techniques, which are less accurate than neural network methodologies. This paper presents sentiment classification using competitive-layer neural networks and classifies the polarity of a given text, i.e. whether the expressed opinion is positive, negative or neutral. It also determines the overall topic of the given text. Context-independent sentences and implicit meaning in the text are also considered in polarity classification.
The use of social media has grown significantly due to the evolution of Web 2.0 technologies. People can share ideas and comments and post about any event. Twitter is one of those social media sites; it carries very short messages created by registered users and has played an important part in many events through the messages those users share. This study evaluates the performance of the Naïve Bayes and J48 classification algorithms on Swahili tweets. Swahili is among the African languages that are growing fast and receiving wide attention in web usage through social networks, blogs, portals, etc. To the best of the researchers' knowledge, many studies have compared classification algorithms on other languages, but no similar study was found for Swahili. The data for this study was collected with NodeXL from the ten most popular Twitter accounts in Tanzania, identified by number of followers. The extracted data was pre-processed to remove noise, incomplete data, outliers, inconsistent data, symbols, etc. Tweets containing words not in Swahili were identified and removed, and the remainder was filtered by stripping URL links and Twitter user names. The pre-processed data was analysed in WEKA using the Naïve Bayes and J48 classification algorithms, which were then evaluated on accuracy, precision, recall and Receiver Operating Characteristic (ROC). Naïve Bayes was found to perform better on Swahili tweets than J48.
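A toy multinomial Naïve Bayes with add-one smoothing, sketched below, shows the kind of model WEKA trains in such a comparison. The helper names and the tiny two-class example (a few Swahili words) are illustrative and not taken from the study.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """Train multinomial Naive Bayes from (tokens, label) pairs."""
    class_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in samples:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Pick the label with the highest smoothed log-probability."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, count in class_counts.items():
        lp = math.log(count / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)  # add-one smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# "nzuri" = good, "mbaya" = bad, "sana" = very (illustrative training data)
model = train_nb([(["nzuri", "sana"], "pos"), (["mbaya", "sana"], "neg")])
predict_nb(model, ["nzuri"])  # "pos"
```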
A FILM SYNOPSIS GENRE CLASSIFIER BASED ON MAJORITY VOTE (kevig)
We propose an automatic classification system for movie genres based on different features of their textual synopses. Our system is first trained on thousands of movie synopses from open online databases, learning relationships between textual signatures and movie genres. It is then tested on other movie synopses, and its results are compared to the true genres obtained from the Wikipedia and Open Movie Database (OMDB) databases. The results show that our algorithm achieves a classification accuracy exceeding 75%.
A Cooperative Peer Clustering Scheme for Unstructured Peer-to-Peer Systems (ijp2p)
This paper proposes a peer clustering scheme for unstructured Peer-to-Peer (P2P) systems. The proposed
scheme consists of an identification of critical links, local reconfiguration of incident links, and a
retaliation rule. The simulation result indicates that the proposed scheme improves the performance of
previous schemes and that a peer taking a cooperative action will receive a higher profit than selfish peers.
Hate speech has been an ongoing problem on the Internet for many years. Social media, especially Facebook and Twitter, has given it a global stage where hate speech can spread far more rapidly. Every social media platform needs an effective hate speech detection system to remove offensive content in real time. There are various approaches to identifying hate speech: rule-based, machine-learning-based, deep-learning-based and hybrid. As this is a review paper, we survey the notable work of various authors who have studied hate speech identification using these approaches.
Era of Sociology News Rumors News Detection using Machine Learning (ijtsrd)
In this paper we perform political fact checking and fake news detection using various technologies such as Python libraries, Anaconda, and algorithms such as Naïve Bayes, and we present an analytical study of the language of news media. To find linguistic features of untrustworthy text, we compare the language of real news with that of satire, hoaxes, and propaganda. We also present a case study based on PolitiFact.com, using its factuality judgments on a 6-point scale to demonstrate the feasibility of automatic political fact checking. Experiments show that while media fact checking remains an open research issue, stylistic cues can help determine the veracity of text.
Chandni Jain | S. Vignesh, "Era of Sociology News Rumors News Detection using Machine Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23534.pdf
Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/23534/era-of-sociology-news-rumors-news-detection-using-machine-learning/chandni-jain
A RELIABLE ARTIFICIAL INTELLIGENCE MODEL FOR FALSE NEWS DETECTION MADE BY PUB... (caijjournal)
Quick access to information on social media networks, together with its exponential rise, has made it difficult to distinguish fake information from real information, and fast dissemination through sharing has increased its falsification exponentially. Avoiding the spread of fake information is also important for the credibility of social media networks. It is therefore an emerging research challenge to automatically check information for misstatement through its source, content, or publisher, and to prevent unauthenticated sources from spreading rumours. This paper demonstrates an artificial-intelligence-based approach to identifying false statements made by social network entities. Two variants of deep neural networks are applied to evaluate datasets and analyse them for the presence of fake news. The implementation produced up to 99% classification accuracy when the dataset was tested for binary (true or false) labelling over multiple epochs.
Multispectral Image Analysis Using Random Forest (ijsc)
Classical methods for classifying pixels in multispectral images include supervised classifiers such as the maximum-likelihood classifier, neural network classifiers, fuzzy neural networks, support vector machines, and decision trees. Recently, interest has grown in ensemble learning, a method that generates many classifiers and aggregates their results. Breiman proposed Random Forest in 2001 for classification and clustering. Random Forest grows many decision trees for classification; to classify a new object, the input vector is run through each decision tree in the forest, each tree gives a classification, and the forest chooses the classification with the most votes. Random Forest provides a robust algorithm for classifying large datasets, but its potential for analyzing multispectral satellite images has not been explored. To evaluate the performance of Random Forest, we classified multispectral images using various classifiers, namely the maximum-likelihood classifier, a neural network, a support vector machine (SVM), and Random Forest, and compared their results.
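The voting step the abstract describes can be sketched in a few lines. The stump "trees" and spectral-band thresholds below are invented for illustration and are not the classifiers trained in the paper.

```python
from collections import Counter

def forest_predict(trees, x):
    """Aggregate per-tree predictions by majority vote, as Random Forest does."""
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

# Hypothetical decision stumps mapping a pixel's band values to a class label.
trees = [
    lambda px: "water" if px["nir"] < 50 else "land",
    lambda px: "water" if px["blue"] > 120 else "land",
    lambda px: "water" if px["nir"] < 60 else "land",
]
label = forest_predict(trees, {"nir": 40, "blue": 100})  # two of three vote "water"
```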
High Accuracy Location Information Extraction From Social Network Texts Using...kevig
Terrorism has become a worldwide plague with severe consequences for the development of nations. Besides killing innocent people daily and preventing educational activities from taking place, terrorism is also hindering economic growth. Machine Learning (ML) and Natural Language Processing (NLP) can contribute to fighting terrorism by predicting in real-time future terrorist attacks if accurate data is available. This paper is part of a research project that uses text from social networks to extract necessary information to build an adequate dataset for terrorist attack prediction. We collected a set of 3000 social network texts about terrorism in Burkina Faso and used a subset to experiment with existing NLP solutions. The experiment reveals that existing solutions have poor accuracy for location recognition, which our solution resolves. We will extend the solution to extract dates and action information to achieve the project's goal.
High Accuracy Location Information Extraction From Social Network Texts Using...kevig
Terrorism has become a worldwide plague with severe consequences for the development of nations. Besides killing innocent people daily and preventing educational activities from taking place, terrorism is also hindering economic growth. Machine Learning (ML) and Natural Language Processing (NLP) can contribute to fighting terrorism by predicting in real-time future terrorist attacks if accurate data is available. This paper is part of a research project that uses text from social networks to extract necessary information to build an adequate dataset for terrorist attack prediction. We collected a set of 3000 social network texts about terrorism in Burkina Faso and used a subset to experiment with existing NLP solutions. The experiment reveals that existing solutions have poor accuracy for location recognition, which our solution resolves. We will extend the solution to extract dates and action information to achieve the project's goal.
On Semantics and Deep Learning for Event Detection in Crisis SituationsCOMRADES project
In this paper, we introduce Dual-CNN, a semantically-enhanced deep learning model to target the problem of event detection in crisis situations from
social media data. A layer of semantics is added to a traditional Convolutional Neural Network (CNN) model to capture the contextual information that is generally scarce in short, ill-formed social media messages. Our results show that
our methods are able to successfully identify the existence of events, and event types (hurricane, floods, etc.) accurately (> 79% F-measure), but the performance of the model significantly drops (61% F-measure) when identifying fine-grained event-related information (affected individuals, damaged infrastructures, etc.).
These results are competitive with more traditional Machine Learning models, such as SVM.
http://oro.open.ac.uk/49639/1/event_detection.pdf
A Hybrid Method of Long Short-Term Memory and AutoEncoder Architectures for S...AhmedAdilNafea
Sarcasm detection is considered one of the most challenging tasks in sentiment analysis and opinion mining applications in the social media. Sarcasm identification is therefore essential for a good public opinion decision. There are some studies on sarcasm detection that apply standard word2vec model and have shown great performance with word-level analysis. However, once a sequence of terms is being tackled, the performance drops. This is because averaging the embedding of each term in a sentence to get the general embedding would discard the important embedding of some terms. LSTM showed significant improvement in terms of document embedding. However, within the classification LSTM requires adding additional information in order to precisely classify the document into sarcasm or not. This study aims to propose two technique based on LSTM and Auto-Encoder for improving the sarcasm detection. A benchmark dataset has been used in the experiments along with several pre-processing operations that have been applied. These include stop word removal, tokenization and special character removal with LSTM which can be represented by configuring the document embedding and using Auto-Encoder the classifier that was trained on the proposed LSTM. Results showed that the proposed LSTM with Auto-Encoder outperformed the baseline by achieving 84% of f-measure for the dataset. The main reason behind the superiority is that the proposed auto encoder is processing the document embedding as input and attempt to output the same embedding vector. This will enable the architecture to learn the interesting embedding that have significant impact on sarcasm polarity.
With the advent of the Internet and social media, while hundreds of people have benefitted from the vast sources of information available, there has been an enormous increase in the rise of cyber-crimes, particularly targeted towards women. According to a 2019 report in the [4] Economics Times, India has witnessed a 457% rise in cybercrime in the five year span between 2011 and 2016. Most speculate that this is due to impact of social media such as Facebook, Instagram and Twitter on our daily lives. While these definitely help in creating a sound social network, creation of user accounts in these sites usually needs just an email-id. A real life person can create multiple fake IDs and hence impostors can easily be made. Unlike the real world scenario where multiple rules and regulations are imposed to identify oneself in a unique manner (for example while issuing one’s passport or driver’s license), in the virtual world of social media, admission does not require any such checks. In this paper, we study the different accounts of Instagram, in particular and try to assess an account as fake or real using Machine Learning techniques namely Logistic Regression and Random Forest Algorithm.
DETECTION OF FAKE ACCOUNTS IN INSTAGRAM USING MACHINE LEARNINGijcsit
With the advent of the Internet and social media, while hundreds of people have benefitted from the vast sources of information available, there has been an enormous increase in the rise of cyber-crimes, particularly targeted towards women. According to a 2019 report in the [4] Economics Times, India has witnessed a 457% rise in cybercrime in the five year span between 2011 and 2016. Most speculate that this is due to impact of social media such as Facebook, Instagram and Twitter on our daily lives. While these definitely help in creating a sound social network, creation of user accounts in these sites usually needs just an email-id. A real life person can create multiple fake IDs and hence impostors can easily be made. Unlike the real world scenario where multiple rules and regulations are imposed to identify oneself in a unique manner (for example while issuing one’s passport or driver’s license), in the virtual world of social media, admission does not require any such checks. In this paper, we study the different accounts of Instagram, in particular and try to assess an account as fake or real using Machine Learning techniques namely Logistic Regression and Random Forest Algorithm.
Fake accounts detection system based on bidirectional gated recurrent unit n...IJECEIAES
Online social networks have become the most widely used medium to interact with friends and family, share news and important events or publish daily activities. However, this growing popularity has made social networks a target for suspicious exploitation such as the spreading of misleading or malicious information, making them less reliable and less trustworthy. In this paper, a fake account detection system based on the bidirectional gated recurrent unit (BiGRU) model is proposed. The focus has been on the content of users’ tweets to classify twitter user profile as legitimate or fake. Tweets are gathered in a single file and are transformed into a vector space using the global vectors (GloVe) word embedding technique in order to preserve the semantic and syntax context. Compared with the baseline models such as long short-term memory (LSTM) and convolutional neural networks (CNN), the results are promising and confirm that using GloVe with BiGRU classifier outperforms with 99.44% for accuracy and 99.25% for precision. To prove the efficiency of our approach the results obtained with GloVe were compared to Word2vec under the same conditions. Results confirm that GloVe with BiGRU classifier performs the best results for detection of fake Twitter accounts using only tweets content feature.
Abstract: Detection of fake news based on deep learning techniques is a major issue used to mislead people. For
the experiments, several types of datasets, models, and methodologies have been used to detect fake news. Also,
most of the datasets contain text id, tweets id, and user-based id and user-based features. To get the proper results
and accuracy various models like CNN (Convolution neural network), DEEP CNN, and LSTM (Long short-term
memory) are used
Predicting Forced Population Displacement Using News ArticlesJaresJournal
The world has witnessed mass forced population displacement across the globe. Population displacement has various indications, with different social and policy consequences. Mitigation of the humanitarian crisis requires tracking and predicting the population movements to
allocate the necessary resources and inform the policymakers. The set of events that triggers population movements can be traced in the news articles. In this paper, we propose the Population
Displacement-Signal Extraction Framework (PD-SEF) to explore a large news corpus and extract
the signals of forced population displacement. PD-SEF measures and evaluates violence signals,
which is a critical factor of forced displacement from it. Following signal extraction, we propose a
displacement prediction model based on extracted violence scores. Experimental results indicate
the effectiveness of our framework in extracting high quality violence scores and building accurate
prediction models.
Similar to FAKE NEWS DETECTION WITH SEMANTIC FEATURES AND TEXT MINING (20)
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
International Journal on Natural Language Computing (IJNLC) Vol.8, No.3, June 2019
DOI: 10.5121/ijnlc.2019.8302
FAKE NEWS DETECTION WITH SEMANTIC
FEATURES AND TEXT MINING
Pranav Bharadwaj1 and Zongru Shao2
1South Brunswick High School, Monmouth Junction, NJ
2Spectronn, USA
ABSTRACT
Nearly 70% of people are concerned about the propagation of fake news. This paper aims to detect fake news in online articles using semantic features and various machine learning techniques. In this research, we compared recurrent neural networks against the naive Bayes and random forest classifiers using five groups of linguistic features. Evaluated on the real-or-fake dataset from kaggle.com, the best performing model achieved an accuracy of 95.66% using bigram features with the random forest classifier. The fact that bigrams outperform unigrams, trigrams, and quadgrams shows that word pairs, as opposed to single words or longer phrases, best indicate the authenticity of news.
KEYWORDS
Text Mining, Fake News, Machine Learning, Semantic Features, Natural Language Processing (NLP)
1. INTRODUCTION
Nearly 70% of the population is concerned about malicious use of fake news [3]. Fake news
detection is a problem that has been taken on by large social-networking companies such as
Facebook and Twitter to inhibit the propagation of misinformation across their online platforms.
Some fake news articles have targeted major political events such as the 2016 US Presidential
Election and Brexit [4]. However, the scope of fake news extends beyond globally significant
political events. Individuals falsely reported that a golden asteroid on target to hit Earth contains
$10 quadrillion worth of precious metals in an attempt to increase the value of Bitcoin [1]. With
fake news infiltrating multiple facets of public information, many are rightly concerned.
According to a Pew Research Center survey, fake news and misinformation have significantly affected 68% of Americans' confidence in government, 54% of Americans' confidence in each other, and 51% of Americans' confidence in their political leaders to get work done [5]. The same survey states that 79% of US adults believe measures should be taken to inhibit the propagation of misinformation [5]. Residents of the Macedonian town of Veles make a living by using Google AdSense to distribute fake news around the internet and by running politically manipulative Facebook pages and websites [12]. One of their Facebook pages garnered over 1.5 million followers [12]. Fake news has become an increasingly important problem because of the size and vulnerability of its readership and its widespread malicious influence. As these invalid sources of information have gained traction and established themselves as credible informants to many individuals, detecting this category of content at the source and preventing it from spreading has become increasingly crucial. Consequently, automated identification of fake news has been studied by Facebook and Twitter as well as other researchers [9].
In this paper, we present a comparison between recurrent neural networks, naive Bayes, and random forest algorithms using various linguistic features for fake news detection. We use the real-or-fake dataset from kaggle.com to evaluate these models. The remainder of this paper is structured as follows. Section 2 details related works and how they approached detecting fake news, Section 3 describes the semantic features and machine learning algorithms in our experiment, Section 4 presents the evaluation results, in which random forest with bigram features achieved the best accuracy of 95.66%, and Section 5 presents the conclusions and future work.
2. RELATED WORKS
Several solutions have been proposed for this problem. Prior studies employed logistic regression and "boolean crowd-sourcing algorithms" to detect fake news on social networking websites [13]. However, this research assumed that agents who post misinformation can be detected by users who have had prior contact with the content [13]. Another study used convolutional neural networks (CNNs) with a long short-term memory (LSTM) layer to detect fake news from the text content and additional metadata [14]. Shu et al. studied linguistic features such as word count, word frequency, character count, and similarity and clarity scores for videos and images while surveying rumor classification, truth discovery, click-bait detection, and spammer and bot detection [11]. Rubin et al. proposed to classify fake news as one of three types: (a) serious fabrications, (b) large-scale hoaxes, and (c) humorous fakes, and discussed the challenges that each variant presents to its detection [10].
However, none of the prior studies utilized recurrent neural networks (RNNs), naive Bayes, or random forest.
3. MATERIALS AND METHODS
In this section, we describe the dataset, the text preprocessing, the semantic features including term frequency (TF), term frequency-inverse document frequency (TFIDF), bigrams, trigrams, quadgrams, and vectorized word representations, and the machine learning algorithms: the naive Bayes classifier, random forest, and recurrent neural networks (RNNs). The process of building each model is detailed in Figure 1.
Figure 1
3.1 Dataset
We used the real-or-fake news dataset from kaggle.com in our experiments to evaluate the semantic features. It contains 6256 articles including their titles. 50% of the articles are labeled as FAKE and the remainder as REAL; detecting the FAKE articles is therefore a binary classification problem. We split the dataset into 80% for training and 20% for testing.
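The 80/20 stratified split can be sketched with scikit-learn as follows; a small synthetic stand-in replaces the actual Kaggle CSV here, and the balanced FAKE/REAL labels mirror the dataset's description.

```python
# Sketch of the 80/20 train/test split described above (scikit-learn).
# The synthetic articles and labels are placeholders for the Kaggle data.
from sklearn.model_selection import train_test_split

articles = [f"article text {i}" for i in range(100)]
labels = ["FAKE" if i % 2 == 0 else "REAL" for i in range(100)]  # balanced 50/50

X_train, X_test, y_train, y_test = train_test_split(
    articles, labels, test_size=0.20, stratify=labels, random_state=42
)
print(len(X_train), len(X_test))  # 80 20
```

Stratifying on the labels keeps the FAKE/REAL ratio identical in both splits.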
3.2 Text Pre-Processing
We pre-process the raw text to extract semantic features for machine learning. We use n-grams as semantic features. We first tokenize the title and body of each article. Then, each token is lowercased, so proper nouns lose their capitalization. Next, for unigrams, we remove stopwords and numbers since they carry less meaning in the context. As a result, the remaining tokens are semantic representations from a linguistic perspective. Stopwords and numbers are retained for n-grams other than unigrams. Then, we extract TF and TFIDF numerical features from these semantic representations.
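A minimal preprocessing sketch of these steps follows; the tiny stopword list is illustrative only (a library list such as NLTK's would be used in practice), and the exact tokenizer in the paper is not specified.

```python
import re

# Illustrative stopword list; a full list (e.g. NLTK's) would be used in practice.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "in", "to"}

def preprocess(text, keep_stopwords_and_numbers=False):
    """Tokenize and lowercase; for unigrams, also drop stopwords and numbers."""
    tokens = re.findall(r"[A-Za-z0-9']+", text.lower())
    if keep_stopwords_and_numbers:   # n-grams with n > 1 retain everything
        return tokens
    return [t for t in tokens if t not in STOPWORDS and not t.isdigit()]

print(preprocess("The Senate is passing 2 bills"))
# ['senate', 'passing', 'bills']
```

Note that lowercasing happens before filtering, so "The" and "the" are treated identically, as described above.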
3.3 Linguistic Features
3.3.1 TF and TFIDF
Note that a text subject (e.g., an article) is called a document in natural language processing. TF computes how frequently a term appears in a document. Given a document d with a set of terms T = {t1, t2, ..., tM}, where the document length is N (the total occurrence of all terms) and term ti appears xi times, the TF of ti is

TF(i) = xi / N

As a result, [TF(1), TF(2), ..., TF(i), ..., TF(M)], i ∈ [1, M] is a semantic representation for the document. Inverse document frequency (IDF) denotes the popularity of a term across documents. Given a set of documents D = {d1, d2, ..., dK} as the subjects of interest, with TF(i) calculated for each document, and C(i) denoting the number of documents in which xi ≠ 0, then

IDF(i) = K / C(i)

Note that each term appears at least once in D, so C(i) ≥ 1. Meanwhile, TF and IDF are logarithmically scaled:

TF(i, j) = log(1 + xi,j),  IDF(i) = log(K / C(i))

where i ∈ [1, M] and j ∈ [1, K]. Then, TFIDF is the product of TF and IDF:

TFIDF(i, j) = TF(i, j) × IDF(i)
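These definitions can be computed directly; the sketch below assumes the standard log-scaled variants (the paper's exact scaling is not fully shown), using plain Python on pre-tokenized documents.

```python
import math
from collections import Counter

def tf(document_tokens):
    """Raw term frequency TF(i) = x_i / N for one document."""
    counts = Counter(document_tokens)
    n = len(document_tokens)
    return {t: c / n for t, c in counts.items()}

def tfidf(documents):
    """Log-scaled TFIDF(i, j) = log(1 + x_ij) * log(K / C(i))."""
    k = len(documents)
    df = Counter()                      # C(i): number of docs containing term i
    for doc in documents:
        df.update(set(doc))
    scores = []
    for doc in documents:
        counts = Counter(doc)
        scores.append({t: math.log(1 + x) * math.log(k / df[t])
                       for t, x in counts.items()})
    return scores

docs = [["fake", "news", "spreads"], ["real", "news"], ["fake", "claims"]]
scores = tfidf(docs)
# "spreads" occurs in only one document, so it scores higher than "news".
print(scores[0]["spreads"] > scores[0]["news"])  # True
```

A term shared by every document gets IDF = log(K/K) = 0, so its TFIDF vanishes, which is the intended down-weighting of uninformative terms.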
3.3.2 N-grams
N-grams are contiguous chunks of n items from the tokenized sequence of a document. In particular, unigrams are single terms (n = 1), bigrams are pairs of adjacent terms (n = 2), and trigrams and quadgrams are three and four contiguous terms, respectively. For example, the sentence "Your destination is 3 miles away" is tokenized into "your", "destination", "is", "3", "miles", and "away", where each term is a unigram. The bigrams are the two-term strings "your destination", "destination is", "is 3", "3 miles", and "miles away". The trigrams are the three-term strings "your destination is", "destination is 3", "is 3 miles", and "3 miles away". The quadgrams are the four-term strings "your destination is 3", "destination is 3 miles", and "is 3 miles away". In our experiment, we use unigrams, bigrams, trigrams, and quadgrams to calculate the corresponding TF and TFIDF features.
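A one-line sliding window reproduces the example above:

```python
def ngrams(tokens, n):
    """Return the list of n-grams (space-joined strings) of a token sequence."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["your", "destination", "is", "3", "miles", "away"]
print(ngrams(tokens, 2))
# ['your destination', 'destination is', 'is 3', '3 miles', 'miles away']
```

A sequence of L tokens yields L - n + 1 n-grams, which is why the six-token sentence has five bigrams, four trigrams, and three quadgrams.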
4. International Journal on Natural Language Computing (IJNLC) Vol.8, No.3, June 2019
20
3.3.3 Naive Bayes Classifier
The naive Bayes classifier is based on Bayes' theorem:

P(A | B) = P(B | A) × P(A) / P(B)

where A and B are two events. The naive Bayes classifier takes each semantic feature as a condition and assigns each sample the class with the highest posterior probability. Note that it assumes that the semantic features are independent [7].
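A minimal sketch of this classifier on word-count features follows, using scikit-learn; the four toy texts and their labels are invented for illustration and stand in for the Kaggle articles.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; the real experiment uses the Kaggle dataset.
texts = ["shocking secret they hide", "official report released today",
         "you won't believe this miracle", "government publishes statistics"]
labels = ["FAKE", "REAL", "FAKE", "REAL"]

# CountVectorizer builds the unigram count features; MultinomialNB applies
# Bayes' theorem under the feature-independence assumption.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["shocking miracle secret"]))  # ['FAKE']
```

Because every word in the query appears only in FAKE-labeled texts, the FAKE posterior dominates despite Laplace smoothing.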
3.3.4 Random Forest Classifier
A decision tree is a tree in which branches split on feature conditions and each leaf node represents a class. The random forest classifier is an ensemble method that aggregates the votes of a multitude of decision trees and thus improves accuracy. We tune parameters such as max_depth, min_samples_split, n_estimators, and random_state to achieve the best performance, where max_depth is the maximum depth of a decision tree, min_samples_split is the minimum number of samples required to split an internal node, and n_estimators is the number of decision trees in the forest [2].
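The named hyperparameters correspond to scikit-learn's API; a sketch follows, with a synthetic feature matrix standing in for the bigram TF features (the paper's tuned values are not reported, so the numbers here are illustrative).

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for the bigram feature matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Illustrative hyperparameter values; the paper's tuned values are unreported.
clf = RandomForestClassifier(n_estimators=100, max_depth=10,
                             min_samples_split=2, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy
```

Fixing random_state makes the bootstrap sampling and feature subsampling reproducible across runs.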
3.4 GLOVE
GloVe is an unsupervised learning algorithm that maps words to a vector space in which semantic closeness between two words corresponds to closeness between their vectors [8]. The generated vector representations are called word embeddings. We use word embeddings as semantic features in addition to n-grams because they represent the semantic distances between words in context.
3.5 RNN
Recurrent neural networks (RNNs) utilize "memory" to process sequential inputs and are widely used in text generation and natural language processing [6]. Long short-term memory (LSTM) is an RNN architecture that uses "gates" to selectively "forget" parts of the input. Our model uses 100 LSTM cells in one layer and a softmax activation function. We trained the model for 22 epochs with a batch size of 64.
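The gate mechanics can be illustrated with a single LSTM time step in NumPy; this is a from-scratch sketch of the standard LSTM equations, not the authors' training setup (whose framework is unspecified), and the dimensions below merely echo the 100-cell layer.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: forget, input, and output gates control the cell state."""
    z = W @ x + U @ h_prev + b                    # stacked pre-activations
    hidden = h_prev.shape[0]
    f = 1 / (1 + np.exp(-z[:hidden]))             # forget gate
    i = 1 / (1 + np.exp(-z[hidden:2*hidden]))     # input gate
    o = 1 / (1 + np.exp(-z[2*hidden:3*hidden]))   # output gate
    g = np.tanh(z[3*hidden:])                     # candidate cell state
    c = f * c_prev + i * g                        # "forget" old, admit new
    h = o * np.tanh(c)                            # gated hidden output
    return h, c

rng = np.random.default_rng(0)
d, hidden = 50, 100          # e.g. embedding dimension 50, 100 LSTM cells
W = rng.standard_normal((4 * hidden, d)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
h, c = lstm_step(rng.standard_normal(d), np.zeros(hidden), np.zeros(hidden), W, U, b)
print(h.shape)  # (100,)
```

The forget gate f scales the previous cell state toward zero when its sigmoid saturates low, which is the "forgetting" behavior described above.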
4. EXPERIMENTAL RESULTS
This section presents the experimental results using naive Bayes, random forest, and RNN with six groups of features: TF, TFIDF, the frequencies of bigrams, trigrams, and quadgrams, and GloVe word embeddings.

               TF       TFIDF    Bigram   Trigram  Quadgram  GloVe
Naive Bayes    88.08%   89.90%   90.77%   90.06%   89.74%    N/A
Random Forest  89.03%   89.34%   95.66%   94.71%   89.60%    N/A
RNN            N/A      N/A      N/A      N/A      N/A       92.70%
Table 1 shows the accuracy of each method. Observe that random forest achieves better accuracy than the naive Bayes classifier with TF, bigram, trigram, and quadgram features. Meanwhile, bigrams outperform TF, TFIDF, trigrams, and quadgrams. The RNN model with GloVe features outperforms TF, TFIDF, and quadgrams but not bigrams and trigrams.
Note that unigrams represent words; bigrams represent words and their one-to-one connections; and trigrams carry level-two connections between words, if we consider a one-to-one connection between two words as level one. As a result, bigrams carry more information than unigrams, trigrams more than bigrams, and quadgrams more than trigrams. Since more information for training is supposed to yield better accuracy, we would expect quadgrams to achieve higher accuracy than trigrams, trigrams higher than bigrams, and bigrams higher than unigrams. This expectation contradicts the data in Table 1, as quadgrams do not achieve the highest accuracy. The reason is that as the information increases, the training process picks up overly specific details and the model becomes over-fitted: it predicts the training set so well that its ability to classify examples outside the training set is impaired. However, bigrams do outperform unigrams because they carry more information. For the same reason, TF and TFIDF produce similar accuracies because both are derived from unigrams. Meanwhile, GloVe with RNN outperforms unigrams and quadgrams but yields lower accuracy than bigrams and trigrams. This is because the model uses a single layer of LSTM cells and word embeddings represent unigrams. Therefore, the RNN model outperforms random forest if we disregard the difference between word embeddings and unigrams.
These models have several practical implications. Readers can use them to filter the content they consume and be wary of articles that may contain misinformation. Agencies, organizations, corporations, campaigns, and other formal groups can use them to scan news for false claims made about them or their actions. Additionally, publishing houses and news agencies can employ these methods to fact-check pieces their writers compose and avoid producing fake news under their name.
5. CONCLUSION
In this paper, we applied semantic features including unigram TF and TFIDF, bigrams, trigrams, quadgrams, and GloVe word embeddings along with naive Bayes, random forest, and RNN classifiers to detect fake news. The performance is promising, as bigram features with the random forest classifier achieved an accuracy of 95.66%. This implies that semantic features are useful for fake news detection. As a next step, semantic features may be combined with other linguistic cues and metadata to improve detection performance.
REFERENCES
[1] Babayan, Davit. "Bitcoin Bulls Spreading Fake News About Golden Asteroid: Peter Schiff." NewsBTC, 29 June 2019, www.newsbtc.com/2019/06/29/bitcoin-bulls-spreading-fake-news-about-golden-asteroid-peter-schiff/.
[2] Breiman, L. (2001). Random forests. Machine Learning 45(1), 5–32.
[3] Handley, Lucy. "Nearly 70 Percent of People Are Worried about Fake News as a 'Weapon,' Survey Says." CNBC, 22 Jan. 2018, www.cnbc.com/2018/01/22/nearly-70-percent-of-people-are-worried-about-fake-news-as-a-weapon-survey-says.html.
[4] Kowalewski, J. (2017). The impact of fake news: Politics. Lexology.
[5] Mitchell, Amy, et al. "Many Americans Say Made-Up News Is a Critical Problem That Needs To Be Fixed." Pew Research Center's Journalism Project, 28 June 2019, www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/.
[6] Mikolov, T., M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur (2010). Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
[7] Patel, S. (2017). Chapter 1: Supervised learning and naive Bayes classification – part 1 (theory). Medium.
[8] Pennington, J., R. Socher, and C. Manning (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
[9] Romano, A. (2018). Mark Zuckerberg lays out Facebook's 3-pronged approach to fake news.
[10] Rubin, V. L., Y. Chen, and N. J. Conroy (2015). Deception detection for news: three types of fakes. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, p. 83. American Society for Information Science.
[11] Shu, K., A. Sliva, S. Wang, J. Tang, and H. Liu (2017). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter 19(1), 22–36.
[12] Soares, Isa, and Florence Davey-Atlee. "The Fake News Machine: Inside a Town Gearing up for 2020." CNNMoney, Cable News Network, money.cnn.com/interactive/media/the-macedonia-story/.
[13] Tacchini, E., G. Ballarin, M. L. Della Vedova, S. Moret, and L. de Alfaro (2017). Some like it hoax: Automated fake news detection in social networks. arXiv preprint arXiv:1704.07506.
[14] Wang, W. Y. (2017). "Liar, liar pants on fire": A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648.