This document discusses using Cobweb clustering and Gradient Boosting techniques to detect spam on Twitter. Cobweb clustering creates a classification tree to predict attributes of new objects by summarizing the attribute distributions of existing nodes. Gradient Boosting is an ensemble method that uses multiple weak learners (decision trees) to create a stronger predictive model. The paper aims to combine these techniques to create an enhanced spam detection system. It also reviews several existing approaches for Twitter spam detection using techniques like Hidden Markov Models, Random Forests, and asynchronous link-based algorithms.
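As a rough illustration of the gradient-boosting stage, the sketch below trains a boosted ensemble on a few hypothetical per-account features (follower count, fraction of tweets containing URLs, posting rate). The feature set and data are invented for this example, not taken from the paper:

```python
# Illustrative sketch: a gradient-boosted spam classifier over simple
# per-account features. Features and data are hypothetical.
from sklearn.ensemble import GradientBoostingClassifier

# toy training data: [followers, fraction_of_tweets_with_urls, tweets_per_day]
X = [
    [1200, 0.05, 3],   # typical user
    [900, 0.10, 5],    # typical user
    [15, 0.95, 200],   # spam-like
    [8, 0.90, 150],    # spam-like
    [2000, 0.02, 1],   # typical user
    [3, 0.99, 300],    # spam-like
]
y = [0, 0, 1, 1, 0, 1]  # 1 = spam

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

print(clf.predict([[5, 0.97, 250]]))   # spam-like profile
print(clf.predict([[1500, 0.03, 2]]))  # typical profile
```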
IRJET - Fake News Detection using Machine Learning (IRJET Journal)
This document presents a machine learning approach for detecting fake news. It discusses existing fake news detection methods and their limitations. The proposed system uses natural language processing and machine learning techniques like TF-IDF vectorization, naive Bayes classification and XGBoost to build a model that classifies news articles as real or fake. It extracts linguistic features from news content and social context to train models that can identify fake news with greater accuracy than existing approaches. The system is intended to help reduce the spread of misinformation on social media platforms.
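The TF-IDF vectorization plus naive Bayes stage described above can be sketched as follows; the headlines and labels are invented for illustration:

```python
# Minimal sketch of a TF-IDF + naive Bayes fake-news classifier.
# Texts and labels are toy examples, not the paper's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "government announces new budget for public schools",
    "senate passes infrastructure bill after long debate",
    "scientists confirm miracle cure doctors don't want you to know",
    "shocking secret celebrity scandal the media is hiding",
]
labels = ["real", "real", "fake", "fake"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["miracle cure scandal the media is hiding"]))
```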
Graph-based Analysis and Opinion Mining in Social Network (Khan Mostafa)
This is the final report for a Networks & Data Mining Techniques project focusing on mining social networks to estimate public opinion about entities and associated keywords. The project mines Twitter for recent feeds and analyzes them to estimate a sentiment score, the discussed entity, and describing keywords for each tweet. This data is then exploited to elicit the overall sentiment associated with each entity. The extracted entities and keywords are also used to form an entity-keyword bigraph, which is further used to detect entity communities and the keywords found within those communities. The presented implementation runs in linear time.
Recently, fake news has been causing many problems for our society, and many researchers have been working on identifying it. Most fake news detection systems rely on the linguistic features of the news, but they have difficulty sensing highly ambiguous fake news, which can be detected only after identifying its meaning and the latest related information. In this paper, to resolve this problem, we present a new Korean fake news detection system using a fact DB, which is built and updated through humans' direct judgement after collecting obvious facts. Our system receives a proposition and searches for semantically related articles in the fact DB, verifying whether the given proposition is true by comparing it with the related articles. To achieve this, we utilize a deep learning model, Bidirectional Multi-Perspective Matching for Natural Language Sentences (BiMPM), which has demonstrated good performance on the sentence matching task. However, BiMPM has some limitations: the longer the input sentence, the lower its performance, and it has difficulty making an accurate judgement when an unlearned word or relation between words appears. To overcome these limitations, we propose a new matching technique that exploits article abstraction as well as an entity matching set in addition to BiMPM. In our experiments, we show that our system improves the overall performance of fake news detection. Prasanth. K | Praveen. N | Vijay. S | Auxilia Osvin Nancy. V, "Fake News Detection using Machine Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30014.pdf
Paper Url : https://www.ijtsrd.com/engineering/information-technology/30014/fake-news-detection-using-machine-learning/prasanth-k
This document summarizes research on detecting fake news using text analysis techniques. It discusses how social media consumption of news has increased and the challenges of identifying trustworthy sources. Various types of fake news are described based on visual/text content or the targeted audience. Methods for detection include clustering similar news reports and using predictive models to analyze linguistic features like punctuation, semantic levels, and readability. The proposed approach uses text summarization, web crawling to find related articles, latent semantic analysis to compare articles, and fuzzy logic to determine the authenticity score of a target news article. The goal is to develop a system to help users identify fake news on social media platforms.
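The latent-semantic-analysis comparison step might look like the sketch below: project the target article and candidate related articles into a low-rank LSA space and compare them by cosine similarity. All article texts here are invented:

```python
# Sketch of LSA-based article comparison: TF-IDF, truncated SVD,
# then cosine similarity between the target and each related article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

target = "city council approves funding for new light rail line"
related = [
    "council votes to fund light rail expansion in the city",
    "local bakery wins regional pastry competition",
    "transit authority outlines light rail construction schedule",
]

docs = [target] + related
tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

sims = cosine_similarity(lsa[:1], lsa[1:])[0]
for text, s in zip(related, sims):
    print(f"{s:+.2f}  {text}")
```

The on-topic articles should score well above the unrelated one, which is the signal a fuzzy authenticity score could then aggregate.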
Event detection and summarization based on social networks and semantic query... (ijnlc)
Events can be characterized by a set of descriptive, collocated keywords extracted from documents. Intuitively, documents describing the same event will contain similar sets of keywords, and the keyword graph for a document collection will contain clusters corresponding to individual events. Helping users understand events is an acute problem nowadays, as users struggle to keep up with the tremendous amount of information published every day on the Internet. Detecting events from online web resources is a challenging task that is receiving growing attention. An important data source for event detection is the Web search log, because the information it contains reflects users' activities and interest in various real-world events. Several major issues play a role in event detection from web search logs, including the effectiveness and efficiency of detected events. We focus on modeling the content of events through their semantic relations with other events and on generating structured summarizations. Event mining is a useful way to understand computer system behaviors. The focus of recent work on event mining has shifted from discovering frequent patterns to event summarization, which provides a comprehensible explanation of an event sequence based on certain aspects.
Using content and interactions for discovering communities in... (moresmile)
1. The document proposes methods for discovering communities in social networks using content and interactions by modeling communities based on discussed topics and social connections between users. This allows discovering both user interests and popular topics within each community.
2. Bayesian models are used to extract latent communities from the network, assuming community relationships depend on user interests in topics and their links. Different models are proposed to handle different network structures like broadcast vs conversation networks.
3. The models aim to utilize both content and link information to discover communities in incomplete social networks with missing link information. A distance metric is learned using observed links and used for hierarchical clustering.
This paper introduces cooperative caching policies for reducing the electronic content provisioning cost in Social Wireless Networks (SWNETs). SWNETs are formed by mobile devices, such as laptops and modern cell phones, sharing common electronic content and data while gathering in public places like college campuses and malls. Electronic object caching in such SWNETs is shown to be able to minimize the content provisioning cost, which mainly depends on the service and pricing dependencies between various stakeholders, including the content provider (CP), the network service provider, and the end consumer (EC). The paper introduces a practical network service and pricing model, which is used to create two object caching strategies for minimizing the provisioning cost in networks with homogeneous and heterogeneous object demands. It develops analytical and simulation models for analyzing the proposed caching strategies in the presence of selfish users who deviate from network-wide cost-optimal policies.
This document provides instructions for an assignment on data and file structures for the MCS-021 course. It outlines four questions to answer for the assignment, which is worth 100 marks total and 25% of the course grade. Students must answer all four questions, with each question worth 20 marks. The assignment should be submitted by October 15th, 2013 for the July 2013 session or April 15th, 2014 for the January 2014 session. Question 1 asks students to write an algorithm for implementing doubly linked lists.
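A doubly linked list of the kind Question 1 asks for can be sketched like this: each node keeps both a next and a prev pointer, so traversal works in either direction and deletion needs no search for the predecessor:

```python
# Minimal doubly linked list sketch: O(1) append at the tail and
# O(1) deletion of a known node by rewiring its neighbours.
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        node = Node(value)
        if self.tail is None:          # empty list
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node

    def delete(self, node):
        # unlink node in O(1); fix head/tail at the boundaries
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out

dll = DoublyLinkedList()
for v in [1, 2, 3]:
    dll.append(v)
dll.delete(dll.head.next)   # remove the middle node
print(dll.to_list())        # [1, 3]
```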
The document discusses algorithms for tree data structures. It describes AVL trees as self-balancing binary search trees where the heights of subtrees can differ by at most one. It also describes red-black trees, noting they are similar to AVL trees but use color attributes (red or black) to balance the tree. The key differences are that AVL trees are generally faster for lookup-intensive applications as they are more rigidly balanced, while red-black tree insertions and deletions may require fewer rotations than AVL trees.
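The AVL invariant described above (subtree heights differ by at most one) can be illustrated with a minimal insertion routine that rebalances with rotations; this is a generic textbook sketch, not code from the document:

```python
# Minimal AVL insertion with the four standard rotation cases.
class AVLNode:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def balance(n):
    return height(n.left) - height(n.right)

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(node, key):
    if node is None:
        return AVLNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    b = balance(node)
    if b > 1 and key < node.left.key:       # left-left
        return rotate_right(node)
    if b < -1 and key > node.right.key:     # right-right
        return rotate_left(node)
    if b > 1:                               # left-right
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                              # right-left
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

root = None
for k in range(1, 8):      # sorted input: worst case for a plain BST
    root = insert(root, k)
print(root.key, root.height)   # a plain BST would have height 7 here
```

Inserting 1..7 in order yields a perfectly balanced tree of height 3, which is the behavior the "rigid balance" remark refers to.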
Study, analysis and formulation of a new method for integrity protection of d... (ijsrd.com)
This document discusses a text-based fuzzy clustering algorithm to filter spam emails. It begins with an introduction discussing how most classification approaches are for structured data but large amounts of unstructured data are transmitted online. It then discusses spam emails being a major problem and filtering being an important approach. The paper aims to use a fuzzy clustering approach called Fuzzy C-Means to classify emails. It describes the training and testing modules, which extract features from emails to create vector space models and then applies the fuzzy clustering algorithm to determine if emails are spam or not spam. Evaluation results show the precision and accuracy of the approach on different datasets, with the author concluding the vector space model with fuzzy C-Means works well for both small and large datasets.
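A compact fuzzy C-means implementation (numpy only) conveys the clustering step; the 2-D points below stand in for the vector-space models built from email features and are invented for illustration:

```python
# Fuzzy C-means sketch: alternate between computing weighted cluster
# centers and updating soft membership degrees until convergence.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        W = U ** m                              # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                   # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# two well-separated blobs standing in for "spam" and "ham" vectors
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.0],
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)
print(labels)
```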
WARRANTS GENERATIONS USING A LANGUAGE MODEL AND A MULTI-AGENT SYSTEM (ijnlc)
Each argument begins with a conclusion, which is followed by one or more premises supporting it. The warrant is a critical component of Toulmin's argument model; it explains why the premises support the claim. Despite its critical role in establishing the claim's veracity, it is frequently omitted or left implicit, leaving readers to infer it. We consider the problem of producing more diverse and high-quality warrants in response to a claim and evidence. First, we employ BART [1] as a conditional sequence-to-sequence language model to guide the output generation process, fine-tuning the BART model on the ARCT dataset [2]. Second, we propose the Multi-Agent Network for Warrant Generation, a model for producing more diverse and high-quality warrants by combining Reinforcement Learning (RL) and Generative Adversarial Networks (GANs) with a mechanism of mutual awareness among agents. Our model generates a greater variety of warrants than other baseline models, and the experimental results validate the effectiveness of the proposed hybrid model for generating warrants.
Link Prediction: Concepts and Applications (Kyunghoon Kim)
The document discusses link prediction in social networks. It begins with an introduction to social networks and link prediction. It then covers the framework of link prediction, including common methods and applications. As an example, it discusses using link prediction to analyze terrorist networks. Finally, it discusses performing link prediction using Python tools like NumPy, Pandas, and NetworkX.
Presentation for tutorial session 'Measuring scholarly impact: Methods and practice' at ISSI2015
Explains how to use linkpred: https://github.com/rafguns/linkpred
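A toy link-prediction run in NetworkX (one of the Python tools the slides mention) might look like this; the friendship graph is invented:

```python
# Score currently-unconnected node pairs by neighbourhood overlap
# using NetworkX's built-in Jaccard-coefficient predictor.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ann", "bob"), ("ann", "cal"), ("bob", "cal"),
    ("bob", "dan"), ("cal", "dan"), ("dan", "eve"),
])

preds = sorted(nx.jaccard_coefficient(G),
               key=lambda t: t[2], reverse=True)
for u, v, score in preds:
    print(f"{u}-{v}: {score:.2f}")
```

The highest-scoring non-edge (ann-dan, sharing two of three neighbours) is the pair the method predicts is most likely to form next.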
FAKE NEWS DETECTION WITH SEMANTIC FEATURES AND TEXT MINING (ijnlc)
Nearly 70% of people are concerned about the propagation of fake news. This paper aims to detect fake news in online articles through the use of semantic features and various machine learning techniques. In this research, we investigated recurrent neural networks versus naive Bayes and random forest classifiers, using five groups of linguistic features. Evaluated on a real-or-fake dataset from kaggle.com, the best-performing model achieved an accuracy of 95.66% using bigram features with the random forest classifier. The fact that bigrams outperform unigrams, trigrams, and quadgrams shows that word pairs, as opposed to single words or longer phrases, best indicate the authenticity of news.
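The best-performing configuration reported above (bigram features with a random forest) can be sketched as follows, on invented toy headlines rather than the Kaggle dataset:

```python
# Bigram bag-of-words features feeding a random forest classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "federal reserve raises interest rates quarter point",
    "city opens new public library branch downtown",
    "aliens secretly control world banks insiders reveal",
    "doctors hate this one weird trick for instant cures",
]
labels = [1, 1, 0, 0]   # 1 = real, 0 = fake

model = make_pipeline(
    CountVectorizer(ngram_range=(2, 2)),   # bigrams only
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["insiders reveal this one weird trick"]))
```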
Sub-Graph Finding Information over Nebula Networks (ijceronline)
Social and information networks have been studied extensively over the years. This paper studies a new query for subgraph search over heterogeneous networks: given an uncertain network of N objects, discovering the top-k subgraphs of entities with rare and surprising associations. Evaluating such matching subgraph queries efficiently involves computing all matching subgraphs that satisfy the query and ranking the results based on the rarity and interestingness of the associations among the entities in the subgraphs. In evaluating top-k selection queries, we compute an information nebula using a global structural-context similarity; our similarity measure is independent of connection subgraphs. Previous work on the matching problem can be harnessed for a naive match-then-rank approach, but on large graphs the subgraphs may have an enormous number of matches. In this paper, we identify several important properties of top-k selection queries and propose novel top-k mechanisms that exploit indexes for answering interesting subgraph queries efficiently.
Survey on Location Based Recommendation System Using POI (IRJET Journal)
This document summarizes a survey on location-based recommendation systems using points of interest (POIs). It discusses how data mining techniques can be used to analyze user check-in data from location-based social networks to discover patterns and interests in order to provide personalized POI recommendations to users. The system architecture involves collecting check-in data, performing data mining, building user profiles, and generating recommendations. It also reviews several related works on topic modeling, matrix factorization methods, and exploiting sequential check-in data and geographical influences to provide successive POI recommendations. The goal is to develop improved recommendation methods that can recommend new locations for users to explore based on their interests and behaviors.
The document describes a system for automatically classifying bug reports from cloud infrastructure software using natural language processing techniques. It preprocesses bug descriptions to extract keywords, encodes them as vectors, performs hierarchical clustering to group similar bugs, and assigns classifications to new unlabeled bugs based on the cluster labels. Preliminary results show the system can accurately predict categories and specific classes for bug reports, especially with a large dataset. Future work aims to improve performance and apply the system to analyze bugs in other cloud infrastructures.
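The keyword-vector and hierarchical-clustering steps might be sketched as below; the bug descriptions are invented, and the real system's preprocessing is certainly richer:

```python
# TF-IDF keyword vectors from bug descriptions, then agglomerative
# (hierarchical) clustering into two groups.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

bugs = [
    "instance fails to boot after image upload timeout",
    "vm boot hangs when image download times out",
    "volume attach fails with iscsi connection error",
    "cinder volume detach raises iscsi target error",
]

X = TfidfVectorizer(stop_words="english").fit_transform(bugs).toarray()
clustering = AgglomerativeClustering(n_clusters=2).fit(X)
print(clustering.labels_)
```

A new unlabeled bug would then be assigned the label of the cluster whose members it is closest to, as the summary describes.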
IRJET - Suicidal Text Detection using Machine Learning (IRJET Journal)
This document presents research on detecting suicidal text using machine learning techniques. The researchers collected Twitter posts and used data preprocessing, word embeddings and a Convolutional Neural Network model to classify tweets as suicidal or non-suicidal. They found that the CNN model achieved the best performance compared to other classifiers. The system could help identify at-risk individuals by analyzing their social media posts and inform organizations to provide support. Future work involves using location data from tweets to better target help to those in need.
Top k query processing and malicious node identification based on node groupi... (redpel dot com)
Top-k query processing and malicious node identification based on node grouping in MANETs.
This document summarizes a research paper on analyzing sentiments from Twitter data using data mining techniques. The paper presents an approach for analyzing user sentiments using data mining classifiers and compares the performance of single classifiers versus an ensemble of classifiers for sentiment analysis. Experimental results show that the k-nearest neighbor classifier achieved very high predictive accuracy, and single classifiers outperformed the ensemble approach.
Generating synthetic online social network graph data and topologies (Graph-TA)
This document describes a 3-step approach to generating synthetic social network data that respects user privacy:
1. Topology generation uses the R-MAT method to create a graph with power-law degree distributions and community structure. Communities are identified using the Louvain method, and seed nodes are selected as the central nodes of each community.
2. Data attributes like age, gender, interests are defined based on real statistics. Attribute values and their proportions are specified in a table.
3. Data is populated starting from seed nodes using propagation rules. Nearby nodes are more likely to get similar attribute values to their seed. Challenges include disproportionate seed influence and ensuring diversity while meeting proportions.
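Step 3 can be illustrated with a toy propagation pass: attribute values spread outward from seed nodes, so nodes near a seed tend to share its value. The graph, seeds, and attribute values below are invented:

```python
# Multi-source BFS propagation: each node inherits the attribute
# value of its nearest seed node.
from collections import deque

edges = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

seeds = {0: "18-25", 4: "26-35"}   # seed node -> age-range attribute

attr = dict(seeds)
queue = deque(seeds)
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in attr:
            attr[v] = attr[u]
            queue.append(v)

print(attr)
```

A deterministic rule like this shows the seed-influence problem the slides raise: without added randomness, every node in a seed's region gets an identical value, which is why diversity constraints are needed.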
MMP-TREE FOR SEQUENTIAL PATTERN MINING WITH MULTIPLE MINIMUM SUPPORTS IN PROG... (IJCSEA Journal)
The document proposes a new algorithm called MS-PISA for mining sequential patterns from progressive databases that have multiple minimum support thresholds. MS-PISA uses a tree structure called MMP-tree to store information about the database and discovered patterns. The MMP-tree tracks the percentage of participation of each itemset based on the minimum support thresholds. MS-PISA progressively updates the MMP-tree as new data arrives to find sequential patterns that satisfy the varying minimum support requirements.
This document describes a cyberbullying detection model that uses machine learning techniques to overcome limitations of existing methods. It analyzes a Twitter dataset containing annotated tweets using natural language processing and classifiers like SVM, random forest, and KNN. The models achieved up to 95% accuracy in detecting cyberbullying posts. The authors propose expanding the model to use unsupervised learning, integrate with social media APIs to detect bullying in real-time, and develop image recognition to identify bullying across multiple media platforms.
Evaluating and Enhancing Efficiency of Recommendation System using Big Data A... (IRJET Journal)
This document discusses evaluating and enhancing the efficiency of recommendation systems using big data analytics. It begins with an abstract that outlines recommendation systems, collaborative filtering, and the need for big data analytics due to large datasets. It then discusses specific collaborative filtering techniques like user-based, item-based, and matrix factorization. It describes challenges like scalability that big data analytics can help address. The document evaluates recommendation algorithms using metrics like MAE, RMSE, precision and time taken on movie recommendation datasets. It aims to design an efficient recommendation system using the best techniques.
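The two error metrics mentioned above can be computed by hand for a handful of hypothetical rating predictions:

```python
# MAE averages absolute errors; RMSE squares errors first, so it
# penalizes large misses more heavily.
import math

actual    = [4.0, 3.0, 5.0, 2.0]
predicted = [3.5, 3.0, 4.0, 2.5]

n = len(actual)
mae  = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

print(f"MAE  = {mae:.3f}")
print(f"RMSE = {rmse:.3f}")
```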
This document summarizes the process of using SAS Text Miner to predict the duration of customer service emails. It began with manually analyzing emails to identify keywords related to specific issues. The author later used SAS Text Miner to generate text topics, clusters, and rules from emails to create predictive models. Text topics and rules grouped emails by common words, while clusters grouped semantically similar emails. Models using these text mining outputs as predictors were able to classify emails as long or short-duration with reasonable accuracy. The most interpretable was a decision tree model, which the author focuses on discussing.
IRJET - Twitter Sentimental Analysis for Predicting Election Result using ... (IRJET Journal)
This document proposes using Twitter sentiment analysis and an LSTM neural network to predict election results. It involves collecting tweets related to political parties and candidates in India, cleaning the data, training an LSTM classifier on labeled tweets, and using the trained model to classify tweets as positive, negative or neutral sentiment and compare sentiment levels for each candidate. The goal is to analyze public sentiment expressed on Twitter and how it correlates with actual election outcomes.
Analyzing Social media’s real data detection through Web content mining using... (IRJET Journal)
This document proposes a framework to detect fake news on social media platforms like Twitter. It involves collecting Twitter data using APIs and preprocessing the data by removing URLs, usernames, punctuation etc. and applying techniques like tokenization, stemming and lemmatization. Features are then extracted from the preprocessed data using TF-IDF. Various machine learning classifiers like KNN, SVM, Decision Trees etc. are trained on the labeled data to classify tweets as real or fake. An ensemble of AdaBoost and a combined SVM-KNN model is also proposed and expected to achieve higher accuracy than individual models. The proposed framework aims to automatically detect and stop the spread of fake news on Twitter in real-time.
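The combined SVM-KNN idea can be sketched as a majority-vote ensemble over the two classifiers on TF-IDF features; the tweets below are invented, and the paper's actual ensemble (including AdaBoost) is more elaborate:

```python
# Hard-voting ensemble of an SVM (margin-based view) and a KNN
# classifier (neighbourhood view) over TF-IDF tweet features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline

tweets = [
    "official sources confirm the election results tonight",
    "ministry publishes full report on vaccine trial data",
    "share before they delete this the moon landing was faked",
    "secret cabal controls the weather forward to everyone",
]
labels = ["real", "real", "fake", "fake"]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([
        ("svm", SVC()),
        ("knn", KNeighborsClassifier(n_neighbors=1)),
    ]),
)
ensemble.fit(tweets, labels)
print(ensemble.predict(["share before they delete this secret report"]))
```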
Abstract: Fake news is a major issue used to mislead people, and its detection based on deep learning techniques is an active research problem. For the experiments, several types of datasets, models, and methodologies have been used to detect fake news. Most of the datasets contain text IDs, tweet IDs, user-based IDs, and user-based features. To obtain proper results and accuracy, various models such as CNNs (convolutional neural networks), deep CNNs, and LSTMs (long short-term memory networks) are used.
The document discusses algorithms for tree data structures. It describes AVL trees as self-balancing binary search trees where the heights of subtrees can differ by at most one. It also describes red-black trees, noting they are similar to AVL trees but use color attributes (red or black) to balance the tree. The key differences are that AVL trees are generally faster for lookup-intensive applications as they are more rigidly balanced, while red-black tree insertions and deletions may require fewer rotations than AVL trees.
Study, analysis and formulation of a new method for integrity protection of d...ijsrd.com
This document discusses a text-based fuzzy clustering algorithm to filter spam emails. It begins with an introduction discussing how most classification approaches are for structured data but large amounts of unstructured data are transmitted online. It then discusses spam emails being a major problem and filtering being an important approach. The paper aims to use a fuzzy clustering approach called Fuzzy C-Means to classify emails. It describes the training and testing modules, which extract features from emails to create vector space models and then applies the fuzzy clustering algorithm to determine if emails are spam or not spam. Evaluation results show the precision and accuracy of the approach on different datasets, with the author concluding the vector space model with fuzzy C-Means works well for both small and large datasets.
WARRANTS GENERATIONS USING A LANGUAGE MODEL AND A MULTI-AGENT SYSTEMijnlc
Each argument begins with a conclusion, which is followed by one or more premises supporting the
conclusion. The warrant is a critical component of Toulmin's argument model; it explains why the premises
support the claim. Despite its critical role in establishing the claim's veracity, it is frequently omitted or left
implicit, leaving readers to infer it. We consider the problem of producing more diverse and high-quality
warrants in response to a claim and evidence. First, we employ BART [1] as a conditional sequence-to-sequence language model to guide the output generation process, fine-tuning the BART model on the ARCT dataset [2].
Second, we propose the Multi-Agent Network for Warrant Generation, a model for
producing more diverse and high-quality warrants by combining Reinforcement Learning (RL) and
Generative Adversarial Networks (GAN) with the mechanism of mutual awareness of agents. In terms of
warrant generation, our model generates a greater variety of warrants than other baseline models. The
experimental results validate the effectiveness of our proposed hybrid model for generating warrants.
Link prediction 방법의 개념 및 활용Kyunghoon Kim
The document discusses link prediction in social networks. It begins with an introduction to social networks and link prediction. It then covers the framework of link prediction, including common methods and applications. As an example, it discusses using link prediction to analyze terrorist networks. Finally, it discusses performing link prediction using Python tools like NumPy, Pandas, and NetworkX.
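The neighborhood-based link-prediction scores the slides cover can be computed directly with NetworkX. This is a sketch on a toy friendship graph, not the presentation's own data:

```python
import networkx as nx

# Toy friendship graph: a dense core 1-2-3-4 with node 5 attached to 4.
G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5)])

# Score candidate non-edges with the Jaccard coefficient of their neighborhoods:
# |N(u) & N(v)| / |N(u) | N(v)| -- a higher score suggests a future link.
candidates = [(1, 4), (1, 5)]
scores = {(u, v): p for u, v, p in nx.jaccard_coefficient(G, candidates)}
best = max(scores, key=scores.get)
```

Nodes 1 and 4 share two neighbors (2 and 3) out of three distinct neighbors, so the pair (1, 4) scores 2/3 and is predicted as the most likely new link, while (1, 5) shares none and scores 0.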
Presentation for tutorial session 'Measuring scholarly impact: Methods and practice' at ISSI2015
Explains how to use linkpred: https://github.com/rafguns/linkpred
FAKE NEWS DETECTION WITH SEMANTIC FEATURES AND TEXT MININGijnlc
Nearly 70% of people are concerned about the propagation of fake news. This paper aims to detect fake news in online articles using semantic features and various machine learning techniques. In this research, we compared recurrent neural networks against Naive Bayes and random forest classifiers using five groups of linguistic features. Evaluated on a real-or-fake dataset from kaggle.com, the best performing model achieved an accuracy of 95.66% using bigram features with the random forest classifier. The fact that bigrams outperform unigrams, trigrams, and quadgrams shows that word pairs, as opposed to single words or longer phrases, best indicate the authenticity of news.
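The n-gram features compared above can be produced with a few lines. This is an illustrative sketch; the paper's exact tokenization is not specified, so plain whitespace splitting is assumed here:

```python
def ngrams(text, n=2):
    """Return word n-grams; n=2 gives the bigram features discussed above."""
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Bigrams keep informative word pairs such as "breaking news" intact,
# which unigram features would split apart.
features = ngrams("Breaking news shocks the nation")
```

Raising `n` trades specificity for sparsity, which is one plausible reason bigrams beat trigrams and quadgrams in the reported results.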
Sub-Graph Finding Information over Nebula Networksijceronline
Social and information networks have been studied extensively for years. This paper studies a new query for sub-graph search on heterogeneous networks: given an uncertain network of N objects, discover the top-k sub-graphs of entities with rare and surprising associations. Answering such "Nebula computing requests" involves computing all matching sub-graphs that satisfy the query and ranking the results by the rarity and interestingness of the associations among entities in the sub-graphs. To evaluate top-k selection queries, the information nebula is computed using a global structural-context similarity measure that is independent of connection sub-graphs. Although previous work on the matching problem can be harnessed for a naive rank-after-match approach, large graphs may have an enormous number of matching sub-graphs. This paper identifies several important properties of top-k selection queries and proposes novel top-k mechanisms that exploit indexes to answer interesting sub-graph queries efficiently.
Survey on Location Based Recommendation System Using POIIRJET Journal
This document summarizes a survey on location-based recommendation systems using points of interest (POIs). It discusses how data mining techniques can be used to analyze user check-in data from location-based social networks to discover patterns and interests in order to provide personalized POI recommendations to users. The system architecture involves collecting check-in data, performing data mining, building user profiles, and generating recommendations. It also reviews several related works on topic modeling, matrix factorization methods, and exploiting sequential check-in data and geographical influences to provide successive POI recommendations. The goal is to develop improved recommendation methods that can recommend new locations for users to explore based on their interests and behaviors.
The document describes a system for automatically classifying bug reports from cloud infrastructure software using natural language processing techniques. It preprocesses bug descriptions to extract keywords, encodes them as vectors, performs hierarchical clustering to group similar bugs, and assigns classifications to new unlabeled bugs based on the cluster labels. Preliminary results show the system can accurately predict categories and specific classes for bug reports, especially with a large dataset. Future work aims to improve performance and apply the system to analyze bugs in other cloud infrastructures.
IRJET - Suicidal Text Detection using Machine LearningIRJET Journal
This document presents research on detecting suicidal text using machine learning techniques. The researchers collected Twitter posts and used data preprocessing, word embeddings and a Convolutional Neural Network model to classify tweets as suicidal or non-suicidal. They found that the CNN model achieved the best performance compared to other classifiers. The system could help identify at-risk individuals by analyzing their social media posts and inform organizations to provide support. Future work involves using location data from tweets to better target help to those in need.
Top k query processing and malicious node identification based on node groupi...redpel dot com
Top k query processing and malicious node identification based on node grouping in manets
This document summarizes a research paper on analyzing sentiments from Twitter data using data mining techniques. The paper presents an approach for analyzing user sentiments using data mining classifiers and compares the performance of single classifiers versus an ensemble of classifiers for sentiment analysis. Experimental results show that the k-nearest neighbor classifier achieved very high predictive accuracy, and single classifiers outperformed the ensemble approach.
Generating synthetic online social network graph data and topologiesGraph-TA
This document describes a 3-step approach to generating synthetic social network data that respects user privacy:
1. Topology generation uses the R-mat method to create a graph with power law distributions and community structure. Communities are identified using Louvain method. Seed nodes are selected as central nodes in each community.
2. Data attributes like age, gender, interests are defined based on real statistics. Attribute values and their proportions are specified in a table.
3. Data is populated starting from seed nodes using propagation rules. Nearby nodes are more likely to get similar attribute values to their seed. Challenges include disproportionate seed influence and ensuring diversity while meeting proportions.
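The seed-and-propagate idea in step 3 can be sketched as follows. This is a toy illustration with NetworkX and made-up attribute values, not the authors' R-mat/Louvain pipeline; `p_copy` is an assumed parameter controlling how strongly seeds influence their neighborhoods:

```python
import random
import networkx as nx

def propagate_from_seeds(G, seed_values, p_copy=0.8, rng=None):
    """BFS outward from seed nodes; each newly reached node copies its parent's
    attribute value with probability p_copy, otherwise draws a random value."""
    rng = rng or random.Random(0)
    values = dict(seed_values)
    choices = sorted(set(seed_values.values()))
    frontier = list(seed_values)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in values:
                    values[v] = values[u] if rng.random() < p_copy else rng.choice(choices)
                    nxt.append(v)
        frontier = nxt
    return values

G = nx.path_graph(6)                                  # simple stand-in graph: 0-1-2-3-4-5
values = propagate_from_seeds(G, {0: "sports", 5: "music"})
```

Lowering `p_copy` increases diversity near community boundaries, which speaks directly to the challenge the authors note of balancing seed influence against attribute-proportion targets.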
MMP-TREE FOR SEQUENTIAL PATTERN MINING WITH MULTIPLE MINIMUM SUPPORTS IN PROG...IJCSEA Journal
The document proposes a new algorithm called MS-PISA for mining sequential patterns from progressive databases that have multiple minimum support thresholds. MS-PISA uses a tree structure called MMP-tree to store information about the database and discovered patterns. The MMP-tree tracks the percentage of participation of each itemset based on the minimum support thresholds. MS-PISA progressively updates the MMP-tree as new data arrives to find sequential patterns that satisfy the varying minimum support requirements.
This document describes a cyberbullying detection model that uses machine learning techniques to overcome limitations of existing methods. It analyzes a Twitter dataset containing annotated tweets using natural language processing and classifiers like SVM, random forest, and KNN. The models achieved up to 95% accuracy in detecting cyberbullying posts. The authors propose expanding the model to use unsupervised learning, integrate with social media APIs to detect bullying in real-time, and develop image recognition to identify bullying across multiple media platforms.
Evaluating and Enhancing Efficiency of Recommendation System using Big Data A...IRJET Journal
This document discusses evaluating and enhancing the efficiency of recommendation systems using big data analytics. It begins with an abstract that outlines recommendation systems, collaborative filtering, and the need for big data analytics due to large datasets. It then discusses specific collaborative filtering techniques like user-based, item-based, and matrix factorization. It describes challenges like scalability that big data analytics can help address. The document evaluates recommendation algorithms using metrics like MAE, RMSE, precision and time taken on movie recommendation datasets. It aims to design an efficient recommendation system using the best techniques.
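The evaluation metrics mentioned (MAE and RMSE) reduce to a few lines; this sketch uses made-up ratings rather than the paper's movie-recommendation data:

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of rating errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: penalizes large rating errors more heavily."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [4.0, 3.0, 5.0, 2.0]
predicted = [3.5, 3.0, 4.0, 3.0]
```

Because RMSE squares each error before averaging, a single badly wrong prediction moves RMSE more than MAE, which is why papers often report both.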
This document summarizes the process of using SAS Text Miner to predict the duration of customer service emails. It began with manually analyzing emails to identify keywords related to specific issues. The author later used SAS Text Miner to generate text topics, clusters, and rules from emails to create predictive models. Text topics and rules grouped emails by common words, while clusters grouped semantically similar emails. Models using these text mining outputs as predictors were able to classify emails as long or short-duration with reasonable accuracy. The most interpretable was a decision tree model, which the author focuses on discussing.
IRJET- Twitter Sentimental Analysis for Predicting Election Result using ...IRJET Journal
This document proposes using Twitter sentiment analysis and an LSTM neural network to predict election results. It involves collecting tweets related to political parties and candidates in India, cleaning the data, training an LSTM classifier on labeled tweets, and using the trained model to classify tweets as positive, negative or neutral sentiment and compare sentiment levels for each candidate. The goal is to analyze public sentiment expressed on Twitter and how it correlates with actual election outcomes.
Analyzing Social media’s real data detection through Web content mining using...IRJET Journal
This document proposes a framework to detect fake news on social media platforms like Twitter. It involves collecting Twitter data using APIs and preprocessing the data by removing URLs, usernames, punctuation etc. and applying techniques like tokenization, stemming and lemmatization. Features are then extracted from the preprocessed data using TF-IDF. Various machine learning classifiers like KNN, SVM, Decision Trees etc. are trained on the labeled data to classify tweets as real or fake. An ensemble of AdaBoost and a combined SVM-KNN model is also proposed and expected to achieve higher accuracy than individual models. The proposed framework aims to automatically detect and stop the spread of fake news on Twitter in real-time.
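The preprocessing steps described (removing URLs, usernames, and punctuation, then tokenizing) might look like this minimal sketch; stemming and lemmatization are omitted, and the regexes are illustrative assumptions rather than the framework's actual rules:

```python
import re

def preprocess_tweet(text):
    """Strip URLs and @usernames, lowercase, drop punctuation, then tokenize."""
    text = re.sub(r"https?://\S+", " ", text, flags=re.I)   # remove URLs
    text = re.sub(r"@\w+", " ", text)                       # remove @usernames
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())        # drop punctuation
    return text.split()

tokens = preprocess_tweet("BREAKING: @user Fake cure found!! http://spam.example/x")
```

The resulting token lists are what a TF-IDF vectorizer would consume before the KNN/SVM/decision-tree classifiers are trained.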
Abstract: Fake news is used to mislead people, and its detection using deep learning techniques is a major research issue. For
the experiments, several types of datasets, models, and methodologies have been used to detect fake news. Most of the
datasets contain text IDs, tweet IDs, and user-based features. To obtain proper results and accuracy, various models such as
CNN (Convolutional Neural Network), deep CNN, and LSTM (Long Short-Term Memory) are used.
Avoiding Anonymous Users in Multiple Social Media Networks (SMN)paperpublications3
Abstract: The main aim of this project is to secure user login and data sharing on social networks such as Gmail and Facebook, and to detect anonymous users of these networks. If the original user is not available but a friend or other anonymous user knows their login details, the user's chats can be misused: an unauthorized user may log in to chat or to share images and videos without the original user's knowledge. This is the problem the project sets out to overcome. Users first register their details along with a secret question and answer. Since an anonymous user can delete chats or data, the secret question is used to recover the unauthorized user's chat history and sharing details, together with the IP address and MAC address involved. In this way the project prevents anonymous users from misusing the original user's login details.
This document discusses techniques for detecting fake news. It begins with an introduction to the problem of fake news and how it spreads on social media. It then reviews different machine learning techniques that have been used for fake news detection, including naïve bayes, decision trees, random forests, K-nearest neighbors, and LSTM. The document also categorizes different types of fake news and surveys related literature applying machine learning to fake news detection. It concludes that detecting fake news is still an ongoing challenge and more work is needed with improved datasets and models.
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’SIRJET Journal
This document proposes a framework for modeling and analyzing influence diffusion in multiplex online social networks (OSNs). It introduces coupling plans to represent how data spreads across overlapping users in multiple OSNs. Specifically, it proposes both lossless and lossy coupling plans to map multiple networks into a single network. Extensive tests on real and synthetic datasets show the coupling plans can effectively identify influential users by considering their roles across multiple OSNs. The framework provides insights into influence propagation in multiplex networks and can solve the minimum cost influence problem by exploiting algorithms for single networks.
IRJET- Machine Learning: Survey, Types and ChallengesIRJET Journal
This document provides an overview of machine learning, including its types and challenges. It discusses supervised and unsupervised machine learning algorithms. Supervised learning uses labeled training data to predict discrete or continuous output values, while unsupervised learning finds hidden patterns in unlabeled data through clustering. Common supervised algorithms include logistic regression, decision trees, and k-nearest neighbors; the most common unsupervised technique is clustering. The document also gives examples to explain machine learning concepts and algorithms.
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...Editor IJAIEM
Dr.G.Anandharaj1, Dr.P.Srimanchari2
1Associate Professor and Head, Department of Computer Science
Adhiparasakthi College of Arts and Science (Autonomous), Kalavai, Vellore (Dt) -632506
2 Assistant Professor and Head, Department of Computer Applications
Erode Arts and Science College (Autonomous), Erode (Dt) - 638001
ABSTRACT
With the unpredictable increase in mobile apps, more and more threats migrate from the outmoded PC client to mobile devices. Whereas
the traditional Windows-Intel alliance dominates the PC, the Android platform dominates the mobile Internet, and apps have replaced PC client
software as the foremost target of malicious usage. In this paper, to improve the security status of recent mobile apps, we
propose a methodology to evaluate mobile apps based on a cloud computing platform and data mining. Compared with
traditional methods, such as permission-pattern-based methods, it combines dynamic and static analysis to
comprehensively evaluate an Android application. The Internet of Things (IoT) denotes a worldwide network of
interconnected items, uniquely addressable via standard communication protocols. Accordingly, to prepare us for the
forthcoming invasion of things, a tool called data fusion can be used to manipulate and manage such data in order to improve
processing efficiency and provide advanced intelligence. In this paper, we propose an efficient multidimensional fusion
algorithm for IoT data based on partitioning; attribute reduction and rule extraction methods are then used to obtain the
synthesis results. By proving a few theorems and by simulation, the correctness and effectiveness of this algorithm are
illustrated. This paper also introduces and investigates large iterative multitier ensemble (LIME) classifiers specifically tailored for
big data. These classifiers are very large, but quite easy to generate and use. They can be so large that it makes sense to use
them only for big data. Our experiments compare LIME classifiers with various base classifiers and standard ordinary ensemble
meta-classifiers. The results obtained demonstrate that LIME classifiers can significantly increase the accuracy of
classification, performing better than both the base classifiers and the standard ensemble meta-classifiers.
Keywords: LIME classifiers, ensemble Meta classifiers, Internet of Things, Big data
A DATA MINING APPROACH FOR FILTERING OUT SOCIAL SPAMMERS IN LARGE-SCALE TWITT...ijaia
This document discusses using data mining and machine learning techniques to detect social media spammers on Twitter. It begins with an introduction to the problem of spam on Twitter and prior related work using classification algorithms. The proposed method has four main parts: 1) Using bio-inspired optimization algorithms like PSO, Cuckoo Search, and Bat Algorithm to reduce features and select optimal subsets for classification. 2) Comparing classifiers like BayesNet, C4.5, Random Forest, kNN, and MLP on Twitter data. 3) Evaluating classifiers using cross-validation and metrics on real imbalanced datasets. 4) Identifying important features that improve classifier performance for spam detection. The document outlines the dataset and features used, which include user
Real-time Classification of Malicious URLs on Twitter using Machine Activity ...Pete Burnap
This document summarizes research on classifying malicious URLs on Twitter in real-time using machine activity data. The researchers collected data on URLs shared on Twitter during sporting events and used a honeypot to identify malicious ones. They built machine learning models to predict maliciousness based on metrics like CPU usage, network traffic, and processes when a URL was clicked. The best model was a multi-layer perceptron that achieved up to 72% accuracy. It showed network activity, CPU usage, and processes were predictive. Testing on a new dataset showed some independence between events. Using only 1% of training data caused a small 5% drop in performance, alleviating concerns over data requirements.
Fake News Detection using Passive Aggressive and Naïve BayesIRJET Journal
This document describes research on detecting fake news using machine learning algorithms. The researchers collected a dataset of labeled real and fake news articles and extracted features using bag-of-words and TF-IDF. They then trained two classification models, Naive Bayes and Passive-Aggressive, on the dataset. The Passive-Aggressive model achieved a higher accuracy of 93% compared to 89% for the Naive Bayes model. The researchers concluded the Passive-Aggressive algorithm is better able to differentiate between real and fake news, but note that further improving datasets and incorporating other media could enhance fake news detection.
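A compact sketch of the TF-IDF plus Passive-Aggressive setup described above, using scikit-learn on a tiny made-up corpus (the actual dataset, features, and the 93%/89% scores are the paper's, not reproduced here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real experiments use labeled news datasets.
texts = [
    "scientists publish peer reviewed study on vaccines",
    "official report confirms economic growth figures",
    "shocking miracle cure doctors do not want you to know",
    "celebrity secretly replaced by clone claims insider",
]
labels = ["REAL", "REAL", "FAKE", "FAKE"]

# TF-IDF features feed an online, margin-based linear classifier that stays
# passive on correct predictions and updates aggressively on mistakes.
model = make_pipeline(TfidfVectorizer(),
                      PassiveAggressiveClassifier(max_iter=50, random_state=0))
model.fit(texts, labels)
pred = model.predict(["miracle cure you need to know"])
```

The passive/aggressive update rule is what lets the model keep learning from a stream of articles, a property batch Naive Bayes lacks.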
IRJET- Sentimental Analysis for Online Reviews using Machine Learning AlgorithmsIRJET Journal
The document discusses sentiment analysis of online product reviews using machine learning algorithms. It first provides background on sentiment analysis and its uses. It then describes preprocessing customer review data and extracting features using count and TF-IDF vectorization. Three machine learning algorithms are tested - support vector machine (SVM), random forest, and XGBoost classifier. The results show that XGBoost achieved higher accuracy than SVM and random forest for sentiment classification of the product review data.
A study of cyberbullying detection using Deep Learning and Machine Learning T...IRJET Journal
This document discusses a study on detecting cyberbullying using machine learning and deep learning techniques. Specifically, it examines using a hybrid model combining K-Nearest Neighbors, Support Vector Machine, and Random Forest algorithms, as well as a Convolutional Neural Network. The study uses a Twitter dataset to classify tweets as not bullying, racism, or sexism. It finds that the CNN model produces more accurate predictions than the hybrid stacking algorithm. The document provides background on related work applying machine and deep learning to cyberbullying detection, particularly using content-based and user-based approaches.
A study of cyberbullying detection using Deep Learning and Machine Learning T...IRJET Journal
This document summarizes a research paper that studied the detection of cyberbullying using machine learning and deep learning techniques. Specifically, it used a hybrid model combining KNN, SVM and Random Forest algorithms (stacking algorithm) and a Convolutional Neural Network (CNN) on a Twitter dataset. The stacking algorithm achieved an accuracy of X% while the CNN achieved a higher accuracy of Y%. A comparison of the two models found that CNN produced a more precise prediction of cyberbullying. The document also reviewed related work on cyberbullying detection using content-based, user-based and network-based approaches with machine learning algorithms like SVM, Naive Bayes and deep learning methods like CNN.
IRJET- Cross System User Modeling and Personalization on the Social WebIRJET Journal
The document discusses cross-system user modeling and personalization on social media networks. It proposes the Friend Relationship-Based User Identification (FRUI) algorithm to identify identical users across different social media platforms based on their friendship networks. FRUI calculates a matching score for candidate user pairs and only high scoring pairs are considered matches. Two proposals are introduced to improve the efficiency of the algorithm. Experimental results show FRUI performs better than existing network structure-based methods. The real-world friendship network is highly individual, so using friendship structure to analyze cross-platform social media is more accurate.
Detecting root of the rumor in social network using GSSSIRJET Journal
1) The document proposes a method to detect the root of rumors spreading in social networks using monitor nodes and the Greedy Source Set Size algorithm.
2) Monitor nodes record and report data to identify rumors and their sources based on which nodes received the information.
3) The GSSS algorithm aims to find the exact solution and improve efficiency for identifying rumor sources.
4) Three methods are used to identify the rumor root: identification method, reverse dissemination method, and microscopic rumor spreading model based on maximum likelihood.
Feature Subset Selection for High Dimensional Data using Clustering TechniquesIRJET Journal
The document discusses feature subset selection for high dimensional data using clustering techniques. It proposes a FAST algorithm that has three steps: (1) removing irrelevant features, (2) dividing features into clusters, (3) selecting the most representative feature from each cluster. The FAST algorithm uses DBSCAN, a density-based clustering algorithm, to cluster the features. DBSCAN can identify clusters of arbitrary shape and detect noise, making it suitable for high dimensional data. The goal of feature subset selection is to find a small number of discriminative features that best represent the data.
Detecting Malicious Bots in Social Media Accounts Using Machine Learning Tech...IRJET Journal
The document describes a proposed system to detect malicious bots on Twitter using machine learning techniques. It involves collecting Twitter data, extracting features, and using classification algorithms like random forest and decision tree. The random forest model combines the predictions of multiple decision trees trained on different feature subsets to improve accuracy. Features like account age, tweet frequency, and sentiment are used. The system is evaluated using cross-validation, performance metrics from a confusion matrix like accuracy, and comparisons to other algorithms. The goal is to improve security on social media by identifying bot accounts.
MAPREDUCE IMPLEMENTATION FOR MALICIOUS WEBSITES CLASSIFICATIONIJNSA Journal
Due to the rapid growth of the internet, malicious websites [1] have become the cornerstone of internet crime activities. There are many existing approaches to detecting benign and malicious websites, some of them giving near 99% accuracy. Effective detection of malicious websites now seems reasonable enough in terms of accuracy, but in terms of processing speed it is still considered an enormous and costly task because of the websites' qualities and complexities. In this project, we implement a classifier that detects benign and malicious websites using network and application features available in a dataset from Kaggle, and we use MapReduce to make classification faster than traditional approaches [2].
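The MapReduce pattern referred to can be illustrated in miniature: a toy, single-process sketch of the map, shuffle, and reduce stages on made-up URL records, not the project's actual distributed job:

```python
from collections import defaultdict

def map_phase(record):
    """Mapper: emit ((token, label), 1) pairs -- here crude dot-split URL 'features'."""
    url, label = record
    return [((token, label), 1) for token in url.split(".")]

def reduce_phase(pairs):
    """Reducer: sum counts per key after the shuffle has grouped them."""
    grouped = defaultdict(int)
    for key, count in pairs:
        grouped[key] += count
    return dict(grouped)

records = [("login.bank.example", "benign"),
           ("free.prize.win", "malicious"),
           ("win.bank.win", "malicious")]
shuffled = [pair for rec in records for pair in map_phase(rec)]   # map + shuffle
counts = reduce_phase(shuffled)                                   # reduce
```

The speedup in the real system comes from running many mappers and reducers in parallel across nodes; the per-record logic stays exactly this simple.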
MAPREDUCE IMPLEMENTATION FOR MALICIOUS WEBSITES CLASSIFICATIONIJNSA Journal
This document summarizes research on using MapReduce to classify websites as malicious or benign more efficiently than traditional approaches. The researchers used a dataset from Kaggle containing network and application features of websites to train and evaluate machine learning models. They preprocessed the data to handle missing values and encode categorical features. Models tested included neural networks, random forests, and decision trees. Random forests achieved 100% accuracy when trained on 30% of the data and were much faster than other models. Processing time was improved when using Apache Spark on two nodes compared to a single node or traditional programming, and further speedups could be gained with more nodes.
Social Friend Overlying Communities Based on Social Network ContextIRJET Journal
This document discusses algorithms for detecting overlapping communities in social networks. It begins with an introduction to social networks and community detection. It then reviews various algorithms that have been proposed for detecting overlapping communities, including clique percolation methods, fuzzy detection algorithms, agent-based and dynamic algorithms, and more. It also discusses using these algorithms to recommend friends and locations to users based on their behaviors and communities within social networks. The document presents results from applying these algorithms and concludes by discussing opportunities for future work improving recommendation performance.
Similar to IRJET - Twitter Spam Detection using Cobweb
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...IRJET Journal
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...IRJET Journal
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...IRJET Journal
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, plan irregularities, and different
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...IRJET Journal
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with linear trend slopes of -0.0341 for maximum and -0.0152 for minimum temperatures.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend development planners account for rising summer precipitation and declining temperatures in
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, including that they are designed off-site with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridgesIRJET Journal
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React based fullstack edtech web applicationIRJET Journal
The document describes the architecture of an educational technology web application built using the MERN stack: a frontend developed with ReactJS, a backend built with NodeJS and ExpressJS, and a MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...IRJET Journal
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.IRJET Journal
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...IRJET Journal
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignIRJET Journal
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India, according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and the response spectrum method, and its responses in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. The software STAAD Pro was used for the analysis and design.
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...IRJET Journal
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses of human lives, property, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. It is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions, and it effectively detects potential risks, helping to mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Data collection covered three key road events (normal driving on a normal street, speed bumps, and circular yellow speed bumps) and three aggressive driving actions (sudden start, sudden stop, and sudden entry). The gathered data are processed and analyzed using a machine learning system designed for devices with limited power and memory. The developed system achieved 91.9% accuracy, 93.6% precision, and 92% recall. The inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms, requiring 2.6 kB of peak RAM and 139.9 kB of program flash memory, making the system suitable for resource-constrained embedded devices.
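The reported accuracy, precision, and recall follow directly from confusion-matrix counts. A minimal sketch of those definitions, using illustrative counts for one event class (not the paper's actual confusion matrix):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    return accuracy, precision, recall

# Hypothetical counts for one class (e.g. "sudden stop" detections).
acc, prec, rec = classification_metrics(tp=92, fp=6, fn=8, tn=94)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```

For multi-class event detection these metrics are typically computed per class and then averaged.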
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Introduction - e-waste – definition - sources of e-waste – hazardous substances in e-waste - effects of e-waste on environment and human health - need for e-waste management – e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal and treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution - global scenario of e-waste – e-waste in India - case studies.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
# Abstract:
- Learn real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. We start with a brief discussion of IAM, then cover typical misconfigurations and their potential exploits, to reinforce understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (e.g. EC2), typically used for service access delegation; a PassRole misconfiguration can then be exploited to grant unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation: create a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
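The least-privilege S3 scenario above turns on what the policy document actually grants. A minimal sketch of such a document, built and inspected locally in Python (the bucket name is hypothetical, and nothing here is sent to AWS):

```python
import json

BUCKET = "example-pentest-lab-bucket"  # hypothetical bucket name

# Least-privilege policy: only object read/write on one bucket --
# no s3:ListAllMyBuckets, no IAM actions, no wildcard resources.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ObjectReadWriteOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

policy_json = json.dumps(least_privilege_policy, indent=2)
print(policy_json)
```

In the lab, a document like this would be attached to the IAM user (for example via `aws iam create-policy` and `aws iam attach-user-policy`), after which access is validated by confirming that object reads succeed while bucket listing and any IAM action are denied.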
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farm support the theoretical work and highlight its benefits to existing plants. The short return on investment of the proposed approach underscores the paper's novel contribution toward a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
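The "short return on investment" claim rests on a standard payback calculation. A generic sketch of the simple-payback arithmetic (the figures below are illustrative, not the paper's cost data):

```python
def simple_payback_years(capital_cost, annual_savings):
    """Simple payback period: years to recover the upfront investment."""
    return capital_cost / annual_savings

# Hypothetical figures: a PV + EV setup costing $120,000 that saves
# $30,000 per year in avoided energy and fuel costs.
payback = simple_payback_years(120_000, 30_000)
print(payback)  # 4.0
```

A fuller analysis would discount future savings (net present value) and account for battery degradation and tariff changes, but simple payback is the usual first-pass figure of merit.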
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration through Gradle build-cache optimizations, recounting the journey of solving complex build-cache problems that affect Gradle builds. By walking through the challenges and solutions found along the way, the talk demonstrates what is possible for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Rainfall intensity duration frequency curve statistical analysis and modeling...bijceesjournal
Using 41 years of data from Patna, India, the study's goal is to analyze trends in how often it rains on a weekly, seasonal, and annual basis (1981−2020). First, the historical rainfall data set for Patna over the 41-year period (1981−2020) was evaluated for quality, and the intensity-duration-frequency (IDF) relationship was established by statistically analyzing the rainfall. Changes in the hydrologic cycle resulting from increased greenhouse gas emissions are expected to induce variations in the intensity, duration, and frequency of precipitation events. One strategy to lessen vulnerability is to quantify probable changes and adapt to them. Techniques such as the log-normal, normal, and Gumbel (EV-I) distributions are used. Distributions were created for durations of 1, 2, 3, 6, and 24 h and return periods of 2, 5, 10, 25, and 100 years. Mathematical correlations between rainfall and recurrence interval were also developed.
Findings: The Gumbel approach produced the highest intensity values, whereas the other approaches produced values close to each other. The data indicate that 461.9 mm of rain fell during the monsoon season's 30th week. However, the 29th week had the greatest average rainfall, 92.6 mm. With 952.6 mm on average, the monsoon season saw the highest rainfall. Calculations revealed that annual rainfall averaged 1171.1 mm. Using Weibull's method, the study was subsequently expanded to examine rainfall distribution at recurrence intervals of 2, 5, 10, and 25 years. Mathematical correlations between rainfall and recurrence interval were also developed. Further regression analysis revealed that short-wave irradiation, wind direction, wind speed, pressure, relative humidity, and temperature all had a substantial influence on rainfall.
Originality and value: The results of the rainfall IDF curves can provide useful information to policymakers in making appropriate decisions in managing and minimizing floods in the study area.
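The Gumbel (EV-I) step described above is usually carried out with the frequency factor K_T, giving the T-year estimate as x_T = mean + K_T * std of the annual-maximum series. A minimal sketch with an illustrative mean and standard deviation (not the Patna data):

```python
import math

def gumbel_frequency_factor(T):
    """Gumbel (EV-I) frequency factor K_T for return period T (years)."""
    return -(math.sqrt(6) / math.pi) * (0.5772 + math.log(math.log(T / (T - 1))))

def gumbel_quantile(mean, std, T):
    """T-year rainfall estimate: x_T = mean + K_T * std."""
    return mean + gumbel_frequency_factor(T) * std

# Hypothetical annual-maximum 1-hour rainfall series:
# mean 60 mm, standard deviation 15 mm.
for T in (2, 5, 10, 25, 100):
    print(f"T={T:>3} yr  x_T = {gumbel_quantile(60.0, 15.0, T):.1f} mm")
```

Dividing each x_T by its duration and repeating for the other durations yields the intensity points from which the IDF curves are fitted. Note that K_T is negative for T = 2 (the 2-year event lies slightly below the mean) and grows with T; K_100 is about 3.14.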
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.