Detecting malicious URLs is a challenging task in cyberspace. The websites such URLs point to inject malicious code into the client machine or steal sensitive information. Because phishing URLs are hard to detect, detection techniques must keep pace with emerging attacks. Most existing approaches are feature-based and cannot detect dynamic attacks. Attackers commonly use input forms, active content, and an embedded @ symbol in the URL to mount an attack. To detect such attacks, a Behaviour-based Malicious URL Finder (BMUF) algorithm is proposed. It analyses the behaviour of the URL: an FSM-based state transition diagram models the URL behaviour as a set of states, and the transition from the initial to the final state drives the classification. The approach tests the genuine and malicious behaviour of a URL based on its responses to the user and accurately determines the nature of the URL.
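As a sketch of this behaviour-modelling idea, the small state machine below walks a URL through states based on the cues the abstract mentions (an embedded @ symbol, input forms, active content). The state names, transition rules, and inputs are illustrative assumptions, not the paper's exact BMUF model:

```python
# Illustrative FSM-style URL behaviour check in the spirit of BMUF.
# State names and transition rules here are assumptions for demonstration.

def classify_url(url, has_input_form=False, has_active_content=False):
    """Walk a tiny state machine: INITIAL -> SUSPICIOUS -> MALICIOUS/BENIGN."""
    state = "INITIAL"
    # Transition 1: an embedded '@' makes browsers ignore everything before
    # it, a classic trick for disguising the real host.
    if "@" in url:
        state = "SUSPICIOUS"
    # Transition 2: a suspicious URL that also presents an input form or
    # runs active content transitions to MALICIOUS; otherwise it falls
    # through to BENIGN, since '@' alone is not conclusive.
    if state == "SUSPICIOUS" and (has_input_form or has_active_content):
        state = "MALICIOUS"
    else:
        state = "BENIGN"
    return state
```

The final state reached is the classification, mirroring the initial-to-final transition described in the abstract.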
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research across the disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields.
Third-party apps are a major reason for the popularity and addictiveness of Facebook. Unfortunately, hackers have realized the potential of using apps for spreading malware and spam. The problem is already significant: at least 13% of the apps in our dataset are malicious. So far, the research community has focused on detecting malicious posts and campaigns. First, we identify a set of features that help distinguish malicious apps from benign ones. For example, malicious apps often share names with other apps, and they typically request fewer permissions than benign apps.
A Deep Learning Technique for Web Phishing Detection Combined URL Features an... (IJCNCJournal)
Phishing is currently the most popular way to deceive online users, so more efficient web-page phishing detection mechanisms are needed to strengthen cybersecurity. In this paper, we propose an approach that relies on a website's image and URL to treat phishing-website recognition as a classification problem. Our model uses convolutional neural networks (CNNs) to extract the most important features of website images and URLs and then classifies pages as benign or phishing. The experiment achieved an accuracy of 99.67%, demonstrating the effectiveness of the proposed model in detecting web phishing attacks.
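To make the CNN step concrete, the toy code below shows the core operation such a model applies to a URL: one-hot encode the characters, slide a small convolution kernel over them, and max-pool the activations. The vocabulary, kernel values, and pattern detected are made-up illustrations, not the paper's trained network:

```python
# Toy character-level convolution over a URL, the building block a URL CNN
# uses. The vocabulary and hand-made kernel below are assumptions.

VOCAB = "abcdefghijklmnopqrstuvwxyz0123456789./:-@"

def one_hot(url):
    """Encode each character as a one-hot vector over VOCAB (unknown -> zeros)."""
    return [[1.0 if c == v else 0.0 for v in VOCAB] for c in url.lower()]

def conv1d_max(matrix, kernel):
    """Slide a window of len(kernel) rows, take dot products, and return the
    maximum activation, i.e. convolution followed by global max-pooling."""
    k = len(kernel)
    best = float("-inf")
    for i in range(len(matrix) - k + 1):
        acc = 0.0
        for j in range(k):
            for d in range(len(VOCAB)):
                acc += matrix[i + j][d] * kernel[j][d]
        best = max(best, acc)
    return best

# A hand-made 2-character kernel that fires most strongly on '@' followed by
# a digit, a pattern common in host-obfuscated URLs.
kernel = [[0.0] * len(VOCAB) for _ in range(2)]
kernel[0][VOCAB.index("@")] = 1.0
for d in "0123456789":
    kernel[1][VOCAB.index(d)] = 1.0

score = conv1d_max(one_hot("http://user@123.45.6.7/login"), kernel)
```

A real CNN learns many such kernels from data and feeds the pooled activations into dense layers for the benign/phishing decision.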
Classification is one of the data mining techniques used to categorize data. Here, I have tried different technologies, such as Machine Learning and Deep Learning, using the R programming language.
A Comparative Analysis of Different Feature Set on the Performance of Differe... (gerogepatton)
Reducing the risk posed by phishers and other cybercriminals in cyberspace requires a robust, automatic means of detecting phishing websites, since the culprits come up with new techniques almost daily and constantly evolve the methods they use to lure users into revealing sensitive information. Many phishing-detection methods have been proposed, but the quest for better solutions continues. This research develops phishing-website models based on different algorithms with different sets of features in order to identify the most significant features in the dataset.
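Feature-set studies of this kind start by turning each URL into a vector of lexical measurements. The sketch below shows one plausible such feature set; the specific features chosen here are an assumption, not the paper's exact list:

```python
# A minimal lexical feature extractor of the kind comparative phishing
# studies use. This particular feature selection is illustrative.
from urllib.parse import urlparse

def url_features(url):
    parts = urlparse(url)
    host = parts.netloc
    return {
        "url_length": len(url),                          # long URLs are suspicious
        "num_dots": host.count("."),                     # many subdomain levels
        "has_at": "@" in url,                            # host-obfuscation trick
        "has_ip_host": host.replace(".", "").isdigit(),  # raw dotted-decimal host
        "num_hyphens": host.count("-"),                  # brand-imitation hosts
        "uses_https": parts.scheme == "https",
    }
```

Each candidate algorithm is then trained on vectors like these, and the per-feature contribution to accuracy indicates which features matter most.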
Analyzing the effectualness of Phishing Algorithms in Web Applications Inques... (Editor IJMTER)
The first casualty of deception is trust: a wolf in sheep's clothing is hard to recognize, and so is a phishing website. Phishing is the combination of social engineering and technical exploits designed to persuade a victim to provide personal information for the financial gain of the attacker. It is a kind of network attack in which the attacker creates a spitting image of an existing web page to mislead users. In this paper, we study two anti-phishing algorithms: an end-host-based algorithm known as LinkGuard and a content-based approach known as CANTINA.
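LinkGuard's central idea is to compare the DNS name a link visually shows the user with the DNS name it actually points to. The condensed sketch below captures that core rule; the real algorithm has several more rules and thresholds, which are omitted here:

```python
# A condensed sketch of LinkGuard's core check: flag a link when the visual
# DNS name and the actual href DNS name disagree, or when the actual host is
# obfuscated. The full rule set of the real algorithm is not reproduced.
from urllib.parse import urlparse

def linkguard_check(visual_text, actual_href):
    visual = urlparse(visual_text).netloc or visual_text
    actual = urlparse(actual_href).netloc
    if "@" in actual or actual.replace(".", "").isdigit():
        return "PHISHING"   # dotted-decimal or '@'-obfuscated actual host
    if visual and "." in visual and visual != actual:
        return "PHISHING"   # visual and actual DNS names disagree
    return "OK"
```

CANTINA, by contrast, works on page content (TF-IDF keywords submitted to a search engine), so the two algorithms inspect complementary signals.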
Introduction of Check For Plag (Anti-Plagiarism) Software
Check-For-Plag: a unique MAKE IN INDIA initiative to curb plagiarism globally.
CFP is anti-plagiarism software developed to curb the serious problem of plagiarism faced by publishers, societies, researchers and institutions worldwide. The technology used, and the regularly enhanced content covered, make this solution realistic and acceptable to users globally.
The service offers publishers a way to compare documents against the largest comparison database of scientific, technical and medical content.
Service offered for Researchers
A national code of conduct helps ensure the credibility, integrity and quality of research through common principles and standards of good academic practice. For a researcher, good academic practice is a must: it requires academic integrity, good research conduct and loyal collegiate conduct.
Service offered for Students
Automated search for plagiarism in student assignments. Check-For-Plag gives students confidence in their results; unchecked plagiarism can set back their academic progress or even cost them a promising career.
Check For Plag India - Plagiarism checker software: instantly check online for plagiarism. The software is designed to effectively detect, and thereby prevent, plagiarism: an affordable, trustworthy solution to enhance research quality at the institutional or society level.
MALICIOUS URL DETECTION USING CONVOLUTIONAL NEURAL NETWORK (ijcseit)
The World Wide Web has become an important part of our everyday life for information communication and knowledge dissemination; it helps to exchange information timely, rapidly and easily. Identity theft and identity fraud are two sides of cyber-crime in which hackers and malicious users obtain the personal data of legitimate users to attempt fraud or deception for financial gain. Malicious URLs host unsolicited content (spam, phishing, drive-by exploits, etc.), lure unsuspecting users into becoming victims of scams (monetary loss, theft of private information, malware installation), and cause losses of billions of dollars every year. Systems for detecting such crimes should be fast and precise, with the ability to detect new malicious content. Traditionally, this detection is done mostly through blacklists. However, blacklists cannot be exhaustive and cannot detect newly generated malicious URLs. To improve the generality of malicious URL detectors, machine learning techniques have been explored with increasing attention in recent years. In this paper, I use a simple algorithm to detect and predict whether a URL is good or bad, and compare it with two other algorithms (SVM, LR).
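One plausible shape for such a "simple algorithm" is a token-frequency scorer: split URLs into tokens and score a new URL by how often its tokens appeared in known-bad versus known-good training URLs. The paper's actual algorithm and dataset are not reproduced here; this is a hedged illustration of the idea:

```python
# A hedged sketch of a simple baseline URL classifier: token counts from
# labelled good/bad URLs vote on a new URL. Not the paper's exact method.
import re

def tokens(url):
    return [t for t in re.split(r"[/.:?=&-]", url.lower()) if t]

def train(good_urls, bad_urls):
    good, bad = {}, {}
    for u in good_urls:
        for t in tokens(u):
            good[t] = good.get(t, 0) + 1
    for u in bad_urls:
        for t in tokens(u):
            bad[t] = bad.get(t, 0) + 1
    return good, bad

def predict_url(url, good, bad):
    # Positive score: the URL's tokens occur more often in bad training URLs.
    score = sum(bad.get(t, 0) - good.get(t, 0) for t in tokens(url))
    return "bad" if score > 0 else "good"

good, bad = train(
    ["http://example.com/home"],
    ["http://free-prize.win/login", "http://win.cash/login"])
```

SVM and logistic regression replace this hand-made vote with learned weights over the same kind of token features, which is what the comparison in the paper evaluates.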
State of the Art Analysis Approach for Identification of the Malignant URLs (IOSRjournaljce)
Malicious URLs are universally used to mount various cyber attacks, including spamming, phishing and malware distribution. Malware, short for malicious software, is software developed to penetrate computers in a network without the user's permission or notification. Existing methods typically detect malicious URLs of a single attack type, so such detection systems fail to protect users from the full range of attacks. Malware spreads widely throughout the network and has become a predicament in distributed computer and network systems; malicious links are the point of origin of the attacks circulated all over the web. Malicious URLs should therefore be detected to protect users from these malware attacks. In this paper we describe a novel approach that analyses all types of attacks by identifying malicious URLs and securing web users from them. The technique warns users about malignant URLs before they visit them, so the efficiency of web security is maintained. For this analysis we developed an analyzer that identifies URLs and examines whether they are malicious or benign, along with five processes that crawl for suspicious URLs. The approach protects users from all types of attacks and increases the efficiency of the web-crawling phase.
Phishing is a social engineering technique whose main aim is to obtain user information such as user IDs, passwords and credit card details, resulting in financial loss to the user. Detecting phishing is a challenging problem because it relies on human vulnerabilities. This paper proposes detecting phishing websites using different machine learning approaches, evaluating several classification models that predict malicious and benign websites. Experiments are performed on a dataset of malicious and benign sites, and the results show that the proposed algorithms achieve high detection accuracy. Nakkala Srinivas Mudiraj, "Detecting Phishing using Machine Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23755.pdf
Paper URL: https://www.ijtsrd.com/computer-science/computer-security/23755/detecting-phishing-using-machine-learning/nakkala-srinivas-mudiraj
PUMMP: PHISHING URL DETECTION USING MACHINE LEARNING WITH MONOMORPHIC AND POL... (IJCNCJournal)
Phishing scams are increasing drastically, compromising Internet users' personal credentials. This paper proposes a novel feature-utilization method for phishing URL detection based on the polymorphic property of features: the same feature provides different interpretations when considered in different parts of the URL. In the initial stage, 46 URL-related features were extracted. A subset of 19 features with the polymorphic property was then identified and extracted from different parts of the URL (the domain and the path). After feature extraction, various machine learning classification algorithms were applied to build models using monomorphic treatment of features, polymorphic treatment of features, and both together. The models were built on two different datasets, and a comparison of the resulting models shows that the model built with both monomorphic and polymorphic treatment of features yielded higher accuracy in phishing URL detection than existing works.
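The polymorphic idea can be sketched directly: compute the same lexical measurement separately on the domain and on the path, since (for example) digits in the domain suggest host obfuscation while digits in the path are routine. The concrete features below are illustrative, not the paper's actual 19:

```python
# Sketch of polymorphic feature extraction: the same measurement is taken
# per URL part rather than once for the whole URL. Features shown are
# illustrative assumptions.
from urllib.parse import urlparse

def polymorphic_features(url):
    p = urlparse(url)
    domain, path = p.netloc, p.path
    return {
        # The same feature, computed on two parts of the URL, gives two
        # separately interpretable signals (the polymorphic treatment).
        "digit_count_domain": sum(c.isdigit() for c in domain),
        "digit_count_path": sum(c.isdigit() for c in path),
        "hyphen_count_domain": domain.count("-"),
        "hyphen_count_path": path.count("-"),
    }
```

A monomorphic treatment would collapse each pair into one whole-URL count, discarding the per-part distinction that the paper finds improves accuracy.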
USING BLACK-LIST AND WHITE-LIST TECHNIQUE TO DETECT MALICIOUS URLS (AM Publications, India)
Malicious URLs are harmful to computer users in many ways, so detecting them is very important. Current techniques for detecting malicious web pages rely on black-list and white-list methodology and on machine learning classification algorithms. However, black-list and white-list technology is useless if a particular URL is not in the list. In this paper, we propose a multi-layer model for detecting malicious URLs: a layer's filter decides directly on a URL when its trained threshold is reached, and otherwise passes the URL on to the next layer. We also use an example to verify that the model can improve the accuracy of URL detection.
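The layering can be sketched as follows: an exact black-list/white-list layer decides immediately, and only unlisted URLs fall through to a scoring layer with a threshold. The lists, scoring function, and threshold below are made-up placeholders, not the paper's trained filters:

```python
# A sketch of a two-layer detector: exact list lookups first, then a scored
# threshold layer for unlisted URLs. Lists, score and threshold are made up.

BLACKLIST = {"http://known-bad.test/"}
WHITELIST = {"https://known-good.example/"}

def suspicious_score(url):
    # Stand-in for a trained per-layer classifier score in [0, 1].
    return min(1.0, 0.3 * url.count("-") + (0.5 if "@" in url else 0.0))

def detect(url, threshold=0.5):
    if url in BLACKLIST:
        return "malicious"   # layer 1: exact black-list hit
    if url in WHITELIST:
        return "benign"      # layer 1: exact white-list hit
    # layer 2: the URL is not listed, so the score and threshold decide
    return "malicious" if suspicious_score(url) >= threshold else "benign"
```

This is exactly the weakness the abstract names: layer 1 alone cannot handle unlisted URLs, which is why the fall-through layer exists.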
PDMLP: PHISHING DETECTION USING MULTILAYER PERCEPTRON (IJNSA Journal)
Phishing websites are a significant problem on the Internet. Phishing is a type of cyber-attack in which attackers try to obtain sensitive information such as usernames and passwords or credit card details. The recent growth in deploying phishing-URL detection systems on many websites has produced a massive amount of data for predicting phishing websites. In this paper, we propose a new phishing detection method based on a multilayer perceptron (PDMLP), evaluated on two types of datasets. The performance of these mechanisms is evaluated in terms of accuracy, precision, recall and F-measure. Results show that PDMLP provides better performance than KNN, SVM, C4.5 decision tree, RF and RoF classifiers.
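For readers unfamiliar with the architecture, the forward pass of a one-hidden-layer perceptron of the PDMLP kind looks like this. The weights and biases below are hand-fixed purely for illustration; a real PDMLP learns them from the phishing dataset:

```python
# Minimal forward pass of a one-hidden-layer perceptron. Weights, biases
# and the two input features are illustrative assumptions.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_predict(features, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: one sigmoid unit per weight row.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    # Output layer: sigmoid over the hidden activations.
    prob = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
    return "phishing" if prob >= 0.5 else "legitimate"

# Two binary inputs (e.g. has_at, has_ip_host), two hidden units.
w_hidden = [[4.0, 0.0], [0.0, 4.0]]
b_hidden = [-2.0, -2.0]
w_out = [3.0, 3.0]
b_out = -4.0
```

Training adjusts these weights by backpropagation so that the output probability separates the phishing and legitimate classes measured by the paper's accuracy, precision, recall and F-measure.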
Detecting malicious URLs using binary classification through ada boost algori... (IJECEIAES)
Malicious Uniform Resource Locators (URLs) are a frequent and severe menace to cybersecurity. They are used to extract unsolicited information, trick inexperienced end users into falling victim to scams, and create losses of billions of dollars each year, so it is crucial to identify them and respond appropriately. Usually this discovery is made through the use of blacklists, but blacklists cannot be exhaustive and cannot recognize zero-day malicious URLs. To improve the detection of malicious URL indicators, machine learning procedures should be incorporated. In this study, we developed a complete prototype of malicious URL detection using machine learning methods. In particular, we gave an exact formulation of malicious URL detection from a machine learning perspective and proposed an approach using the AdaBoost algorithm; the proposed approach achieved higher accuracy than other existing algorithms.
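AdaBoost combines many weak learners, reweighting the training examples each round toward the ones the previous learner got wrong. The compact from-scratch sketch below uses one-feature threshold stumps over toy URL features; the real study uses richer features and a library implementation:

```python
# A compact AdaBoost sketch with one-feature threshold stumps.
# Training data and features here are toy values, not the paper's dataset.
import math

def stump_predict(x, feat, thresh, sign):
    return sign if x[feat] > thresh else -sign

def best_stump(X, y, w):
    """Exhaustively pick the stump with the lowest weighted error."""
    best = None
    for feat in range(len(X[0])):
        for thresh in sorted({x[feat] for x in X}):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(xi, feat, thresh, sign) != yi)
                if best is None or err < best[0]:
                    best = (err, feat, thresh, sign)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, feat, thresh, sign = best_stump(X, y, w)
        err = max(err, 1e-10)                      # avoid log(inf)
        alpha = 0.5 * math.log((1 - err) / err)    # stump's vote weight
        model.append((alpha, feat, thresh, sign))
        # Upweight the examples this stump misclassified.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, feat, thresh, sign))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def boost_predict(model, x):
    total = sum(a * stump_predict(x, f, t, s) for a, f, t, s in model)
    return 1 if total > 0 else -1

# Toy features: [url_length, num_hyphens]; label +1 = malicious.
X = [[60, 3], [75, 2], [20, 0], [25, 1]]
y = [1, 1, -1, -1]
model = adaboost(X, y, rounds=3)
```

Each round's alpha weights the stump's vote in the final sign decision, which is what gives AdaBoost its accuracy edge over any single weak learner.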
A Hybrid Approach For Phishing Website Detection Using Machine Learning. (vivatechijri)
In this technical age there are many ways an attacker can illegitimately access people's sensitive information. One of them is phishing: the activity of misleading people into entering their sensitive information on fraudulent websites that look like the real ones. The phisher's aim is to steal personal information, bank details and so on. Day by day it is getting riskier to enter personal information on websites, since any site might be a phishing attack that steals sensitive information. That is why phishing-website detection is necessary, to alert the user and block the website. Automated detection of phishing attacks is needed, and machine learning is one of the most efficient techniques for it, as it removes the drawbacks of existing approaches; an efficient machine learning model with a content-based approach proves very effective in detecting phishing websites.
Our proposed system uses a hybrid approach that combines a machine-learning-based method with a content-based method. URL-based features are extracted and passed to the machine learning model, while in the content-based approach a TF-IDF algorithm detects a phishing website using the top keywords of the web page. This hybrid approach is used to achieve a highly efficient result. Finally, our system notifies and alerts the user if the website is phishing or legitimate.
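The content-based half of such a hybrid can be sketched as plain TF-IDF: score each word of the suspect page against a background corpus and keep the top keywords. The corpus and page text below are toy examples, and how the real system uses the keywords downstream is not shown:

```python
# Sketch of TF-IDF top-keyword extraction for the content-based layer.
# Corpus and page text are toy examples.
import math
from collections import Counter

def tfidf_top_keywords(page_text, corpus, k=3):
    words = page_text.lower().split()
    tf = Counter(words)
    n_docs = len(corpus) + 1  # background docs plus the suspect page
    def idf(w):
        df = 1 + sum(1 for doc in corpus if w in doc.lower().split())
        return math.log(n_docs / df)
    scored = {w: (tf[w] / len(words)) * idf(w) for w in tf}
    return [w for w, _ in sorted(scored.items(), key=lambda kv: -kv[1])[:k]]
```

Words that are frequent on the page but rare in the background corpus (e.g. a brand name on a lookalike page) rank highest, which is what makes the keywords useful for a phishing check.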
Similar to Classification Model to Detect Malicious URL via Behaviour Analysis (20)
Text Mining in Digital Libraries using OKAPI BM25 Model (Editor IJCATR)
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not rank retrieved results by relevance. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving relevance ranking in digital libraries; Okapi BM25 was selected because it is a probability-based relevance-ranking algorithm. A case study was conducted and the model design was based on information retrieval processes. The performance of the Boolean, vector space and Okapi BM25 models was compared for data retrieval, with relevant ranked documents retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. This paper therefore proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
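For reference, a self-contained BM25 scorer using the standard formula (with the common defaults k1 = 1.5, b = 0.75) looks like this; the digital-library integration around it is not shown:

```python
# Okapi BM25 scoring with the standard term-frequency saturation (k1) and
# length normalization (b). The corpus below is a toy example.
import math

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    docs = [d.lower().split() for d in corpus]
    words = doc.lower().split()
    avgdl = sum(len(d) for d in docs) / len(docs)
    n = len(docs)
    score = 0.0
    for term in query.lower().split():
        df = sum(1 for d in docs if term in d)       # document frequency
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        tf = words.count(term)                        # term frequency in doc
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(words) / avgdl))
    return score

corpus = ["information retrieval in digital libraries",
          "cooking recipes for busy students",
          "ranking models for information retrieval"]
```

Unlike a Boolean match, the score saturates with term frequency and penalizes long documents, which is why BM25 ranks relevant catalogue records ahead of merely matching ones.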
Green Computing, eco trends, climate change, e-waste and eco-friendly (Editor IJCATR)
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in Nigeria (Editor IJCATR)
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and that a more environment -friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria and highlight a series of measures and the advantage they herald for our country and propose a series of action steps to develop in these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between vehicles and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote communication links, and the cluster configuration, in terms of size and geographical extent, has a serious influence on communication quality. VANETs are a subclass of mobile ad hoc networks involving more complex mobility patterns; because of this mobility the topology changes very frequently, which raises a number of technical challenges, including the stability of the network. There is therefore a need for cluster configurations that lead to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the most stable configuration in a realistic road scenario.
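The k-means clustering step described in the abstract above can be sketched in a few lines. This is only an illustrative toy, assuming 1-D vehicle positions along a road and a hand-picked k; the paper's actual simulation setup is not reproduced here.

```python
import random

# Toy k-means sketch: vehicles (1-D positions along a road, made up for
# illustration) are grouped around k centroids.
def kmeans_1d(points, k, iterations=20, seed=42):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

positions = [1, 2, 3, 50, 51, 52]       # two natural groups of vehicles
centroids, clusters = kmeans_1d(positions, k=2)
```

With the two well-separated groups above, the algorithm converges to centroids near 2 and 51 regardless of the random initialization.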
Optimum Location of DG Units Considering Operation Conditions
The optimal sizing and placement of Distributed Generation (DG) units is attracting considerable research attention. In this paper a two-stage approach is used for the allocation and sizing of DGs in a distribution system with a time-varying load model. The strategic placement of DGs can help in reducing energy losses and improving the voltage profile. The proposed work shows how time-varying loads can inform the selection of locations and the optimization of DG operation, and the method has the potential to integrate available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on a 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and their results are converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. Among the various classification methods in common use, this study compared three, fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC), on one identical case. The objective was to create software to classify DM using the tested methods and to compare the three methods based on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, the NBC method had average and maximum accuracy of 87.5% and 90%, respectively. Lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
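The accuracy, precision, and recall used above to compare the classifiers can be computed directly from a confusion matrix. A minimal sketch, with hypothetical labels (1 = positive for diabetes):

```python
# Compute accuracy, precision, and recall from true and predicted labels.
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Made-up labels for five test samples.
acc, prec, rec = evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```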
Web Scraping for Estimating new Record from Source Site
Studies in the field of competitive intelligence and studies in the field of web scraping have a mutually symbiotic relationship. In today's information age, the website serves as a main data source. This research focuses on how to get data from websites and how to slow down the intensity of downloads. One problem is that source websites are autonomous, so the structure of their content is vulnerable to change at any time; another is that the Snort intrusion detection system installed on the server can detect the crawler bot. The researchers therefore propose the Mining Data Records (MDR) method together with exponential smoothing, so that the scraper is adaptive to changes in content structure and browses or fetches automatically, following the pattern of news occurrences. In tests with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, recall and precision values produce an average f-measure of 92.6%. The exponential smoothing estimate with α = 0.5 produces an MAE of 18.2 duplicate data records, slowing the fixed download/fetch schedule from 21.8 down to 3.6 duplicate data records on average per news occurrence time.
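The exponential smoothing estimate mentioned above can be sketched as follows, with α = 0.5 as in the paper. The inter-arrival series (minutes between new news records) is hypothetical:

```python
# Simple exponential smoothing: the forecast blends each new observation
# with the previous forecast, weighted by alpha.
def exponential_smoothing(series, alpha=0.5):
    """Return the smoothed forecast for the next value of the series."""
    forecast = series[0]
    for observation in series[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
    return forecast

intervals = [30, 20, 40, 30]   # minutes between new records (made up)
next_interval = exponential_smoothing(intervals)
```

The scraper can then schedule its next fetch `next_interval` minutes ahead instead of polling on a fixed timer, which is the mechanism that reduces duplicate downloads.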
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using a single ontology. Ontology-based semantic similarity techniques, including structure-based techniques (the Path Length measure, Wu and Palmer's measure, and Leacock and Chodorow's measure), information content-based techniques (Resnik's measure, Lin's measure), and a biomedical domain ontology technique (Al-Mubaid and Nguyen's measure, SemDist), were evaluated relative to human experts' ratings and compared on sets of concepts using the ICD-10 "V1.0" terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology, and demonstrate that SemDist, compared with the existing techniques, gives the best overall correlation with experts' ratings.
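The simplest of the structure-based measures above, the path-length measure, scores two concepts by the shortest path between them in the is-a hierarchy. A minimal sketch; the tiny ontology below is invented for illustration and is not taken from ICD-10:

```python
from collections import deque

# A toy is-a hierarchy: parent -> children.
ontology = {
    "disorder": ["metabolic disorder", "heart disease"],
    "metabolic disorder": ["diabetes"],
    "heart disease": ["arrhythmia"],
}

def edges(tree):
    """Build an undirected adjacency map from the parent->children tree."""
    g = {}
    for parent, children in tree.items():
        for c in children:
            g.setdefault(parent, set()).add(c)
            g.setdefault(c, set()).add(parent)
    return g

def path_length(tree, a, b):
    """Shortest number of edges between concepts a and b (BFS)."""
    g = edges(tree)
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in g.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

def sim_path(tree, a, b):
    """Path-based similarity: closer concepts score higher (1 for identity)."""
    return 1.0 / (1.0 + path_length(tree, a, b))
```

For example, "diabetes" and "arrhythmia" are four edges apart in this toy hierarchy, giving a similarity of 0.2.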
Semantic Similarity Measures between Terms in the Biomedical Domain within f...
Techniques and tests are tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery. Most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS), and many experiments have been conducted to check the applicability of these measures. In this paper, we measure semantic similarity between two terms within a single ontology or across multiple ontologies using ICD-10 "V1.0" as the primary source, and compare our results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift
Adding an aggregate storage module is an effective way to improve the storage access performance of small files in Openstack Swift. Because Swift incurs excessive disk operations when querying metadata, its transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. At the first stage, ASS arranges the write request queue in chronological order and then stores objects in volumes; these volumes are large files that are actually stored in Swift. At the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small file transfer performance.
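The core of the aggregation idea can be sketched in a few lines: small objects are appended to one large volume, and a key-value index maps each object name to its (offset, length). The class and names below are illustrative, not Swift's actual API:

```python
import io

# Minimal sketch of merge storage + index storage for small files.
class Volume:
    def __init__(self):
        self.data = io.BytesIO()   # the large "volume" file
        self.index = {}            # key-value index: name -> (offset, length)

    def put(self, name, payload: bytes):
        # Append the small object to the volume and record where it landed.
        offset = self.data.tell()
        self.data.write(payload)
        self.index[name] = (offset, len(payload))

    def get(self, name) -> bytes:
        # One index lookup plus one ranged read, instead of per-object files.
        offset, length = self.index[name]
        self.data.seek(offset)
        return self.data.read(length)

volume = Volume()
volume.put("a.txt", b"hello")
volume.put("b.txt", b"world")
```

Reads then touch a single large file rather than many small ones, which is what reduces the metadata and disk overhead the abstract describes.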
Integrated System for Vehicle Clearance and Registration
Efficient management and control of a government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems in handling government receipts and payments. In 2016, Nigeria implemented a unified structure, as recommended by the IMF, in which all government funds are collected in one account; this would reduce borrowing costs, extend credit, and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web based system that gives proper information about a vehicle, from the exact date of importation to registration and renewal of licensing. Vehicle owner information, custom duty information, plate number registration details, and so on can also be efficiently retrieved from the system by any of the agencies without contacting another agency at any point in time. In addition, the number plate will no longer be the only means of vehicle identification, as is presently the case in Nigeria: the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle on payment of duty, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...
The Supermarket Management System deals with the automation of buying and selling of goods and services, including both sales and purchases of items. The project is developed with the objective of making the system reliable, easier, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*
Energy is a key component in a Wireless Sensor Network (WSN) [1]: the system cannot run according to its function without adequate power units, and limited energy is one of the defining characteristics of wireless sensor networks [2]. A lot of research has been done to develop strategies to overcome this problem, one of which is clustering. A popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH) [3]. In LEACH, clustering is used to determine Cluster Heads (CHs), which are then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique, which utilizes Betweenness Centrality (BC) from Social Network Analysis theory and is implemented in the setup phase. In the steady-state phase, a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is implemented. The experiment deployed 100 static nodes in a 100x100 area, with one Base Station at coordinates (50,50), and ran for 5000 rounds to assess the reliability of the system. The performance of the designed routing protocol is evaluated based on network lifetime, throughput, and residual energy. The results show that BC-MBDA* outperforms LEACH. This is influenced by the way LEACH determines the CH dynamically, changing it in every data transmission process, which consumes energy because the computation to determine the CH is repeated for every transmission. In contrast, in BC-MBDA* the CH is determined statically, which decreases energy usage.
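The betweenness-centrality idea used in the setup phase above can be sketched on a toy topology: the node that sits on the most shortest paths between other nodes becomes the cluster head. The 5-node graph below is hypothetical, and this naive all-pairs enumeration is only suitable for small graphs:

```python
from collections import deque
from itertools import combinations

# Hypothetical 5-node topology (adjacency lists).
graph = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
    "D": ["C"], "E": ["C"],
}

def shortest_paths(g, s, t):
    """Enumerate all shortest simple paths from s to t (BFS over paths)."""
    paths, frontier, best = [], deque([[s]]), None
    while frontier:
        path = frontier.popleft()
        if best is not None and len(path) > best:
            break                        # all shortest paths already found
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nxt in g[node]:
            if nxt not in path:
                frontier.append(path + [nxt])
    return paths

def betweenness(g):
    """Score each node by the fraction of shortest paths passing through it."""
    score = {v: 0.0 for v in g}
    for s, t in combinations(g, 2):
        paths = shortest_paths(g, s, t)
        for p in paths:
            for v in p[1:-1]:            # interior nodes only
                score[v] += 1.0 / len(paths)
    return score

scores = betweenness(graph)
cluster_head = max(scores, key=scores.get)   # the static CH choice
```

Here node C, which bridges the two ends of the network, gets the highest score and would be chosen as the static cluster head.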
Security in Software Defined Networks (SDN): Challenges and Research Opportun...
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data, and data centers have thrown up new challenges for legacy networks and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of its benefits over traditional network architectures, its security concerns, and how these can be addressed in future research and related work in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...
Report handling on the "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depends on the system administrator, who manually reads every incoming report [3]. Manual reading can lead to errors in handling complaints [4]; when the data flow is huge and grows rapidly, at least three days are needed to prepare a confirmation, and the process is sensitive to inconsistencies [3]. In this study, the authors propose a model that measures the similarity of an incoming query against archived documents. The authors employ a class-based indexing term weighting scheme and cosine similarity to analyse document similarities. The CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values are used as features for a K-Nearest Neighbour (K-NN) classifier. The best evaluation result is obtained with a 75%/25% training/test data split and the CoSimTFIDF feature, which delivers a high accuracy of 84%; with k = 5 the accuracy reaches 84.12%.
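The TF-IDF weighting and cosine similarity that underlie the features above can be sketched compactly. The toy complaint corpus is invented for illustration; the paper's class-based indexing variants are not reproduced:

```python
import math

# Hypothetical mini-corpus: an incoming complaint plus two archived ones.
docs = ["broken street light", "street light not working", "water pipe leaking"]

def tf_idf(corpus):
    """Return one {term: tf*idf} vector per document."""
    tokenized = [d.split() for d in corpus]
    n = len(corpus)
    vocab = {w for d in tokenized for w in d}
    idf = {w: math.log(n / sum(1 for d in tokenized if w in d)) for w in vocab}
    return [{w: d.count(w) * idf[w] for w in d} for d in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = tf_idf(docs)
# Similarity of the incoming complaint (docs[0]) to each archived document.
scores = [cosine(vectors[0], v) for v in vectors[1:]]
```

The incoming complaint scores higher against the archived report it shares terms with, which is exactly the signal the K-NN classifier consumes.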
Hangul Recognition Using Support Vector Machine
Recognizing Hangul images is more difficult than recognizing Latin script because of the structural arrangement: Hangul is arranged in two dimensions, while Latin runs only from left to right. The current research creates a system to convert Hangul images into Latin text for use as learning material for reading Hangul. In general, the image recognition system is divided into three steps. The first is preprocessing, which includes binarization, segmentation through the connected-component labeling method, and thinning with the Zhang-Suen algorithm to reduce pattern information. The second is extracting the feature from every single image, whose identification is done through the chain code method. The third is recognition using a Support Vector Machine (SVM) with several kernels, applied to both letter images and Hangul word recognition. The letter data consist of 34 letters, each with 15 different patterns, for 510 patterns in total, divided into 3 data scenarios. The highest result achieved is 94.7%, using the SVM polynomial and radial basis function kernels; the recognition rate is influenced by the amount of training data. The Hangul word recognition applies to type 2 Hangul words with 6 different patterns, where the difference between patterns arises from the change of font type. The fonts chosen for training data are Batang, Dotum, Gaeul, Gulim, and Malgun Gothic, while Arial Unicode MS is used for test data. The lowest accuracy, 69%, is achieved with the SVM radial basis function kernel; the SVM linear and polynomial kernels both give 72%.
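The chain-code feature extraction mentioned above can be sketched as a Freeman chain code: each step between consecutive boundary pixels is encoded as one of 8 directions (0 = east, numbered counter-clockwise). The boundary points below are a made-up example, not actual Hangul glyph data:

```python
# Freeman 8-direction codes keyed by (dx, dy), with y increasing upward.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a boundary (list of adjacent pixel coordinates) as direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# A unit square traced from (0, 0): east, north, west, south.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
codes = chain_code(square)
```

The resulting code sequence is the shape descriptor that would then be fed to the SVM.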
Application of 3D Printing in Education
This paper provides a review of the literature concerning the application of 3D printing in the education system. The review identifies that 3D printing is being applied across educational levels [1] as well as in libraries, laboratories, and distance education systems, and finds that it is being used to teach both students and trainers about 3D printing and to develop 3D printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...
In the underwater environment, routing mechanisms are used for the retrieval of information. A routing mechanism typically uses three to four types of nodes: sink nodes, which are deployed on the water surface and collect information; courier/super/AUV or dolphin nodes, powerful nodes deployed in the middle of the water for forwarding packets; ordinary nodes, which are also forwarders and can be deployed from the bottom to the surface of the water; and source nodes, deployed at the seabed, which extract valuable information from the bottom of the sea. Underwater, the battery power of the nodes is limited, and its lifetime can be extended through better selection of the routing algorithm. This paper focuses on energy-efficient routing algorithms and their routing mechanisms for prolonging the battery power of the nodes, and analyses their performance to examine which route selection mechanism performs best.
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...
The design of routing algorithms faces many challenges in the underwater environment: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, 3D localization and deployment, and underwater obstacles (voids). This paper focuses on underwater voids, which affect the overall performance of the entire network. Most researchers have used alternate-path selection mechanisms for the removal of voids, but this research still needs improvement. This paper also examines the architecture and operation of the existing algorithms through their merits and demerits, and presents an analytical performance comparison through which we identify the better approach for the removal of voids.
Decay Property for Solutions to Plate Type Equations with Variable Coefficients
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in R^n (n ≥ 1), which is of the regularity-loss property. By using spectral resolution, we study the pointwise estimates in the spectral space of the fundamental solution to the corresponding linear problem. Appealing to these pointwise estimates, we obtain the global existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem.
Classification Model to Detect Malicious URL via Behaviour Analysis
International Journal of Computer Applications Technology and Research
Volume 6, Issue 3, 133-140, 2017, ISSN: 2319-8656
www.ijcat.com
Classification Model to Detect Malicious URL via Behaviour Analysis

N. Jayakanthan
Department of Computer Applications
Kumaraguru College of Technology
Coimbatore, India

A.V. Ramani
Department of Computer Science
Sri Ramakrishna Mission Vidyalaya College of Arts and Science
Coimbatore, India
Abstract: Detecting malicious URLs is a challenging task in cyberspace. The websites pointed to by malicious URLs inject malicious code into the client machine or steal crucial information. Since detecting a phishing URL is difficult, it is essential to enhance detection techniques against emerging attacks. Most of the existing approaches are feature based and cannot detect dynamic attacks. Attackers commonly use input forms, active content, and an @ symbol embedded in the URL to mount malicious attacks. To detect these attacks, a Behaviour based Malicious URL Finder (BMUF) algorithm is proposed. It analyzes the behaviour of the URL. An FSM based state transition diagram is used to model the URL behaviour into various states, and the state transition from the initial to the final state is used for classification. This approach tests the genuine and malicious behaviour of the URL based on its responses to the user and accurately detects the nature of the URL.
Keywords: Malicious URL, Behaviour based Malicious URL Finder, Finite State Machine, Input form, Active Content
1. INTRODUCTION
Malicious URLs lead users to phishing websites. These websites steal users' confidential information without their knowledge using fake information forms, active content, and @ symbols embedded in URLs. These attacks inject malicious code into the client machine, which then controls the machine and spreads the malicious code to other machines on the same network [22]. Malicious websites resemble the websites of trusted organizations such as banks, government agencies, and e-commerce sites.
Most phishing attacks are drive-by-download attacks, which install malicious code on the user's system to launch the attack [7]. The code is automatically downloaded from the attacker's web page without the user's permission. This behavior is an important feature for detecting web attacks.
URL redirection is commonly used to carry out web attacks: the attacker redirects the visitor to a malicious website [15]. To mount a successful attack, the attacker develops fraudulent websites and motivates users to visit them through malicious URLs. The @ symbol is used to embed a malicious URL within a genuine URL. Input forms and active content also redirect users to malicious websites.
A number of approaches have been developed in recent years to detect malicious attacks. These include detecting suspicious websites [10], educating and training users [12], whitelist- and blacklist-based detection, and feature based analysis of legitimate and malicious URLs.
Most web browsers have built-in phishing detection based on whitelists and blacklists. However, there is no testing approach that lets anti-phishing professionals manually verify a suspected URL and notify administrators to take down fake URLs. Moreover, phishers can exploit cross-site scripting (XSS) vulnerabilities by generating forms, active content, and @ symbols, which motivates us to devise a behaviour based testing approach for malicious URL detection.
The proposed approach detects malicious URLs based on their behaviour. Most existing approaches detect malicious URLs using lexical and host-based features, but present-day attacks are highly dynamic and not detectable through feature analysis alone. We therefore propose a behaviour based approach to detect malicious URLs.
The contributions of the proposed approach are as follows:
- It is a dynamic approach that detects malicious URLs based on their behaviour.
- A Behaviour based Malicious URL Finder algorithm is developed to detect the nature of the URL.
- An FSM based state transition diagram is developed to model the URL behaviour in various states.
- It improves the accuracy of the classification.
- It is a lightweight approach capable of detecting malicious URLs with low performance overhead.
The paper is organized as follows. Section 2 describes related work on malicious URL detection. The architecture of the proposed system is given in section 3. Section 4 deals with the methodology, and section 5 discusses the analysis of the URLs. Finally, section 6 concludes the paper.
2. REVIEW OF RELATED WORK
Hossain Shahriar and Mohammad Zulkernine [10] developed a tool, PhishTester, to test the trustworthiness of websites
based on the behavior of the web application. They used a Finite State Machine that captures the submission of forms with random inputs and the corresponding responses.
Hyunsang Choi et al. [11] analyzed various discriminative features acquired from lexical, webpage, DNS, network, and link popularity properties of the associated URLs. They used SVM to detect malicious URLs, and both RAkEL and ML-kNN to identify attack types.
Sidharth Chhabra et al. [6] and Y. Alshboul et al. [1] found malicious attacks that obfuscate the host with long host names, alternate domains, and misspelled variants. All these attacks hide the malicious URL behind a genuine URL and lead the user to the malicious website.
Cheng Cao and James Caverlee [4] proposed a method to identify malicious URLs through posting-based and click-based features. The behavioral signals are analyzed for classification, and the method yields 86% accuracy. Several machine learning approaches extract URL features to train a classification model on training data. The features are categorized into two classes: static and dynamic. In static analysis [4][13][14][2], the information is analyzed without executing the URL, whereas dynamic approaches use the run-time behavior of the URL for classification.
Charmi Patel and Hiteishi Diwanji [5] analyzed lexical and network-based features using a URL pattern matching algorithm that examines different URL patterns to detect malicious ones. R. K. Nepali et al. [16] used four machine learning algorithms (Naïve Bayes, random forest, support vector machine, and logistic regression) to detect malicious URLs and obtained 97% accuracy with random forest. Y. Tao [21] proposed a dynamic method that mines internet access log files to detect malicious activity.
Peilin Zhao and Steven C. H. Hoi [18] proposed a Cost-Sensitive Online Active Learning (CSOAL) framework to detect malicious URLs; their experimental results demonstrated the efficiency of the algorithm in classification. Blacklist-based approaches [20][3][9] detect URLs using blacklisted profiles but are incapable of detecting emerging attacks.
H. K. Pao et al. [17] calculate the conditional Kolmogorov complexity of a URL with reference to genuine and malicious URLs, comparing the given URL with malicious and genuine URL databases for classification. W. Chu et al. [8] proposed a phishing detection method based on machine learning in which lexical and domain-based features are analyzed for classification; it correctly classifies even changed phishing URLs. E. Sorio et al. [19] proposed a method to detect hidden URLs based on their lexical features; nearly 100 URLs were analyzed, and experimental results show the efficiency of this approach.
3. ARCHITECTURE OF THE PROPOSED SYSTEM
The architecture of the proposed system is given in figure 1. The components are the browser, behavioural extraction, the FSM model, the BMUF classifier, and final classification.
Figure 1. Architecture diagram
3.1 Browser
The URL is the input to the browser. The behaviour of the URL is extracted for analysis.
3.2 FSM Model
The FSM state transition diagram models the URL behaviour in various states. The states are derived from 3 inputs and 13 responses. The transition from the initial to the final state leads to the classification.
3.3 BMUF Classifier
The Behaviour based Malicious URL Finder (BMUF) is a rule-based algorithm which analyzes the URL using the FSM state transition diagram. If any malicious behaviour is detected, it marks the URL as malicious, collects the behaviour, and reports it to the user.
4. METHODOLOGY
The proposed method uses the Behaviour based Malicious URL Finder (BMUF) algorithm to analyze the behavior of a URL and detect whether it is genuine or malicious. The FSM based state transition diagram is used to capture the URL's behavior in various states, and the state transition from the initial to the final state classifies the nature of the URL. This classifier improves the accuracy of malicious URL detection.
4.1 Algorithm
Algorithm: Behaviour based Malicious URL Finder (BMUF)
// Input: URL of the webpage
// Output: Genuine or Malicious
MB = ∅  // set of malicious behaviours
Step 1: Consider the input URL.
    If automatic content download (ad) occurs then
        Status = Malicious; MB = MB ∪ {ad}
Step 2: Check whether the webpage pointed to by the URL contains an input form.
    When the user submits the input form:
    a. If the user is redirected to a new webpage and the URL of that page
       contains malicious words (mw) then
           Status = Malicious; MB = MB ∪ {mw}
    b. If the user is redirected to a new webpage and content is automatically
       downloaded (ad) then
           Status = Malicious; MB = MB ∪ {ad}
Step 3: Check whether the webpage pointed to by the URL contains active content (ac).
    When the user accesses the active content:
    a. If the user is redirected to a new webpage and the URL of that page
       contains malicious words (mw) then
           Status = Malicious; MB = MB ∪ {mw}
    b. If the user is redirected to a new webpage and content is automatically
       downloaded (ad) then
           Status = Malicious; MB = MB ∪ {ad}
Step 4: Check whether the URL contains the @ symbol.
    a. If it redirects the user to a webpage whose URL contains malicious
       words (mw) then
           Status = Malicious; MB = MB ∪ {mw}
    b. If the user is redirected to a new webpage and content is automatically
       downloaded (ad) then
           Status = Malicious; MB = MB ∪ {ad}
Step 5: If Status = Malicious then
        display "URL is malicious" and the set of malicious behaviours MB
    else
        display "URL is genuine"
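The rule-based checks above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the `behaviour` dictionary and its field names (`auto_download`, `input_form`, `active_content`, `at_symbol`, `malicious_word`) are hypothetical stand-ins for the behaviours BMUF observes.

```python
def bmuf_classify(behaviour):
    """Classify a URL from its observed behaviour (sketch of Steps 1-5)."""
    mb = []  # collected malicious behaviours (MB in the algorithm)

    # Step 1: content downloaded automatically from the page itself
    if behaviour.get("auto_download"):
        mb.append("ad")

    # Steps 2-4: input form, active content, and @-symbol redirection.
    # Each trigger maps to None, or to a dict describing the redirect.
    for trigger in ("input_form", "active_content", "at_symbol"):
        redirect = behaviour.get(trigger)
        if not redirect:
            continue
        if redirect.get("malicious_word"):
            mb.append("mw")  # redirected URL contains a malicious word
        if redirect.get("auto_download"):
            mb.append("ad")  # redirected page auto-downloads content

    return ("Malicious", mb) if mb else ("Genuine", [])

# Example: an input form that redirects to a URL containing a malicious word
print(bmuf_classify({"input_form": {"malicious_word": True}}))
# -> ('Malicious', ['mw'])
```

The classification is monotone: any single malicious behaviour suffices to mark the URL malicious, matching the algorithm's Status flag.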
4.2 FSM Model
The behaviours of the URL are modeled using a Finite State Machine (FSM). Various symptoms of malicious and genuine URLs for the FSM are developed based on submission of the information window and access to active content; the norms are established by our literature survey. The malicious behaviours are identified as follows:
a. Malicious content is automatically downloaded from the web page of the URL.
b. The user accesses an input form or active content that leads to another webpage from which content is automatically downloaded.
c. A malicious word or @ symbol is present in the URL.
The FSM is represented by <Q, Σ, q0, δ, F>, where Q is the finite set of states, q0 is the initial state, Σ is the finite set of inputs, δ is the state transition function, and F is the set of final states.
(i) Q is a finite nonempty set of states, q0 to q13, which represent the various behavioral states of the URL.
(ii) Σ is a finite nonempty set of inputs called the input alphabet. It is a combination of test cases <Ii, Ki>.
(iii) δ is a function which maps Q × Σ into Q, usually called the direct transition function. It describes the change of state during a transition and is usually represented by a transition table or transition diagram. A transition represents a behavioral change of the URL.
(iv) q0 is the initial state. It represents the initial stage of the URL.
(v) F is the set of final states; there may be more than one. The final states characterize the genuine or malicious behavior of the URL.
Σ = {<I0,K1>, …, <In,Kn>} is the set of input symbols. Let q0 be the initial state of the machine; it represents the input URL. The state q1 of the machine for the input <I0,K1> is:

q1 = δ(q0, <I0,K1>) = δ1(q0, <I0,K1>), where δ1 : Q × Σ → Q

The change in state due to the second input symbol <I0,K2> is q2:

q2 = δ(q1, <I0,K2>) = δ(δ1(q0, <I0,K1>), <I0,K2>)
   = δ2(q0, <I0,K1><I0,K2>)      (1)

where δ2 : Q × Σ2 → Q.

In general, the function of the FSM is:

qn = δn(q0, <I0,K1> … <In,Kn>)
   = δ(δn-1(q0, <I0,K1> … <In-1,Kn-1>), <In,Kn>)      (2)

Equations 1 and 2 show the mapping function from one state to another in the proposed approach.
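As an illustration, the transition function δ can be written as a lookup table over (state, <input, response>) pairs, and the extended function δn of equation 2 is a fold over the input sequence. This is a sketch, not the authors' code; only three transitions from the state diagram are shown, and the full machine covers q0 to q13.

```python
# A few transitions of the FSM, keyed by (state, (input, response)).
delta = {
    ("q0", ("I0", "K14")): "q6",  # content auto-downloaded straight away
    ("q0", ("I2", "K2")):  "q2",  # input form submitted, nothing suspicious yet
    ("q2", ("I2", "K3")):  "q4",  # redirected to a URL with a malicious word
}

def run(start, symbols):
    """Extended transition function (eq. 2): fold delta over a sequence."""
    state = start
    for sym in symbols:
        state = delta[(state, sym)]
    return state

print(run("q0", [("I2", "K2"), ("I2", "K3")]))  # -> q4 (a phishing final state)
```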
The FSM model is represented as a set of inputs (I0 to I2) and corresponding responses (K1 to K14), discussed in detail in the following paragraphs. A URL is classified as malicious or benign based on the traversal from the initial state to a final state. The state transitions are given in figure 2. Some final states (q2, q5, q7, q8, q11, q13) are legitimate, while others (q4, q6, q9, q10, q12) are phishing.
A state transition occurs for a given input and its subsequent response. A transition is represented as an <input, response> pair, as shown in figure 2; for example, <I2, K2> represents the input I2 and its corresponding response K2. There are three kinds of inputs:
1. The input URL (U).
2. The URL leads to a webpage which contains malicious active content.
3. The URL leads to a webpage which contains an input form [example: https://www.perspectiverisk.com/wp-content/uploads/2016/09/Login.png].
The features that represent the set of responses are given below.
iw : the user fills the information window and submits it
ac : the user accesses the active content
@  : the @ symbol is present in the URL
re : the page is redirected; this may happen due to submission of an input form, user interaction with active content, or a malicious domain pointed to by the @ symbol
mw : the URL contains a malicious word
p  : presence of the redirected page
d  : content is automatically downloaded from the URL's web page (counterfeit executable programs)
These features are used to classify whether the URL is phishing or genuine. The ! symbol represents the absence of a particular feature (!iw represents the absence of the information window).
The proposed approach distinguishes malicious URLs from legitimate ones based on the behavior of the URL. The inputs and responses are given in table 1.
Table 1. Inputs and responses

Name | Representation           | Explanation
I0   | U                        | Input URL
I1   | AC                       | The URL leads to a webpage that contains active content
I2   | I                        | The URL leads to a webpage that contains an input form
K1   | !iw !ac !@ !re !mw !p !d | No information window, no active content, no @ symbol in the URL, no redirection, no malicious word, no redirected page, no automatic content download
K2   | iw !ac !@ !re !mw !p !d  | User submits the information window; no active content, no @ symbol, no redirection, no malicious word, no redirected page, no automatic content download
K3   | iw !ac !@ re mw p !d     | User submits the information window; redirection occurs; malicious word present in the URL; redirected page present; no automatic content download
K4   | iw !ac !@ re !mw p !d    | User submits the information window; redirection occurs; no malicious word; redirected page present; no automatic content download
K5   | iw !ac !@ re !mw p d     | User submits the information window; redirection occurs; no malicious word; redirected page present; automatic content download occurs
K6   | !iw ac !@ !re !mw !p !d  | Active content accessed; no @ symbol, no redirection, no malicious word, no redirected page, no automatic content download
K7   | !iw ac !@ re !mw p !d    | Active content accessed; redirection occurs; no malicious word; redirected page present; no automatic content download
K8   | !iw ac !@ re mw p !d     | Active content accessed; redirection occurs; malicious word present in the URL; redirected page present; no content download
K9   | !iw ac !@ re !mw p d     | Active content accessed; redirection occurs; no malicious word; redirected page present; automatic content download occurs
K10  | !iw !ac @ !re !mw !p !d  | @ symbol present in the URL; no redirection, no malicious word, no redirected page, no content download
K11  | !iw !ac @ re mw p !d     | @ symbol present in the URL; redirection occurs; malicious word present in the URL; redirected page present; no download
K12  | !iw !ac @ re !mw p !d    | @ symbol present in the URL; redirection occurs; no malicious word; redirected page present; no download
K13  | !iw !ac @ re !mw p d     | @ symbol present in the URL; redirection occurs; no malicious word; redirected page present; automatic content download occurs
K14  | !iw !ac !@ !re !mw !p d  | No information window, no active content, no @ symbol, no redirection, no malicious word, no redirected page; automatic content download occurs
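Since the K-codes are simply names for feature combinations, a response can be recovered from its feature tuple. The sketch below illustrates this encoding with only three of the fourteen responses; the feature order and function name are ours, not the paper's.

```python
# Feature order: (iw, ac, at, re, mw, p, d); True = feature present,
# False = the "!"-prefixed absence in Table 1.
RESPONSES = {
    (False, False, False, False, False, False, False): "K1",
    (True,  False, False, True,  True,  True,  False): "K3",
    (False, False, False, False, False, False, True):  "K14",
}

def response_code(iw, ac, at, re, mw, p, d):
    """Look up the K-code for an observed feature combination."""
    return RESPONSES.get((iw, ac, at, re, mw, p, d), "unknown")

# Only automatic content download observed -> K14
print(response_code(False, False, False, False, False, False, True))  # -> K14
```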
Figure 2. FSM state transition diagram
Figure 2 shows the state diagram of the FSM model. Here q0 is the initial state. If the page downloaded from the given URL contains no information window, no active content, no @ symbol, no malicious word, no redirection, and no automatic content download (code injection), then the next state is q1. The finite set of states is (q1, q2, q3, q4, q5, q6, q7, q8, q9, q10, q11, q12, q13); each state represents a behavioral state of the URL.
4.3 State Transition
The traversals of the URL help to identify it as malicious or legitimate. If the URL has one of the state sequences given in table 2, it is considered malicious (malicious final states are shown as dark circles in figure 2).
Table 2. List of malicious state sequences

States                | Test cases                                  | Description
q0, q6                | <I0,K14>                                    | The content is automatically downloaded from the web page of the URL.
q0, q2, q4            | <I2,K2>, <I2,K3>                            | The user fills the information window and clicks the submit button; the user is redirected to a new page whose URL contains a malicious word.
q0, q2, q5, q6        | <I2,K2>, <I2,K4>, <I2,K5>                   | The user fills the information window and clicks the submit button; the user is redirected to a new page, and the contents are automatically downloaded from that page.
q0, q1, q7, q8, q10   | <I0,K1>, <I1,K6>, <I1,K7>, <I1,K9>          | The user accesses the active content; it leads the user to a new web page, and the contents are automatically downloaded from that page.
q0, q1, q7, q9        | <I0,K1>, <I1,K6>, <I1,K8>                   | The user accesses the active content; it leads the user to a new web page whose URL contains a malicious word.
q0, q1, q11, q12      | <I0,K1>, <I1,K10>, <I1,K11>                 | The URL contains an @ symbol; it redirects the user to another webpage whose URL contains a malicious word.
q0, q1, q11, q13, q10 | <I0,K1>, <I1,K10>, <I1,K12>, <I1,K13>       | The URL contains an @ symbol that leads the user to a new web page, and the contents are automatically downloaded from that page.
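With the malicious sequences enumerated, classification by state traversal reduces to comparing the observed <input, response> sequence against them. A sketch under that assumption, with only two of the seven sequences of Table 2 listed:

```python
# Two of the malicious test-case sequences from Table 2.
MALICIOUS_SEQUENCES = {
    (("I0", "K14"),),              # q0 -> q6: immediate automatic download
    (("I2", "K2"), ("I2", "K3")),  # q0 -> q2 -> q4: form redirects to a mw URL
}

def classify(observed):
    """Malicious if the observed sequence matches a known malicious traversal."""
    return "Malicious" if tuple(observed) in MALICIOUS_SEQUENCES else "Genuine"

print(classify([("I2", "K2"), ("I2", "K3")]))  # -> Malicious
print(classify([("I0", "K1")]))                # -> Genuine
```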
5. ANALYSIS OF THE URL
The set of URLs used for analysis is given below:
1. https://www.perspectiverisk.com/wp-content/uploads/2016/09/Login.png
2. http://demo.smartscreen.msft.net/other/explotframe.html
3. www.yahoo.com
4. https://phishme.com/macro-based-anti-analysis/acf.css
5. http://geniune.com@malicious.com/config/change.html
The proposed Behaviour based Malicious URL Finder (BMUF) algorithm analyzes the behaviour of each URL using various rules and classifies it as genuine or malicious. The behavior and classification are given in table 3.
Table 3. BMUF classification

URL No. | URL | Input form | Active content | @ symbol | Redirection | Malicious word | Auto content download | Classification
1       | Y   | Y          | N              | N        | Y           | N              | Y                     | Malicious
2       | Y   | N          | Y              | N        | Y           | Y              | N                     | Malicious
3       | Y   | N          | N              | N        | N           | N              | N                     | Genuine
4       | Y   | N          | Y              | N        | N           | N              | Y                     | Malicious
5       | Y   | N          | N              | Y        | N           | Y              | N                     | Malicious
The Finite State Machine (FSM) models the behavior as various states. The state transition from the initial state to a final state is used for classification. The state transitions for each URL are given in table 4.
Table 4. State transitions

URL No. | States              | Test cases                          | Description                                                                                                          | Classification
1       | q0, q6              | <I0,K14>                            | The content is automatically downloaded from the web page of the URL.                                                | Malicious
2       | q0, q1, q7, q9      | <I0,K1>, <I1,K6>, <I1,K8>           | The user accesses the active content; it leads the user to a new web page whose URL contains a malicious word.       | Malicious
3       | q0, q1              | <I0,K1>                             | The URL is present but no malicious activities are detected.                                                         | Genuine
4       | q0, q1, q7, q8, q10 | <I0,K1>, <I1,K6>, <I1,K7>, <I1,K9>  | The user accesses the active content; it leads the user to a new web page, and the contents are automatically downloaded from that page. | Malicious
5       | q0, q1, q11, q12    | <I0,K1>, <I1,K10>, <I1,K11>         | The URL contains an @ symbol; it redirects the user to another webpage whose URL contains a malicious word.          | Malicious
The classification of the list of URLs is given in table 5.

Table 5. Classification

URL No. | URL                                                                  | Classification
1       | https://www.perspectiverisk.com/wp-content/uploads/2016/09/Login.png | Malicious
2       | http://demo.smartscreen.msft.net/other/explotframe.html              | Malicious
3       | www.yahoo.com                                                         | Genuine
4       | https://phishme.com/macro-based-anti-analysis/acf.css                 | Malicious
5       | http://geniune.com@malicious.com/config/change.html                   | Malicious
6. CONCLUSION
Web attacks are a challenging problem for web users, and detecting malicious URLs is a complex task due to their dynamic behavior. The proposed classification model detects malicious URLs based on behavior. The Behaviour based Malicious URL Finder (BMUF) algorithm analyzes the behavior in a sequence of steps to detect whether a URL is genuine or malicious. The Finite State Machine state transition diagram captures the behaviour as various states, and the state transition from the initial to the final states leads to the classification. Thirteen states are derived from 3 inputs and 13 responses, and the final states represent whether a URL is genuine or malicious. The proposed algorithm improves the accuracy of the classification.
7. REFERENCES
1. Y. Alshboul, R. Nepali, and Y. Wang, "Detecting malicious short URLs on Twitter," 2015.
2. B. Eshete, A. Villafiorita, and K. Weldemariam, "BINSPECT: Holistic analysis and detection of malicious web pages," in Security and Privacy in Communication Networks. Springer, 2013, pp. 149-166.
3. S. Bo, M. Akiyama, Y. Takeshi, and M. Hatada, "Automating URL blacklist generation with similarity search approach," IEICE Transactions on Information and Systems, vol. 99, no. 4, pp. 873-882, 2016.
4. C. Cao and J. Caverlee, "Detecting spam URLs in social media via behavioral analysis," in Advances in Information Retrieval. Springer, 2015, pp. 703-714.
5. C. Patel and H. Diwanji, "Research on web content extraction and noise reduction through text density using malicious URL pattern detection," International Journal of Scientific Research in Science, Engineering and Technology, vol. 2, no. 3, ISSN 2394-4099, May-June 2016.
6. S. Chhabra, A. Aggarwal, F. Benevenuto, and P. Kumaraguru, "Phi.sh/$ocial: the phishing landscape through short URLs," in Proceedings of the 8th Annual Collaboration, Electronic Messaging, Anti-Abuse and Spam Conference. ACM, 2011, pp. 92-101.
7. S. Chitra, K. S. Jayanthan, S. Preetha, and R. N. Uma Shankar, "Predicate based algorithm for malicious web page detection using genetic fuzzy systems and support vector machine," International Journal of Computer Applications, vol. 40, no. 10, 2012. DOI: 10.5120/5000-7277.
8. W. Chu, B. B. Zhu, F. Xue, X. Guan, and Z. Cai, "Protect sensitive sites from phishing attacks using features extractable from inaccessible phishing URLs," in Communications (ICC), 2013 IEEE International Conference on. IEEE, 2013, pp. 1990-1994.
9. M. Felegyhazi, C. Kreibich, and V. Paxson, "On the potential of proactive domain blacklisting," LEET, vol. 10, pp. 6-6, 2010.
10. H. Shahriar and M. Zulkernine, "Trustworthiness testing of phishing websites: A behavior model-based approach," Future Generation Computer Systems, vol. 28, no. 8, pp. 1258-1271, 2012. DOI: 10.1016/j.future.2011.02.001.
11. H. Choi, B. B. Zhu, and H. Lee, "Detecting malicious web links and identifying their attack types," in WebApps, June 2011.
12. D. Irani, S. Webb, J. Giffin, and C. Pu, "Evolutionary study of phishing," in Proc. of the 3rd Anti-Phishing Working Group eCrime Researchers Summit, Atlanta, Georgia, October 2008, pp. 1-10.
13. J. Ma, L. K. Saul, S. Savage, and G. M. Voelker, "Beyond blacklists: learning to detect malicious web sites from suspicious URLs," in Proceedings of the 15th ACM International Conference on Knowledge Discovery and Data Mining. ACM, 2009, pp. 1245-1254.
14. J. Ma, L. K. Saul, S. Savage, and G. M. Voelker, "Learning to detect malicious URLs," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, p. 30, 2011.
15. M. Akiyama, T. Yagi, T. Yada, T. Mori, and Y. Kadobayashi, "Analyzing the ecosystem of malicious URL redirection through longitudinal observation from honeypots," Computers & Security, January 2017.
16. R. K. Nepali and Y. Wang, "You look suspicious!!: Leveraging visible attributes to classify malicious short URLs on Twitter," in 2016 49th Hawaii International Conference on System Sciences (HICSS). IEEE, 2016, pp. 2648-2655.
17. H. K. Pao, Y.-L. Chou, and Y.-J. Lee, "Malicious URL detection based on Kolmogorov complexity estimation," in Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology, vol. 1. IEEE Computer Society, 2012, pp. 380-387.
18. P. Zhao and S. C. H. Hoi, "Cost-sensitive online active learning with application to malicious URL detection," in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, Illinois, USA, August 2013, pp. 919-927.
19. E. Sorio, A. Bartoli, and E. Medvet, "Detection of hidden fraudulent URLs within trusted sites using lexical features," in Availability, Reliability and Security (ARES), 2013 Eighth International Conference on. IEEE, 2013, pp. 242-247.
20. B. Sun, M. Akiyama, T. Yagi, M. Hatada, and T. Mori, "AutoBLG: Automatic URL blacklist generator using search space expansion and filters," in 2015 IEEE Symposium on Computers and Communication (ISCC). IEEE, 2015, pp. 625-631.
21. Y. Tao, "Suspicious URL and device detection by log mining," Ph.D. dissertation, Applied Sciences: School of Computing Science, 2014.
22. Z.-Y. Li, R. Tao, Z.-H. Cai, and H. Zhang, "A web page malicious code detect approach based on script execution," in International Conference on Natural Computation, 2009, pp. 308-312. DOI: 10.1109/ICNC.2009.363.