The increasing use of the internet all over the world, in households as well as in corporate firms, has led to an unprecedented rise in cyber-crime. A major share of these crimes consists of attacks carried out over the internet itself; phishing attacks, SSL attacks and various other hacking attacks generally fall into this category. Security against these attacks is a major internet-security concern today, when internet penetration runs so deep. The internet has no doubt made our lives very convenient, providing many facilities at negligible cost; communication, for instance, has become lightning fast and very cheap. But the internet can pose added threats to users who are not well versed in its ways and are unaware of the security risks attached to it. Phishing attacks, the Nigerian scam, spam attacks, SSL attacks and other hacking attacks are among the most common recent attacks that compromise the privacy of internet users. This paper discusses a Knowledge Base Compound approach, based on query operations and parsing techniques, that counters these internet attacks from within the web browser itself. In this approach we propose to analyze web URLs before the actual site is visited, so as to provide security against the web attacks mentioned above. The approach employs various parsing operations and query processing, using several techniques to detect phishing attacks as well as other web attacks. Since it operates entirely through the browser, its only cost is browsing speed. It also includes a crawling operation that retrieves URL details to further improve the precision with which a compromised site is detected. Using the proposed methodology, a new browser can easily detect phishing attacks, SSL attacks, and other hacking attacks.
With this browser-based approach, we can achieve 96.94% security against phishing as well as other web-based attacks.
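The pre-visit URL analysis described above can be sketched with a few common lexical heuristics. The specific checks and thresholds below are illustrative assumptions, not the paper's actual parsing rules:

```python
import re
from urllib.parse import urlparse

def suspicious_url_signals(url):
    """Return a list of heuristic warning signs found in a URL.

    The checks and thresholds here are illustrative assumptions,
    not the paper's actual Knowledge Base rules.
    """
    signals = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    # A raw IP address instead of a domain name often hides the real site.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        signals.append("ip-address-host")
    # An '@' in a URL causes browsers to ignore everything before it.
    if "@" in url:
        signals.append("at-symbol")
    # Very long URLs are frequently used to obscure the true domain.
    if len(url) > 75:
        signals.append("long-url")
    # Hyphenated look-alike hosts, e.g. paypal-secure.example.com.
    if "-" in host:
        signals.append("hyphen-in-host")
    # Plain HTTP on a page that asks for credentials is a red flag.
    if parsed.scheme != "https":
        signals.append("no-https")
    return signals

print(suspicious_url_signals("http://192.168.10.5/login@verify-account"))
# → ['ip-address-host', 'at-symbol', 'no-https']
```

A real browser-side checker would combine such signals with blacklist queries and crawled URL details before deciding whether to warn the user.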
Review of the machine learning methods in the classification of phishing attack (journalBEEI)
Computer networks have developed rapidly, as can be seen from the worldwide trend of users needing to connect their computers to the Internet. This shows how important Internet access has become, whether for work or for social media. However, with this widespread use of computer networks, the privacy of users is in danger, especially users who do not install security systems on their computers. This allows hackers to break in and commit network attacks, which is very dangerous for Internet users because hackers can steal confidential information such as bank or social media login credentials. Phishing attacks are among the attacks that can be made. The goal of this study is to review the types of phishing attacks and the current methods used to prevent them. Based on the literature, machine learning is widely used to prevent phishing attacks, and several algorithms can be applied. This study focuses on algorithms that have been thoroughly developed, and the methods for implementing them are discussed in detail.
The document describes a proposed system called Link Guard for detecting phishing websites and emails. Link Guard utilizes the characteristics of hyperlinks in phishing attacks to classify links as legitimate or phishing. It works by collecting URL information, storing it in a database, analyzing the links using the Link Guard algorithm, alerting users to potential phishing links, and logging events. The algorithm aims to detect both known and unknown phishing attacks in real-time across email and notification systems.
This document outlines an intelligent phishing detection and protection scheme using neuro fuzzy modeling. It extracts 288 features from 5 inputs - legitimate site rules, user behavior profiles, a phishing website database, user specific sites, and email pop-ups. These features are analyzed and assigned values from 0 to 1. A neuro fuzzy model is trained using 2-fold cross validation on these features to classify websites as phishing, legitimate, or suspicious. The proposed scheme aims to accurately detect phishing sites in real time to better protect online users. Future work includes adding more features and parameters to achieve 100% accuracy for a browser plugin.
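The 2-fold cross-validation step in the scheme above can be sketched in plain Python. The toy "model" below (a fixed mean-value threshold) is a hypothetical stand-in for the neuro-fuzzy classifier, and the data and labels are invented for illustration:

```python
import random

def two_fold_cv(samples, labels, train_fn, predict_fn, seed=0):
    """Estimate accuracy with 2-fold cross-validation.

    samples: list of feature vectors with values in [0, 1]
    labels:  parallel list of class labels
    train_fn(train_x, train_y) -> model
    predict_fn(model, x) -> predicted label
    """
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    half = len(idx) // 2
    folds = (idx[:half], idx[half:])
    correct = 0
    # Each half serves once as the test set and once as the training set.
    for test_fold, train_fold in ((folds[0], folds[1]), (folds[1], folds[0])):
        model = train_fn([samples[i] for i in train_fold],
                         [labels[i] for i in train_fold])
        correct += sum(predict_fn(model, samples[i]) == labels[i]
                       for i in test_fold)
    return correct / len(samples)

# Toy stand-in for the neuro-fuzzy model: classify by mean feature value.
def train_mean_threshold(xs, ys):
    return 0.5  # fixed threshold; a real model would be fitted here

def predict_mean_threshold(threshold, x):
    return "phishing" if sum(x) / len(x) > threshold else "legitimate"

data = [[0.9, 0.8], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1]]
tags = ["phishing", "phishing", "legitimate", "legitimate"]
print(two_fold_cv(data, tags, train_mean_threshold, predict_mean_threshold))
# → 1.0
```

In the actual scheme, `train_fn` would fit the neuro-fuzzy model on the 288 extracted features rather than use a fixed threshold.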
This document summarizes literature on detecting phishing attacks. It begins with an introduction defining phishing and explaining the broad scope of the problem. It then outlines the document's objectives and various definitions related to phishing. Several techniques for mitigating, detecting, and evaluating phishing attacks are discussed, including user training, software classification, offensive defense, correction approaches, and prevention. Evaluation metrics and examples of detection methods like passive/active warnings, visual similarity analysis, and blacklists are also summarized. The conclusion recommends education as the best defense and outlines common characteristics of phishing attacks.
Author: Dr Sandeep Sood
Password-based authentication is used in online web applications due to its simplicity and convenience. Efficient password-based authentication schemes are required to authenticate the legitimacy of remote users, or data origin over an insecure communication channel. Password-based authentication schemes are highly susceptible to phishing attacks.
The document discusses ethical hacking. It begins by defining hacking and different types of hackers, including white hat, black hat, and grey hat hackers. It then defines ethical hacking as hacking done with consent and for beneficial purposes, such as identifying security vulnerabilities. The document outlines the techniques used in ethical hacking, including information gathering, vulnerability scanning, exploitation, and analysis. It discusses the importance of ethical hacking for organizations and the code of conduct ethical hackers follow. Overall, the document provides an overview of ethical hacking, its purpose, and the methods used.
In spite of the development of prevention strategies, phishing remains an essential risk even after primary countermeasures such as reactive URL blacklisting. Blacklisting is insufficient because of the short lifetime of phishing websites. To overcome this problem, an effective solution is to develop a real-time phishing-website detection method. This research introduces the PrePhish algorithm, an automated machine learning approach that analyzes phishing and non-phishing URLs to produce reliable results. It observes that phishing URLs typically have only a couple of connections between the registered-domain part of the URL and its path or query part. Using these connections, a URL is characterized by its inter-relatedness, which is estimated from features mined from URL attributes. These features are then fed to a machine learning technique to detect phishing URLs in a real dataset. The classification of phishing and non-phishing websites is implemented by finding a range value and a threshold value for each attribute using decision-making classification. The method is also evaluated in Matlab using three major classifiers, SVM, Random Forest and Naive Bayes, to assess how it performs on the dataset.
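The per-attribute range/threshold classification described above can be sketched as follows. This is a minimal illustration of the idea (a midpoint threshold per attribute plus a majority vote), not PrePhish's actual procedure, and the attribute names and toy data are assumptions:

```python
def learn_thresholds(samples, labels):
    """For each attribute, take the midpoint between the class means
    as a threshold, and record which side the phishing class lies on."""
    n_attrs = len(samples[0])
    thresholds = []
    for a in range(n_attrs):
        phish = [s[a] for s, y in zip(samples, labels) if y == 1]
        legit = [s[a] for s, y in zip(samples, labels) if y == 0]
        mid = (sum(phish) / len(phish) + sum(legit) / len(legit)) / 2
        phishing_above = sum(phish) / len(phish) > mid
        thresholds.append((mid, phishing_above))
    return thresholds

def classify(thresholds, sample):
    """Majority vote over the per-attribute threshold decisions."""
    votes = sum(
        1 if (value > mid) == phishing_above else 0
        for value, (mid, phishing_above) in zip(sample, thresholds)
    )
    return 1 if votes > len(thresholds) / 2 else 0

# Toy data: attribute 0 = URL length, attribute 1 = number of dots.
train_x = [[90, 5], [80, 6], [30, 2], [25, 1]]
train_y = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate
model = learn_thresholds(train_x, train_y)
print(classify(model, [85, 4]))  # → 1 (phishing)
```

A full implementation would learn a range per attribute from real URL features and then hand the resulting feature vector to SVM, Random Forest or Naive Bayes, as the paper does.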
Detecting phishing websites using associative classification (2) (Alexander Decker)
This document summarizes research on using data mining techniques like associative classification algorithms to detect phishing websites. It discusses how phishing aims to steal personal information through fake websites mimicking real ones. The paper reviews previous work applying classification and association rule mining to phishing detection and compares algorithms like CBA and MCAR. The goal is to investigate using automated data mining to help classify websites as phishing or not based on characteristics like URL errors.
Phishing is a social engineering technique whose main aim is to obtain user information such as user IDs, passwords and credit card details, resulting in financial loss to the user. Detecting phishing is a challenging problem because it relies on human vulnerabilities. This paper proposes detecting phishing websites using different machine learning approaches, evaluating several classification models that predict malicious and benign websites. Experiments are performed on a dataset of malicious and benign websites, and the results show that the proposed algorithms achieve high detection accuracy. Nakkala Srinivas Mudiraj "Detecting Phishing using Machine Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23755.pdf
Paper URL: https://www.ijtsrd.com/computer-science/computer-security/23755/detecting-phishing-using-machine-learning/nakkala-srinivas-mudiraj
This document describes Mudpile, a system for detecting malicious URLs using machine learning. It collects data from URLs, extracts features related to phishing indicators, trains a classification model to label URLs as legitimate or phishing, and exposes the model as a REST API. The system is deployed to classify incoming web traffic in real-time and block phishing sites. It is retrained periodically for improved accuracy and to address new phishing techniques.
A survey on detection of website phishing using MCAC technique (bhas_ani)
This document discusses a technique called Multi-label Classifier based Associative Classification (MCAC) for detecting phishing websites. MCAC is a data mining approach that uses machine learning algorithms to generate rules for classifying websites as phishing or legitimate. It works by extracting features from websites and training a classifier on these features to accurately identify phishing websites. The proposed system uses MCAC to extract 16 features from websites and generate rules to classify websites, with the goal of detecting phishing attacks and warning users. MCAC is shown to identify phishing websites with high accuracy.
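The rule-generation idea behind associative classification can be sketched with single-feature class-association rules filtered by support and confidence. This is a simplified illustration of the general technique, not the actual MCAC algorithm, and the feature names and thresholds are assumptions:

```python
from collections import Counter

def mine_rules(transactions, labels, min_support=0.3, min_confidence=0.8):
    """Mine single-feature class-association rules (feature -> class).

    transactions: list of feature sets, e.g. {"ip_host", "long_url"}
    labels:       parallel list of class names
    Returns rules as {feature: (predicted_class, confidence)}.
    """
    n = len(transactions)
    feature_count = Counter()
    pair_count = Counter()
    for feats, label in zip(transactions, labels):
        for f in feats:
            feature_count[f] += 1
            pair_count[(f, label)] += 1
    rules = {}
    for (f, label), count in pair_count.items():
        support = count / n
        confidence = count / feature_count[f]
        if support >= min_support and confidence >= min_confidence:
            # Keep only the most confident rule per feature.
            if f not in rules or confidence > rules[f][1]:
                rules[f] = (label, confidence)
    return rules

def classify(rules, feats, default="legitimate"):
    """Predict with the highest-confidence matching rule."""
    matches = [rules[f] for f in feats if f in rules]
    if not matches:
        return default
    return max(matches, key=lambda r: r[1])[0]

sites = [{"ip_host", "long_url"}, {"ip_host"}, {"https"}, {"https", "long_url"}]
tags = ["phishing", "phishing", "legitimate", "legitimate"]
rules = mine_rules(sites, tags)
print(classify(rules, {"ip_host"}))  # → phishing
```

MCAC itself mines multi-label rules over 16 website features; the support/confidence filtering shown here is the common core of that family of algorithms.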
Detecting malicious URLs using binary classification through ada boost algori... (IJECEIAES)
A malicious Uniform Resource Locator (URL) is a frequent and severe menace to cybersecurity. Malicious URLs are used to extract unsolicited information, trick inexperienced end users into falling for scams, and cause losses of billions of dollars each year. It is crucial to identify such URLs and respond to them appropriately. Usually this detection is done through blacklists, but blacklists cannot be exhaustive and cannot recognize zero-day malicious URLs, so machine learning procedures should be incorporated to improve the detection of malicious-URL indicators. In this study, we have developed a complete prototype for malicious URL detection using machine learning methods. In particular, we have attempted an exact formulation of malicious URL detection from a machine learning perspective and proposed an approach using the AdaBoost algorithm; the proposed approach achieves higher accuracy than other existing algorithms.
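The AdaBoost idea, boosting many weak learners into a strong binary classifier, can be sketched with decision stumps in plain Python. The URL features and toy data below are illustrative assumptions, not the paper's dataset or feature set:

```python
import math

def train_stump(X, y, w):
    """Find the threshold stump minimizing weighted error."""
    best = None
    for f in range(len(X[0])):
        for thresh in sorted({x[f] for x in X}):
            for polarity in (1, -1):
                preds = [polarity if x[f] >= thresh else -polarity for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, thresh, polarity)
    return best

def adaboost(X, y, rounds=5):
    """Train an AdaBoost ensemble of decision stumps (labels are +1/-1)."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, f, thresh, pol = train_stump(X, y, w)
        err = max(err, 1e-10)  # avoid division by zero on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, thresh, pol))
        # Increase weight on misclassified samples, then renormalize.
        preds = [pol if x[f] >= thresh else -pol for x in X]
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * (pol if x[f] >= thresh else -pol)
                for alpha, f, thresh, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy URL features: [url_length, dot_count]; +1 = malicious, -1 = benign.
X = [[90, 6], [75, 5], [88, 7], [20, 1], [25, 2], [30, 1]]
y = [1, 1, 1, -1, -1, -1]
model = adaboost(X, y, rounds=3)
print([predict(model, x) for x in X])  # → [1, 1, 1, -1, -1, -1]
```

Production systems would typically use a tuned library implementation and far richer lexical and host-based features, but the reweighting loop above is the core of the algorithm.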
This document presents a proposed system for detecting phishing websites using a Chrome extension. The system compares URLs to entries in two databases - the Phishtank database of known phishing sites, and a local IndexedDB of frequently visited sites. If a match is found in either database, the Chrome extension will flag the site as potentially malicious by changing color. The system was tested on 53 URLs, achieving an accuracy of 92.45% at detecting phishing sites. The proposed system aims to alert users to phishing sites and protect them from disclosing sensitive information to attackers.
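The two-database lookup described above can be sketched as a normalized hostname check against a known-phishing set and a local set of frequently visited sites. `PHISHTANK_URLS` and `FREQUENT_SITES` below are hypothetical stand-ins for the Phishtank database and the extension's IndexedDB:

```python
from urllib.parse import urlparse

# Hypothetical stand-ins for the two databases the extension consults.
PHISHTANK_URLS = {"evil.example.com", "paypa1-login.example.net"}
FREQUENT_SITES = {"mail.example.com", "bank.example.com"}

def normalize_host(url):
    """Lowercase the hostname and strip a leading 'www.' prefix."""
    host = (urlparse(url).hostname or "").lower()
    return host[4:] if host.startswith("www.") else host

def check_url(url):
    """Return 'phishing', 'trusted', or 'unknown' for a visited URL."""
    host = normalize_host(url)
    if host in PHISHTANK_URLS:
        return "phishing"   # the extension would flag the tab here
    if host in FREQUENT_SITES:
        return "trusted"
    return "unknown"

print(check_url("http://www.evil.example.com/login"))  # → phishing
```

As the abstract notes, a pure lookup like this cannot flag never-before-seen phishing sites, which is why such extensions are often paired with feature-based classifiers.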
This document discusses phishing and a novel phishing page detection mechanism. It defines phishing as using social engineering to steal personal information. Phishing is commonly done through emails targeting companies like eBay and banks. The document provides statistics on potential rewards from phishing and notes that phishing techniques are becoming more sophisticated. It outlines the domestic and international impacts of phishing, including erosion of public trust and direct financial losses. Finally, it provides tips to avoid phishing and lists additional resources on the topic.
A Hybrid Approach For Phishing Website Detection Using Machine Learning (vivatechijri)
In this technical age there are many ways an attacker can illegitimately gain access to people's sensitive information. One of these is phishing: misleading people into entering their sensitive information on fraudulent websites that look like the real ones. The phisher's aim is to steal personal information, bank details, and the like. Day by day it is getting riskier to enter personal information on websites, for fear that the site is a phishing attack that will steal sensitive information. That is why phishing-website detection is necessary, to alert the user and block the website. Automated detection of phishing attacks is needed, and machine learning is one of the most efficient techniques for it, as it removes the drawbacks of existing approaches. An efficient machine learning model combined with a content-based approach proves very effective at detecting phishing websites.
Our proposed system uses a hybrid approach which combines a machine-learning-based method with a content-based method. URL-based features are extracted and passed to the machine learning model, while in the content-based approach the TF-IDF algorithm detects a phishing website using the top keywords of the web page. This hybrid approach is used to achieve a highly efficient result. Finally, our system notifies and alerts the user as to whether the website is phishing or legitimate.
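The content-based half of such a hybrid, extracting a page's top TF-IDF keywords, can be sketched in plain Python. The corpus, page text, and smoothing formula below are illustrative assumptions, not the system's actual implementation:

```python
import math
from collections import Counter

def top_keywords(page_text, corpus, k=3):
    """Rank the page's terms by TF-IDF against a reference corpus."""
    docs = [doc.lower().split() for doc in corpus]
    words = page_text.lower().split()
    tf = Counter(words)
    n_docs = len(docs)
    scores = {}
    for term, count in tf.items():
        # Document frequency: how many reference documents contain the term.
        df = sum(1 for doc in docs if term in doc)
        idf = math.log((1 + n_docs) / (1 + df)) + 1  # smoothed IDF
        scores[term] = (count / len(words)) * idf
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

corpus = [
    "welcome to your account dashboard",
    "read the latest news and weather",
    "shop laptops phones and accessories",
]
page = "verify your account password account login password verify"
print(top_keywords(page, corpus))
```

In a hybrid detector, these top keywords would then be compared against the brand or domain the page claims to represent, e.g. via a search-engine query, to decide whether the page is impersonating another site.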
Multi level parsing based approach against phishing attacks with the help of ... (IJNSA Journal)
Many a times, if the user isn't careful, these attacks can steal the user's confidential information or gain unauthorized access. Generally these attacks are carried out through social networking sites, popular mail-server sites, online chatting sites, etc. Nowadays, Facebook.com, gmail.com, orkut.com and many other social networking sites are facing these security attack problems.
Analyzing the effectualness of Phishing Algorithms in Web Applications Inques... (Editor IJMTER)
The first casualty of deception is trust. A wolf in sheep's clothing is hard to recognize, and such is the scheme of a phishing website. Phishing is a blend of social engineering and technical exploits designed to persuade a victim to provide personal information, for the financial gain of the attacker. It is a kind of network attack in which the attacker creates a spitting image of an existing web page to delude users. In this paper, we study two anti-phishing algorithms: an end-host based algorithm known as the LinkGuard algorithm, and a content-based approach known as CANTINA.
I published a paper on "Ethical Hacking And Hacking Attacks". The purpose of the paper is to explain what hacking is, who hackers are, their types, and some of the hacking attacks they perform. In the paper I also discuss how these attacks are carried out.
Today, security is the main problem, and all work is done over the internet using data. While data is available, many kinds of users interact with it, and some of them use it purely for their own gain. There are various techniques for protecting data, but hackers and crackers are often intelligent enough to break the security. There are two categories of hackers, and they differ from each other on the basis of their intentions. Those with good intentions are called ethical hackers, because their ethics lead them to use their skills and hacking techniques to provide security to an organization. This paper describes hacking, the types of hackers, the rules of ethical hacking, and the advantages of ethical hacking. Mukesh. M | Dr. S. Vengateshkumar "Ethical Hacking" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6, October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29351.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/29351/ethical-hacking/mukesh-m
IRJET- Advanced Phishing Identification Technique using Machine Learning (IRJET Journal)
1) The document describes a machine learning technique to identify phishing websites using a random forest algorithm.
2) It trains the random forest classifier on extracted URL features from a dataset of known phishing and legitimate websites.
3) The trained model is then used in a Chrome browser extension to analyze URLs and classify them as phishing or legitimate in real-time as users browse the web.
2014 Threat Detection Checklist: Six ways to tell a criminal from a customer (EMC)
This solution overview highlights six features that strengthen an organization's fraud and threat detection capabilities in today's increasingly complicated web environment.
IRJET- Phishing Website Detection based on Machine Learning (IRJET Journal)
This document proposes a machine learning model to detect phishing websites. It discusses how data mining algorithms can be used to classify websites as legitimate or phishing based on their characteristics. The proposed system aims to optimize detection by analyzing URL features, checking blacklists, and using a WHOIS database. It claims this method could decrease the error rate of existing detection systems by 30% and provide a more efficient way to identify phishing websites.
Spear phishing attacks are a growing problem because they are highly targeted and effective at tricking users into revealing sensitive information or installing malware. Spear phishing emails impersonate trusted sources and use personal details of targets to bypass filters. A famous example is the 2011 RSA attack, where a spear phishing email downloaded malware that ultimately compromised several defense contractors. To stop these advanced attacks, organizations need integrated security across email and web that uses dynamic analysis to detect zero-day exploits and block malicious files and network callbacks, while also providing threat intelligence.
This document discusses digital payment card skimming attacks. It provides context on a July 2019 incident where 17,000 domains were compromised due to misconfigured Amazon S3 buckets, allowing attackers to inject JavaScript card skimming code. The document outlines the anatomy of such attacks, including how attackers scan for vulnerable websites and insert malicious code to steal payment details. It also discusses the challenges in detecting these attacks and potential countermeasures around JavaScript controls, website hardening, and configuration settings.
IRJET- Detecting the Phishing Websites using Enhance Secure Algorithm (IRJET Journal)
This document discusses detecting phishing websites using an enhanced secure algorithm. It begins by defining phishing attacks and how they are used to steal personal information from users. It then discusses how current techniques are not fully effective at stopping sophisticated phishing attacks. The proposed methodology checks for features of phishing websites, especially in URLs and domain names, to identify fake websites. Some features checked include IP addresses, long URLs, prefixes/suffixes, and symbols. Future work could involve updating datasets, detecting other attacks, and improving accuracy and efficiency. In conclusion, education is important to help users identify phishing attacks, as technical solutions are still limited.
Phishing is the fraudulent acquisition of personal information such as usernames, passwords and credit card details by tricking an individual into believing that the attacker is a trustworthy entity. It affects every major sector of industry through widespread misuse of users' credentials. In today's online environment we therefore need to protect data from phishing and safeguard our information, which can be done through anti-phishing tools. Currently there are many freely available anti-phishing browser extensions that warn users when they are browsing a suspected phishing site. In this paper we present a literature survey of some of the most commonly and popularly used anti-phishing browser extensions, reviewing the existing anti-phishing techniques along with their merits and demerits.
IRJET - Chrome Extension for Detecting Phishing Websites (IRJET Journal)
This document describes a Chrome browser extension developed to detect phishing websites using machine learning. The extension extracts features from URLs and classifies them as legitimate or phishing using a Random Forest classifier trained on a dataset of URLs. A user interface for the extension was built with HTML, CSS and JavaScript. When a user visits a website, the extension automatically extracts 16 features from the URL and page content and feeds them to the Random Forest model to determine whether the site is phishing or legitimate. The model was trained on a dataset of over 11,000 URLs labeled as phishing or legitimate, and evaluation showed it achieved high accuracy in detecting phishing URLs. The goal of the extension is to help protect users from revealing sensitive information to phishing sites.
Phishing is a social engineering Technique which they main aim is to target the user Information like user id, password, credit card information and so on. Which result a financial loss to the user. Detecting Phishing is the one of the challenge problem that relay to human vulnerabilities. This paper proposed the Detecting Phishing Web Sites using different Machine Learning Approaches. In this to evaluate different classification models to predict malicious and benign websites by using Machine Learning Algorithms. Experiments are performed on data set consisting malicious and benign, In This paper the results shows the proposed Algorithms has high detection accuracy. Nakkala Srinivas Mudiraj ""Detecting Phishing using Machine Learning"" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-4 , June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23755.pdf
Paper URL: https://www.ijtsrd.com/computer-science/computer-security/23755/detecting-phishing-using-machine-learning/nakkala-srinivas-mudiraj
This document describes Mudpile, a system for detecting malicious URLs using machine learning. It collects data from URLs, extracts features related to phishing indicators, trains a classification model to label URLs as legitimate or phishing, and exposes the model as a REST API. The system is deployed to classify incoming web traffic in real-time and block phishing sites. It is retrained periodically for improved accuracy and to address new phishing techniques.
A survey on detection of website phishing using mcac techniquebhas_ani
This document discusses a technique called Multi-label Classifier based Associative Classification (MCAC) for detecting phishing websites. MCAC is a data mining approach that uses machine learning algorithms to generate rules for classifying websites as phishing or legitimate. It works by extracting features from websites and training a classifier on these features to accurately identify phishing websites. The proposed system uses MCAC to extract 16 features from websites and generate rules to classify websites, with the goal of detecting phishing attacks and warning users. MCAC is shown to identify phishing websites with high accuracy.
Detecting malicious URLs using binary classification through ada boost algori...IJECEIAES
Malicious Uniform Resource Locator (URL) is a frequent and severe menace to cybersecurity. Malicious URLs are used to extract unsolicited information and trick inexperienced end users as a sufferer of scams and create losses of billions of money each year. It is crucial to identify and appropriately respond to such URLs. Usually, this discovery is made by the practice and use of blacklists in the cyber world. However, blacklists cannot be exhaustive, and cannot recognize zero-day malicious URLs. So to increase the observation of malicious URL indicators, machine learning procedures should be incorporated. In this study, we have developed a complete prototype of Malicious URL Detection using machine learning methods. In particular, we have attempted an exact formulation of Malicious URL exposure from a machine learning perspective and proposed an approach using the AdaBoost algorithm - the proposed approach has brought forward more accuracy than other existing algorithms.
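As a rough illustration of the boosting idea the study applies to URL indicators, here is a minimal AdaBoost built from one-feature threshold stumps. The two features (URL length and dot count), the samples, and the thresholds are all invented for this sketch and are not the paper's setup:

```python
import math

# Each sample: (url_length, num_dots); label +1 = phishing, -1 = benign (toy data)
X = [(78, 5), (12, 1), (95, 6), (20, 2), (64, 4), (15, 1)]
y = [1, -1, 1, -1, 1, -1]

def stump_predict(feat, thresh, x):
    return 1 if x[feat] > thresh else -1

def train_adaboost(rounds=5):
    n = len(X)
    w = [1.0 / n] * n                     # sample weights, initially uniform
    ensemble = []
    for _ in range(rounds):
        # pick the stump (feature, threshold) with the lowest weighted error
        best = min(
            ((f, t) for f in (0, 1) for t in {x[f] for x in X}),
            key=lambda ft: sum(wi for wi, xi, yi in zip(w, X, y)
                               if stump_predict(*ft, xi) != yi))
        err = sum(wi for wi, xi, yi in zip(w, X, y)
                  if stump_predict(*best, xi) != yi)
        err = max(err, 1e-10)             # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, best))
        # re-weight: boost the misclassified samples for the next round
        w = [wi * math.exp(-alpha * yi * stump_predict(*best, xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * stump_predict(*st, x) for a, st in ensemble) > 0 else -1

model = train_adaboost()
print(predict(model, (88, 7)))   # a long, heavily dotted URL
```

A production system would use a library implementation (e.g. scikit-learn's AdaBoostClassifier) over many more features; this sketch only shows the weighting-and-voting mechanism.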
This document presents a proposed system for detecting phishing websites using a Chrome extension. The system compares URLs to entries in two databases - the Phishtank database of known phishing sites, and a local IndexedDB of frequently visited sites. If a match is found in either database, the Chrome extension will flag the site as potentially malicious by changing color. The system was tested on 53 URLs, achieving an accuracy of 92.45% at detecting phishing sites. The proposed system aims to alert users to phishing sites and protect them from disclosing sensitive information to attackers.
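The two-database lookup can be sketched roughly as follows; the domain names and in-memory sets are hypothetical stand-ins for the PhishTank feed and the extension's IndexedDB cache:

```python
from urllib.parse import urlparse

# Hypothetical stand-ins for the two lookup tables described above
PHISHTANK_BLACKLIST = {"evil-login.example.net", "paypa1-secure.example.org"}
TRUSTED_CACHE = {"www.wikipedia.org", "www.example.com"}

def classify(url):
    host = urlparse(url).hostname
    if host in PHISHTANK_BLACKLIST:
        return "phishing"      # the extension would flag the tab, e.g. change color
    if host in TRUSTED_CACHE:
        return "trusted"       # a frequently visited, cached site
    return "unknown"           # neither database matched

print(classify("http://evil-login.example.net/signin"))
```

Exact-match lookups like this are fast but, as later entries in this list note, cannot catch zero-day phishing domains that no blacklist has seen yet.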
This document discusses phishing and a novel phishing page detection mechanism. It defines phishing as using social engineering to steal personal information. Phishing is commonly done through emails targeting companies like eBay and banks. The document provides statistics on potential rewards from phishing and notes that phishing techniques are becoming more sophisticated. It outlines the domestic and international impacts of phishing, including erosion of public trust and direct financial losses. Finally, it provides tips to avoid phishing and lists additional resources on the topic.
A Hybrid Approach For Phishing Website Detection Using Machine Learning.vivatechijri
In this technical age there are many ways an attacker can illegitimately access people's sensitive information. One of them is phishing: the activity of misleading people into entering their sensitive information on fraudulent websites that look like the real ones. The phisher's aim is to steal personal information, bank details, etc. Day by day it is getting riskier to enter personal information on websites, for fear that a phishing attack could steal it. That is why phishing website detection is necessary, to alert the user and block the website. Automated detection of phishing attacks is needed, and machine learning is one of the efficient techniques for it, as it removes the drawbacks of existing approaches. An efficient machine learning model combined with a content-based approach proves very effective at detecting phishing websites.
Our proposed system uses a hybrid approach that combines a machine-learning-based method and a content-based method. URL-based features are extracted and passed to the machine learning model, while in the content-based approach the TF-IDF algorithm detects a phishing website using the top keywords of a web page. This hybrid approach is used to achieve a highly efficient result. Finally, our system notifies and alerts the user as to whether the website is phishing or legitimate.
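The content-based half of such a hybrid, ranking a page's terms by TF-IDF to find its top keywords, can be sketched from scratch; the two page texts below are invented examples, not data from the paper:

```python
import math
from collections import Counter

# Toy corpus standing in for crawled page texts (hypothetical examples)
docs = {
    "login-page": "verify your paypal account enter password to verify account",
    "news-page":  "local news weather sports headlines today",
}

def top_keywords(name, k=3):
    """Rank the terms of one document by TF-IDF against the small corpus."""
    tf = Counter(docs[name].split())
    n_docs = len(docs)
    def idf(term):
        df = sum(term in text.split() for text in docs.values())
        return math.log((1 + n_docs) / (1 + df)) + 1   # smoothed IDF
    scores = {t: c * idf(t) for t, c in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(top_keywords("login-page"))
```

In the described system, these top keywords would then be compared against the brand or domain the page claims to represent to decide phishing vs. legitimate.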
Multi level parsing based approach against phishing attacks with the help of ...IJNSA Journal
The increasing use of the internet all over the world, be it in households or in corporate firms, has led to an unprecedented rise in cyber-crimes. Amongst these, the major chunk consists of internet attacks, the most popular and common attacks carried out over the internet. Generally, phishing attacks, SSL attacks and some other hacking attacks are kept in this category. Security against these attacks is the major issue of internet security in today's scenario, where the internet has very deep penetration. The internet has no doubt made our lives very convenient and has provided many facilities to us at little cost. For instance, it has made communication lightning fast, and that too very cheaply. But the internet can pose added threats for users who are not well versed in its ways and are unaware of the security risks attached to it. Phishing attacks, the Nigerian scam, spam attacks, SSL attacks and other hacking attacks are some of the most common and recent attacks used to compromise the privacy of internet users. Many a time, if the user is not careful, these attacks are able to steal the user's confidential information (or gain unauthorized access). Generally, these attacks are carried out with the help of social networking sites, popular mail server sites, online chatting sites, etc. Nowadays, Facebook.com, gmail.com, orkut.com and many other social networking sites are facing these security attack problems.
Analyzing the effectualness of Phishing Algorithms in Web Applications Inques...Editor IJMTER
The first and most effective casualty of deception is trust. A wolf in sheep's clothing is tough to recognize; such is the scheme of a phishing website. Phishing is the combination of social engineering and technical exploits designed to persuade a victim to provide personal information for the fiscal gain of the attacker. It is a new kind of network assault in which the attacker creates a spitting image of an existing web page to delude users. In this paper, we study two anti-phishing algorithms: an end-host based algorithm known as the LinkGuard algorithm, and a content-based approach known as CANTINA.
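One central rule of the LinkGuard family of checks is comparing the DNS name a link visually displays with the DNS name it actually targets. The sketch below shows only that single rule under simplifying assumptions; the full algorithm's other categories and the function name here are not from the paper:

```python
from urllib.parse import urlparse

def linkguard_check(visual_text, actual_href):
    """If the visible link text looks like a URL, its DNS name should
    match the DNS name the link actually points to."""
    visual = visual_text if "://" in visual_text else "http://" + visual_text
    visual_host = urlparse(visual).hostname
    actual_host = urlparse(actual_href).hostname
    if visual_host and actual_host and visual_host != actual_host:
        return "possible phishing"   # visual and actual destinations disagree
    return "ok"

# A link that shows a bank's name but points at a raw IP address
print(linkguard_check("www.mybank.com", "http://203.0.113.9/login"))
```

CANTINA, by contrast, is content-based: it ranks a page's terms by TF-IDF and checks whether a search engine associates those terms with the page's own domain.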
I published a paper on "Ethical Hacking and Hacking Attacks". The purpose of the paper is to explain what hacking is, who hackers are, their types, and some hacking attacks performed by them. In the paper I also discuss how these attacks are carried out.
Today, security is the main problem, and all work is done over the internet using data. While that data is available, there are many kinds of users who interact with it, and some of them use it for their own gain. There are various techniques used for the protection of data, but hackers and crackers are intelligent enough to break through the security. There are two categories of hackers, distinguished from each other on the basis of their intentions. Those with good intentions are called ethical hackers, because their ethics lead them to use their skills and hacking techniques to provide security to the organization. This paper describes hacking, the types of hackers, the rules of ethical hacking, and the advantages of ethical hacking. Mukesh. M | Dr. S. Vengateshkumar "Ethical Hacking" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6 , October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29351.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/29351/ethical-hacking/mukesh-m
What are the most common application level attacks? To find out, take a look at these slides! Click here to learn how CASE can help you create secure applications: http://ow.ly/rARK50BVi4b
IRJET- Advanced Phishing Identification Technique using Machine LearningIRJET Journal
1) The document describes a machine learning technique to identify phishing websites using a random forest algorithm.
2) It trains the random forest classifier on extracted URL features from a dataset of known phishing and legitimate websites.
3) The trained model is then used in a Chrome browser extension to analyze URLs and classify them as phishing or legitimate in real-time as users browse the web.
2014 Threat Detection Checklist: Six ways to tell a criminal from a customerEMC
This solution overview highlights six features that strengthen an organization's fraud and threat detection capabilities in today's increasingly complicated web environment.
IRJET- Phishing Website Detection based on Machine LearningIRJET Journal
This document proposes a machine learning model to detect phishing websites. It discusses how data mining algorithms can be used to classify websites as legitimate or phishing based on their characteristics. The proposed system aims to optimize detection by analyzing URL features, checking blacklists, and using a WHOIS database. It claims this method could decrease the error rate of existing detection systems by 30% and provide a more efficient way to identify phishing websites.
Spear phishing attacks are a growing problem because they are highly targeted and effective at tricking users into revealing sensitive information or installing malware. Spear phishing emails impersonate trusted sources and use personal details of targets to bypass filters. A famous example is the 2011 RSA attack, where a spear phishing email downloaded malware that ultimately compromised several defense contractors. To stop these advanced attacks, organizations need integrated security across email and web that uses dynamic analysis to detect zero-day exploits and block malicious files and network callbacks, while also providing threat intelligence.
This document discusses digital payment card skimming attacks. It provides context on a July 2019 incident where 17,000 domains were compromised due to misconfigured Amazon S3 buckets, allowing attackers to inject JavaScript card skimming code. The document outlines the anatomy of such attacks, including how attackers scan for vulnerable websites and insert malicious code to steal payment details. It also discusses the challenges in detecting these attacks and potential countermeasures around JavaScript controls, website hardening, and configuration settings.
MULTI-LEVEL PARSING BASED APPROACH AGAINST PHISHING ATTACKS WITH THE HELP OF ...IJNSA Journal
This paper discusses a Knowledge Base Compound approach, based on query operations and parsing techniques, to counter these internet attacks using the web browser itself. In this approach we propose to analyze web URLs before visiting the actual site, so as to provide security against the web attacks mentioned above. The approach employs various parsing operations and query processing, which use many techniques to detect phishing attacks as well as other web attacks. It operates entirely through the browser and hence only affects browsing speed. The approach also includes a crawling operation to retrieve URL details, further enhancing the precision of detecting a compromised site. Using the proposed methodology, a new browser can easily detect phishing attacks, SSL attacks, and other hacking attacks. With this browser approach, we can achieve 96.94% security against phishing as well as other web-based attacks.
IRJET- Detecting the Phishing Websites using Enhance Secure AlgorithmIRJET Journal
This document discusses detecting phishing websites using an enhanced secure algorithm. It begins by defining phishing attacks and how they are used to steal personal information from users. It then discusses how current techniques are not fully effective at stopping sophisticated phishing attacks. The proposed methodology checks for features of phishing websites, especially in URLs and domain names, to identify fake websites. Some features checked include IP addresses, long URLs, prefixes/suffixes, and symbols. Future work could involve updating datasets, detecting other attacks, and improving accuracy and efficiency. In conclusion, education is important to help users identify phishing attacks, as technical solutions are still limited.
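The URL and domain checks listed there (IP-address hosts, long URLs, hyphenated prefixes/suffixes, suspicious symbols) can be sketched as simple heuristics. The 75-character threshold below is an illustrative assumption, not a value from the paper:

```python
import re

def phishing_indicators(url):
    """Boolean heuristics over a URL; thresholds are illustrative only."""
    host = re.sub(r"^https?://", "", url).split("/")[0]
    return {
        "ip_address": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        "long_url": len(url) > 75,            # overlong URLs hide the real host
        "prefix_suffix": "-" in host,         # e.g. "secure-bank.example.com"
        "at_symbol": "@" in url,              # '@' can mask the true destination
    }

print(phishing_indicators("http://192.0.2.7/login"))
```

A detector along these lines would typically feed such boolean features into a scoring rule or classifier rather than acting on any single flag.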
Phishing Website Detection using Classification AlgorithmsIRJET Journal
This document discusses using machine learning algorithms to classify phishing websites. It begins with background on phishing and then discusses prior research applying algorithms like random forest, decision trees, SVM and KNN to detect phishing websites. The paper aims to address phishing website classification using various classifiers and ensemble learning approaches. It tests classifiers like random forest, decision tree, KNN, AdaBoost and GradientBoost on a phishing testing dataset and evaluates performance using metrics like accuracy, f1-score, precision and recall. The proposed approach achieves 97% accuracy in classifying phishing websites according to experimental results.
Study on Phishing Attacks and Antiphishing ToolsIRJET Journal
This document discusses phishing attacks and anti-phishing tools. It begins by defining phishing as fraudulent attempts to steal users' sensitive information by impersonating trustworthy entities. The document then outlines the common steps in phishing attacks, including planning, setup, attack, collection, fraud, and post-attack actions. It describes different types of phishing attacks and analyzes security issues. The document concludes by describing some popular anti-phishing tools, including Mail-Secure and the Netcraft security toolbar.
IRJET- Preventing Phishing Attack using Evolutionary AlgorithmsIRJET Journal
This document proposes using evolutionary algorithms and support vector machine (SVM) classification to detect phishing attacks more effectively than existing techniques. It summarizes previous approaches like blacklisting, neuro-fuzzy systems, and discusses their limitations in terms of feature extraction time and training requirements. The proposed system extracts URL features, trains an SVM model on a dataset of phishing and legitimate sites, then classifies new URLs based on a threshold derived from their feature values. It is claimed this approach reduces time consumption, increases processing speed, and achieves over 99% accuracy in detecting phishing sites.
Malicious-URL Detection using Logistic Regression TechniqueDr. Amarjeet Singh
Over the last few years, the Web has seen massive growth in the number and kinds of web services. Web facilities such as online banking, gaming, and social networking have evolved rapidly, as has people's reliance on them to perform daily tasks. As a result, a large amount of information is uploaded to the Web daily. As these web services create new opportunities for people to interact, they also create new opportunities for criminals. URLs are launch pads for web attacks: a malicious user can steal the identity of a legitimate person by sending a malicious URL. Malicious URLs are a keystone of illegitimate internet activity, and the dangers of these sites have created a demand for defences that protect end users from visiting them. The proposed approach classifies URLs automatically using a machine learning algorithm called logistic regression, which is used for binary classification. The classifier achieves 97% accuracy by learning from phishing URLs.
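A from-scratch sketch of binary logistic regression on two invented, normalized URL features (length and digit count) is given below; the data is fabricated for illustration and does not reproduce the paper's 97% setup:

```python
import math

# Each sample: (length / 100, digit_count / 10); label 1 = phishing, 0 = benign
data = [((0.90, 0.6), 1), ((0.20, 0.0), 0), ((0.80, 0.4), 1),
        ((0.15, 0.1), 0), ((0.70, 0.5), 1), ((0.25, 0.0), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):                      # stochastic gradient descent
    for (x1, x2), label in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        g = p - label                      # gradient of the log-loss w.r.t. z
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g

def predict(x1, x2):
    return int(sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5)

print(predict(0.85, 0.5))   # a long, digit-heavy URL
```

The logistic model outputs a probability, so the 0.5 decision threshold can be tuned to trade false positives against false negatives.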
This document discusses the development of a machine learning model to accurately detect phishing websites in real-time. It begins with an introduction to the problem of phishing and the need for reliable phishing detection systems. It then discusses using supervised machine learning, specifically logistic regression, to classify websites as legitimate or phishing based on discriminatory features. The goal is to determine an optimal feature combination to train a classifier with high performance at detecting phishing sites. Previous literature on phishing detection is reviewed, including techniques using fuzzy rough set theory and a three-tiered approach using web crawler traffic data, content analysis, and URL analysis.
How Can I Reduce The Risk Of A Cyber-Attack?Osei Fortune
A professional guide to reducing the risks of a cyber attack on your business. A professionally written article that would be suitable for a technical IT blog.
The document discusses various measures that companies can take to avoid cyber attacks. It recommends that companies train employees on cybersecurity awareness, keep systems fully updated to patch vulnerabilities, implement zero trust and SSL inspection for security, examine permissions of frequently used apps, create mobile device management plans, use passwordless authentication and behavior monitoring, regularly audit networks to detect threats, develop strong data governance, automate security practices, and have an incident response plan in place. Taking a proactive approach to cybersecurity through multiple defensive strategies is crucial for businesses of all sizes to protect against increasing cyber attacks.
HOST PROTECTION USING PROCESS WHITE-LISTING, DECEPTION AND REPUTATION SERVICESAM Publications,India
The Internet, or World Wide Web, has become a prominent platform for business and commerce and is witnessing user growth with the increased penetration of mobile internet. Huge traffic is being generated, some of it legitimate and the rest malicious; hence information security programs are implemented and maintained. In the age of the internet, protecting our information has become just as important as protecting our property. Malware authors have found and exploited new zero-day vulnerabilities, resulting in damage to end-user systems. Ransomware has taken malware attacks to a new level by locking the affected user's files and demanding Bitcoin payment to unlock them. Meanwhile, the volume and frequency of Distributed Denial of Service (DDoS) attacks have increased, and many unpatched machines have, without their owners' knowledge, become part of botnets which carry out DDoS attacks. This paper focuses on strategies for protecting individual hosts from malware attacks and other types of intrusions using deception, white-listing and reputation services.
This document evaluates the performance of various classification techniques for detecting phishing websites. It uses a dataset from the UCI Machine Learning Repository containing attributes of phishing websites and a target variable to classify websites as phishing or legitimate. Several classification algorithms from the Weka tool are applied to the dataset, including Naive Bayes, decision trees, and k-nearest neighbor. Their performances are compared to determine the most accurate technique for phishing detection. Feature selection and data preprocessing are also discussed as important steps for building an effective classification model.
IRJET - Phishing Attack Detection and Prevention using Linkguard AlgorithmIRJET Journal
This document discusses a machine learning approach to detecting phishing attacks based on URL analysis. It begins with an abstract that outlines using machine learning classifiers to analyze URL features to determine if a webpage is legitimate or a phishing attack. It then provides background on phishing attacks and discusses previous research on detection techniques, including whitelist, blacklist, content-based, and visual similarity approaches. The document focuses on using a URL-based detection technique with machine learning. It evaluates popular classifiers like SVM, Naive Bayes, and Decision Trees on their accuracy in detecting phishing URLs based on analyzed features like length, special characters, and IP addresses.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Phishing Website Detection Paradigm using XGBoostIRJET Journal
This document presents research on using the XGBoost (Extreme Gradient Boosting) machine learning algorithm to detect phishing websites. The researchers collected a dataset from Kaggle containing URL features and used it to train and test an XGBoost model. They found that the XGBoost model was able to accurately predict whether a URL led to a phishing website or legitimate website, achieving 86.4% accuracy according to the confusion matrix. The researchers concluded that XGBoost is a robust and efficient approach for phishing website detection due to its ability to generate highly accurate results with low bias and variance from the ensemble of decision trees.
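The 86.4% figure is read off a confusion matrix, and the computation itself is simple. The counts below are made up purely to illustrate the formula (chosen so they also come out to 86.4%):

```python
# Accuracy from a confusion matrix: (TP + TN) / total.
# Hypothetical counts for a 1000-URL test set.
tp, tn, fp, fn = 450, 414, 70, 66     # 864 correct predictions out of 1000

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"{accuracy:.1%}")
```

For imbalanced phishing datasets, precision and recall computed from the same four counts are usually reported alongside accuracy, since accuracy alone can mask many missed phishing sites.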
IRJET- Detecting Malicious URLS using Machine Learning Techniques: A Comp...IRJET Journal
This document provides a literature review of different machine learning techniques for detecting malicious URLs. It first discusses traditional methods like blacklisting and heuristic approaches, noting their limitations in detecting newly generated malicious URLs. It then focuses on machine learning techniques, which involve feature extraction and representation phases to accurately detect malicious URLs while providing false positive rates. The document reviews various machine learning algorithms used for URL detection and discusses the advantages of machine learning over other techniques, as well as challenges it faces. Overall, the document analyzes the state of the art in using machine learning for malicious URL detection.
State of the Art Analysis Approach for Identification of the Malignant URLsIOSRjournaljce
Malicious URLs have been universally used to mount various cyber attacks including spamming, phishing and malware. Malware, short for malicious software, is software developed to penetrate computers in a network without the user's permission or notification. Existing methods typically detect malicious URLs of a single attack type; hence such detection systems fail to protect users from the full range of attacks. Malware spreads widely throughout the network and consequently becomes a predicament in distributed computer and network systems. Malicious links are the origin of the attacks circulating all over the web, so malicious URLs should be detected to protect users from these malware attacks. In this paper we describe a novel approach which analyzes all types of attacks by identifying malicious URLs and securing web users against them. The technique protects users from malignant URLs before they visit them, so the efficiency of web security is maintained. For this analysis we developed an analyzer which identifies URLs and examines them as malicious or benign. We also developed five processes which crawl for suspicious URLs. This approach protects users from all types of attacks and increases the efficiency of the web crawling phase.
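A toy sketch of that crawl-then-analyze pipeline, using Python's standard HTML parser to collect a page's links and a stand-in check to label each one; the blacklist, page, and verdict labels are hypothetical, not the paper's analyzer:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Gather the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

SUSPICIOUS = {"bad.example.net"}          # hypothetical blacklist of hosts

def analyze(html):
    parser = LinkCollector()
    parser.feed(html)
    return {u: ("malicious" if any(s in u for s in SUSPICIOUS) else "benign")
            for u in parser.links}

page = ('<a href="http://bad.example.net/x">win a prize</a>'
        '<a href="https://example.com">docs</a>')
report = analyze(page)
print(report)
```

A real analyzer would replace the set-membership check with the paper's classification logic and feed newly discovered links back into the crawl queue.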
Security Testing Approach for Web Application Testing.pdfAmeliaJonas2
There are numerous web security testing tools available to aid in the process. One such tool is Astra's Pentest Solution. Astra offers a comprehensive suite of Security Testing Services, including vulnerability scanning, penetration testing, and code reviews. It provides automated scanning and analysis of web applications to identify vulnerabilities and suggest remediation measures.
Similar to KNOWLEDGE BASE COMPOUND APPROACH AGAINST PHISHING ATTACKS USING SOME PARSING TECHNIQUES (20)
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR cscpconf
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different applications of geoscience. The detection and monitoring of surface deformations produced by various phenomena have benefited from this evolution and have been realized by interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the analysis results of these techniques. In this context, we propose in this work a methodological approach to surface deformation detection and analysis by differential interferograms, to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged into the image pairs of the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATIONcscpconf
A novel trajectory-guided, concatenative approach for synthesizing high-quality video from real image samples is proposed. The automated lip-reading system seeks the real image sample sequence stored in the library that, given the video data, is closest to the HMM-predicted trajectory. The object trajectory is obtained by projecting the face patterns into a KDA feature space. The approach identifies a speaker's face by synthesizing the identity surface of the subject's face from a small sample of patterns that sparsely cover the view sphere. A KDA algorithm is used to discriminate the lip-reading images; the fundamental lip feature vector is then reduced to a low-dimensional space using the 2D-DCT. The dimensionality of the mouth-area set is further reduced with PCA to obtain the eigen-lips approach proposed by [33]. The subjective performance results of the cost function under the automatic lip-reading model did not illustrate the superior performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...cscpconf
Universities offer a software engineering capstone course to simulate a real-world working environment in which students work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from a Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both the Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey of two different sections taught by two different instructors, examining students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback. The survey results show that students were able to gain hands-on experience simulating a real-world working environment. The results also show that the Agile approach helped students produce an overall better design and avoid mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team's capabilities and training needs and thus learn the required technologies earlier, which was reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIEScscpconf
This document discusses using social media technologies to promote student engagement in a software project management course. It describes the course and objectives of enhancing communication. It discusses using Facebook for 4 years, then switching to WhatsApp based on student feedback, and finally introducing Slack to enable personalized team communication. Surveys found students engaged and satisfied with all three tools, though less familiar with Slack. The conclusion is that social media promotes engagement but familiarity with the tool also impacts satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICcscpconf
In real world computing environment with using a computer to answer questions has been a human dream since the beginning of the digital era, Question-answering systems are referred to as intelligent systems, that can be used to provide responses for the questions being asked by the user based on certain facts or rules stored in the knowledge base it can generate answers of questions asked in natural , and the first main idea of fuzzy logic was to working on the problem of computer understanding of natural language, so this survey paper provides an overview on what Question-Answering is and its system architecture and the possible relationship and
different with fuzzy logic, as well as the previous related research with respect to approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, along or combined with fuzzy logic and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS cscpconf
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words which facilitate preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming technique for global alignment and shortest distance measurements. The DPW algorithm can be used to enhance the pronunciation dictionaries of the well-known languages like English or to build pronunciation dictionaries to the less known sparse languages. The precision measurement experiments show 88.9% accuracy.
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS cscpconf
In education, the use of electronic (E) examination systems is not a novel idea, as Eexamination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
African Buffalo Optimization (ABO) is one of the most recent swarms intelligence based metaheuristics. ABO algorithm is inspired by the buffalo’s behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and probability model to generate binary solutions. In the second version (called LBABO) they use some logical operator to operate the binary solutions. Computational results on two knapsack problems (KP and MKP) instances show the effectiveness of the proposed algorithm and their ability to achieve good and promising solutions.
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistence presence on a compromised host. Amongst the various DDNS techniques, Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach’s feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmicallygenerated domain names. Findings from this study show that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The document proposes a blockchain-based digital currency and streaming platform called GoMAA to address issues of piracy in the online music streaming industry. Key points:
- GoMAA would use a digital token on the iMediaStreams blockchain to enable secure dissemination and tracking of streamed content. Content owners could control access and track consumption of released content.
- Original media files would be converted to a Secure Portable Streaming (SPS) format, embedding watermarks and smart contract data to indicate ownership and enable validation on the blockchain.
- A browser plugin would provide wallets for fans to collect GoMAA tokens as rewards for consuming content, incentivizing participation and addressing royalty discrepancies by recording
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This document discusses the importance of verb suffix mapping in discourse translation from English to Telugu. It explains that after anaphora resolution, the verbs must be changed to agree with the gender, number, and person features of the subject or anaphoric pronoun. Verbs in Telugu inflect based on these features, while verbs in English only inflect based on number and person. Several examples are provided that demonstrate how the Telugu verb changes based on whether the subject or pronoun is masculine, feminine, neuter, singular or plural. Proper verb suffix mapping is essential for generating natural and coherent translations while preserving the context and meaning of the original discourse.
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The document discusses automated penetration testing and provides an overview. It compares manual and automated penetration testing, noting that automated testing allows for faster, more standardized and repeatable tests but has limitations in developing new exploits. It also reviews some current automated penetration testing methodologies and tools, including those using HTTP/TCP/IP attacks, linking common scanning tools, a Python-based tool targeting databases, and one using POMDPs for multi-step penetration test planning under uncertainty. The document concludes that automated testing is more efficient than manual for known vulnerabilities but cannot replace manual testing for discovering new exploits.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional network of human brain with relatively high resolution. BOLD technique provides
almost accurate state of brain. Past researches prove that neuro diseases damage the brain
network interaction, protein- protein interaction and gene-gene interaction. A number of
neurological research paper also analyse the relationship among damaged part. By
computational method especially machine learning technique we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In future,
we will other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
The document proposes a new validation method for fuzzy association rules based on three steps: (1) applying the EFAR-PN algorithm to extract a generic base of non-redundant fuzzy association rules using fuzzy formal concept analysis, (2) categorizing the extracted rules into groups, and (3) evaluating the relevance of the rules using structural equation modeling, specifically partial least squares. The method aims to address issues with existing fuzzy association rule extraction algorithms such as large numbers of extracted rules, redundancy, and difficulties with manual validation.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within class imbalance where classes are composed of
multiple sub-concepts with different number of examples also affect the performance of
classifier. In this paper, we propose an oversampling technique that handles between class and
within class imbalance simultaneously and also takes into consideration the generalization
ability in data space. The proposed method is based on two steps- performing Model Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for imageprocessing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy were achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets so large and complex
are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in three largest cities in the UAE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes it attractive for hate speech to mask their criminal
activities online posing a challenge to the world and in particular Ethiopia. With this everincreasing
volume of social media data, hate speech identification becomes a challenge in
aggravating conflict between citizens of nations. The high rate of production, has become
difficult to collect, store and analyze such big data using traditional detection methods. This
paper proposed the application of apache spark in hate speech detection to reduce the
challenges. Authors developed an apache spark based model to classify Amharic Facebook
posts and comments into hate and not hate. Authors employed Random forest and Naïve Bayes
for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold crossvalidation,
the model based on word2vec embedding performed best with 79.83%accuracy. The
proposed method achieve a promising result with unique feature of spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
Elevate Your Nonprofit's Online Presence_ A Guide to Effective SEO Strategies...TechSoup
Whether you're new to SEO or looking to refine your existing strategies, this webinar will provide you with actionable insights and practical tips to elevate your nonprofit's online presence.
🔥🔥🔥🔥🔥🔥🔥🔥🔥
إضغ بين إيديكم من أقوى الملازم التي صممتها
ملزمة تشريح الجهاز الهيكلي (نظري 3)
💀💀💀💀💀💀💀💀💀💀
تتميز هذهِ الملزمة بعِدة مُميزات :
1- مُترجمة ترجمة تُناسب جميع المستويات
2- تحتوي على 78 رسم توضيحي لكل كلمة موجودة بالملزمة (لكل كلمة !!!!)
#فهم_ماكو_درخ
3- دقة الكتابة والصور عالية جداً جداً جداً
4- هُنالك بعض المعلومات تم توضيحها بشكل تفصيلي جداً (تُعتبر لدى الطالب أو الطالبة بإنها معلومات مُبهمة ومع ذلك تم توضيح هذهِ المعلومات المُبهمة بشكل تفصيلي جداً
5- الملزمة تشرح نفسها ب نفسها بس تكلك تعال اقراني
6- تحتوي الملزمة في اول سلايد على خارطة تتضمن جميع تفرُعات معلومات الجهاز الهيكلي المذكورة في هذهِ الملزمة
واخيراً هذهِ الملزمة حلالٌ عليكم وإتمنى منكم إن تدعولي بالخير والصحة والعافية فقط
كل التوفيق زملائي وزميلاتي ، زميلكم محمد الذهبي 💊💊
🔥🔥🔥🔥🔥🔥🔥🔥🔥
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
THE SACRIFICE HOW PRO-PALESTINE PROTESTS STUDENTS ARE SACRIFICING TO CHANGE T...indexPub
The recent surge in pro-Palestine student activism has prompted significant responses from universities, ranging from negotiations and divestment commitments to increased transparency about investments in companies supporting the war on Gaza. This activism has led to the cessation of student encampments but also highlighted the substantial sacrifices made by students, including academic disruptions and personal risks. The primary drivers of these protests are poor university administration, lack of transparency, and inadequate communication between officials and students. This study examines the profound emotional, psychological, and professional impacts on students engaged in pro-Palestine protests, focusing on Generation Z's (Gen-Z) activism dynamics. This paper explores the significant sacrifices made by these students and even the professors supporting the pro-Palestine movement, with a focus on recent global movements. Through an in-depth analysis of printed and electronic media, the study examines the impacts of these sacrifices on the academic and personal lives of those involved. The paper highlights examples from various universities, demonstrating student activism's long-term and short-term effects, including disciplinary actions, social backlash, and career implications. The researchers also explore the broader implications of student sacrifices. The findings reveal that these sacrifices are driven by a profound commitment to justice and human rights, and are influenced by the increasing availability of information, peer interactions, and personal convictions. The study also discusses the broader implications of this activism, comparing it to historical precedents and assessing its potential to influence policy and public opinion. The emotional and psychological toll on student activists is significant, but their sense of purpose and community support mitigates some of these challenges. 
However, the researchers call for acknowledging the broader Impact of these sacrifices on the future global movement of FreePalestine.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder. Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
Computer Science & Information Technology (CS & IT)
an information technology infrastructure, data interference (which includes unauthorized damaging, deletion, deterioration, alteration or suppression of computer data), unethical access of web services, disturbance of social peace, system interference (interfering with the functioning of a computer system by inputting, transferring, destroying, removing, deteriorating, altering or suppressing computer data), misuse of devices, forgery (ID theft), and electronic fraud. [1][4]
A Knowledge Base is the modelling of previously occurred events in order to predict future events by employing artificial intelligence techniques [13]. It is a sort of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. Knowledge bases are essentially artificial intelligence tools that provide intelligent decisions. Knowledge is obtained and represented using various knowledge representation techniques such as rules, frames and scripts. The basic advantages offered by such systems are documentation of knowledge, intelligent decision support, self-learning, reasoning and explanation. [14]
Since a phishing attack is a URL-based attack that takes place between the Internet user and the browser, our proposed methodology adds a new security layer between the browser and the user, using the Knowledge Base and some parsing operations.
2. LITERATURE REVIEW
Commonly, anti-phishing tools use two major approaches for mitigating phishing sites. The first approach is based on heuristics that check the host name and the URL for common spoofing techniques. The second maintains blacklists of known phishing URLs. The heuristic approach is not 100% accurate: it produces false negatives (FN), i.e. phishing sites mistakenly judged as legitimate, which means it does not correctly identify all phishing sites. Heuristics also often produce high false positives (FP), i.e. legitimate sites incorrectly identified as fraudulent. Blacklists have a high level of accuracy because they are constructed by paid experts who verify a reported URL and add it to the blacklist if it is confirmed as a phishing website. [1][4][8]
Delayed password disclosure [15] is another method to avoid phishing attacks. It is based on the feedback generated by the interface as the user enters the password; if the feedback generated does not match that of the authentic website, an alarm is triggered.
Another method to create awareness among users against phishing is trust bar construction [16]. This method associates logos with the public key of the website being visited, easing the authentication of the website. Passmark™ is a similar method currently being used by Bank of America.
The detection and identification of phishing websites in real time, particularly for e-banking and payment gateway websites, is a very complex and dynamic problem which involves many factors and criteria. Many methods, such as improving site authenticity, one-time passwords, separate login and transaction passwords, personalized e-mail communication, and user education about phishing, are being implemented to prevent phishing attacks, but they do not provide high security.
3. PROPOSED METHODOLOGY
Here we propose a knowledge-base approach against phishing attacks, which also uses some parsing techniques to detect such attacks.
3.1. Knowledge Bases
Our methodology uses the knowledge bases I, T, A, B, C and D. Knowledge Base Initial (KBI) stores the patterns and other detection data of previously detected phishing attacks and other web attacks. It validates the URL and relates it to previously detected phishing attacks. If the pattern of a new URL matches a previously stored phishing attack, a phishing alert is generated before the URL is visited. Knowledge Base Trusted (KBT) maintains all the trusted and secure URLs previously visited in the same browser. The user can further manually add frequently visited legitimate websites to this knowledge base, for which he wishes not to carry out security checks every time. Knowledge Base A records all the URL-pattern-based phishing and SSL attacks detected by the browser to date. This knowledge base is consulted before the operation of Parser 1.

Knowledge Base B stores all the information (such as license year, rating of the domain, popularity of the domain, etc.) of URLs which were previously visited and detected as phishing attacks. Knowledge Base C stores the results of the fraud check analysis of URLs and generates queries when a URL is analyzed by fraud check against the previous history. Finally, Knowledge Base D is responsible for maintaining the history of all previously visited URLs.
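The role of the knowledge bases in the first detection step can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the store names (KBI, KBT, KB_D), the example patterns, and the regular-expression matching rule are all assumptions made for demonstration.

```python
# Hypothetical sketch of the knowledge bases as simple in-memory stores.
import re

KBI = [r"paypa1", r"faceb00k", r"g00gle"]      # patterns of previously detected attacks
KBT = {"https://www.example-bank.com"}         # URLs the user has marked as trusted
KB_D = []                                      # history of all visited URLs

def check_kbi(url):
    """True if the URL matches a previously detected attack pattern."""
    return any(re.search(pattern, url) for pattern in KBI)

def check_kbt(url):
    """True if the user has declared this URL trusted."""
    return url in KBT

def historical_attack_detection(url):
    """Step 1: consult KBI first, then KBT, before any later checks run."""
    KB_D.append(url)                           # record the visit in the history
    if check_kbi(url):
        return "ALERT: matches known phishing pattern"
    if check_kbt(url):
        return "trusted: skip further checks"
    return "unknown: continue to later steps"
```

A URL matching a stored attack pattern raises an alert immediately, a trusted URL skips the remaining checks, and everything else flows on to the later steps of the methodology.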
3.2. Parsers
Several parsers are also used to detect URL-based attacks in the proposed methodology. Parser 1 is used to detect pattern-based URL attacks. This parser provides security against phishing attacks as well as SSL attacks. It also analyzes the use of special characters (such as '-', '.', etc.) in the URL to detect attacks. The parser's operation is based on the fact that phishing attackers reuse some fraction of the actual legitimate URL so as to generate a close-to-real phishing URL. Parser 2 pulls out the details of a website, such as license year, rating of the domain, and popularity of the domain, when a URL is passed through it. Using these details, Parser 2 can declare whether the URL belongs to a phishing website or a legitimate one. This parser exploits the fact that phishing URLs are newly registered ones with low rating and popularity. Hence, if a URL is newly registered, it may be a phishing attack on an existing URL.
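Parser 1's special-character heuristic can be sketched as below. The exact thresholds are not specified in the paper, so the limits used here (more than four dots, or two or more hyphens, in the host name) are illustrative assumptions only.

```python
# Illustrative sketch of Parser 1's special-character analysis.
from urllib.parse import urlparse

def parser1_suspicious(url, max_dots=4):
    """Flag a URL whose host part overuses '.' or '-' characters.

    Phishers often embed a legitimate brand inside a long, hyphenated host,
    e.g. "www.paypal.com.secure-login.example.net". The thresholds here are
    assumptions for demonstration, not values from the paper.
    """
    host = urlparse(url).netloc
    return host.count(".") > max_dots or host.count("-") >= 2
```

For example, `parser1_suspicious("https://www.paypal.com.secure-login.example.net")` is flagged because the host contains five dots, while `https://www.paypal.com` passes.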
Parser 3 performs an important step for security against phishing attacks: it performs the fraud check analysis of a URL and generates a warning message if the URL is not secure. Parser 4 searches for other URLs whose pattern matches the requested URL. It finds the details of these similar URLs and compares them (year of domain registration, rating of the domain, popularity of the domain, etc.) with the details of the requested URL. It then displays all the results, in order of preference, on the browser screen before the requested URL is visited.
In the implementation of the crawling parsers, the open source crawler "crawler4j" has been used. [17]
3.3. Re-visit Policy
In the proposed methodology, the parsers also apply a re-visit policy when needed, because of the dynamic nature of the web. The re-visit policy can be understood through two freshness functions, Freshness and Age. Freshness is a binary measure which indicates whether the local copy is accurate or not. The freshness of a page p in the repository at time t is defined as: F_p(t) = 1 if p is equal to the local copy at time t, and F_p(t) = 0 otherwise.
Age is a measure which indicates how outdated the local copy of a page is. The age of a page p in the repository at time t is defined as: A_p(t) = 0 if p has not been modified since it was downloaded, and A_p(t) = t − (modification time of p) otherwise.
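The two measures can be sketched as follows; this is the standard crawler freshness/age formulation, with function and parameter names chosen for illustration rather than taken from the paper.

```python
def freshness(local_copy, live_page):
    """Binary freshness: 1 if the local copy equals the live page, else 0."""
    return 1 if local_copy == live_page else 0

def age(t, last_modified, local_is_fresh):
    """Age: 0 while the local copy is still fresh, otherwise the time
    elapsed since the page was last modified."""
    return 0 if local_is_fresh else t - last_modified
```

A crawler scheduling re-visits would try to keep average freshness high and average age low across the repository.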
4. EXECUTION OF THE PROPOSED METHODOLOGY
The execution of the proposed methodology depends on the sequence of knowledge bases and corresponding parsers. The final result of the methodology is not affected by the sequence of operations; the sequence affects only the space and time complexity of the methodology.

The execution of the proposed methodology is divided into several steps, which are described in the following sections.
Figure 1. Flowchart of Step 1 of the proposed methodology
4.1 Historical Attack Detection
This step is composed of two operations, carried out using Knowledge Base I and Knowledge Base T. Knowledge Base Initial (KBI) is used to detect attacks that share a pattern with the previously detected attacks stored in it. Knowledge Base Trusted (KBT) is used to find the trusted status of a requested URL, as previously declared by the user. In historical attack detection, the browser first tallies the URL against the KBI to check whether its pattern matches that of any frequent phishing attack stored in the knowledge base. If it is safe, the browser then proceeds to match it against the KBT, where the URL is compared with the trusted URLs stored by the user.
4.2 URL Pattern based Attack Detection
This step is composed of two operations, related to Knowledge Base A and Parser 1. Step 2 provides security against attacks that are purely URL-pattern based, both phishing and SSL attacks. Knowledge Base A detects only those attacks which were previously detected by the browser and stored in its database. During Step 2, Parser 1 scans the requested URL and finds the occurrences of special characters ('-', '.', etc.) and their repetition in the URL. This is used to detect pattern-based phishing attacks. The working of Step 2 is represented in Figure 2.
Figure 2. Flowchart of Step 2 of the proposed methodology
4.3 URL Information Analysis
URL information can be very helpful in the detection of phishing attacks. This step is based on the fact that phishing URLs are newly registered and have lower rating and popularity on the internet. Figure 3 represents the working of the URL information analysis step. In this step, the requested URL is analyzed against Knowledge Base B: if the URL was visited previously, its information is analyzed using the historical data, the results are displayed, and a warning is generated if the URL is a phishing-attack URL.

If the URL is not present in the history of Knowledge Base B, it goes to Parser 2 for information analysis. Parser 2 finds the information of the URL as a web crawler (as described above) and performs the proper analysis after crawling for the details of the URL over the internet.
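A hedged sketch of Parser 2's verdict logic is given below. The paper only states that newly registered, low-rated, low-popularity domains are treated as risky; the field names, the reference year, and the numeric thresholds here are all assumptions for illustration.

```python
# Illustrative sketch of Parser 2's URL information analysis.
CURRENT_YEAR = 2013  # assumed reference year for the evaluation period

def parser2_analyze(details):
    """details: dict with 'license_year', 'rating' (0-10), 'popularity_rank'.

    Returns a verdict and the list of reasons that triggered it.
    Thresholds are hypothetical, not taken from the paper.
    """
    reasons = []
    if CURRENT_YEAR - details["license_year"] < 1:
        reasons.append("domain registered less than a year ago")
    if details["rating"] < 3:
        reasons.append("low domain rating")
    if details["popularity_rank"] > 1_000_000:
        reasons.append("low popularity")
    return ("phishing-suspect", reasons) if reasons else ("legitimate", [])
```

A freshly registered domain with low rating and popularity would accumulate all three reasons and be reported as a phishing suspect, while a long-established popular domain passes.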
Figure 3. Flowchart of Step 3 of the proposed methodology
4.4 Fraud URL Detection
This step is performed by Knowledge Base C and Parser 3. Knowledge Base C performs the fraud check analysis of the requested URL (if it is available in the knowledge base's history) and displays the result and appropriate messages. If the URL has not been visited previously, Parser 3 performs the fraud check analysis, using some security algorithms, to provide security against phishing attacks (and other web attacks). Figure 4 describes the fraud check analysis.
Figure 4. Flowchart of Fraud Check Analysis
5. IMPLEMENTATION AND RESULTS
We have implemented the proposed methodology in a virtual scenario, exploring all the
URLs visited by browsers on different machines using the history feature. All the URLs were
stored in a database in order to detect phishing attacks and perform the analysis. We plan to
package this methodology as an add-on that can be installed in current web browsers (like
other Firefox add-ons).
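The paper does not describe the database schema, but a minimal sketch of the URL store (table and column names are assumptions for illustration) could be:

```python
import sqlite3

# Hypothetical single-table schema for the visited-URL store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE visited_urls (
                    url TEXT PRIMARY KEY,
                    first_seen TEXT,
                    flagged INTEGER DEFAULT 0)""")
conn.execute("INSERT INTO visited_urls VALUES (?, ?, ?)",
             ("http://www.example.com/", "2013-02-01", 0))
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM visited_urls").fetchone()[0]
```

Storing each URL with its first-visit date and a flag makes both the history lookups of the knowledge bases and the monthly analysis below straightforward queries.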
We analyzed the URLs visited over five months. In the initial stage of implementation the
security risk is higher because of the absence of data in the different knowledge bases. The
implemented scenario provides 97.98% security against phishing attacks and some hacking
attacks. We did not execute the proposed methodology during December 2012 and January
2013, but executed it from February 2013 to April 2013.
The following table presents the recorded web-URL activity in the implemented scenario,
together with the phishing attacks and some hacking attacks that were detected.
Table 1: URL and some Web Attacks Analysis (2012-13)

Month                                   Dec    Jan    Feb    Mar    Apr
No. of URLs visited                     978    967    897   1023   1218
Phishing attacks                         21     18     17     22     24
Phishing attacks detected by browser     15     12     15     20     24
SSL attacks                              19     13     12      9     14
SSL attacks detected by browser          13     11     12      8     14
Execution time (minutes)                  0      0    165    204    284
The above table shows the number of phishing attacks encountered and the execution time taken
by our methodology from December 2012 to April 2013. The execution time for the first two
months is zero because the methodology had not yet been implemented; we implemented it from
February 2013 onwards.
Note that the approximate execution time per URL visit for the first month comes out to
about 11 seconds. This increases to 12 seconds in the second month and to 14 seconds in the
third month. This gradual increase can be attributed to the growing size of the knowledge
bases: the browser checks each URL against more known attacks than before.
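These per-URL figures follow directly from Table 1 (execution time in minutes divided by the number of URLs visited):

```python
# Table 1 figures for the three months in which the methodology ran.
urls    = {"Feb": 897, "Mar": 1023, "Apr": 1218}   # URLs visited
minutes = {"Feb": 165, "Mar": 204, "Apr": 284}     # execution time

# Average seconds of analysis per URL visit, rounded to the nearest second.
seconds_per_url = {m: round(minutes[m] * 60 / urls[m]) for m in urls}
# {'Feb': 11, 'Mar': 12, 'Apr': 14}
```

The rounded values (11, 12 and 14 seconds) match the figures quoted in the text.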
6. CONCLUSIONS
We recorded web-URL activity with and without the proposed methodology over five months.
From the data, we analyzed the attacks encountered and the attacks detected over time. Our
system provided 97.98% security against phishing attacks as well as SSL attacks during
browsing. Table 1 presents the data recorded over the five-month period. A limitation of the
proposed method is that, because of the various parsing operations, its time and space
complexity is higher, which often increases the browsing time of the web browser. Because
of the slower browsing speed, web users generally avoid this kind of heavyweight web security.
Authors
Gaurav Kumar Tak is an assistant professor in the School of Computer Engineering,
Lovely Professional University. His primary research interests are cyber-crime and
security, wireless ad-hoc networks and web technologies. He has written several
research papers in these areas and continues to work on improving the security of web
applications and making the web safe to surf.
Gaurav Ojha is a student in the Department of Information Technology, Indian Institute
of Information Technology and Management Gwalior. His areas of interest are Web
technologies, Open source software and Internet Security.