This document discusses insecure trends in web technologies such as social networks, AJAX, Flash, and Silverlight. It provides an analysis of security risks with each technology. As a case study, it presents how a malicious user could potentially create a self-replicating worm using Silverlight on a social networking site by exploiting a weakness that allowed controlling user account features. The document aims to educate developers and users about security issues that can arise from these technologies.
This document discusses cross-site scripting (XSS) attacks. It begins by defining XSS and explaining that it occurs when an attacker uses a victim's browser to run malicious scripts. There are three main types of XSS attacks: reflected, stored, and DOM-based. The document then discusses the history and evolution of XSS attacks, providing examples over time that increased in scale and sophistication. It covers technical details of how the different XSS attacks work and potential impacts from a professional, social, and ethical perspective. The goal is to raise awareness about XSS vulnerabilities and prevention.
ISSA Journal Paper - JavaScript Infection Model (Aditya K Sood)
This document discusses how advancements in Web 2.0 technologies have created security threats to the World Wide Web. It focuses on the negative exploitation of JavaScript, which is heavily used by malware writers to spread infections online. The document describes the JavaScript Infection Model (JIM), which outlines the typical methods used by attackers to inject malware onto websites. These include using an attacker-controlled domain to host malware, exploiting vulnerabilities to inject malicious JavaScript onto legitimate sites, and using JavaScript's abilities to conduct stealth attacks and spread the infection across multiple domains.
This document discusses clickjacking attacks, which hijack users' clicks to perform unintended actions. It provides an overview of clickjacking, describes different types of attacks, and analyzes vulnerabilities that make websites susceptible. Experiments are conducted on a sample social networking site, applying various clickjacking techniques. Potential defenses are tested, including X-Frame-Options headers and frame busting code. A proposed solution detects transparent iframes to warn users and check for hidden mouse pointers to mitigate cursorjacking. Analysis of top Jammu and Kashmir websites found most were vulnerable, while browser behavior studies showed varying support for defenses.
"How To Defeat Advanced Malware: New Tools for Protection and Forensics" is a FREE continuing education class that has been designed specifically for CIO's, CTO's, CISO's and senior executives who work within the financial industry and are responsible for their company's endpoint protection.
This document discusses strategies for achieving bulletproof IT security. It recommends establishing strong security policies, frequent employee training, ongoing self-assessments, encryption, asset management, and testing business continuity plans. It also stresses the importance of system hardening through vulnerability management and addressing issues like BYOD. The document provides numerous free tools and resources organizations can use to identify vulnerabilities, harden systems, and prevent malware.
This document discusses software security vulnerability analysis, with a focus on SQL injection vulnerabilities. It provides an overview of common web application vulnerabilities, describes the state of SQL injection and examples of how it works. It also discusses best practices for protecting against SQL injection, such as input validation and using prepared statements. The document examines SQL injection scanning tools and provides examples of their use.
These are the slides from my "Active Defense - Helping threat actors hack themselves!" presentation at the BSides Columbus Information Security Conference on 03/02/2018 in Columbus, Ohio.
1. The document discusses phishing attacks, which involve tricking users into providing private information through fraudulent emails or websites.
2. Phishing attacks are classified into two categories: social engineering attacks, which use psychological tricks, and malware-based attacks, which install harmful software.
3. More recent phishing scams appeal to positive emotions such as hope and gratitude and incorporate elements of social networks, which makes them more effective at gaining user trust and spreading widely.
CSRF Attacks and its Defence using Middleware (ijtsrd)
A common solution to the issue of CSRF vulnerability is to restrict malicious requests from reaching the core of the application, where all the data and business logic reside. The most challenging part is to identify when a request is malicious and when it is healthy. Implementing too simple a solution would leave vulnerabilities open, while implementing too strict a solution would break projects that depend on cross-site requests, such as third-party authentication and payment gateways. The solution proposed in this paper constitutes the design and implementation of a request-filtering mechanism that can precisely distinguish between malicious and healthy requests, and automatically decide to restrict them or allow them further into the system. The paper briefly explains what a Cross-Site Request Forgery attack is, then gives a step-by-step explanation of the prevention of CSRF attacks using a middleware. The proposed system is very strict in filtering out HTTP requests but also has an option to exempt certain cross-site requests based on their domain or URL, so that payment hooks and other third-party authentication calls can be exempted from the CSRF middleware. Shubham Kumar Jha | Raghavendra R "CSRF Attacks and its Defence using Middleware" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4, June 2021, URL: https://www.ijtsrd.com/papers/ijtsrd42476.pdf Paper URL: https://www.ijtsrd.com/computer-science/world-wide-web/42476/csrf-attacks-and-its-defence-using-middleware/shubham-kumar-jha
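As a rough illustration of the idea the abstract describes, here is a hedged PHP sketch (not the authors' implementation) of a middleware that rejects cross-site state-changing requests unless the originating domain is on an explicit exemption list; the function name and exemption domain are invented for the example.

<?php
// Hedged sketch of an origin-filtering CSRF middleware (illustrative only,
// not the paper's actual design): safe methods pass, same-site requests
// pass, and exempted third-party domains (payment hooks, federated login)
// pass; everything else is blocked before reaching business logic.

function csrfMiddleware(array $server, array $exemptDomains): bool {
    // Safe methods carry no state change and pass through.
    if (in_array($server['REQUEST_METHOD'], ['GET', 'HEAD', 'OPTIONS'], true)) {
        return true;
    }
    $origin = $server['HTTP_ORIGIN'] ?? $server['HTTP_REFERER'] ?? '';
    $host   = parse_url($origin, PHP_URL_HOST) ?: '';
    // Same-site requests are healthy; exempted domains are allowed through.
    return $host === $server['HTTP_HOST'] || in_array($host, $exemptDomains, true);
}

// Usage: reject the request before it reaches the core of the application.
if (!csrfMiddleware($_SERVER, ['pay.example-gateway.com'])) {  // hypothetical gateway
    http_response_code(403);
    exit('Cross-site request blocked');
}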
WatchGuard Technologies is promoting its cloud-based web security solution called Reputation Enabled Defense. This solution uses reputation scoring from a large database to determine if URLs are safe to access, allowing safe URLs to bypass antivirus scanning for faster page loading. It provides an extra layer of protection against evolving web threats in real-time without sacrificing performance. The benefits include improved security against malware, better performance through reduced scanning, and a more proactive approach to fighting threats.
Cyberthreats broke new ground with mobile devices, while reaching deeper into social media. Online criminals also stepped up attacks via email, web and other traditional vectors.
1. The number of malicious web links grew by almost 600% worldwide according to data from Websense Security Labs.
2. 85% of malicious web links were found on legitimate web hosts that had been compromised, indicating websites can no longer be trusted based on their reputation.
3. Traditional anti-virus and firewall defenses are no longer sufficient to prevent web-borne threats, as the web serves both as an attack vector and in supporting other attack vectors like social media, mobile, and email. Advanced defenses that can identify compromised legitimate sites in real-time are needed.
Don’t let Your Website Spread Malware – a New Approach to Web App Security (Sasha Nunke)
This document discusses how malware risks on websites are increasing and how traditional antivirus approaches are unable to detect zero-day malware attacks. It proposes a better approach to detecting website malware that involves behaviorally analyzing a website by running it on an instrumented virtual machine to watch for exploitation in real-time, allowing detection of zero-day malware. This approach is passive and safe for daily scans, unlike traditional vulnerability scanning. The document recommends using both passive malware scanning and active vulnerability scanning regularly to identify malware as well as vulnerabilities that could be exploited to infect a site or its users. This dual approach protects brands, revenue, and users from the impacts of malware and exploitation.
This document discusses how web-based attacks have become the biggest threat today. It provides statistics showing the rise in malicious websites and vulnerabilities. Common web attacks like SQL injection and cross-site scripting are demonstrated. Social networks are highlighted as particularly dangerous due to their combination of technical vulnerabilities and the trusting social environment. Examples are given of actual attacks on social networks, including hacking of popular applications to spread malware. The overall message is that social networks have become a major conduit for web-based threats due to their technical issues compounded by users' trusting social behaviors.
Content Management System Security.
How to secure your CMS?
Common rules:
+ Choose your CMS with both functionality and security in mind
+ Update with urgency
+ Use a strong password (admin dashboard access, database users, etc.)
+ Have a firewall in place (detect or prevent suspicious requests)
+ Keep track of the changes to your site and their source code
+ Give the user permissions (and their levels of access) a lot of thought
+ Limit the type of files to non-executables and monitor them closely
+ Backup your CMS (daily backups of your files and databases)
+ Uninstall plugins you do not use or trust.
Web Application Security Guide by Qualys 2011 (nat page)
This document provides an overview of web application security and vulnerabilities. It describes how various exploits like SQL injection and cross-site scripting can compromise web applications. The document also categorizes common types of vulnerabilities like authentication issues, authorization problems, client-side attacks, and information disclosure. It emphasizes that automated scanning tools are effective for detecting many syntax-based vulnerabilities, while more complex logical flaws may require manual code analysis.
While phishing is an “old-fashioned” cyber security threat, attacks continue to increase. This course will better prepare you to defend against this threat.
Methods hackers use to attack a network include software-based attacks like cross-site scripting (XSS) and buffer overflows, infrastructure attacks such as denial-of-service (DoS) attacks and viruses, and physical attacks involving theft of hardware, information, or other resources. Software attacks target application vulnerabilities, infrastructure attacks compromise network resources, and physical attacks involve directly accessing systems or stealing equipment. Defenses include keeping software updated, using firewalls and antivirus software, and protecting physical access to systems and sensitive data.
The document discusses the challenges of combating computer viruses given their ability to spread rapidly. It notes that a single virus writer can trigger a chain reaction that infects thousands of computers. Recent viruses like SoBig demonstrated this danger, with one version found in half of all emails scanned. While some blame careless users, the document argues users are overwhelmed by the complex tech landscape. It also discusses challenges faced by administrators and software companies in keeping systems fully protected given the difficulties of eliminating all vulnerabilities from hugely complex programs like Windows.
The WannaCry ransomware virus infected over 200,000 organizations in 150 countries, crippling many hospitals and other organizations. It exploited a vulnerability in Windows to encrypt files and demand ransom payments in bitcoin. While a "kill switch" was discovered that stopped the spread, many systems already infected could not be recovered without paying ransom. It highlighted the need to keep systems updated and have backups to prevent future attacks.
The document discusses the concept of "secure pipes", which refers to internet service providers integrating security functions directly into their network infrastructure to filter traffic before it reaches customers. This represents a paradigm shift from the traditional approach where customers were responsible for security after receiving traffic. Secure pipes involve three stages: 1) Filtering to block known bad traffic using signatures, 2) Exposing unknown malicious content through advanced analytics, and 3) Predicting future attacks by analyzing digital breadcrumbs from reconnaissance activities. The key benefits are applying security at internet speeds, gaining visibility from millions of endpoints, and allowing security teams to focus on more sophisticated threats.
This document presents a proposed system for detecting phishing websites using a Chrome extension. The system compares URLs to entries in two databases - the Phishtank database of known phishing sites, and a local IndexedDB of frequently visited sites. If a match is found in either database, the Chrome extension will flag the site as potentially malicious by changing color. The system was tested on 53 URLs, achieving an accuracy of 92.45% at detecting phishing sites. The proposed system aims to alert users to phishing sites and protect them from disclosing sensitive information to attackers.
White Paper: Symantec Security Response Presents: The Waterbug Attack Group (Symantec)
Waterbug is a cyberespionage group that uses sophisticated malware to systematically target government-related entities in a range of countries.
The group uses highly-targeted spear-phishing and watering-hole attack campaigns to target victims. The group has also been noted for its use of zero-day exploits and signing its malware with stolen certificates.
Once the group gains a foothold, it shifts focus to long-term persistent monitoring tools which can be used to exfiltrate data and provide powerful spying capabilities. Symantec has tracked the development of one such tool, Trojan.Turla, and has identified four unique variants being used in the wild.
Ransomware has become the most profitable malware type due to its use of strong encryption. Recent ransomware campaigns have increasingly targeted businesses and entire industries. Attackers now use vulnerabilities in servers and network infrastructure like JBoss to quietly spread ransomware through organizations. This allows them to encrypt large amounts of data and demand high ransom payments. Looking ahead, ransomware is expected to evolve new self-propagation abilities, increasing its impact. Organizations must improve patching of vulnerabilities, backup critical data, and develop incident response plans to address growing ransomware threats.
Figure 1. Defaced websites from the year 2009–2017 [6]
II. BACKGROUND
This section discusses related work, which includes “Web injection attacks”, “Implementing a web browser with web defacement detection technique”, “Detection of web defacement by means of Genetic Programming”, “A Web Vulnerability Scanner”, and “SQL-injection vulnerability scanning tool for automatic creation of SQL-injection attacks”.
Medvet et al. [7] observe that “Most Web sites lack a systematic surveillance of their integrity and the detection of web defacements is often dependent on occasional checks by administrators or feedback from users”. Different website defacement detection tools have been proposed using techniques such as reaction time sensors and cardinality sensors [7], [8].
A. Web Injection Attacks
Morgan [9] discusses code injection, and specifically cross-site scripting. Code injection refers to the malicious insertion of processing logic to change the behaviour of a website to the benefit of an attacker. Morgan identifies cross-site scripting as a very common code injection attack that executes malicious scripts in a legitimate website or web application. Mitigating against code insertion is also discussed in the article, which proposes stages for preventing code injection/insertion attacks.
B. Implementing A Web Browser With Web Defacement Detection Techniques
Kanti et al. [10] discuss a prototype web browser that can be used to check for the defacement of a website, also proposing a recovery mechanism for defaced pages using a checksum-based approach. The algorithm proposed in the study was used for defacement detection and was implemented in the prototype web browser, which has built-in defacement detection techniques. The browser periodically recalculates the checksum of each web page, at a frequency set by the defacement detection technique, and compares it against the saved hash code or checksum for that page. Upon comparing their algorithm against existing integrity-based web defacement detection methods, they found it detected approximately 45 times more anomalies than the other methods.
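The checksum idea is simple enough to sketch. The PHP fragment below is an illustrative reconstruction of the general approach, not Kanti et al.'s actual algorithm; the page path and the choice of SHA-256 are assumptions made for the example.

<?php
// Illustrative sketch of checksum-based defacement detection (not Kanti et
// al.'s actual code). A baseline digest is stored for each page and compared
// against a freshly computed one on every periodic check.

function pageHash(string $path): string {
    return hash_file('sha256', $path);   // SHA-256 is an assumed choice
}

// Taken once, at deployment time, and saved somewhere tamper-proof.
$baseline = pageHash('/var/www/html/index.html');

// Later, on each scheduled check:
if (!hash_equals($baseline, pageHash('/var/www/html/index.html'))) {
    echo "ALERT: index.html no longer matches its baseline checksum\n";
}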
C. Detection of Web Defacement By Means of Genetic Programming
Medvet et al. [7] discuss how Genetic Programming (GP) was used to establish an evolutionary paradigm for the automatic generation of algorithms for detecting web defacement. GP builds an algorithm based on a sequence of readings of the remote page to be monitored and on a sample set of attacks. A framework proposed prior to this paper was used to prove the concept of a GP-oriented approach to defacement detection: a specialised tool for monitoring a collection of web pages that are typically remote, hosted by different organisations, and whose content, appearance and degree of dynamism were not known prior to the research. Medvet et al. conducted tests by executing a learning phase for each web page, constructing a profile to be used in the monitoring phase. During implementation, when a reading failed to match the profile created during the learning phase, the tool would raise alerts and send notifications to the administrator.
D. SQL-Injection Vulnerability Scanning Tool For Automatic Creation of SQL-Injection Attacks
Alostad et al. [11] discuss the development of a web scanning tool with enhanced features that is able to conduct efficient penetration testing on Hypertext Preprocessor (PHP) based websites to detect SQL injection vulnerabilities. The MySQL1Injector tool, as proposed by Alostad et al., automates and simplifies the penetration testing process (as much as possible). The proposed tool utilises different attacking patterns, vectors and modes for shaping each attack. The tool is able to conduct efficient penetration testing on PHP-based websites in order to detect SQL injection vulnerabilities and subsequently notify the web developers of each vulnerability that needs to be fixed. The tool houses a total of 10 attacking patterns; if one pattern fails to expose a vulnerability, the others may succeed in tricking the web server if it is vulnerable. The tool is operated remotely through a client-side interface. The MySQL1Injector tool gives web developers who are not trained in penetration testing an opportunity to conduct penetration testing on their web database servers.
E. A Web Vulnerability Scanner
Kals et al. [12] discuss vulnerabilities that web applications are exposed to as a result of generic input validation problems, identifying SQL injection and cross-site scripting (XSS). Furthermore, they demonstrate how easy it is for attackers to automatically discover and exploit application-level vulnerabilities in a large number of web applications. Kals et al. propose a web vulnerability scanner that automatically analyses web sites with the aim of finding exploitable SQL injection and XSS vulnerabilities. The proposed tool, SecuBat, has three main components: a crawling component, an attack component and an analysis module. The crawling component is seeded with a root web address, from which it steps down the link tree, collecting all pages and included web forms. The attack component scans each page for the presence of web forms, extracting the action address and the method (i.e., GET or POST) used to submit the form content. The analysis module parses and interprets the server response after an attack has been launched, using attack-specific criteria and keywords to calculate a confidence value for deciding whether the attack was successful. The tool was tested with both injection attack types by means of a case study.
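The crawling and attack stages are easy to picture with a small fragment. The PHP below is a hedged sketch of how a scanner might extract each form's action and method from a fetched page; it is not SecuBat's implementation, and the seed URL is hypothetical.

<?php
// Hedged sketch of SecuBat-style form discovery (not the actual SecuBat
// code): fetch a page, then pull out each form's action address and
// submission method, which the attack component would target.

$html = file_get_contents('http://example.com/page');   // hypothetical seed

$doc = new DOMDocument();
@$doc->loadHTML($html);   // '@' silences warnings on real-world messy HTML

foreach ($doc->getElementsByTagName('form') as $form) {
    $action = $form->getAttribute('action') ?: '(current page)';
    $method = strtoupper($form->getAttribute('method') ?: 'GET');
    echo "form -> action: $action, method: $method\n";
    // The attack component would now submit crafted SQL/XSS payloads to
    // these targets and pass the responses to the analysis module.
}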
The proposed WDIMT makes use of the concepts discussed above, which include creating tools that may prevent and detect possible code injections, developing algorithms that incorporate some form of checksum or hashing method to detect web defacement, identifying any web pages that may have been removed by unauthorised personnel, and regenerating the original content of a website that has been defaced. The proposed WDIMT comprises functionalities used by the tools discussed above, such as rapid identification and notification of a defaced web page or website, regeneration of original content, and rapid identification of intruder files and unauthorised changes made to web pages. However, the WDIMT is more client-side dependent, as a user has full control and view of each web page belonging to them. The tool also provides a visual representation of any new files that were added without the user's consent, deleting them automatically, which may minimise the injection of unknown/unauthorised scripts. Manipulated files or web pages are also visually represented, allowing the user to identify affected files more efficiently. The response time for detection and re-uploading of original content is rapid, which may also serve as a preventative measure against a denial-of-service attack on a website. Any file or web page deleted as the result of an attack will be automatically regenerated.
III. WEB DEFACEMENT AND INTRUSION MONITORING TOOL
The Web Defacement and Intrusion Monitoring Tool (WDIMT) is a tool for website defacement detection. It makes use of a website for displaying each monitored site's current defacement status and monitors each website's web pages individually; automatic regeneration of defaced or deleted pages is performed once the script is executed in the Linux terminal. The WDIMT can detect any defacement anomalies that have been made to a web page or an entire site.
A. WDIMT System Architecture
The system architecture of the WDIMT is structured into three layers, as seen in Figure 2. The presentation layer represents the different graphical user interfaces used for displaying the user's information and the execution of commands in the Linux terminal (the execution of WDIMT commands is not limited to a Linux terminal). The business layer is where most of the communication between the database and the presentation layer occurs; this layer controls the communication channels between the presentation layer and the data access layer. The data access layer represents the database, which stores user information and the hash of each web page. How the different layers communicate with each other is displayed in Figure 2.
B. WDIMT Requirements
Using the WDIMT currently requires a user to have a Windows machine and a Linux machine. Both machines and the database were connected to the same network; this was done for immediate communication between the two machines and the database. The Linux machine is used to execute the commands, and the Windows machine is used to access the WDIMT web page for registering and viewing a user's data. The communication between the machines and the database is illustrated in Figure 3. A user has the option of using either a desktop computer or a laptop, or mixing the devices used.
The requirements presented below are the bare minimum of the system specifications used when the WDIMT was being developed. The WDIMT requirements are subject to change depending on a user's machine. A user may have system specifications that exceed or fall below the minimum requirements, and this will not affect the performance and functionality of the WDIMT. The minimum requirements for each machine are as follows:
Linux Machine:
• Linux Ubuntu v14.04 or later.
• Hypertext Preprocessor (PHP).
• 80GB Hard drive.
• 2GB RAM.
• Intel Core i7 Extreme QUAD CORE 965 3.2GHz.
Windows Machine:
• Windows 7 or later.
• 80GB Hard drive.
• 2GB RAM.
• Intel Core i7 Extreme QUAD CORE 965 3.2GHz.
Figure 2. System architecture of the WDIMT.
Figure 3. Connection between the Windows and Linux machines.
C. WDIMT Features
The WDIMT makes use of the Linux terminal for executing the script that hashes an entire site's web pages; the hashes are stored in a database hosted off-site, away from the website's web server. The stored hashes are compared against the hash of each web page, calculated individually every time the script is executed in the Linux terminal. The script is run with various commands that initialise, verify, force and delete. A user is able to use the WDIMT website to view the status of each web page, represented by a colour schema on the website: green indicating that a web page has not been altered in any form, red indicating that a web page has been removed/deleted, and yellow indicating that a web page has been altered, as seen in Figure 4.
Figure 4. The home page of WDIMT.
1) Initialise: The initialise command is executed with ‘-i’. This command reads the full folder of a website, hashing each web page found; the stored hashes are used at a later stage by the verify command. A copy of each web page is also made and stored in the database hosted off-site. The stored copies are retrieved when the original content of a web page is reloaded. This command sets the status of each file to green and sets the reload flag to a default value under which the original content will not be reloaded. The reload value may be altered by a user on the WDIMT web page.
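As an illustration, the ‘-i’ pass could be organised along the following lines. This is a hedged sketch, not the WDIMT's actual script: the PDO connection and the `pages` table with its columns (path, hash, copy, status, reload) are all assumed for the example.

<?php
// Hedged sketch of an initialise ('-i') pass: walk the site folder, hash
// each page, and store the hash plus a pristine copy off-site. The schema
// is hypothetical, not the WDIMT's actual database layout.

function initialiseSite(PDO $db, string $siteRoot): void {
    $insert = $db->prepare(
        "INSERT INTO pages (path, hash, copy, status, reload)
         VALUES (?, ?, ?, 'green', 0)"
    );
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($siteRoot, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        if (!$file->isFile()) {
            continue;
        }
        $path = $file->getPathname();
        // Status starts green; reload defaults to 0 (do not auto-restore).
        $insert->execute([$path, hash_file('sha256', $path), file_get_contents($path)]);
    }
}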
2) Verify: The verify command is executed with ‘-v’. This command verifies the hashes of the web pages and checks whether any form of defacement has occurred. If the hash of any web page has changed, the user is able to identify the defaced web page both on the Linux terminal and on the WDIMT web page, as seen in Figure 11. If a user would like the original content of an identified web page to be reloaded, the user clicks the “More Info” link, which redirects to a web page with the functionality of changing a flag on the identified page. The flag option gives a user the choice of having the identified web page reloaded with its original content, as seen in Figure 6. After a user has selected that the original content of the web page be reloaded, the verify command must be executed once more in the Linux terminal for the status of the file to change from yellow to green, indicating that the content of the web page has been reloaded. This command also identifies any intruder files, which are removed immediately. Any file that has been deleted is recreated, and the file's status is identified by the colour red on the WDIMT web page.
Figure 5. WDIMT web page displaying a file that has been altered.
Figure 6. WDIMT’s web page with flag options.
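A hedged sketch of the verify pass, continuing with the hypothetical `pages` schema from the initialise sketch above: each stored hash is compared with the file as it stands now, and the colour status follows the schema described earlier.

<?php
// Hedged sketch of a verify ('-v') pass (not the WDIMT's actual script).
// Green: unchanged. Yellow: altered (restored only if the user set the
// reload flag on the web page). Red: deleted, recreated from the copy.

function verifySite(PDO $db): void {
    foreach ($db->query('SELECT path, hash, copy, reload FROM pages') as $row) {
        if (!file_exists($row['path'])) {
            file_put_contents($row['path'], $row['copy']);   // recreate deleted page
            echo "RED (regenerated): {$row['path']}\n";
        } elseif (!hash_equals($row['hash'], hash_file('sha256', $row['path']))) {
            if ($row['reload']) {
                file_put_contents($row['path'], $row['copy']);
                echo "GREEN (restored): {$row['path']}\n";
            } else {
                echo "YELLOW (altered): {$row['path']}\n";
            }
        } else {
            echo "GREEN: {$row['path']}\n";
        }
    }
}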
3) Force: The force command is executed with ‘-f’. This command overrides the verify command, as it reloads the original content of every web page. Hash comparison is used to identify any altered web pages; the comparison process is executed within this command. Any intruder files that are identified will be removed immediately. Files that have been deleted will be regenerated, and all file statuses will be set to green. The command does not require a user to have the reload option set, as it bypasses any option a user may have selected.
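Continuing with the same hypothetical schema, the force pass as described above can be pictured as the verify pass with every user choice overridden; again a sketch, not the actual script.

<?php
// Hedged sketch of a force ('-f') pass: restore every page from its stored
// copy and reset all statuses, ignoring any reload flags the user set.

function forceRestore(PDO $db): void {
    foreach ($db->query('SELECT path, copy FROM pages') as $row) {
        file_put_contents($row['path'], $row['copy']);   // reload original content
    }
    $db->exec("UPDATE pages SET status = 'green'");      // everything back to green
}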
4) Delete: The delete command is executed with ‘-d’. This command deletes all the web pages that belong to a particular website that a user has specified. It identifies the website's full folder path provided by the user and deletes all the content that has been stored, including the copies and web page hashes that were generated when the initialise command was first executed. This clears all instances in the database. When a user subsequently accesses the WDIMT web page, no files will be displayed to them, as all instances have been deleted.
D. WDIMT Flow Diagram
The commands follow a sequential flow, with initialise being the first step required: the user will have registered and provides a path to the web pages which need to be monitored. Once the files have been successfully hashed, copied and stored, the user is able to view the uploaded files on the WDIMT web page. The verify command may be executed once a user has files uploaded; this command checks each individual web page for defacement, and once defacement has been detected the web page's original content may be re-uploaded. A user may prefer to delete their content from the WDIMT; the delete command deletes all the web pages that belong to a user. Furthermore, the force command executes the delete command followed by the initialise command. The flow diagram of the commands is seen in Figure 7.
Figure 7. Flow diagram of command execution in the WDIMT.
E. WDIMT Usage
A user is required to use the Linux terminal, where the user provides a full path to the location of the website's web pages to be monitored. The Linux terminal is also used for executing the WDIMT script, which is run with additional commands for initialising, verifying, forcing and deleting the web pages belonging to the user. After successful registration on the WDIMT website, a user needs to use a Linux terminal to execute the script with an initialise command, providing the full path to the location of the website's web pages, as seen in Figure 8.
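The four command letters come straight from the paper; how the script dispatches on them is not shown, so the fragment below is an assumed reconstruction that reuses the hypothetical helpers sketched earlier (deleteSite is analogous and omitted, and the database credentials are placeholders).

<?php
// Assumed command dispatch for the WDIMT script. Only the option letters
// (-i, -v, -f, -d) are documented; the rest is an illustrative sketch.

$db = new PDO('mysql:host=localhost;dbname=wdimt', 'user', 'secret'); // hypothetical DSN

$opts = getopt('i:vfd');   // -i expects the site's full folder path

if (isset($opts['i'])) {
    initialiseSite($db, rtrim($opts['i'], '/'));  // hash and copy every page
} elseif (isset($opts['v'])) {
    verifySite($db);                              // detect and report defacement
} elseif (isset($opts['f'])) {
    forceRestore($db);                            // restore every page's original
} elseif (isset($opts['d'])) {
    deleteSite($db);                              // purge stored hashes and copies
}

A user would then run, for example, "php wdimt.php -i /var/www/html" once to register a site and "php wdimt.php -v" on a schedule thereafter.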
IV. WDIMT ANALYSIS
This section analyses the data generated when the WDIMT's PHP script is executed in the Linux terminal. The script is executed with the different commands discussed in the previous section. The following subsections identify the different outputs generated when the script is executed with the verify command.
Figure 8. Linux terminal commands.
A. WDIMT Script
The WDIMT's PHP script that is executed in the Linux terminal has a number of functions. Most of these functions are executed once the verify command has been instantiated. The script is executed with the different options discussed in the previous section. The options invoke different methods and functions within the script; one function, which hashes all the files and web pages, is shown in Figure 9.
Figure 9. Code snippet of a function in the WDIMT script.
B. Intruder Files
Files that have been identified as intruder files are removed immediately when the verify command is executed, possibly minimising any chance of intruder files going undetected. A user is able to identify these intruder files on the Linux terminal once the verify command has been executed, as seen in Figure 10.
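Intruder detection follows naturally from the stored baseline: anything on disk without a baseline entry must have appeared after initialisation. The sketch below is again illustrative, using the same hypothetical schema as the earlier fragments.

<?php
// Hedged sketch of intruder-file removal: any file found on disk that has
// no entry in the baseline `pages` table is treated as planted and deleted.

function removeIntruders(PDO $db, string $siteRoot): void {
    $known = array_flip(
        $db->query('SELECT path FROM pages')->fetchAll(PDO::FETCH_COLUMN)
    );
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($siteRoot, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        if ($file->isFile() && !isset($known[$file->getPathname()])) {
            echo "INTRUDER (removed): {$file->getPathname()}\n";
            unlink($file->getPathname());
        }
    }
}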
C. Changed Files
Web pages and files that have been altered by unauthorised personnel are identified on the Linux terminal once the verify command has been executed, as seen in Figure 11. The altered web pages and files will only be re-uploaded with their original content once a user has changed the flag option of the affected web page or file on the WDIMT web page and executed the verify command again.
Figure 10. Identified intruder files.
Figure 11. Identified changed files.
D. Removed Files
Web pages and files that have been removed by unauthorised personnel are identified and regenerated in the respective directory where each file was located. On the Linux terminal the user is able to identify these files and also confirm that they have been regenerated in the original path where each file was located, possibly minimising the chances of the website being unavailable, as seen in Figure 12.
Figure 12. Identified removed files.
V. DISCUSSION
Numerous web defacement monitoring tools have been developed, offering different web defacement detection techniques, and some of these techniques are used in real-life scenarios. The WDIMT automates swift detection of defacement, swift notification of defacement, and swift re-uploading of a web page's original content.
The strength of the WDIMT lies in its swift detection of defacement across an entire website's web pages: it identifies each defaced web page individually and swiftly notifies a user which web pages have been defaced. The WDIMT rapidly re-uploads the original content of a defaced web page, visually identifies which web pages were defaced, and provides a user with flag options used to indicate whether a web page's original content should be re-uploaded or left in its current defaced state. The WDIMT's availability is also a strength, as a user is able to identify the affected web pages on the Linux terminal without accessing the WDIMT website.
The WDIMT has some known limitations. For example, the commands associated with the WDIMT are executed in the Linux terminal, which requires a user to have a computer with the Linux operating system installed; this limits users who only have the Windows operating system installed on their computers. The Linux terminal commands also do not require a user to authenticate themselves before executing a command, which may raise security concerns, as it is assumed that the person executing the commands is the website's owner. Finally, executing the commands requires manual action rather than being automated.
Further development of the WDIMT would improve the tool's swift defacement detection, notifications and re-uploading of original content. The following subsections provide scenarios which illustrate use cases of the WDIMT. These scenarios show that the proposed WDIMT has capabilities beyond just monitoring a user's web pages: regeneration of web pages after penetration testing has been conducted, and allowing users to visually identify the affected web pages, giving full control over re-uploading of each web page's original content.
A. Detection of Intruder Files
A company has a website that advertises the services it offers. The website's administrator has access to some of the company's confidential information, which is accessed through SQL commands. An intruder could plant a file that accesses or duplicates the SQL commands, which could possibly give the intruder administrative rights. The intruder's file could be inserted deep into the website's folder structure, where it may not be visible when manually searching through the folders. The tool can be used to identify the intruder file and immediately remove it.
B. Detection of Altered Web Pages
The local government elections are about to commence in the country. Parties involved in the elections could possibly sabotage another party's website, which contains that party's campaign information. A party could render the services of a hacker to alter the opposition party's website. The hacker could alter all web pages to give false information about the party. The tool can be used to identify the web pages that have been altered by the hacker and re-upload the original content of the pages that were altered.
C. Detection of Deleted Web Pages
A web developer who was previously employed by a company for web development services is bitter after dismissal. The web developer gains access to the company's website and deletes some web pages, thus making the website inactive. The inactive website could cost the company an undisclosed amount of money. The tool could be used to identify the deleted web pages, swiftly regenerate any of them, and re-upload each web page's original content.
VI. CONCLUSION
Accessing the Internet has been made easier with the advancement of technology, and the volume of information distributed on the Internet may increase daily. With this information being accessed daily, it is unlikely that everyone making use of the Internet is using it for legal purposes.
The WDIMT is a web defacement detection tool that monitors web pages for defacement. It makes use of the Linux terminal for executing commands and a web page that visually represents all web pages belonging to a user. Upon execution of commands in the Linux terminal, the WDIMT is able to detect any defacement that may have occurred or is currently in progress. If any defacement is detected, it is visually represented on the WDIMT's web page, which gives a user the option to identify a defaced web page and have the original content of that web page re-uploaded. Swift re-uploading of a web page's original content is one strong point of the WDIMT. The proposed tool aligns well with the scenarios identified in the previous section, as the tool's full capacity and all its features are used in fulfilment of the identified scenarios.
Future work will be done on the WDIMT to allow the commands to be executable on a Windows machine. An impact study will be conducted using data gathered from usage of the WDIMT; this data may be used for analytic purposes that could assist in identifying which web pages are commonly defaced. As a further avenue of research, it would be useful for a user to have a mobile application that provides the same functionality as the WDIMT's web page.
REFERENCES
[1] Lady Ninja86. (2016, Dec.) What is the difference between webpage, website, web server, and search engine? Mozilla Developer Network. [Online]. Available: https://developer.mozilla.org/en-US/docs/Learn/Common-questions/Pages-sites-servers-and-search-engines
[2] T. Perez. (2015) Why websites get hacked. Sucuri Inc. [Online]. Available: https://blog.sucuri.net/2015/02/why-websites-get-hacked.html
[3] G. Davanzo, E. Medvet, and A. Bartoli, “Anomaly detection techniques for a web defacement monitoring service,” Expert Systems with Applications, vol. 38, no. 10, pp. 12521–12530, 2011.
[4] J. Lyon. (2014) What are the 5 most common attacks on websites? Quora. [Online]. Available: https://www.quora.com/What-are-the-5-most-common-attacks-on-websites
[5] Cybercrime.org.za. (2016) Website defacement definition. ISC AFRICA. [Online]. Available: https://cybercrime.org.za/website-defacement
[6] Zone-H. (2017, May) zone-h.org. [Online]. Available: http://www.zone-h.org/stats/ymd
[7] E. Medvet, C. Fillon, and A. Bartoli, “Detection of web defacements by means of genetic programming,” in Information Assurance and Security, 2007. IAS 2007. Third International Symposium on. IEEE, 2007, pp. 227–234.
[8] A. Bartoli, G. Davanzo, and E. Medvet, “The reaction time to web site defacements,” IEEE Internet Computing, vol. 13, no. 4, 2009.
[9] D. Morgan, “Web injection attacks,” Network Security, vol. 2006, no. 3, pp. 8–10, 2006.
[10] T. Kanti, V. Richariya, and V. Richariya, “Implementing a web browser with web defacement detection techniques,” World of Computer Science and Information Technology Journal (WCSIT), vol. 1, no. 7, pp. 307–310, 2011.
[11] A. B. M. Ali, M. S. Abdullah, J. Alostad et al., “SQL-injection vulnerability scanning tool for automatic creation of SQL-injection attacks,” Procedia Computer Science, vol. 3, pp. 453–458, 2011.
[12] S. Kals, E. Kirda, C. Kruegel, and N. Jovanovic, “SecuBat: a web vulnerability scanner,” in Proceedings of the 15th International Conference on World Wide Web, ser. WWW ’06. New York, NY, USA: ACM, 2006, pp. 247–256. [Online]. Available: http://doi.acm.org/10.1145/1135777.1135817