The document analyzes a spam campaign that ran from April to June 2012 and distributed malware via the Blackhole Exploit Kit. It identifies 245 separate spam runs, spoofing between 17 and 40 organizations each month. The spam used social engineering to trick users into clicking links that led through compromised websites to exploit pages hosting the Blackhole Exploit Kit, which attempted to exploit browser and software vulnerabilities to install malware such as ZeuS and Cridex. The campaign was highly effective because of its scale and its use of redirection, compromised sites, and thousands of new URLs daily, which made it difficult for traditional security methods to keep up.
IRJET - An Automated System for Detection of Social Engineering Phishing Atta... - IRJET Journal
1) The document presents a machine learning approach to detect phishing URLs using logistic regression. It trains a logistic regression model on a dataset of 420,467 URLs that have been classified as either phishing or legitimate.
2) It preprocesses the URLs using tokenization before training the logistic regression model. The trained model is able to classify new URLs with 96% accuracy as either phishing or legitimate based on the URL features.
3) The proposed approach provides an automated way to detect phishing URLs in real time and helps prevent phishing attacks. Future work could include developing a browser extension based on this approach and enlarging the dataset for higher accuracy.
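The pipeline these three points describe — tokenize each URL, turn the tokens into a feature vector, and fit a logistic regression classifier — can be sketched in plain Python. The six-URL dataset, the hashed bag-of-tokens featurizer, and all names below are our own illustration, not the paper's 420,467-URL experiment:

```python
import hashlib
import math
import re

def tokenize(url):
    # Split a URL on common delimiters (the paper's preprocessing step).
    return [t for t in re.split(r"[/.\-?=&_:]+", url.lower()) if t]

def bucket(token, dim):
    # Deterministic token hash; Python's built-in hash() is salted per process.
    return int(hashlib.md5(token.encode()).hexdigest(), 16) % dim

def featurize(url, dim=256):
    # Hashed bag-of-tokens feature vector.
    vec = [0.0] * dim
    for tok in tokenize(url):
        vec[bucket(tok, dim)] += 1.0
    return vec

def train(urls, labels, dim=256, lr=0.5, epochs=200):
    # Plain stochastic gradient descent on the logistic loss.
    w, b = [0.0] * dim, 0.0
    data = [(featurize(u, dim), y) for u, y in zip(urls, labels)]
    for _ in range(epochs):
        for x, y in data:
            z = max(-30.0, min(30.0, b + sum(wi * xi for wi, xi in zip(w, x))))
            g = 1.0 / (1.0 + math.exp(-z)) - y   # gradient of the loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(url, w, b, dim=256):
    z = b + sum(wi * xi for wi, xi in zip(w, featurize(url, dim)))
    return 1 if z > 0 else 0   # 1 = phishing, 0 = legitimate

# Toy training data (1 = phishing, 0 = legitimate); the URLs are invented.
urls = [
    "http://paypal-secure-login.example.ru/verify?account=1",
    "http://appleid-confirm.example.top/signin",
    "http://bank-update.example.biz/login.php",
    "https://www.wikipedia.org/wiki/Phishing",
    "https://www.python.org/downloads/",
    "https://github.com/user/repo",
]
labels = [1, 1, 1, 0, 0, 0]
w, b = train(urls, labels)
```

In practice the paper's 96% figure would come from a library implementation (e.g. scikit-learn) trained on the full corpus; this toy only reproduces the shape of the approach.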
IBM X-Force Threat Intelligence Quarterly Q4 2015 - Andreanne Clarke
The document discusses four key cybercrime trends observed by IBM's Emergency Response Services team in 2015: 1) an increase in "onion-layered" security incidents involving both unsophisticated and advanced attackers; 2) a rise in ransomware attacks that encrypt files and demand ransom; 3) growing threats from insider attacks; and 4) cybersecurity becoming a higher priority issue for management. It provides details on each trend and recommendations for organizations to improve security practices such as patching systems, increasing network visibility, training users, and having proper backup and response plans in place.
Problems With Battling Malware Have Been Discussed, Moving...Deb Birch
This document discusses several new methods for detecting malware, including CPU analyzers, holography, eigenvirus detection, differential fault analysis, and whitelist protection. It notes that due to a focus on deobfuscation, these ideas have only recently been explored and are still underdeveloped. Specific methods like CPU analyzers and holography are examined in more detail.
WHITE PAPER▶ Symantec Security Response Presents: The Waterbug Attack Group - Symantec
Waterbug is a cyberespionage group that uses sophisticated malware to systematically target government-related entities in a range of countries.
The group uses highly-targeted spear-phishing and watering-hole attack campaigns to target victims. The group has also been noted for its use of zero-day exploits and signing its malware with stolen certificates.
Once the group gains a foothold, it shifts focus to long-term persistent monitoring tools which can be used to exfiltrate data and provide powerful spying capabilities. Symantec has tracked the development of one such tool, Trojan.Turla, and has identified four unique variants being used in the wild.
Malware's Most Wanted: Malvertising Attacks on Huffingtonpost, Yahoo, AOL - Cyphort
Malvertising Attacks on Huffingtonpost, Yahoo, AOL
Cyphort Labs reported an uptick in drive-by infections through malvertising in 2014 and sounded the alarm for web property owners about this emerging trend. We believe this trend presents a significant cybersecurity challenge in 2015. In this session, we discuss the rise of drive-by attacks by dissecting recent web infections, and share observed, sophisticated behavior of modern exploit packs and the challenges they pose for research and discovery. As we present exploit kit information, trends, and statistics from research derived from our Cyphort Crawler, you will gain an awareness and understanding of these malvertising threats to better protect your site visitors from malware infection.
Symantec offers an in-depth analysis of Rogue Security Software. Rogue security software consists of fake applications that claim to provide computer-security services but, on the contrary, aim to install malicious code that compromises the overall security of the machine.
Overview - Risks - Main methods of spread and distribution.
The observation period runs from July 2008 to June 2009; a summary of the study is presented here.
Analyzing the effectualness of Phishing Algorithms in Web Applications Inques... - Editor IJMTER
The first and most effective casualty of deception is trust. A wolf in sheep's clothing is hard to recognize, and so is a phishing website. Phishing combines social engineering and technical exploits designed to persuade a victim to disclose personal information for the attacker's financial gain. It is a network attack in which the attacker creates a near-perfect copy of an existing web page to mislead users. In this paper, we study two anti-phishing algorithms: an end-host-based algorithm known as LinkGuard and a content-based approach known as CANTINA.
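LinkGuard's central heuristic is to compare the DNS name a user sees in a link's anchor text with the DNS name the link actually points to. A minimal sketch of that comparison follows; the function name and example links are ours, and the real LinkGuard handles more cases (encoded links, IP-address links, and so on):

```python
import re
from urllib.parse import urlparse

def linkguard_check(anchor_text, href):
    """Return True when the link looks deceptive: the visible text names
    one domain but the href actually leads somewhere else (naive sketch)."""
    m = re.search(r"\b([\w-]+(?:\.[\w-]+)+)\b", anchor_text)
    if not m:
        return False  # no visual DNS name to compare against
    visual = m.group(1).lower()
    actual = (urlparse(href).hostname or "").lower()
    # A subdomain of the visible domain is not considered deceptive.
    return bool(actual) and visual != actual and not actual.endswith("." + visual)

print(linkguard_check("www.paypal.com", "http://203.0.113.7/login"))       # deceptive
print(linkguard_check("www.example.com", "https://www.example.com/home"))  # consistent
```

The subdomain exception matters in practice: a link whose text reads `paypal.com` but whose href goes to `login.paypal.com` is legitimate, while one going to an unrelated host or raw IP address is flagged.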
The document summarizes a cyberthreat report from April 2015. It discusses the growing risk of data breaches and malware attacks targeting businesses. Specifically, it highlights attacks targeting healthcare organizations for their valuable personal data, and the increasing use of web malware and macro malware to infiltrate enterprise networks. It provides statistics on prevalent web threats in Q1 2015 like exploit kits and malware focused on generating revenue through hijacking and scams. It emphasizes the ongoing challenge for IT administrators to keep security software updated on all systems to prevent exploits of known vulnerabilities.
The document is an internship report that includes:
- Details about the internship organization and the internship period.
- An overview of ethical hacking and the internship project involving identifying vulnerabilities.
- A description of tasks completed including Portswigger labs, detecting vulnerabilities on a banking website, and executing a payload on a vulnerable website.
- Results from ethical hacking quizzes and a generated vulnerability report using OWASP-ZAP.
- Conclusions about gaining technical security knowledge around hacking techniques and prevention.
MALICIOUS URL DETECTION USING CONVOLUTIONAL NEURAL NETWORK - ijcseit
The World Wide Web has become an important part of our everyday life for information communication and knowledge dissemination. It helps to transact information in a timely, rapid, and easy manner. Identity theft and identity fraud are two sides of cyber-crime in which hackers and malicious users obtain the personal data of existing legitimate users to attempt fraud or deception for financial gain. Malicious URLs host unsolicited content (spam, phishing, drive-by exploits, etc.) and lure unsuspecting users into becoming victims of scams (monetary loss, theft of private information, and malware installation), causing losses of billions of dollars every year. To detect such crimes, systems should be fast and precise, with the ability to detect new malicious content. Traditionally, this detection has been done mostly through the use of blacklists. However, blacklists cannot be exhaustive and lack the ability to detect newly generated malicious URLs. To improve the generality of malicious URL detectors, machine learning techniques have been explored with increasing attention in recent years. In this paper, I use a simple algorithm to detect and classify URLs as good or bad, and compare it with two other algorithms (SVM and logistic regression).
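The blacklist limitation the abstract points out is easy to demonstrate: an exact-match list misses even a one-character variant of a known-bad URL, whereas lexical features of the URL string still fire. A small illustration — the URLs and the feature thresholds below are invented, not from the paper:

```python
# An exact-match blacklist only catches URLs it has already seen.
blacklist = {"http://badsite.example/login.php"}

def blacklist_hit(url):
    return url in blacklist

def suspicious_features(url):
    # Crude lexical cues of the kind malicious-URL classifiers learn from.
    host = url.split("//")[-1].split("/")[0]
    return sum([
        len(url) > 75,                        # unusually long URL
        url.count("-") > 3,                   # many hyphens
        any(ch.isdigit() for ch in host),     # digits in the hostname
        "@" in url,                           # userinfo obfuscation trick
    ])

new_variant = "http://badsite.example/login2.php"
print(blacklist_hit(new_variant))   # the list misses the trivially modified variant
print(suspicious_features("http://my-secure-bank-login-update.example123.com/session?id=8437@redirect"))
```

A learned model scores such features instead of relying on hand-set thresholds, which is what gives it generality over a blacklist.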
Anatomy of a breach - an e-book by Microsoft in collaboration with the EU - University of Essex
1) Hackers gain initial access to networks through techniques like exploiting vulnerabilities, password spraying, or phishing. They then work to gain elevated privileges on internal systems.
2) Once hackers have higher level access, they use that privilege to scan for valuable data and credentials to access other parts of the network. Their goal is widespread access across the network.
3) With control over many systems, hackers implant backdoors to maintain long-term access and control networks from a central command point while evading detection. Companies need comprehensive defenses, data awareness, and protection policies to detect and respond to network intrusions.
This document discusses the design and implementation of a hybrid client honeypot that incorporates features of both low and high interaction honeypots. The hybrid honeypot aims to detect client-side malware attacks by actively visiting websites while also monitoring the system for any malware downloaded without user interaction, like zero-day attacks. It analyzes detected malware to classify known and unknown malware varieties. The hybrid approach combines the fast processing of low interaction honeypots with the ability of high interaction honeypots to detect zero-day attacks.
This document discusses the design and implementation of a hybrid client honeypot that incorporates features of both low and high interaction honeypots. It aims to detect client-side attacks by actively visiting websites while also monitoring the system for malware. The hybrid approach aims to overcome limitations of existing client honeypots by detecting zero-day attacks while remaining efficient. It discusses background on honeypots, client honeypots, and detection approaches. The system framework incorporates both emulation and real execution to balance detection abilities and resource usage.
company names mentioned herein are for identification and educational purposes only and are the property of, and may be trademarks of, their respective owners.
Ransomware has become the most profitable malware type due to its use of strong encryption. Recent ransomware campaigns have increasingly targeted businesses and entire industries. Attackers now use vulnerabilities in servers and network infrastructure like JBoss to quietly spread ransomware through organizations. This allows them to encrypt large amounts of data and demand high ransom payments. Looking ahead, ransomware is expected to evolve new self-propagation abilities, increasing its impact. Organizations must improve patching of vulnerabilities, backup critical data, and develop incident response plans to address growing ransomware threats.
Ransomware has become the most profitable malware type due to its use of strong encryption that victims cannot easily decrypt. It is dominated by ransomware paid in bitcoin, which allows anonymous payments. Recent ransomware campaigns have targeted vulnerabilities in enterprise application software like JBoss to quietly encrypt networks. This threatens entire industries if left unpatched. While paying the ransom may decrypt files, there are risks of data loss, tampering or reinfection. Ransomware is expected to evolve further to propagate itself for even greater profitability. Organizations must back up critical data and prepare incident response plans.
This document discusses insecure trends in web technologies such as social networks, AJAX, Flash, and Silverlight. It provides an analysis of security risks with each technology. As a case study, it presents how a malicious user could potentially create a self-replicating worm using Silverlight on a social networking site by exploiting a weakness that allowed controlling user account features. The document aims to educate developers and users about security issues that can arise from these technologies.
For more course tutorials visit
www.tutorialrank.com
SEC 572 Week 1 iLab Denial of Service Attacks
In this lab, you will discover and analyze one of two different real network attacks. This will give you insight into the motivation, vulnerabilities, threats, and countermeasures associated with your selected network attack.
A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted... - eraser Juan José Calderón
A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted Wealth
Sergio Pastrana, Universidad Carlos III de Madrid
Guillermo Suarez-Tangil, King's College London
Abstract: Illicit crypto-mining leverages resources stolen from victims to mine cryptocurrencies on behalf of criminals. While recent works have analyzed one side of this threat, i.e. web-browser cryptojacking, only white papers and commercial reports have partially covered binary-based crypto-mining malware. In this paper, we conduct the largest measurement of crypto-mining malware to date, analyzing approximately 4.4 million malware samples (1 million malicious miners) over a period of twelve years, from 2007 to 2018. Our analysis pipeline applies both static and dynamic analysis to extract information from the samples, such as wallet identifiers and mining pools. Together with OSINT data, this information is used to group samples into campaigns. We then analyze publicly available payments sent to the wallets from mining pools as a reward for mining, and estimate profits for the different campaigns.
Our profit analysis reveals campaigns with multi-million earnings, associating over 4.3% of Monero with illicit mining. We analyze the infrastructure related to the different campaigns, showing that a high proportion of this ecosystem is supported by underground economies such as Pay-Per-Install services. We also uncover novel techniques that allow criminals to run successful campaigns.
The document proposes an International Consortium of Freelance Hackers (ICFH) to facilitate collaboration between organizations and ethical hackers. This would help address vulnerabilities before malicious attackers can exploit them. Traditional security testing is reactive and often misses new attacks. ICFH would maintain a pool of vetted hackers to proactively search for vulnerabilities. Found issues would be reported to companies, who would then fix them. This approach could help reduce organizations' cybersecurity costs compared to dealing with actual data breaches and damage control. Existing vulnerability reward programs have already proven effective at strengthening security at a lower cost than internal testing alone.
This document outlines a lab assignment to design a secure wireless network for a small home or office (SOHO) environment. Students are instructed to identify the hardware and software needed to meet the network security policies and user requirements defined in an earlier lab. The design should include an overview of the technical functionality and requirements, as well as a logical illustration of the network design. The goal is to gain experience selecting wireless network technologies to satisfy typical requirements.
The document discusses malware analysis using machine learning. It proposes collecting malware binaries from online sources and using Cuckoo Sandbox to analyze their behavior dynamically. Features would be extracted from the analysis reports and used to classify the malware into families using machine learning algorithms. The goal is to develop an automated malware classification system that can identify both known and unknown malware types.
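The feature-extraction step of such a system might look like the sketch below, which assumes a Cuckoo-style JSON behavior report. The report fragment and field layout are simplified illustrations, not verbatim Cuckoo output:

```python
from collections import Counter

def extract_features(report, api_whitelist):
    """Turn a (simplified) Cuckoo-style behavior report into a fixed-size
    feature vector: API-call counts plus a couple of summary statistics."""
    calls = Counter()
    for proc in report.get("behavior", {}).get("processes", []):
        for call in proc.get("calls", []):
            calls[call["api"]] += 1
    vec = [calls.get(api, 0) for api in api_whitelist]
    vec.append(len(report.get("network", {}).get("hosts", [])))  # contacted hosts
    vec.append(len(report.get("dropped", [])))                   # dropped files
    return vec

# Illustrative report fragment, not real Cuckoo output.
report = {
    "behavior": {"processes": [
        {"calls": [{"api": "CreateFileW"}, {"api": "RegSetValueExW"},
                   {"api": "CreateFileW"}]},
    ]},
    "network": {"hosts": ["203.0.113.5"]},
    "dropped": [{"name": "payload.dll"}],
}
apis = ["CreateFileW", "RegSetValueExW", "VirtualAllocEx"]
print(extract_features(report, apis))  # [2, 1, 0, 1, 1]
```

Vectors like these, extracted from many reports, would then feed a standard classifier to assign samples to families.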
The Murky Waters of the Internet: Anatomy of Malvertising and Other e-Threats- Mark - Fullbright
Sec 572 Effective Communication - tutorialrank.com - Bartholomew99
There are two categories of network attacks you will be concerned with this week. The first is a network denial of service (DoS) attack, and the second is a targeted attack on a network.
A Mitigation Technique For Internet Security Threat of Toolkits Attack - CSCJournals
The development of attack toolkits confirms that cybercrime is driven primarily by financial motivations, as shown by the significant profits made by both the developers and buyers of these kits. In this paper, an enhanced hybrid attack-toolkit mitigation model is designed to attack the economy of the toolkits by discrediting it through several techniques. The mitigation examined Zeus, the most common and frequently used attack toolkit, to discover the hidden information attackers use to launch attacks. This information helped in creating honey toolkits, honeybots, and honeytokens. Honeybots submit honeytokens to botmasters, who sell them on the internet black market. The botmasters, their mules, and buyers then attempt to steal large amounts of money using the stolen credentials, which include both real credentials and honeytokens, and are caught by an attack detector that raises an alert on any transaction involving a honeytoken. A reconfirmation process, secured with an enhanced RC6 cryptosystem, is then enacted: the reconfirmation message is encrypted from plain text into cipher text and transmitted between the bank and the legitimate account owner and vice versa. Cryptanalysis carried out on text encrypted with the RC6 algorithm showed that the cipher text is not transparent.
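The honeytoken side of the model reduces to a simple mechanism: plant credentials, record them, and alert on any transaction that uses one. A minimal sketch — the card numbers and function names are invented for illustration:

```python
# Planted credentials seeded into the toolkit's stolen-data stream.
honeytokens = {"4929-1111-2222-3333", "4929-4444-5555-6666"}
alerts = []

def process_transaction(txn):
    """Allow a normal transaction; block and alert when a honeytoken is used."""
    if txn["card"] in honeytokens:
        alerts.append(("HONEYTOKEN_USED", txn["card"], txn["amount"]))
        return False   # block and hand off to the reconfirmation process
    return True        # allow

ok = process_transaction({"card": "4000-0000-0000-0002", "amount": 25})        # allowed
flagged = process_transaction({"card": "4929-1111-2222-3333", "amount": 900})  # blocked
```

In the paper, the raised alert is what triggers the RC6-secured reconfirmation exchange between the bank and the legitimate account owner.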
Similarity digests have gained popularity for many security applications like blacklisting/whitelisting and finding similar variants of malware. TLSH has been shown to be particularly good at hunting similar malware, and is resistant to evasion compared to other similarity digests like ssdeep and sdhash. Searching and clustering are fundamental tools which help security analysts and security operations center (SOC) operators in hunting and analyzing malware. Current approaches which aim to cluster malware are not scalable enough to keep up with the vast amount of malware and goodware available in the wild. In this paper, we present techniques which allow for fast search and clustering of TLSH hash digests, which can aid analysts in inspecting large amounts of malware/goodware. Our approach builds on fast nearest neighbor search techniques to build a tree-based index which performs fast search based on TLSH hash digests. The tree-based index is used in our threshold-based Hierarchical Agglomerative Clustering (HAC-T) algorithm, which is able to cluster digests in a scalable manner. Our clustering technique can cluster digests in O(n log n) time on average. We performed an empirical evaluation by comparing our approach with many standard and recent clustering techniques. We demonstrate that our approach is much more scalable and still able to produce good cluster quality. We measured cluster quality using purity on 10 million samples obtained from VirusTotal. We obtained a high purity score in the range from 0.97 to 0.98 using labels from five major anti-virus vendors (Kaspersky, Microsoft, Symantec, Sophos, and McAfee), which demonstrates the effectiveness of the proposed method.
- The document discusses TLSH (Trendmicro Locality Sensitive Hash), a fuzzy hashing algorithm invented by the author that is useful for processing malware at scale.
- It provides an example of using TLSH to cluster samples from Malware Bazaar into 16452 clusters and predict the family of new samples with 80% accuracy.
- TLSH works by generating k-skip n-grams from files and calculating the distance between hashes to measure similarity, allowing large malware databases to be clustered and labeled for analysis.
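The two mechanical pieces described above — generating k-skip n-grams from a file's bytes and grouping digests whose pairwise distance falls under a threshold — can be sketched in plain Python. This is an illustration of the idea only: real TLSH buckets its skip-grams into a compact digest and uses its own distance function, and HAC-T uses a tree index rather than the brute-force search shown here.

```python
from itertools import combinations

def k_skip_ngrams(seq, n=3, k=1):
    """Enumerate n-grams that may skip up to k positions inside each window
    (a simplified take on the skip-grams TLSH buckets; not the real digest)."""
    grams = set()
    for i in range(len(seq) - n + 1):
        window = range(i, min(i + n + k, len(seq)))
        for idx in combinations(window, n):
            if idx[0] == i:
                grams.add(tuple(seq[j] for j in idx))
    return grams

def jaccard_distance(a, b):
    # Stand-in for the TLSH distance score: 0.0 identical, 1.0 disjoint.
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 1.0)

def threshold_cluster(items, dist, t):
    """Single-linkage clustering: link any two items closer than threshold t.
    (HAC-T finds neighbors via a tree index; we brute-force for clarity.)"""
    parent = list(range(len(items)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if dist(items[i], items[j]) < t:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(items)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

samples = [b"malware-family-a-v1", b"malware-family-a-v2", b"unrelated-goodware"]
digests = [k_skip_ngrams(list(s)) for s in samples]
clusters = threshold_cluster(digests, jaccard_distance, t=0.5)  # the two variants group together
```

The threshold plays the role of the TLSH distance cutoff: lowering it splits clusters into tighter families, raising it merges related variants, which is the knob the paper's purity evaluation tunes.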
www.tutorialrank.com
SEC 572 Week 1 iLab Denial of Service Attacks
In this lab, you will discover and analyze one of two different real network attacks. This will give you insight into the motivation, vulnerabilities, threats, and countermeasures associated with your selected network attack.
A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted...eraser Juan José Calderón
A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted Wealth
Sergio Pastrana
Universidad Carlos III de Madrid*
Guillermo Suarez-Tangil
King’s College London
Abstract—Illicit crypto-mining leverages resources stolen from
victims to mine cryptocurrencies on behalf of criminals. While recent works have analyzed one side of this threat, i.e.: web-browser
cryptojacking, only white papers and commercial reports have
partially covered binary-based crypto-mining malware. In this
paper, we conduct the largest measurement of crypto-mining
malware to date, analyzing approximately 4.4 million malware
samples (1 million malicious miners), over a period of twelve
years from 2007 to 2018. Our analysis pipeline applies both static
and dynamic analysis to extract information from the samples,
such as wallet identifiers and mining pools. Together with OSINT
data, this information is used to group samples into campaigns.
We then analyze publicly-available payments sent to the wallets
from mining-pools as a reward for mining, and estimate profits
for the different campaigns.
Our profit analysis reveals campaigns with multi-million earnings, associating over 4.3% of Monero with illicit mining. We
analyze the infrastructure related with the different campaigns,
showing that a high proportion of this ecosystem is supported by
underground economies such as Pay-Per-Install services. We also
uncover novel techniques that allow criminals to run successful
campaigns.
The document proposes an International Consortium of Freelance Hackers (ICFH) to facilitate collaboration between organizations and ethical hackers. This would help address vulnerabilities before malicious attackers can exploit them. Traditional security testing is reactive and often misses new attacks. ICFH would maintain a pool of vetted hackers to proactively search for vulnerabilities. Found issues would be reported to companies, who would then fix them. This approach could help reduce organizations' cybersecurity costs compared to dealing with actual data breaches and damage control. Existing vulnerability reward programs have already proven effective at strengthening security at a lower cost than internal testing alone.
This document outlines a lab assignment to design a secure wireless network for a small home or office (SOHO) environment. Students are instructed to identify the hardware and software needed to meet the network security policies and user requirements defined in an earlier lab. The design should include an overview of the technical functionality and requirements, as well as a logical illustration of the network design. The goal is to gain experience selecting wireless network technologies to satisfy typical requirements.
The document discusses malware analysis using machine learning. It proposes collecting malware binaries from online sources and using Cuckoo Sandbox to analyze their behavior dynamically. Features would be extracted from the analysis reports and used to classify the malware into families using machine learning algorithms. The goal is to develop an automated malware classification system that can identify both known and unknown malware types.
For more classes visit
www.snaptutorial.com
SEC 572 Week 1 iLab Denial of Service Attacks
In this lab, you will discover and analyze one of two different real network attacks. This will give you insight into the motivation, vulnerabilities, threats, and countermeasures associated with your selected network attack.
The Murky Waters of the Internet: Anatomy of Malvertising and Other e-Threats- Mark - Fullbright
company names mentioned herein are for identification and educational purposes only and are the property of, and may be trademarks of, their respective owners.
Sec 572 Effective Communication - tutorialrank.comBartholomew99
For more course tutorials visit
www.tutorialrank.com
SEC 572 Week 1 iLab Denial of Service Attacks
In this lab, you will discover and analyze one of two different real network attacks. This will give you insight into the motivation, vulnerabilities, threats, and countermeasures associated with your selected network attack.
There are two categories of network attacks you will be concerned with this week. The first is a network denial of service (DoS) attack, and the second is a targeted attack on a networ
A Mitigation Technique For Internet Security Threat of Toolkits AttackCSCJournals
The development of attack toolkits conforms that cybercrime is driven primarily by financial motivations as noted from the significant profits made by both the developers and buyers. In this paper, an enhanced hybrid attack toolkit mitigation model was designed to tackle the economy of the attack toolkits using different techniques to discredit it. The mitigation looked into Zeus, a common and the most frequently used attack toolkit to discover the hidden information used by the attackers to launch attacks. This information helped in creating honey toolkits, honeybot and honeytokens. Honeybots are used to submit honeytoken to botmasters, who sells to the internet black market. Both the botmasters, his mules and buyers attempts to steal huge amount of money using the stolen credentials which includes both real and honeytokens and will be detected by an attack detector which sends an alert on any transaction involving the honeytokens. A reconfirmation process which is secured using enhanced RC6 cryptosystem is enacted. The reconfirmation message in plain text is securely encrypted into cipher text and transmitted from the bank to the legitimate account owner and vise visa. The result of the crypto analysis carried out on the encrypted text using RC6 encryption algorithm showed that the cipher text is not transparent.
Similarity digests have gained popularity for many
security applications like blacklisting/whitelisting, and finding
similar variants of malware. TLSH has been shown to be
particularly good at hunting similar malware, and is resistant to
evasion as compared to other similarity digests like ssdeep and
sdhash. Searching and clustering are fundamental tools which
help the security analysts and security operations center (SOC)
operators in hunting and analyzing malware. Current approaches
which aim to cluster malware are not scalable enough to keep
up with the vast amount of malware and goodware available
in the wild. In this paper, we present techniques which allow
for fast search and clustering of TLSH hash digests which
can aid analysts to inspect large amounts of malware/goodware.
Our approach builds on fast nearest neighbor search techniques
to build a tree-based index which performs fast search based
on TLSH hash digests. The tree-based index is used in our
threshold based Hierarchical Agglomerative Clustering (HAC-T)
algorithm which is able to cluster digests in a scalable manner.
Our clustering technique can cluster digests in O(n logn) time on
average. We performed an empirical evaluation by comparing our
approach with many standard and recent clustering techniques.
We demonstrate that our approach is much more scalable and
still is able to produce good cluster quality. We measured
cluster quality using purity on 10 million samples obtained from
VirusTotal. We obtained a high purity score in the range from
0.97 to 0.98 using labels from five major anti-virus vendors
(Kaspersky, Microsoft, Symantec, Sophos, and McAfee) which
demonstrates the effectiveness of the proposed method.
- The document discusses TLSH (Trendmicro Locality Sensitive Hash), a fuzzy hashing algorithm invented by the author that is useful for processing malware at scale.
- It provides an example of using TLSH to cluster samples from Malware Bazaar into 16452 clusters and predict the family of new samples with 80% accuracy.
- TLSH works by generating k-skip n-grams from files and calculating the distance between hashes to measure similarity, allowing large malware databases to be clustered and labeled for analysis.
2019 TrustCom: The role of ML and AI in SecurityJonathanOliver26
Discusses the role of ML and AI in Security.
Discusses some problems with training and decision surfaces.
Explains why ML models in security overestimate their accuracy.
Using lexigraphical distancing, an algorithm that estimates the probability that one term is an edited version of another, can help identify variants of spam terms like "Viagra" and improve spam detection rates. The algorithm identified 51 out of 60 variants of Viagra, while spell checking only caught 24. When included as a pre-processing step in a naive Bayes classifier, it incorrectly flagged few additional good emails as spam but caught 27% more spam messages, improving spam detection rates. Lexigraphical distancing provides a robust way to identify term variants for applications like spam filtering and content analysis.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise boosts blood flow, releases endorphins, and promotes changes in the brain which help regulate emotions and stress levels.
This document summarizes technical privacy solutions. It begins with an introduction and agenda. It then discusses privacy use cases and types of data. It provides legal and practical definitions of privacy. It outlines implementing privacy using security approaches like network segmentation and data segmentation. The main body discusses formal approaches to privacy like differential privacy, k-anonymity, homomorphic encryption, Monero-style privacy, secure multiparty computation, and federated learning. Each approach is described along with examples and limitations. The document concludes that privacy solutions should generate business value and notes tools are still maturing.
Limitations of Privacy Solutions for Log Files
We have considered applying a range of privacy solutions to log files.
We found that methods such as differential privacy and k-anonymity are not suitable for log files.
We make a proposal that replaces personal identifiers with ring signatures when collecting log files.
In particular we offer a light weight ring signature proposal which significantly improves the privacy for collecting log files while allowing
processing of those log files for tasks such as identifying IoCs.
AI in the Workplace Reskilling, Upskilling, and Future Work.pptxSunil Jagani
Discover how AI is transforming the workplace and learn strategies for reskilling and upskilling employees to stay ahead. This comprehensive guide covers the impact of AI on jobs, essential skills for the future, and successful case studies from industry leaders. Embrace AI-driven changes, foster continuous learning, and build a future-ready workforce.
Read More - https://bit.ly/3VKly70
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge - - Capture & Transfer
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Discover the Unseen: Tailored Recommendation of Unwatched ContentScyllaDB
The session shares how JioCinema approaches ""watch discounting."" This capability ensures that if a user watched a certain amount of a show/movie, the platform no longer recommends that particular content to the user. Flawless operation of this feature promotes the discover of new content, improving the overall user experience.
JioCinema is an Indian over-the-top media streaming service owned by Viacom18.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdfleebarnesutopia
So… you want to become a Test Automation Engineer (or hire and develop one)? While there’s quite a bit of information available about important technical and tool skills to master, there’s not enough discussion around the path to becoming an effective Test Automation Engineer that knows how to add VALUE. In my experience this had led to a proliferation of engineers who are proficient with tools and building frameworks but have skill and knowledge gaps, especially in software testing, that reduce the value they deliver with test automation.
In this talk, Lee will share his lessons learned from over 30 years of working with, and mentoring, hundreds of Test Automation Engineers. Whether you’re looking to get started in test automation or just want to improve your trade, this talk will give you a solid foundation and roadmap for ensuring your test automation efforts continuously add value. This talk is equally valuable for both aspiring Test Automation Engineers and those managing them! All attendees will take away a set of key foundational knowledge and a high-level learning path for leveling up test automation skills and ensuring they add value to their organizations.
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdf
blackhole.pdf
Trend Micro Incorporated
Research Paper
2012

Blackhole Exploit Kit:
A Spam Campaign, Not a Series of Individual Spam Runs

AN IN-DEPTH ANALYSIS

By: Jon Oliver, Sandra Cheng, Lala Manly, Joey Zhu, Roland Dela Paz, Sabrina Sioting, and Jonathan Leopando
CONTENTS

Introduction
Technical Hurdles
Feedback and Correlation
The Blackhole Exploit Kit
Blackhole Exploit Kit Payloads
  ZeuS
  Cridex
Conclusion
INTRODUCTION
In the past few months, we investigated several high-volume spam runs that sent users to websites that hosted the Blackhole Exploit Kit. The investigation was prompted by a rise in the number of these spam runs. The spam in these outbreaks claims to be from legitimate companies such as Intuit, LinkedIn, the US Postal Service (USPS), US Airways, Facebook, and PayPal, among others.

The emails in the spam runs look like those from legitimate sources and attempt to trick recipients into clicking links. In a few cases, the outbreaks led to traditional phishing websites. In the majority of cases, however, they led to malicious websites that used the Blackhole Exploit Kit to infect systems with malware.
The attacks typically ensued in the following manner:

1. The spam arrives in a user’s inbox.
2. A link embedded in the email leads to a compromised website.
3. A page on the compromised website redirects the user to a malicious website, aka a “landing page.”
4. The landing page attempts to exploit various software vulnerabilities in the user’s system.
5. If one of the exploit attempts succeeds (typically because the user’s computer has not been updated with the latest security patches), a malware variant is downloaded, infecting the user’s computer.
Figure 1. Typical Blackhole Exploit Kit infection diagram
On their own, none of the tactics above are particularly new. However, in the case of these particular runs, the tactics were carried out in a highly effective and well-designed manner.

Below is a summary of the spam runs we tracked in April, May, and June 2012 by date, along with the company names used.
• April 2012: 45 separate spam runs, 17 organizations
Figure 2. April 2012 spam runs
• May 2012: 66 separate spam runs, 21 organizations
Figure 3. May 2012 spam runs
• June 2012: 134 separate spam runs, 40 organizations
Figure 4. June 2012 spam runs
Some apparent patterns were seen among the institutions spoofed in the spam runs. Attackers frequently used the names of financial institutions such as American Express, BancorpSouth, Citibank, NACHA, and Wells Fargo. Telecommunications companies such as AT&T and Verizon were also prominent on the attackers’ list.

All of the outbreaks in the calendars were investigated, and the infection chain for each attack was mapped. Many striking similarities across the outbreaks were found. The links in the spam, which took users to compromised websites (i.e., step 2 in Figure 1), had three distinct formats across the listed attacks:

• Format 1: http://<compromised domain>/<8 alphanumeric characters>/index.html
• Format 2: http://<compromised domain>/<short word>.html
• Format 3: http://<compromised domain>/wp-content/<path>/report.htm
Another striking similarity had to do with the landing pages, which typically had URL formats such as:

• Landing page 1: http://literal-IP/showthread.php?t=<16-digit-hex-number>
• Landing page 2: http://literal-IP/page.php?p=<16-digit-hex-number>
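URL shapes this regular can be matched mechanically. A minimal sketch in Python of such a matcher follows; the regexes and sample URLs are illustrative assumptions based on the formats listed above, not patterns taken from the paper’s data.

```python
import re

# Illustrative regexes for the URL shapes described above. Real filters
# would be tuned against live samples; these are assumptions.
SPAM_LINK_PATTERNS = [
    re.compile(r"^http://[^/]+/[A-Za-z0-9]{8}/index\.html$"),  # Format 1
    re.compile(r"^http://[^/]+/[a-z]{1,8}\.html$"),            # Format 2 (short word)
    re.compile(r"^http://[^/]+/wp-content/.+/report\.htm$"),   # Format 3
]

LANDING_PAGE_PATTERNS = [
    # Landing pages use a literal IP and a 16-digit hex query value.
    re.compile(r"^http://\d{1,3}(?:\.\d{1,3}){3}/showthread\.php\?t=[0-9a-f]{16}$"),
    re.compile(r"^http://\d{1,3}(?:\.\d{1,3}){3}/page\.php\?p=[0-9a-f]{16}$"),
]

def matches_campaign(url: str) -> bool:
    """Return True if a URL matches any of the observed shapes."""
    patterns = SPAM_LINK_PATTERNS + LANDING_PAGE_PATTERNS
    return any(p.match(url) for p in patterns)
```

For example, `matches_campaign("http://example.com/a1B2c3D4/index.html")` matches Format 1, while an ordinary page on the same compromised domain would not match any pattern.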
Trend Micro established criteria based on this sequence of events in order to identify whether each outbreak was a Blackhole Exploit Kit spam run. A few outbreaks broke the trend, as these used traditional phishing instead of Blackhole Exploit Kit hosting pages.
TECHNICAL HURDLES

The spam runs pose difficulties for traditional antispam methods. Content-based filters, for instance, have a problem with the attacks because these use modified versions of legitimate emails, making detection and blocking more difficult.

Some traditional methods related to identifying or blocking spam runs include:

• Using IP reputation to block emails sent out by botnets
• Using email authentication to verify legitimate emails
• Using web reputation to check links embedded in messages

The particular spam runs discussed in this research paper posed significant difficulties for each of the above-mentioned traditional security approaches when each is applied in isolation:

• IP reputation: The botnets that send out spam are in constant flux, with nodes added and removed at regular intervals. In addition, some of the spam in the attacks came from compromised legitimate ISPs. In such a situation, dropping email traffic connections from legitimate ISPs would be inappropriate.

• Email authentication: Industry skepticism and reluctance to use email authentication to block spam exist. The Blackhole Exploit Kit attacks prompted us to consider whether the usefulness of email authentication, as well as initiatives such as Domain-Based Message Authentication, Reporting, and Conformance (DMARC),¹ should be reexamined.

¹ http://www.dmarc.org/
• Web reputation: Because of the attacks’ scale, trying to protect users via web reputation systems that attempt to block access to links became difficult. The attack on May 17, for instance, used at least 1,960 distinct URLs on 291 compromised legitimate domains. The use of redirection pages injected into legitimate sites also made blocking difficult, since this forced security companies to sort legitimate from illegitimate pages on otherwise legitimate websites.

The use of thousands of URLs posed a significant problem, as sourcing and blocking all of them required a lot of effort. This effort had to be duplicated daily, as spammers moved on to a new set of compromised domains and pages each day.

Trend Micro believes a multilayer approach is more effective in dealing with this threat. Each of the above-mentioned technologies plays a role in defending against the attacks. In addition, other techniques such as exploit prevention and detection as well as user education can also play their respective roles. However, to properly deal with the Blackhole Exploit Kit spam campaign, a single coherent approach utilizing multiple layers of defense that integrates all of the above-mentioned technologies is required.
FEEDBACK AND CORRELATION

As previously stated, the Blackhole Exploit Kit spam campaign poses a considerable challenge to conventional techniques because of the skill with which the attacks are conducted as well as the mechanics used, which overwhelm conventional methods of detection and blocking by sheer number.

Trend Micro developed a system that uses big data analytics and harnesses the power of the Trend Micro™ Smart Protection Network™ infrastructure to provide a unique view of the attacks as they occur. Using big data analytics, Trend Micro is able to identify patterns and correlations that are characteristic of the attacks. Using this unique and requisite information, systems directed at efficient identification of attacks were created for use with the Smart Protection Network.

Traditional approaches to collecting bad URLs, such as those used in the spam campaigns, include using spam honeypots or community submission approaches wherein users report bad URLs to a central repository. Due to the sheer scale (i.e., the use of thousands of URLs on compromised websites) of the Blackhole Exploit Kit spam campaign, Trend Micro saw that these approaches simply could not keep up.

As such, Trend Micro approached the problem using automated correlation backed by the power of the Smart Protection Network. Users from all over the world could opt in to send feedback, which helps us pinpoint potentially malicious or suspicious behaviors. In response to the Blackhole Exploit Kit spam campaign, we extended the capability of the Smart Protection Network to automatically identify key components and behavior sequences that were uniquely malicious. We configured the network to automatically identify and block activities that matched those related to the outbreaks.

One component of the Trend Micro Smart Protection Network is the Browser Exploit Solution (BES), which is deployed on customer endpoints. The BES emulates obfuscated JavaScript and, if the user opted in to contribute feedback, sends feedback when the content is identified as malicious.
6 | Blackhole Exploit Kit: A Spam Campaign, Not a Series of Individual Spam Runs
Another component of the Trend Micro Smart
Protection Network then identifies if a link in an email
directs a user to a redirection page, which often
contains obfuscated JavaScript that redirects the user
to a landing page. Identifying this sequence of activities
requires a tightly integrated antispam product that
works hand in hand with a web reputation service. As
soon as such an activity is identified, all of the
components related to it are blocked.
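The chain-blocking behavior just described can be sketched in a few lines. This is a simplified illustration with hypothetical URLs and a toy blocklist, not Trend Micro's actual implementation:

```python
# Simplified sketch of sequence-based blocking (hypothetical URLs, not
# Trend Micro's actual implementation): once an email link is seen leading
# through a redirector to a known-malicious landing page, every component
# of the chain is blocklisted, not just the final URL.

KNOWN_BAD_LANDING_PAGES = {"hxxp://evil.example/landing"}  # hypothetical entry

def correlate_and_block(observed_chain, blocklist):
    """observed_chain: ordered URLs (spam link, redirection page, landing page)."""
    if observed_chain and observed_chain[-1] in KNOWN_BAD_LANDING_PAGES:
        blocklist.update(observed_chain)   # block all related components
    return blocklist

blocklist = set()
chain = [
    "hxxp://spam.example/link",      # link embedded in the spam email
    "hxxp://compromised.example/r",  # redirection page with obfuscated JavaScript
    "hxxp://evil.example/landing",   # Blackhole landing page
]
correlate_and_block(chain, blocklist)  # all three URLs are now blocked
```

The design point is that the antispam and web reputation components share one verdict, so blocking propagates to every hop of the chain at once.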
This way, each component of the Trend Micro Smart
Protection Network improves the threat response
capability of every other node in the network.
THE BLACKHOLE EXPLOIT KIT
An exploit kit is a web application that allows an attacker
to take advantage of known vulnerabilities in popular
applications such as Internet Explorer as well as Adobe
Acrobat, Reader, and Flash Player. The kit is installed on
a web server connected to a database for logging and
reporting. This server, which uses web technologies
such as PHP and database products such as MySQL,
also serves as an administrative interface.
The Blackhole Exploit Kit has been the most popular
exploit kit among cybercriminals since 2011. Its first
version was released in 2010; the most recent version,
v.1.2.3, was released in March 2012. The PHP source
code on the server is encrypted and protected by
ionCube.
Figure 5. PHP code sample of the Blackhole Exploit Kit
Like other exploit kits, the Blackhole Exploit Kit provides
attackers a lot of detailed information about their
victims. The attackers can, for instance, discover what
browsers or OSs their victims use and where they are
geographically located. In addition, the attackers can
determine which exploits were most successful in
targeting users. In the past year, Java vulnerabilities
have had the most success in infecting users’ systems.
In the past, the Blackhole Exploit Kit was only available
as an application that would-be attackers had to install
on a server. Recently, however, Blackhole Exploit Kit
hosting websites have been made available on a for-rent
basis, lowering the barrier to using the kit.
Several components make up the typical Blackhole
Exploit Kit. The first component focuses on controlling
user web traffic, usually via a compromised website, an
advertisement, or a spam message with a link. A landing
page then checks the victims' environments and records
their information in a database before redirecting them
to exploit pages chosen based on those environments.
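How the landing page routes victims can be sketched as follows. This is a minimal illustration with hypothetical logic; real kits fingerprint browser and plugin versions with JavaScript rather than relying on the User-Agent alone, and the paths here mirror the sample infection chain rather than any actual server layout:

```python
# Minimal sketch (hypothetical logic) of server-side exploit-page selection:
# the landing page inspects the victim's environment and redirects to the
# exploit most likely to succeed. Paths mirror the sample infection chain.

def pick_exploit_page(user_agent):
    ua = user_agent.lower()
    if "msie" in ua:                 # Internet Explorer: try the Java applet
        return "/forum/Set.jar"
    if "adobe" in ua:                # Adobe Reader plugin: try the PDF exploit
        return "/forum/data/ap1.php"
    return "/forum/showthread.php"   # unknown environment: generic landing page
```

A real landing page also records the victim's browser, OS, and geographic location in the kit's database before redirecting.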
The following is an example of a Blackhole Exploit Kit
infection chain. Note that the initial URL was taken from
a spam message that claimed to be a Xanga notification.
hxxp://www.oes.actmasons.com.au/wp-content/themes/esp/wp-local.htm
hxxp://pushkidamki.ru:8080/forum/showthread.php?page=5fa58bce769e5c2c
hxxp://pushkidamki.ru:8080/forum/w.php?f=182b5&e=2
hxxp://pushkidamki.ru:8080/forum/data/ap1.php?f=182b5
hxxp://pushkidamki.ru:8080/forum/Set.jar
hxxp://pushkidamki.ru:8080/forum/readme.exe
The first phishing link leads to a compromised website
with the following HTML source code.
Figure 6. Obfuscated code
As shown above, the website contains heavily obfuscated JavaScript code. Once the said code is decrypted, however, an
invisible iframe is opened, which leads to the Blackhole Exploit Kit hosting website,
hxxp://pushkidamki.ru:8080/forum/showthread.php?page=5fa58bce769e5c2c.
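The obfuscation style can be illustrated with a toy example: the iframe markup is stored only as character codes and assembled at runtime, so a plain-text scan of the page never sees the string "<iframe". The Python sketch below mimics that reconstruction step (the actual campaign code is JavaScript and far more heavily layered):

```python
# Toy illustration of eval-style obfuscation (not the actual campaign code).
# The page stores only character codes; the decoder rebuilds the markup.
hidden = ('<iframe src="hxxp://pushkidamki.ru:8080/forum/'
          'showthread.php?page=5fa58bce769e5c2c" width="1" height="1" '
          'style="visibility: hidden"></iframe>')

encoded = [ord(c) for c in hidden]           # what a scanner sees on the page
decoded = "".join(chr(n) for n in encoded)   # what the script reconstructs

assert "<iframe" in decoded and "<iframe" not in str(encoded)
```

This is why the BES emulates the JavaScript rather than pattern-matching the raw page content.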
Figure 7. Deobfuscated code
Using Firebug to debug the compromised web page allowed us to monitor the invisible frame.
Figure 8. Firebug showing the invisible frame
The invisible frame contains the Blackhole Exploit Kit code. It specifically contains JavaScript that tries to load a .JAR
applet.
Figure 9. Encrypted JavaScript code
Once the code is decrypted, a function call is shown and a Java applet is launched. This applet contains the malicious
exploit code, which downloads a malicious file onto the user’s system.
Figure 10. Decrypted JavaScript code
Spamming is not the only means by which Blackhole
Exploit Kit attacks are launched. In the following
example, a user was directed to a malicious website by
clicking a poisoned Google search result link.
chain_info {
referer:
http://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&frm=1&source=web&cd=2&ved=0CEcQFjAB&url=http%3A%2F%2Fwww.nmtv.tv%2Ftop-stories%2Fnew-ac-bus-inaugurated-on-kalamboli-mantralaya-route-by-thane-guardian-minister-ganesh-naik&ei=CjuBT6iMIIy3rAfVofjwBQ&usg=AFQjCNHy86Z0Nfqo6QjfI0cotYarP99FsA&sig2=sOB1aq0ROH2UWUsI4-Ynsw
Level1: compromised site
http://www.nmtv.tv/top-stories/new-ac-bus-inaugurated-on-kalamboli-mantralaya-route-by-thane-guardian-minister-ganesh-naik
Level2: landing page
http://lomk.slx.nl/in.cgi?2
Level3: redirection
http://nu.onevacation.mobi/direct.php?page=c73186d3bf7e2cf6
Level4: exploit page
http://ve.romanceme.us/direct.php?page=286104111ed1d51b
}
The attack described above is noteworthy because of
the way the website was compromised. Attackers
typically insert an iframe or a script tag directly into a
page's HTML code. In this particular attack, however,
the attackers compromised a website running
WordPress by injecting their own code into the blog's
files. This makes it difficult for researchers to find out
how such a site was compromised.
Figure 11. Modified WordPress code
The final exploit is typically encrypted using JavaScript as well.
Figure 12. Code including the Java applet
After decryption, the code becomes readable. The sample above contains two exploits for Java and .PDF files. Both files
trigger vulnerability exploitation, resulting in the download and execution of a malicious executable file.
Figure 13. Decrypted JavaScript code
BLACKHOLE EXPLOIT KIT
PAYLOADS
In theory, cybercriminals can distribute any malware via
the Blackhole Exploit Kit. In practice, however, the
majority of malware the spam campaigns distributed
were ZeuS and Cridex variants.
Figure 14. Malware families distributed via the Blackhole
Exploit Kit spam campaign
ZeuS
ZeuS is perhaps one of the most notorious malware
families in the threat landscape to date. ZeuS variants
are designed to steal sensitive online banking
information such as credit card numbers, account user
names, and account passwords. In addition, these have
the capability to drop backdoors onto compromised
systems, giving attackers remote control over the
machines.
Since its discovery in 2007, the ZeuS toolkit has also
quickly become the crimeware of choice by many
cybercriminals, as evidenced by its use in numerous
attacks. Trend Micro researchers have published two
research papers discussing ZeuS: "ZeuS: A Persistent
Criminal Enterprise"2 and "File-Patching ZBOT Variants:
ZeuS 2.0 Levels Up."3
In order to steal sensitive information, ZeuS variants
download an encrypted configuration file. This file
contains a list of target banks and corresponding scripts
for web injection. The malware then monitors the
address bar of an affected user’s browser. Every time
the user visits any of the target banks listed in the
configuration file, the malware logs user form inputs
such as login names and passwords and sends these to
the attacker.
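The configuration-driven monitoring can be sketched as follows, assuming a ZeuS-style list of wildcard URL masks. The real configuration file is binary and encrypted, and the mask entries here are hypothetical:

```python
# Sketch of ZeuS-style target matching (hypothetical masks; the real
# configuration file is encrypted and binary). The malware compares each
# URL the browser visits against the masks and only logs form inputs on a hit.
from fnmatch import fnmatch

TARGET_MASKS = [
    "*://*.examplebank.com/login*",   # hypothetical target entries
    "*://banking.example.net/*",
]

def is_targeted(url):
    return any(fnmatch(url, mask) for mask in TARGET_MASKS)

is_targeted("https://www.examplebank.com/login?step=1")  # True: log form inputs
is_targeted("https://news.example.org/")                 # False: ignore
```

Matching on masks rather than exact URLs lets one configuration entry cover every login page of a targeted bank.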
In addition, ZeuS variants can also locally modify a
banking site’s code in order to steal more information
from an affected user via web injection.
2 http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp_zeus-persistent-criminal-enterprise.pdf
3 http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp__file-partching-zbot-varians-zeus-2-9.pdf
Figure 15. Web injection code and result
In 2011, ZeuS's creator stopped developing the
crimeware. Its source code was later leaked in
underground forums,4 consequently allowing other
cybercriminals to alter and update the dangerous tool's
code. This leakage gave rise to several "new" ZeuS
versions, including one that used peer-to-peer (P2P)
technology.5 This P2P version is the specific variant we
saw being distributed via the Blackhole Exploit Kit spam
campaign.
4 http://blog.trendmicro.com/the-zeus-source-code-leaked-now-what/
5 http://blog.trendmicro.com/another-modified-zeus-variant-seen-in-the-wild/
Figure 16. Geographic distribution of ZeuS malware
victims in the past 30 days
Cridex
In September 2011, the first WORM_CRIDEX sample was
found in the wild. Cridex became one of many malware
families that stole banking information from victims.
Several aspects of its behavior, however, soon set it
apart from the rest.
Cridex malware have two components—a main binary
file and a configuration file. Upon arriving on users’
systems, the malware drops and executes a copy of
itself, injects itself into running processes, then deletes
the initially executed copy. It then tries to connect to a
command-and-control (C&C) server, the address of
which is randomly generated using a Domain Generation
Algorithm (DGA).
DGAs have been used by various malware families to
periodically generate a large number of pseudorandom
domain names that can serve as rendezvous points with
their controllers. First seen in DOWNAD, also known as
Conficker, this method makes tracking and shutting
down botnet servers very difficult. Once the malware
finds and successfully connects to a live C&C server, it
downloads a customized configuration file that
is then saved as a registry entry.
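The rendezvous scheme can be sketched as a date-seeded generator. This is a toy illustration, not Cridex's actual algorithm, which is not reproduced here:

```python
# Toy DGA sketch (not Cridex's actual algorithm): malware and controller
# independently derive the same candidate domain list from the current date,
# so no fixed C&C address ever appears in the binary.
import hashlib

def generate_domains(date_str, count=5, tld=".ru"):
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{date_str}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)  # 12 hex chars + a TLD
    return domains

# Both sides compute the identical list for a given day; the malware then
# tries each candidate domain until one resolves to a live C&C server.
generate_domains("2012-06-15")
```

Because the list changes every day, defenders must predict and sinkhole future domains rather than simply blocking the ones already observed.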
Figure 17. Cridex configuration file
Figure 18. Cridex configuration file saved in the registry
Cridex's configuration file uses almost the same format
as ZeuS's, except that it does not contain the URLs from
which updated copies may be downloaded and to which
stolen data should be sent. Like ZeuS variants, however,
Cridex malware steal login credentials and inject HTML
code into the websites indicated in their configuration
files. The monitored websites include online banking
and social media sites.
Cridex variants also possess backdoor capabilities to
monitor cookies, collect secure certificates, download
updated copies, as well as download and execute other
files. These also send all of the data they collect back to
their respective C&C servers.
Figure 19. Geographical distribution of Cridex malware
victims in the past 30 days
Figure 20. Map of Cridex infections
CONCLUSION
Throughout this research paper, we referred to the
Blackhole Exploit Kit spam runs as part of a “campaign.”
This was deliberate as Trend Micro believes these
attacks were conducted by a single group or several
groups acting in concert with one another. Based on the
information we collected, we concluded that:
• The botnets sending out spam had a high
degree of overlap from one day to the next. In
several cases, the same IP address was
identified as part of Blackhole Exploit Kit
attacks across different days.
• Compromised sites were used and reused
from one attack to another. Websites that
hosted a malicious Blackhole Exploit Kit landing
page rarely hosted only one such page.
Websites usually hosted several landing pages
used in distinct spam runs. Spam runs
frequently went on until the security holes that
allowed websites to be compromised were
patched.
In addition, certain URL patterns were seen
across different spam runs and compromised
websites, leading us to believe that the distinct
attacks were related.
• The exploit methods used in attacks were
similar. While the Blackhole Exploit Kit can
carry out exploit attacks in various ways, most
of the attacks we have seen as part of the spam
campaign used similar methods.
• The malware payloads all exhibited similar
eventual behaviors. The vast majority of
payloads we saw as part of this spam campaign
were information-stealing malware designed to
steal users' online banking information.
Taken together, these findings indicate that the series
of spam runs makes up a coherent campaign carried out
by attackers who are organized in some manner.
Because of the correlation capabilities of the Trend
Micro Smart Protection Network infrastructure, we were
able to gather a comprehensive picture of the
campaign. This allowed, and continues to allow, us to
provide more effective, comprehensive, and timely
protection to our customers. In addition, the Smart
Protection Network allows us to obtain extensive
information about this threat, as outlined in this
research paper.