Company names mentioned herein are for identification and educational purposes only and are the property of, and may be trademarks of, their respective owners.
A Comparison Study of Open Source Penetration Testing Tools - ijtsrd
Penetration testing, also known as pen testing, is a series of activities performed as an authorized simulated attack on a computer system, network, or web application to find vulnerabilities that an attacker could exploit. It helps confirm the efficiency and effectiveness of the various security measures that have been implemented. In the world of open source software, even penetration testing is not untouched. The purpose of this pilot study was to compare various open source penetration testing tools. Nilesh Bhingardeve | Seeza Franklin, "A Comparison Study of Open Source Penetration Testing Tools", Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-4, June 2018, URL: http://www.ijtsrd.com/papers/ijtsrd15662.pdf http://www.ijtsrd.com/computer-science/computer-security/15662/a-comparison-study-of-open-source-penetration-testing-tools/nilesh-bhingardeve
Protecting Enterprise - An examination of bugs, major vulnerabilities and exp... - ESET Middle East
This white paper focuses on the dramatic growth in the number and severity of software vulnerabilities, and discusses how multilayered endpoint security is needed to mitigate the threats they pose.
Security Software Supply Chains - Sonatype - DevSecCon Singapore, March 2019. Modern organisations innovate through the massive use of open source software. However, open source software can introduce security vulnerabilities. Here we show trends in the use of open source software across modern software supply chains.
"Fast Detection of Android Malware Using Machine Learning..." - Yandex
The talk covers the application of machine learning algorithms to detecting malicious Android applications. I will describe how a high-performance tool for this task was built at Yandex on top of MatrixNet, and show cases in which analytical malware-detection methods help block large numbers of simple virus samples. We will then discuss how such methods can be improved to detect more sophisticated malware.
All regulatory requirements (HIPAA, PCI, etc.) include a mandate for assessing vulnerabilities in systems that manage or store sensitive data. Organizations often opt to conduct vulnerability assessments on an annual, quarterly, or even monthly basis. But while vulnerability assessment tools can identify unpatched or misconfigured code bases, these tools overlook a large portion of an organization’s attack surface: known vulnerabilities in applications that are built in-house. These applications will not have public updates, nor will the thousands of open source components they utilize be included in public disclosures. This is concerning because over 6,000 vulnerabilities in open source projects have been reported since 2014. Register for this webinar to discover how to protect yourself.
Malware Analysis on Android Using Supervised Machine Learning Techniques - Md. Shohel Rana
In recent years, the growth of malware has driven widespread research in malware analysis and detection on Android devices. Android, a mobile operating system with more than one billion active users and high market impact, has attracted malware development by cybercriminals. Android implements its own architecture and security controls to counter malware, such as a unique user ID (UID) for each application, system permissions, and its distribution platform Google Play. Nevertheless, there are numerous ways to bypass these protections, and building new defenses becomes harder as cybercriminals refine their skills. A community of developers and researchers has been evolving alternatives aimed at raising the level of safety, and numerous machine learning algorithms have already been proposed or applied to classify or cluster malware, alongside analysis techniques, frameworks, sandboxes, and systems security. One of the most promising directions is the application of artificial intelligence to malware analysis. In this paper, we evaluate several supervised machine learning algorithms by implementing a static analysis framework that predicts whether an Android application is malware.
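The paper's static analysis framework is not reproduced in this excerpt, but the core idea behind permission-based Android malware features, turning requested permissions into a binary vector for a supervised classifier, can be sketched as follows; the permission vocabulary and manifest here are illustrative, not taken from the paper's dataset:

```python
# Minimal sketch of static permission-feature extraction for Android
# malware classification. The PERMISSIONS vocabulary and the sample
# manifest are invented for illustration.
import re

# Hypothetical vocabulary of permissions used as binary features.
PERMISSIONS = [
    "android.permission.INTERNET",
    "android.permission.READ_SMS",
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
]

def extract_features(manifest_xml: str) -> list:
    """Return a binary feature vector: 1 if the permission is requested."""
    requested = set(re.findall(r'android:name="([^"]+)"', manifest_xml))
    return [1 if p in requested else 0 for p in PERMISSIONS]

manifest = '''<manifest>
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.SEND_SMS"/>
</manifest>'''

print(extract_features(manifest))  # [1, 0, 1, 0]
```

Vectors like this would then be fed to any supervised learner; the paper evaluates several such algorithms.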
The Information Security Community on LinkedIn, with the support of Cybereason, conducted a comprehensive online research project to gain more insight into the state of threat hunting in security operation centers (SOCs). When the 330 cybersecurity and IT professionals were asked what keeps them up at night, many comments revolved around a central theme of undetected threats slipping through an organization's defenses. Many responses included "unknown" and "advanced" when describing threats, indicating the respondents understand the challenges and fear those emerging threats.
Read the full report here.
Dealing with over 300,000 malware samples every day, we have had to deploy state-of-the-art techniques to combat cyberthreats, among them machine learning algorithms.
In this whitepaper, we start by describing the basic approaches and proceed to the key applications of machine learning algorithms to automated malware detection. Learn more about how Kaspersky Lab protects businesses like yours => https://kas.pr/8dxv
Deepfake Anyone? The AI Synthetic Media Industry Enters a Dangerous Phase - Aditi Agarwal
The technology, barely four years old, may be at a turning point, according to Reuters interviews with companies, experts, policymakers and campaigners.
Earlier this year, during a security sweep, Kaspersky Lab detected a cyber intrusion affecting several of its internal systems. Following this finding, we launched a large-scale investigation, which led to the discovery of a new malware platform from one of the most skilled, mysterious and
powerful groups in the APT world – Duqu. The Duqu threat actor went dark in 2012 and was believed to have stopped working on this project - until now. Our technical analysis
indicates the new round of attacks include an updated version of the infamous 2011 Duqu malware, sometimes referred to as the step-brother of Stuxnet. We named this
new malware and its associated platform “Duqu 2.0”.
Victims of Duqu 2.0 have been found in several places, including western countries, the Middle East and Asia. The actor appears to compromise both final and utilitarian targets, which allows them to improve their cyber capabilities. Most notably, some of the new 2014-2015 infections are linked to the P5+1 events and venues related to the negotiations with Iran about a nuclear deal. The threat actor behind Duqu appears to have launched attacks at the venues for some of these high-level talks.
In addition to the P5+1 events, the Duqu 2.0 group has launched a similar attack in relation to the 70th anniversary event of the liberation of Auschwitz-Birkenau.
In the case of Kaspersky Lab, the attack took advantage of a zero-day (CVE-2015-2360) in the Windows kernel, patched by Microsoft on June 9, 2015, and possibly up to two other vulnerabilities, now patched, which were zero-days at the time.
Hands-on Security: Disrupting the Kill Chain, SplunkLive! Austin - Splunk
Splunk for Security Workshop
Join our Splunk Security Experts and learn how to use Splunk Enterprise in a live, hands-on incident investigation session. We'll use Splunk to disrupt an adversary's Kill Chain by finding the Actions on Intent, Exploitation Methods, and Reconnaissance Tactics used against a simulated organization. Data investigated will include threat list intelligence feeds, endpoint activity logs, e-mail logs, and web access logs. This session is a must for all security experts! Please bring your laptop as this is a hands-on session.
[CB20] Explainable Malicious Domain Diagnosis by Tsuyoshi Taniguchi - CODE BLUE
Cyber security has become a game of cat and mouse. Adversaries create techniques to evade detection; defensive researchers struggle to analyze the evasion and develop detection techniques; the adversaries then learn to recognize the detection and repeatedly devise the next evasive techniques. Defensive researchers have been at an overwhelming disadvantage.
Does this mean detection techniques become useless once adversaries identify them? Not necessarily. Adversaries act with intent. Their purpose is often financial, so their funding and the techniques they choose depend on their targets, whether a particular organization or clients with low security literacy. Not all adversaries use state-of-the-art techniques. In short, the clues left behind differ between targeted attacks and broad ones.
SOC operators are always busy coping with a wide variety of attacks and cannot deal with every alert. They have to prioritize alerts and sometimes explain to management or a responsible person why an alert occurred, leaving them overworked within limited time. We aim to reduce this explanation workload for SOC operators.
We have developed a method that identifies attack types with an explainable diagnosis by taking advantage of advanced adversaries' evasive behavior. Beyond the difference between legitimate and malicious behavior, we learn from comparing targeted attacks with broad ones, and this learning forms the basis for explainable detection of attack types for unidentified domains.
In this presentation, we will show that advanced adversaries rarely leave traces that defensive researchers can easily detect, compare the traces of targeted attacks with those of broad attacks, and demonstrate that, for unidentified domains, our system identifies attack types with an explainable diagnosis.
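As a toy illustration of what an "explainable diagnosis" output could look like (this is not the speaker's actual system; the feature names, thresholds, and labels below are invented):

```python
# An explainable verdict returns both a label and the evidence behind it,
# so a SOC operator can justify the alert without re-analyzing the domain.

def diagnose(domain_features):
    """Classify a domain and report the reasons for the verdict."""
    reasons = []
    if domain_features.get("registered_days_ago", 9999) < 30:
        reasons.append("recently registered domain")
    if domain_features.get("resolves_to_known_c2_ip", False):
        reasons.append("resolves to an IP seen in prior C2 infrastructure")
    if domain_features.get("lookalike_of_brand", False):
        reasons.append("lookalike of a legitimate brand name")

    if len(reasons) >= 2:
        # Brand impersonation suggests a chosen victim; otherwise broad.
        label = "targeted" if domain_features.get("lookalike_of_brand") else "broad"
    elif reasons:
        label = "suspicious"
    else:
        label = "benign"
    return label, reasons

label, why = diagnose({"registered_days_ago": 5, "lookalike_of_brand": True})
print(label, why)
```

The point is the shape of the output, a label plus human-readable reasons, rather than the specific rules, which in the real system are learned from comparing targeted and broad attacks.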
Android is a Linux-based operating system used for smartphone devices. Since 2008, Android devices have gained a huge market share due to their open architecture and popularity, which also attracted malware developers; the rate of Android malware applications grew between 2008 and 2016. In this paper, we propose a dynamic malware detection approach for Android applications. In dynamic analysis, system calls are recorded to calculate the density of the system calls; for the density calculation we use two different system-call lengths, 3-gram and 5-gram. A Naive Bayes algorithm is then applied to classify applications as benign or malicious. The proposed approach is evaluated on 100 real-world samples of benign and malware applications, and we observe that it gives effective and accurate results: the 3-gram Naive Bayes algorithm detects 84 malware applications correctly and 14 as benign incorrectly, while the 5-gram Naive Bayes algorithm detects 88 malware applications correctly and 10 as benign incorrectly. Mr. Tushar Patil | Prof. Bharti Dhote, "Malware Detection in Android Applications", Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26449.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/26449/malware-detection-in-android-applications/mr-tushar-patil
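The system-call n-gram idea summarized above might be sketched as follows, assuming a multinomial Naive Bayes over 3-gram counts with add-one smoothing; the traces and syscall names are invented, and the paper's exact density computation is not reproduced here:

```python
# Sketch: classify a recorded syscall trace as malware or benign using
# n-gram counts and Naive Bayes. Training traces are toy examples.
import math
from collections import Counter

def ngrams(trace, n=3):
    """Sliding n-grams over a recorded system-call sequence."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def train(traces_by_class, n=3):
    """Count n-gram frequencies per class (the density features)."""
    return {cls: Counter(g for t in traces for g in ngrams(t, n))
            for cls, traces in traces_by_class.items()}

def classify(trace, model, n=3):
    """Naive Bayes with add-one smoothing over n-gram counts."""
    scores = {}
    for cls, counts in model.items():
        total = sum(counts.values())
        vocab = len(counts) + 1
        scores[cls] = sum(math.log((counts[g] + 1) / (total + vocab))
                          for g in ngrams(trace, n))
    return max(scores, key=scores.get)

model = train({
    "malware": [["open", "read", "send", "open", "read", "send"]],
    "benign":  [["open", "read", "close", "open", "write", "close"]],
})
print(classify(["read", "send", "open", "read", "send"], model))  # malware
```

Switching `n=3` to `n=5` reproduces the paper's second configuration; longer n-grams capture more context but need more training data to avoid sparse counts.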
Bitdefender - Solution Paper - Active Threat Control - Jose Lopez
This solution paper describes how Bitdefender's Active Threat Control can protect Windows endpoints, both desktops and servers, from advanced and zero-day threats such as crypto-malware, thanks to a proactive-by-design dynamic detection technology based on monitoring process behavior and on tagging and correlating suspect activities, with minimal footprint.
A Framework for Analysis and Comparison of Dynamic Malware Analysis Tools - IJNSA Journal
Malware writers have employed various obfuscation and polymorphism techniques to thwart static analysis approaches and to bypass antivirus tools. Dynamic analysis techniques, however, have essentially overcome these deceits by observing the actual behaviour of the code as it executes. In this regard, various methods, techniques and tools have been proposed. However, because of the diverse concepts and strategies used in their implementation, security researchers and malware analysts find it difficult to select the optimal tool to investigate the behaviour of a malware sample and to contain the associated risk for their study. Focusing on two dynamic analysis techniques, function call monitoring and information flow tracking, this paper presents a comparison framework for dynamic malware analysis tools. The framework will assist researchers and analysts in recognizing a tool's implementation strategy, analysis approach, system-wide analysis support and overall handling of binaries, helping them select a suitable and effective one for their study and analysis.
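One of the two techniques the paper focuses on, function call monitoring, can be sketched in miniature with Python's `sys.settrace` standing in for the API hooking that real dynamic analysis tools perform on native binaries; the function names below are invented:

```python
# Sketch of function call monitoring: record every function entry made
# while a sample runs. Real tools hook native APIs; sys.settrace gives
# the same observation point for Python code.
import sys

observed_calls = []

def tracer(frame, event, arg):
    # The interpreter reports a "call" event on every function entry.
    if event == "call":
        observed_calls.append(frame.f_code.co_name)
    return tracer

def drop_payload():
    return "payload"

def sample_under_analysis():
    return drop_payload()

sys.settrace(tracer)
sample_under_analysis()
sys.settrace(None)

print(observed_calls)  # ['sample_under_analysis', 'drop_payload']
```

The resulting call sequence is exactly the kind of behavioural trace that the tools compared in the paper collect and analyze.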
CS266 Software Reverse Engineering (SRE)
Identifying, Monitoring, and Reporting Malware
Teodoro (Ted) Cipresso, teodoro.cipresso@sjsu.edu
Department of Computer Science
San José State University
Spring 2015
Basic Survey on Malware Analysis, Tools and Techniques - ijcsa
The term malware stands for malicious software: a program installed on a system without the knowledge of the system's owner, typically by a third party intending to steal private data or simply to play pranks. This threatens the security of computers that people rely on daily for needs such as education, communication, hospitals, banking, and entertainment. Traditional techniques such as antivirus scanners (AVS) and firewalls are used to detect and defend against malware, but today malware writers stay one step ahead of malware detectors, writing new malware day by day, which poses a great challenge for detection. This paper focuses on a basic study of malware and the various techniques that can be used to detect it.
What is SPYWARE?
Spyware is a type of malware that's hard to detect.
It collects information about your surfing habits, browsing history, or personal information (such as credit card numbers), and often uses the internet to pass this information along to third parties without you knowing.
o Keyloggers are a type of spyware that monitor your keystrokes.
Spyware is mostly classified into four types:
1. System monitors
2. Trojans
3. Adware
4. Tracking cookies
Spyware is mostly used to track and store internet users' movements on the web and to serve pop-up ads to them.
History and development of spyware.
The term was first recorded on October 16, 1995, in a Usenet post that poked fun at Microsoft's business model.
Spyware at first denoted software meant for espionage purposes.
However, in early 2000 Gregor Freund, the founder of Zone Labs, used the term in a press release for the ZoneAlarm Personal Firewall.
Spyware often uses exploits in JavaScript, Internet Explorer and Windows to install itself.
Effect and behavior.
Unwanted behavior and degradation of system performance.
Unwanted CPU activity, disk usage, and network traffic.
Stability issues:
Applications freezing.
Failure to boot.
System-wide crashes.
Difficulty connecting to the internet.
Disable software firewalls and anti-virus software.
Routes of infection.
Installed when you open an email attachment.
Spyware installs itself by exploiting security holes.
Installed using deceptive tactics; a common tactic is a Trojan horse.
Hardware devices such as a USB keylogger.
A browser that forces the download and installation of spyware (a drive-by download).
Security Practices.
• Installing anti-spyware programs.
• Network firewalls and web proxies to block access to web sites known to install spyware.
• Individual users can also install personal firewalls.
• Install a large hosts file that blocks known spyware sites.
• Some shareware programs offered for download install spyware; downloading programs only from reputable sources can provide some protection from this source of attack.
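The hosts-file tactic above can be illustrated with a short excerpt; the domains shown are placeholders, not real spyware hosts:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
# Map known spyware domains to an unroutable address so lookups go nowhere.
0.0.0.0  tracker.spyware-example.invalid
0.0.0.0  ads.adware-example.invalid
```

Large community-maintained hosts files extend this idea to tens of thousands of entries, blocking trackers and ad servers before any connection is made.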
Anti-spyware Programs
• Products dedicated to removing or blocking spyware.
• Programs such as PC Tools' Spyware Doctor, Lavasoft's Ad-Aware SE and Patrick Kolla's Spybot - Search & Destroy.
Legal Issues.
Criminal law
US FTC actions
Netherlands OPTA
Civil law
Libel suits by spyware developers
WebcamGate
Stephanie Vanroelen - Mobile Anti-Virus Apps Exposed - NoNameCon
Talk by Stephanie Vanroelen at NoNameCon 2019.
https://nonamecon.org
https://cfp.nonamecon.org/nnc2019/talk/ZFJFW8/
This talk is about the top anti-virus apps on Android: an in-depth look at how they work and what they do. Do they add to or break the security of the mobile OS?
The focus will be on the top 5 Android apps:
Kaspersky Mobile Antivirus
Avast Mobile Security
Norton Security & Antivirus
Sophos Mobile Security
Security Master
This talk will try to answer the following questions: Do they add to or break the security of the Android sandbox system? What type of information is being shared back to the company (if any)? Are these apps well built?
Finally, I will address the following: Do I recommend any of these apps and if so which one and why?
December 2012
Hacker Intelligence Initiative, Monthly Trend Report #14
Assessing the Effectiveness of Antivirus Solutions
Executive Summary
In 2012, Imperva, with a group of students from The Technion – Israeli Institute of Technology, conducted a study of more than 80
malware samples to assess the effectiveness of antivirus software. Based on our review, we believe:
1. The initial detection rate of a newly created virus is less than 5%. Although vendors try to update their detection
mechanisms, the initial detection rate of new viruses is nearly zero. We believe that the majority of antivirus products on the
market can’t keep up with the rate of virus propagation on the Internet.
2. For certain antivirus vendors, it may take up to four weeks to detect a new virus from the time of the initial scan.
3. The vendors with the best detection capabilities include those with free antivirus packages, Avast and Emsisoft, though they do have a high false positive rate.
These findings have several ramifications:
1. Enterprise and consumer spend on antivirus is not proportional to its effectiveness. In 2011, Gartner reported that
consumers spent $4.5 billion on antivirus, while enterprises spent $2.9 billion, a total of $7.4 billion. This represents more
than a third of the total of $17.7 billion spent on security software. We believe both consumers and enterprises should look
into freeware as well as new security models for protection.
2. Compliance mandates requiring antivirus should ease up on this obligation. One reason why security budgets
devote too much money to antivirus is compliance. Easing the need for AV could free up money for more effective
security measures.
3. Security teams should focus more on identifying aberrant behavior to detect infection. Though we don’t
recommend removing antivirus altogether, a bigger portion of the security focus should leverage technologies that detect
abnormal behavior such as unusually fast access speeds or large volume of downloads.
To be clear, we don’t recommend eliminating antivirus.
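The abnormal-behavior detection suggested in point 3, flagging unusually fast access speeds or large download volumes, can be sketched as a simple statistical check; the threshold and the sample data below are invented:

```python
# Toy sketch: flag users whose download volume deviates far from the
# population. A real system would use richer features and baselines.
import statistics

downloads_mb = {"alice": 120, "bob": 95, "carol": 110, "mallory": 20000}

mean = statistics.mean(downloads_mb.values())
stdev = statistics.stdev(downloads_mb.values())

# Flag anyone more than one standard deviation above the mean.
flagged = [user for user, mb in downloads_mb.items()
           if stdev > 0 and (mb - mean) / stdev > 1.0]
print(flagged)
```

This is the complementary posture the report recommends: rather than relying solely on signatures, watch for behavior that is statistically out of line with the rest of the population.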
Table of Contents
Executive Summary
Introduction and Motivation
Background
Locating and Collecting Viruses
Honey Pots
Google Search
Hacker Forums
Evaluating the Samples Against Antivirus Products
Analyzing the Results
General Statistics
Specific Samples
Fake Google Chrome Installer
Multipurpose Trojan with Fake AV
Conclusion
Limitations, Objections and Methodology
References
Introduction and Motivation
Over the years, as a result of technological developments, the importance of personal computers in our lives has grown significantly. This has led some, from lone teenagers to nation states, to develop malicious applications and distribute them across the Internet, where they attack a range of computer systems. As a result, the importance of antivirus software has grown significantly, with increasing demand for dependable antivirus products that can defend against the range of malicious viruses.
Antivirus programs are meant to locate computer viruses and protect computers from their actions. Currently, antivirus
software is considered a reliable and effective defense against viruses and a key means of protecting computers. According to Gartner,
enterprises and consumers spent $7.4 billion on antivirus in 2011 – a fivefold increase from 2002.1 Antivirus, by contrast, has
not seen a fivefold increase in effectiveness.
Every day, viruses and malicious programs are created and distributed across the Internet. In order to guarantee
effectiveness and maximum protection, antivirus software must be continuously updated. This is no small undertaking
considering that computers connected to the Internet are exposed to viruses from every direction, delivered by
any range of methods: infected servers and files, USB drives, and more. Viruses involuntarily
draft consumers into bot armies, while employees can become unknowing compromised insiders helping foreign
governments or competitors.
Background
In 1988, ‘Antivir’ became the first antivirus product on the market meant to protect against more than a single
virus. The age of the Internet brought about a proliferation of viruses, their methods of infection, and their means of
distribution, and antivirus companies were forced to combat this threat. They began to release new versions of
their products at a much faster rate and to update the signature databases of their products via the Internet.
In today’s market, there is a wide variety of antivirus products, some freeware and others that cost money. Studies
show that the majority of people prefer and settle for freeware antivirus. Furthermore, the popularity of any given antivirus
product does not reflect its effectiveness. The diagram below illustrates the popularity of the major antivirus products
with the largest market share. Though, as noted, the percentages in this diagram do not necessarily reflect a given product’s
capabilities.
According to one study, here are the most popular antivirus products:2
› Avast - 17.4% worldwide market share
› Microsoft - 13.2% worldwide market share
› ESET - 11.1% worldwide market share
› Symantec - 10.3% worldwide market share
› AVG - 10.1% worldwide market share
› Avira - 9.6% worldwide market share
› Kaspersky - 6.7% worldwide market share
› McAfee - 4.9% worldwide market share
› Panda - 2.9% worldwide market share
› Trend Micro - 2.8% worldwide market share
› Other - 11.1% worldwide market share
1 Gartner, Worldwide Spending on Security by Technology Segment, Country and Region, 2010-2016 and 2002
2 http://www.zdnet.com/blog/security/which-is-the-most-popular-antivirus-software/12608
Locating and Collecting Viruses
The purpose of this work was to evaluate AV software’s ability to detect previously non-cataloged malware samples, so
we could not rely on any of the existing malware databases. We therefore resorted to other means of virus hunting over the
Web, employing the various collection methods described below. We executed the samples in a
controlled environment to confirm that they displayed behavior indicative of malware. Using these methods,
we were able to collect 82 samples.
Honey Pots
We have a number of honey pots deployed over the Web. Through these servers, we were able to detect hackers
accessing Web repositories where they deposit the malware they have acquired. We then visited these repositories and
obtained the deposited files.
Google Search
We searched Google for specific patterns that yield references to small malware repositories, and then accessed these
repositories to obtain samples. We used distinguishable file names we had seen through our honey pots (see above) to
find and collect more samples. Names like 1.exe or add-credit-facebook1.exe yielded good results.
Hacker Forums
We looked through hacker forums for references to copies of malware, focusing on Russian-language forums such as the
one below:
The screenshot displays one of the websites that we found effective. In the menu on its left-hand side, users can obtain the
following malicious software:
› Program for hacking ICQ
› Program for hacking e-mail
› Program for hacking Skype
› Program for hacking accounts on Odnoklassniki and vkontakte (Russian Social Networks)
Evaluating the Samples Against Antivirus Products
Now that we had 82 malware samples, we needed an infrastructure that would allow us to evaluate them with as many AV
products as possible, repeatedly over time.
VirusTotal (www.virustotal.com) is a website that provides a free online service that analyzes files and URLs enabling the
identification of viruses, worms, trojans, and other kinds of malicious content detected by antivirus engines and website
scanners. At the time of our work, each sample was tested by 40 different products. A detailed report is produced for each
analysis indicating, for each AV product, whether the sample was identified as malware, and if so, which malware was
detected. The following figures show sample screenshots of a manual evaluation process (in which a user uploads the
malware sample through a browser and reviews results in HTML form).
VirusTotal File Upload Page
Last Scan Results
Current Scan Results
Additional Details
On top of the manual submission interface, VirusTotal also provides an API (https://www.virustotal.com/documentation/
public-api/) that can be used for automating the submission and result-analysis process. The API is HTTP-based and uses
simple POST requests and JSON replies. We used a set of homegrown Python scripts to schedule an automated scan of all
the samples in our data set on a weekly basis. Results were stored in a relational database for further analysis. We ran the
experiment for six weeks and collected a total of 13,000 entries in our database, where each entry represents the result of a
specific scan of a specific sample file by a specific product.
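The scripted flow described above might have looked roughly like the following sketch. The endpoint paths follow VirusTotal's public API v2 as documented at the time; the API key placeholder, helper names, and storage step are illustrative assumptions, not the authors' actual scripts.

```python
# Sketch of the weekly rescan loop described above. The endpoint paths
# follow VirusTotal's public API v2; API_KEY and the helpers below are
# hypothetical placeholders, not the report authors' actual scripts.
import hashlib
import json

API_BASE = "https://www.virustotal.com/vtapi/v2"
API_KEY = "YOUR-API-KEY"  # hypothetical; issued by VirusTotal on sign-up

def file_resource(path):
    """VirusTotal identifies an already-submitted file by its hash,
    so a weekly rescan only needs the SHA-256, not a re-upload."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def report_rows(sample_id, report_text):
    """Flatten one JSON report into (sample, engine, detected, label)
    tuples, one per AV product, ready for a relational store."""
    report = json.loads(report_text)
    return [(sample_id, engine, entry["detected"], entry.get("result"))
            for engine, entry in report.get("scans", {}).items()]

# The weekly job would POST each resource to API_BASE + "/file/rescan",
# then fetch API_BASE + "/file/report" and insert report_rows(...) into
# the results database (e.g. with urllib or the requests package).
```

Storing one row per (sample, product, scan) makes the later per-vendor and per-week aggregations simple SQL queries.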
Analyzing the Results
General Statistics
In our analysis, we looked at two types of measurements: static and dynamic. The static measurements look at AV coverage
regardless of the timeline. The dynamic measurements look at the evolution of AV coverage over time.
The first measurement we took is coverage by the most popular AV products (see above). For this static measurement, we
picked both commercial and free AV products and looked only at those samples that, by the end of the testing period,
were identified by at least 50% of the evaluated products (we used this criterion to reduce noise and potential dispute claims).
The results are displayed in Table 1, where the blue area marks the portion of the samples that was detected.
Table 1: Viruses Identified vs. Not Detected, by Antivirus Vendor
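A minimal sketch of this static measurement, assuming the final-run scan results have been exported as (sample, product, detected) rows; the row shape and function name are illustrative assumptions, not the study's actual schema.

```python
# Sketch of the static coverage measurement: keep only samples that at
# least 50% of products detected in the final scan, then compute each
# product's detection rate within that subset. Input rows are
# (sample, product, detected) tuples; the data shape is illustrative.
from collections import defaultdict

def coverage_by_product(final_scan_rows, threshold=0.5):
    by_sample = defaultdict(dict)
    for sample, product, detected in final_scan_rows:
        by_sample[sample][product] = detected
    # Noise filter: keep samples detected by >= threshold of products.
    consensus = [s for s, votes in by_sample.items()
                 if sum(votes.values()) >= threshold * len(votes)]
    products = {p for s in consensus for p in by_sample[s]}
    return {product: sum(by_sample[s].get(product, False) for s in consensus)
                     / len(consensus)
            for product in products}
```

The returned mapping of product to detection fraction is exactly what the blue/white bars of Table 1 visualize.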
Tables 1 and 2 show the rate of detection by the six least effective antivirus products in our study, relative to the group of files
in which more than 50% of the tested antivirus products identified the viruses (during the final scan). Notice that some
of the products in this group are commercial products for which customers actually pay license fees.
Table 2: Least Effective Products
Our first dynamic measurement compares each AV product’s detection capability at the beginning of the test (first run,
colored in blue) with its detection rate at the end of the test (last run, colored in red). It indicates how well AV products
process new inputs in general. The diagram below includes only those products for which an improvement was shown.
The diagram shows that AV products are, indeed, highly dependent on their input, and that most products have a solid
process for turning their input into detection signatures.
Table 3: Virus Detection between First and Last Run, by Antivirus Vendor
Now we get to the very interesting question of how long it takes an AV product to incorporate detection for a
previously undetected sample. The following chart shows the average time, by the vendor listed, to detect those samples
that were not recognized as malware in the first run. For each vendor, we took the average over the files not detected by that
vendor alone. We chose to show the progress rate only for the most prominent products: the AV with the
biggest market share (Avast) and four commercial products from the largest security/AV vendors. The data in this
chart gives us an idea of the size of the “window of opportunity” an attacker has to take advantage of freshly compiled
malware. Do notice that none of the malware samples we used was identified by ANY of the products as an entirely new
type of malware – rather, they were all recompilations of existing malware families. As one can see, the typical window of
opportunity for the listed AV products is as long as four weeks!
Table 4: Number of Weeks Required to Identify Infected File not identified in First Run
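The window-of-opportunity measurement could be computed along these lines, assuming the same per-scan rows with a week column; the schema and helper name are illustrative assumptions, not the study's actual code.

```python
# Sketch of the window-of-opportunity measurement: for each product,
# average the number of weeks until first detection over the samples
# it missed in its first run. Rows are (week, sample, product,
# detected) tuples; the data shape is illustrative.
from collections import defaultdict

def avg_weeks_to_detect(rows):
    history = defaultdict(list)   # (product, sample) -> [(week, detected)]
    for week, sample, product, detected in rows:
        history[(product, sample)].append((week, detected))
    waits = defaultdict(list)
    for (product, sample), scans in history.items():
        scans.sort()
        if scans[0][1]:           # detected on the first run: not a miss
            continue
        hits = [week for week, detected in scans if detected]
        if hits:                  # first week this product caught it
            waits[product].append(hits[0])
    return {p: sum(w) / len(w) for p, w in waits.items()}
```

Samples a product never catches drop out of its average, so this measures catch-up speed only for eventually detected files, matching what the chart shows.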
When we checked the dynamics of the least-detected samples, we came up with even worse results. We checked how many
weeks were required for samples that were detected by fewer than 25% of products during their initial scan to reach a
detection rate greater than 50%. By analyzing our results database, we discovered that 12 files had a detection rate of less than 25%
when they were first scanned, yet not a single one of them came close to a 50% detection rate in subsequent scans.
Another phenomenon we discovered after analyzing the results, which were obtained across a period of a few weeks,
was that not only did detection change over time, but so did the classification made by antivirus products.
We encountered a situation in which, over a period of three weeks, antivirus products classified
a file as “Unclassified Malware,” and only in the fourth week finally classified it as a specific type of malware (Trojan
Horse). We additionally encountered cases in which an antivirus product completely changed its classification of a
specific file. For example, one week the antivirus product ByteHero identified a file as Trojan Malware, and another week as
Virus.Win32. Consequently, we can conclude that antivirus products are occasionally inconsistent in the results they provide.
In our analysis, we tried to find an effective combination of AV products that would yield the best protection
against our data set. For the sake of this experiment, we considered only those files that were detected by more
than 25% of AV products. None of the individual AV products was able to provide full coverage for this set of samples. To our
surprise, the set of antivirus products with the best combined detection rate included two freeware antivirus products, Avast and
Emsisoft. Another interesting point is that, while the most well-known AV products provided the best standalone coverage,
their coverage could not be effectively enhanced by adding another single product.
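The report does not specify how the combinations were searched; one straightforward possibility is a brute-force pass over all product pairs, keeping the pair whose union of detected samples is largest. The data below is illustrative.

```python
# Sketch of a brute-force combination search: try every pair of
# products and keep the one whose union of detected samples is
# largest. `detections` maps each product name to the set of sample
# IDs it flags; the search method and data are illustrative, since
# the report does not specify how the combinations were evaluated.
from itertools import combinations

def best_pair(detections):
    return max(combinations(detections, 2),
               key=lambda pair: len(detections[pair[0]] | detections[pair[1]]))
```

With 40-odd products this is under a thousand pairs, so exhaustive search is cheap; larger combinations would call for a greedy set-cover heuristic instead.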
Specific Samples
Fake Google Chrome Installer
One of the samples in our data set was a fake Google Chrome installer named “Google setup.exe.” When executed, it
attempts to access a command and control center and takes over important functionality, closing down many programs
and, in particular, preventing the user from opening the “Task Manager” tool (which is an attempt to hide the presence of a
rogue process in the system). Below, we can see a screenshot of some of the (very apparent) visual effects observed when
executing this malware. The file was first reported to VirusTotal.com and analyzed on February 9, 2012. Yet, by the end of our
study, only 20 out of 42 products were able to detect it. By November 17, 2012, only 23 of 42 products were able to detect it.
Only a portion of those products that do detect it actually identify it correctly as being a disabler/dropper.
Multipurpose Trojan with Fake AV
“Hot_Girls_Catalog_2012_August.zip” is one of the samples we chose to track individually; we picked it up from a large
phishing campaign. We decided to put some emphasis on it because we knew it was quickly spreading through the Web
and thus must have captured the attention of AV product developers. The file contains a general-purpose Trojan (of the
Kulouz family), capable of communicating with a large, highly redundant network of C&Cs. The Trojan receives execution
modules and configuration files from its C&C and has been known to grab passwords, send out spam, attack other servers,
and display Fake AV to the user. We tracked the detection rate for this sample on a daily basis for two weeks. As can be seen
from the chart below, the initial detection rate of the sample is around 30% of AV products. The sample is quickly picked
up by AV vendors through the first week and detection rate settles to just below 80% after that. A few other recompilations
of the same malware that were distributed in the same campaign did not reach more than 40% detection during the time
frame of the study, evading even some of the most popular AV products. Detection rate for those variations also eventually
settled near 80%.
Table 5: Kulouz Sample Rate of Detection Over Time
Conclusion
The issue of antivirus effectiveness is something close to us. There’s no doubt that many of us have both lost information
and wasted time trying to recover after a virus succeeded in infecting our computers. Sadly, an industry exists to produce
new viruses on a massive scale, making antivirus products mostly unreliable. Attackers understand antivirus products in
depth: they become familiar with their weak points, identify their strong points, and understand their methods for
handling the high incidence of new viruses propagating on the Internet.
The question also arises regarding how a virus manages to sneak by and cause damage when a leading antivirus product is
installed on our computer. There are several conclusions:
1. Antivirus products (as demonstrated by our study and by incidents like Flame) are much better at detecting malware
that spreads rapidly in massive quantities of identical samples, while variants that are of limited distribution (such as
government sponsored attacks) usually leave a large window of opportunity.
2. The window of opportunity mentioned in the preceding bullet point creates a massive blind spot for security teams.
For example, when attackers breached the state of South Carolina, the attack went unnoticed because the security
team was not able to monitor and control data access across the Department of Revenue’s (DoR’s) internal network and
servers, making them the cyber equivalent of deaf and blind to the attack. They likely had antivirus technology intended
to block the initial infection. When that first line of defense was breached, due to antivirus’ limitations, they were left
unaware and defenseless against the attack.
3. A new security model is required to cover this blind spot. Investing in the right “ears and eyes” to monitor access
to servers, databases, and files would make the detection of malware attacks an easier task, as many attacks are very
“noisy.” In many cases, attackers access privileged data at an arbitrary time, from an arbitrary process, with
read permissions, while normally the data is accessed only by the internal backup process, with the backup account’s
privileges, at the regular backup times, with write permissions. In the case of South Carolina, for example, the attacker
moved and processed the data many times before sending it out of the network, presenting many missed chances to set
off an alarm.
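The access-pattern contrast in point 3 can be expressed as a simple rule: flag any privileged-data read that does not come from the expected backup account inside the backup window. The account name and window hours below are hypothetical examples, not a recommended policy.

```python
# Sketch of the access-pattern rule from point 3: privileged data is
# normally read only by the backup account during the backup window,
# so any other read is flagged. The account name and window hours are
# hypothetical examples, not a real deployment policy.
BACKUP_ACCOUNT = "svc_backup"     # hypothetical service account
BACKUP_WINDOW = range(1, 4)       # hypothetical window: 01:00-03:59

def is_suspicious(event):
    """event: dict with 'account', 'hour' (0-23), and 'operation'."""
    if event["operation"] != "read":
        return False              # this rule only covers data reads
    expected = (event["account"] == BACKUP_ACCOUNT
                and event["hour"] in BACKUP_WINDOW)
    return not expected
```

A real deployment would learn the baseline (accounts, processes, times) from observed access logs rather than hard-coding it, but the principle is the same.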
Limitations, Objections and Methodology
VirusTotal did not participate in our study. VirusTotal notes that its service was not designed as a tool for performing
antivirus comparative analyses, and that using it for that purpose introduces many implicit methodological errors,
including the following:
1. The VirusTotal AV engines are command-line versions, so depending on the product, they will not behave quite like the
desktop versions.
2. In VirusTotal, desktop-oriented solutions coexist with perimeter-oriented solutions; heuristics in the latter group may be
more aggressive and paranoid, since the impact of false positives is less visible at the perimeter.
Several objections can be and have been raised regarding this study:
Objection #1: VirusTotal was used for comparative purposes.
The essence of the report is not a comparison of antivirus products. Rather, the purpose is to measure the efficacy of a single
antivirus solution as well as combined antivirus solutions given a random set of malware samples.
Objection #2: Our random sampling process is flawed.
Instead of testing a huge pile of samples taken from databases or standard antivirus traps, we looked for samples in a limited
manner that is not biased toward any specific type of malware. Our selection was taken randomly from the Web,
reflecting a potential method for constructing an attack. We believe this approach is effective, since it reflects how
malware writers constantly create malware variants. Our methodology closely mimics what most enterprises or
consumers encounter, especially in an APT scenario.