This document describes a whole-product dynamic "real-world" protection test conducted from August to November 2012. It tested 21 security products against 2,006 malicious URLs to evaluate each product's ability to protect a system from internet-based threats under real-world conditions. The test aimed to simulate the everyday experience of users by testing products with their default settings and incorporating factors such as automatic updates. Products that blocked threats without requiring user interaction, or where the system remained protected after a user-dependent alert, were counted as protecting the system. The test found variation in results between products and over time.
Whole Product Dynamic "Real-World" Protection Test – (August-November) 2012
www.av-comparatives.org
Content
Test Procedure
Settings
Preparation for every Testing Day
Testing Cycle for each malicious URL
Test Set
Tested products
Test Cases
Results
Summary Results (August-November)
Award levels reached in this test
Introduction
The threat posed by malicious software is growing day by day. Not only is the number of malware programs increasing, but the very nature of the threats is also changing rapidly. The way in which harmful code gets onto computers is changing from simple file-based methods to distribution via the Internet. Malware increasingly infects PCs when, for example, users are deceived into visiting infected web pages, installing rogue/malicious software, or opening emails with malicious attachments.

The scope of protection offered by antivirus programs is extended by the inclusion of, for example, URL blockers, content filtering, anti-phishing measures and user-friendly behaviour blockers. If these features are well coordinated with the signature-based and heuristic detection, the protection provided against threats increases.

In spite of these new technologies, it remains very important that the signature-based and heuristic detection abilities of antivirus programs continue to be tested. It is precisely because of the new threats that signature/heuristic detection methods are becoming ever more important. The growing frequency of zero-day attacks means that there is an increasing risk of malware infection. If a threat is not intercepted by "conventional" or "non-conventional" methods, the computer will be compromised, and it is only by using an on-demand scan with signature- and heuristic-based detection that the malware can be found (and hopefully removed). The additional protection technologies also offer no means of checking existing data stores for already-infected files, which can be found on the file servers of many companies. These new security layers should be understood as an addition to good detection rates, not as a replacement for them.

In this test, all features of a product contribute to protection, not only one part (such as signature-based or heuristic file scanning). The protection provided should therefore be higher than when only parts of the product are tested. We would recommend that all parts of a product provide high detection, not only single components (e.g. URL blocking protects only while browsing the web, but not against malware introduced by other means or already present on the system).
The Whole-Product Dynamic "Real-World" Protection test is a joint project of AV-Comparatives and the University of Innsbruck's Faculty of Computer Science and Quality Engineering. It is partially funded by the Austrian Government.
Test Procedure
Testing dozens of antivirus products with hundreds of URLs each per day is far too much work to be done manually (it would mean visiting thousands of websites in parallel), so some form of automation is necessary.
Lab-Setup
Every security program to be tested is installed on its own test computer. All computers are connected to the Internet (details below). The system is frozen, with the operating system and security program installed. The entire test is performed on real workstations; we do not use any kind of virtualization. Each workstation has its own internet connection with its own external IP. We have special agreements with several providers (failover clustering and no traffic blocking) to ensure a stable internet connection. The tests are performed over a live internet connection. We took the necessary precautions (specially configured firewalls, etc.) not to harm other computers (i.e. not to cause outbreaks).
Hardware and Software
For this test, we used identical workstations, a control and command server, and network-attached storage.

                 Vendor     Type             CPU                RAM     Hard disk
Workstations     Dell       Optiplex 755     Intel Core 2 Duo   4 GB    80 GB
Control Server   Dell       Optiplex 755     Intel Core 2 Duo   8 GB    2 x 500 GB
Storage          Eurostor   ES8700-Open-E    Dual Xeon          32 GB   140 TB RAID 6
The tests are performed under Windows XP SP3 with updates as of 1st August 2012. Further installed (vulnerable) software includes:

Vendor      Product                 Version
Adobe       Flash Player ActiveX    10.1
Adobe       Flash Player Plug-In    10.0
Adobe       Acrobat Reader          8.0
Apple       QuickTime               7.1
Microsoft   Internet Explorer       7.0
Microsoft   Office Professional     2003
Microsoft   .NET Framework          4.0
Mozilla     Firefox                 9.0.1
Oracle      Java                    1.6.0.7
VideoLAN    VLC Media Player        2.0.1
Initially we planned to test this year with a fully updated/patched system, but we had to switch back to older/vulnerable/unpatched OS and software versions due to a lack of sufficient in-the-field exploits to test against. This should remind users to keep their systems and applications up-to-date at all times, in order to minimize the risk of infection through exploits targeting unpatched software vulnerabilities. In 2013 we plan to perform the test under Windows 7 64-bit SP1 with more up-to-date software.
Settings
We use every security suite with its default (out-of-the-box) settings. Our whole-product dynamic protection test aims to simulate the real-world conditions experienced every day by users. If user interaction is requested, we choose "allow". If the system is protected anyway, we count the test case as blocked, even if a user interaction came first. If the system gets compromised, we count it as user-dependent. We consider "protection" to mean that the system is not compromised: the malware is not running (or has been removed/terminated) and there are no significant/malicious system changes. An outbound-firewall alert about an already-running malware process, asking whether to block traffic from the user's workstation to the internet, is too little, too late, and is not considered protection by us.
Preparation for every Testing Day
Every morning, any available security software updates are downloaded and installed, and a new base image is made for that day. This ensures that, even if a security product fails to complete a larger update during the day (products are updated before each test case) or its update servers are unreachable, it will at least have the morning's updates, just as it would for a user in the real world.
Testing Cycle for each malicious URL
Before browsing to each new malicious URL/test case, we update the programs/signatures. New major product versions (i.e. where the first digit of the build number is different) are installed once at the beginning of the month, which is why in each monthly report we give only the product's main version number. Our test software starts monitoring the PC, so that any changes made by the malware are recorded. Furthermore, the recognition algorithms check whether the antivirus program detects the malware. After each test case, the machine is reverted to its clean state.
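To make the cycle concrete, here is a minimal sketch of such an automation loop in Python. This is our illustration, not AV-Comparatives' actual framework (which is not published): every step function is a hypothetical stub, and only the control flow follows the description above.

```python
# Hypothetical sketch of the per-URL testing cycle described above.
# The step functions are illustrative stubs; only the control flow
# mirrors the report's text.
import time

def update_signatures():
    """Update the product/signatures before each test case."""

def start_monitoring():
    """Start recording any changes made to the system."""

def browse(url):
    """Open the malicious URL on the (real, non-virtualized) workstation."""

def observed_outcome():
    """Placeholder: inspect monitor logs and product alerts for a verdict."""
    return "blocked"  # or "user-dependent" / "compromised"

def revert_to_clean_state():
    """Restore the frozen base image made in the morning."""

def run_test_case(url):
    update_signatures()
    start_monitoring()
    browse(url)
    time.sleep(300)            # wait several minutes for behaviour blockers
    verdict = observed_outcome()
    revert_to_clean_state()
    return verdict
```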
Protection
Security products should protect the user's PC. It is not very important at which stage the protection takes place: it can be while browsing to the website (e.g. protection through a URL blocker), while an exploit tries to run, while the file is being downloaded/created, or when the malware is executed (either by the exploit or by the user). After the malware is executed (if it was not blocked before), we wait several minutes for malicious actions, and also to give e.g. behaviour blockers time to react and remedy actions performed by the malware. If the malware is not detected and the system is indeed infected/compromised, the case is rated "System Compromised". If a user interaction is required, leaving it to the user to decide whether something is malicious, and the worst possible user decision would result in the system being compromised, we rate the case as "user-dependent". Because of this, the yellow bars in the results graph can be interpreted either as protected or as not protected (it is up to the user).
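The rating rules here and under Settings reduce to a small decision procedure. Below is a hedged sketch of that logic, assuming each test case yields two observations: whether the product asked the user, and whether the system ended up compromised after answering "allow" (the worst-case decision).

```python
# Hedged encoding of the rating rules described above; the two boolean
# inputs are our assumed observations, not the report's actual data model.

def classify(asked_user: bool, compromised_after_allow: bool) -> str:
    if not compromised_after_allow:
        # System protected, with or without a prior prompt: counts as blocked.
        return "blocked"
    if asked_user:
        # Worst-case user decision leads to compromise: user-dependent
        # (the "yellow bar" cases).
        return "user-dependent"
    # No prompt, and the system was compromised anyway.
    return "compromised"
```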
Due to the dynamic nature of the test, i.e. mimicking real-world conditions, and because of the way several different technologies (such as cloud scanners, reputation services, etc.) work, such tests cannot be repeated or replicated in the way that e.g. static detection-rate tests can. Nevertheless, we log as much as is reasonably possible to document our findings and results. Vendors are invited to provide useful logging inside their products, which can supply the additional data they may want in case of disputes. After each testing month, vendors were given the opportunity to dispute our verdicts on the compromised cases, so that we could recheck whether there were problems in the automation or in our analysis of the results.

In the case of cloud products, we only consider the results that the products delivered at the time of testing. Sometimes the cloud services provided by security vendors are down due to faults or maintenance, but such downtime is often not disclosed or communicated to users by the vendors. This is also a reason why products that rely too heavily on cloud services (and do not make use of local heuristics, behaviour blockers, etc.) can be risky: in such cases, the security provided by the product can decrease significantly. Cloud signatures/reputation should be implemented in products to complement the other local/offline protection features, not to replace them completely, as an unreachable cloud service would leave PCs exposed to higher risk.
Test Set
We focus on including mainly current, visible and relevant malicious websites/malware that are currently out there and causing problems for ordinary users. We try to include about 50% URLs that point directly to malware (for example, cases where the user is tricked by social engineering into following links in spam mails or websites, or into installing a Trojan or other rogue software). The rest are drive-by exploits; these are usually well covered by almost all major security products, which may be one reason why the scores look relatively high.

We use our own crawling system to search continuously for malicious sites and to extract malicious URLs (including spammed malicious links). We also search manually for malicious URLs. If our in-house crawler does not find enough valid malicious URLs on a given day, we have contracted external researchers who provide additional malicious URLs first exclusively to AV-Comparatives, and we look for additional (re)sources.
In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples is used in a comparative test, differences in results may not indicate actual differences between the tested products [1]. In fact, even in our tests (with thousands of test cases), we consider products in the same protection cluster to be more or less equally good, as long as they do not wrongly block clean files/sites more often than the industry average.
[1] Read more in the following paper: http://www.av-comparatives.org/images/stories/test/statistics/somestats.pdf
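To illustrate the point about sample sizes (our own back-of-the-envelope arithmetic, not figures from the cited paper), a normal-approximation 95% confidence interval shows how much uncertainty surrounds a protection rate measured on roughly one month's test set versus the full four-month set:

```python
# Illustrative arithmetic only (ours, not from the cited paper): a
# normal-approximation 95% confidence interval around a measured
# protection rate, showing why small differences based on a few hundred
# test cases are not statistically meaningful.
import math

def confint95(rate: float, n: int) -> tuple:
    half_width = 1.96 * math.sqrt(rate * (1 - rate) / n)
    return rate - half_width, rate + half_width

for n in (500, 2006):  # roughly one month vs. the full four-month test set
    lo, hi = confint95(0.993, n)
    print(f"n={n}: measured 99.3% -> plausible true rate {lo:.1%} to {hi:.1%}")
# n=500:  about 98.6% to 100.0% (overlaps many competing products)
# n=2006: about 98.9% to 99.7%  (narrower, but clusters still overlap)
```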
Comments
Most operating systems already include their own firewalls and automatic updates, and may even ask the user before downloading or executing files whether they really want to do that, warning that downloading/executing files can be dangerous. Mail clients and webmail services include spam filters too. Furthermore, most browsers include pop-up blockers, phishing/URL filters and the ability to remove cookies. These are just some of the built-in protection features, yet despite all of them, systems still get infected. The reason in most cases is the ordinary user, who may be tricked by social engineering into visiting malicious websites or installing malicious software. Users expect a security product not to ask them whether they really want to execute a file etc., but to protect the system in any case, without them having to think about it, and regardless of what they do (e.g. executing unknown files). We try to deliver good, easy-to-read test reports for end users. We are continuously working to improve our automated systems further, to deliver a better overview of product capabilities.
Tested products
The following products were tested in the official Whole-Product Dynamic "Real-World" Protection test series. In this type of test we usually include Internet Security Suites, although other product versions also fit (and are included/replaced at the vendor's request), because what is tested is the protection provided by the various products against a set of real-world threats.
Main product versions used for the monthly test-runs:
Vendor        Product                       August   September   October   November
AhnLab        V3 Internet Security          8.0      8.0         8.0       8.0
Avast         Free Antivirus                7.0      7.0         7.0       7.0
AVG           Internet Security             2012     2012        2013      2013
Avira         Internet Security             2012     2012        2013      2013
Bitdefender   Internet Security             2012     2012        2013      2013
BullGuard     Internet Security             2012     2012        2013      2013
eScan         Internet Security             11.0     11.0        11.0      11.0
ESET          Smart Security                5.2      5.2         5.2       5.2
F-Secure      Internet Security             2012     2012        2013      2013
Fortinet      FortiClient Lite              4.3.4    4.3.5       4.3.5     4.3.5
G DATA        Internet Security             2013     2013        2013      2013
GFI           Vipre Internet Security       2012     2012        2013      2013
Kaspersky     Internet Security             2013     2013        2013      2013
McAfee        Internet Security             2013     2013        2013      2013
Panda         Cloud Free Antivirus          2.0.0    2.0.1       2.0.1     2.0.1
PC Tools      Internet Security             2013     2013        2013      2013
Qihoo         360 Internet Security         3.0      3.0         3.0       3.0
Sophos        Endpoint Security             10.0     10.0        10.0      10.2
Tencent       QQ PC Manager                 6.6      6.6         6.6       7.3
Trend Micro   Titanium Internet Security    2013     2013        2013      2013
Webroot       SecureAnywhere Complete       2012     2012        2013      2013
Test Cases
Test period                      Test cases
10th to 29th August 2012         488
3rd to 29th September 2012       519
4th to 28th October 2012         523
2nd to 19th November 2012        476
TOTAL                            2006
Results
Below is an overview of the individual testing months. Percentages can be seen on the interactive
graph on our website².
[Monthly result charts: August 2012 (488 test cases), September 2012 (519 test cases), October 2012 (523 test cases), November 2012 (476 test cases)]
We deliberately do not give exact numbers for the individual months in this report, to prevent small
differences of a few cases from being misused to claim that one product is better than another in a given
month and test-set size. We give the total numbers in the overall reports, where the test set is larger and
more significant differences may be observed. Interested users who want to see the exact protection
rates (without FP rates) for each month can consult the monthly updated interactive charts on our website³.
² http://www.av-comparatives.org/comparativesreviews/dynamic-tests
³ http://chart.av-comparatives.org/chart2.php and http://chart.av-comparatives.org/chart3.php
Summary Results (August-November)
Test period: August – November 2012 (2006 Test cases)
Product        Blocked   User dependent   Compromised   Protection Rate⁴   Cluster⁵
                                                        [Blocked % + (User dependent %)/2]
Trend Micro    2005      -                1             99,9%              1
Bitdefender    2004      -                2             99,9%              1
F-Secure       2002      -                4             99,8%              1
G DATA         2002      -                4             99,8%              1
Qihoo          1989      7                10            99,3%              1
Kaspersky      1991      2                13            99,3%              1
BullGuard      1963      36               7             98,8%              1
AVIRA          1946      28               32            97,7%              2
Sophos         1959      -                47            97,7%              2
Avast          1915      65               26            97,1%              2
Tencent        1918      55               33            97,0%              2
ESET           1944      -                62            96,9%              2
PC Tools       1876      111              19            96,3%              3
AVG            1895      69               42            96,2%              3
eScan          1929      -                77            96,2%              3
GFI Vipre      1920      -                86            95,7%              3
McAfee         1913      -                93            95,4%              3
Panda          1910      -                96            95,2%              3
Fortinet       1895      -                111           94,5%              4
Webroot        1807      154              45            93,9%              4
AhnLab         1871      -                135           93,3%              4
The graph below shows the above protection rate (all samples), including the minimum and maximum
protection rates for the individual months.
⁴ User-dependent cases were given half credit. Example: if a program blocks 80% by itself, plus another
20% user-dependent, we give credit for half of the user-dependent cases, so it gets 90% altogether.
⁵ Hierarchical clustering method: defining clusters using average linkage between groups (see the
dendrogram further below).
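As a worked example of the half-credit formula in footnote 4, the minimal Python sketch below (the function name is ours, not AV-Comparatives’) reproduces one value from the summary table:

    # Protection rate per footnote 4: user-dependent cases get half credit.
    def protection_rate(blocked, user_dependent, total):
        return (blocked + user_dependent / 2) / total * 100

    # Qihoo in the summary table: 1989 blocked, 7 user-dependent,
    # 10 compromised, out of 2006 test cases.
    print(round(protection_rate(1989, 7, 2006), 1))  # 99.3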
Whole-Product “False Alarm” Test (wrongly blocked domains/files)
The false alarm test in the Whole-Product Dynamic “Real-World” Protection test consists of two parts:
wrongly blocked domains (while browsing) and wrongly blocked files (while downloading/installing). It is
necessary to test both scenarios, because testing only one of the two could penalize products that focus
mainly on one type of protection method, e.g. URL/reputation filtering or on-access/behaviour/
reputation-based file protection.
a) Wrongly blocked domains (while browsing)
We used around a thousand randomly chosen popular domains. Blocked non-malicious domains/URLs
were counted as false positives (FPs). The wrongly blocked domains have been reported to the respective
vendors for review and should no longer be blocked.
By blocking whole domains, security products risk not only causing distrust of their warnings, but also
potentially causing financial damage (besides the damage to the website’s reputation) to the domain
owners, including loss of e.g. advertising revenue. For this reason, we strongly recommend that vendors
block whole domains only where the domain’s sole purpose is to carry/deliver malicious code, and
otherwise block just the malicious pages (as long as they are indeed malicious). Products which tend to
block URLs based e.g. on reputation may be more prone to this, and may also score higher in protection
tests, as they may block many unpopular/new websites.
b) Wrongly blocked files (while downloading/installing)
We used about one hundred different applications, listed either as top downloads or as new/recommended
downloads on about a dozen different popular download portals. The applications were downloaded from
the software developers’ original websites (rather than from the download portals), saved to disk and
installed, to see whether they were blocked at any stage of this procedure. Additionally, we included a
few clean files that had been encountered and disputed over the past months of the Real-World Test.
The duty of security products is to protect against malicious sites/files, not to censor or limit access
only to well-known popular applications and websites. If the user deliberately chooses a high security
setting, which warns that it may block some legitimate sites or files, then this may be considered
acceptable. However, we do not regard it as acceptable as a default setting, where the user has not
been warned. As the test is done at points in time, and FPs on very popular software/websites are
usually noticed and fixed within a few hours, it would be surprising to encounter FPs on very popular
applications. For this reason, FP tests which are done e.g. only on very popular applications, or which
use only the top 50 files from whitelisted/monitored download portals, would be a waste of time and
resources. Users do not care whether they are infected by malware that affects only them, just as they
do not care whether an FP affects only them. While it is preferable that FPs do not affect many users,
the goal should be to avoid having any FPs and to protect against any malicious files, no matter how
many users are affected or targeted. The prevalence of FPs based on user-base data is of interest for
the internal QA testing of AV vendors, but for ordinary users it is important to know how accurately
their product distinguishes between clean and malicious files.
The table below shows the numbers of wrongly blocked domains/files:

Product(s)                                  Wrongly blocked clean domains/files   Wrongly blocked
                                            (blocked / user-dependent⁶)           score⁷
Qihoo                                       - / 1 (1)                             0.5
AVG, Tencent                                1 / - (1)                             1
Avast, Kaspersky                            1 / 1 (2)                             1.5
ESET, G DATA, Panda                         2 / - (2)                             2
Bitdefender                                 4 / - (4)                             4
AhnLab, AVIRA, eScan, Fortinet, PC Tools    5 / - (5)                             5
BullGuard                                   4 / 2 (6)                             5
average                                     (6)                                   (6)
Webroot                                     7 / 2 (9)                             8
Trend Micro                                 10 / - (10)                           10
Sophos                                      12 / - (12)                           12
GFI Vipre                                   15 / - (15)                           15
F-Secure                                    17 / 1 (18)                           17.5
McAfee                                      20 / - (20)                           20
To determine which products should be downgraded in our award scheme due to the rate of wrongly
blocked sites/files, we backed up our decision by using a clustering method and by looking at the
average scores. The following products with above-average FPs have been downgraded: F-Secure,
GFI Vipre, McAfee, Sophos, Trend Micro and Webroot.
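As an illustration, the Python sketch below recomputes the wrongly-blocked scores (user-dependent cases count as half, see footnote 6) and flags the products lying above the average score. The per-product counts are taken from the table above, with grouped rows expanded per product:

    # Wrongly-blocked score per footnote 6: user-dependent FPs count as half.
    # Counts (blocked, user-dependent) taken from the table above.
    fp_counts = {
        "Qihoo": (0, 1), "AVG": (1, 0), "Tencent": (1, 0),
        "Avast": (1, 1), "Kaspersky": (1, 1), "ESET": (2, 0),
        "G DATA": (2, 0), "Panda": (2, 0), "Bitdefender": (4, 0),
        "AhnLab": (5, 0), "AVIRA": (5, 0), "eScan": (5, 0),
        "Fortinet": (5, 0), "PC Tools": (5, 0), "BullGuard": (4, 2),
        "Webroot": (7, 2), "Trend Micro": (10, 0), "Sophos": (12, 0),
        "GFI Vipre": (15, 0), "F-Secure": (17, 1), "McAfee": (20, 0),
    }

    scores = {name: b + u / 2 for name, (b, u) in fp_counts.items()}
    average = sum(scores.values()) / len(scores)  # about 6, as in the table
    downgrade_candidates = sorted(n for n, s in scores.items() if s > average)
    print(downgrade_candidates)
    # ['F-Secure', 'GFI Vipre', 'McAfee', 'Sophos', 'Trend Micro', 'Webroot']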
Illustration of how awards were given
The dendrogram (using average linkage between groups) shows the results of the hierarchical cluster
analysis. It indicates at what level of similarity the clusters are joined. The red line drawn in the chart
defines the level of similarity; each intersection indicates a group (in this case, four groups). Products
with above-average FPs are marked in red (and downgraded according to the ranking system below).
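For readers who wish to reproduce a similar grouping, the Python sketch below applies average-linkage hierarchical clustering (footnote 5) to the protection rates from the summary table, using SciPy. This is an approximation on the rounded rates, so the grouping need not match the official dendrogram exactly:

    # Average-linkage hierarchical clustering of the (rounded) protection
    # rates from the summary table; an approximation of footnote 5.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rates = np.array([99.9, 99.9, 99.8, 99.8, 99.3, 99.3, 98.8,   # cluster 1
                      97.7, 97.7, 97.1, 97.0, 96.9,               # cluster 2
                      96.3, 96.2, 96.2, 95.7, 95.4, 95.2,         # cluster 3
                      94.5, 93.9, 93.3]).reshape(-1, 1)           # cluster 4

    Z = linkage(rates, method="average")
    groups = fcluster(Z, t=4, criterion="maxclust")  # cut into 4 groups
    print(groups)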
⁶ Although user-dependent cases are extremely annoying for the user (especially on clean files), they
were counted only as half for the “wrongly blocked” score (as for the protection rate).
⁷ Lower is better.
Award levels reached in this test
The awards are decided and given by the testers based on the observed test results (after consulting
statistical models). The following awards are for the results reached in the Whole-Product Dynamic “Real-
World” Protection Test:
AWARD LEVEL    PRODUCTS
ADVANCED+      Bitdefender, G DATA, Qihoo, Kaspersky, BullGuard
ADVANCED       Trend Micro*, F-Secure*, AVIRA, Avast, Tencent, ESET
STANDARD       Sophos*, PC Tools, AVG, eScan, Panda
TESTED         GFI Vipre*, McAfee*, Fortinet, Webroot, AhnLab
* downgraded by one rank due to the score of wrongly blocked sites/files (FPs).
Ranking system
                      Protection score     Protection score   Protection score   Protection score
                      Cluster⁸ 4           Cluster 3          Cluster 2          Cluster 1
FPs below average     Tested               Standard           Advanced           Advanced+
FPs above average     Tested               Tested             Standard           Advanced
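The ranking table translates directly into a small decision rule. The Python sketch below is our illustrative rendering (the function and names are not from the report): the award follows the protection cluster, and a product with above-average FPs drops one rank, but never below Tested.

    # Award per the ranking table: base award by protection cluster,
    # downgraded one rank if the product had above-average FPs.
    AWARD_BY_CLUSTER = {1: "Advanced+", 2: "Advanced", 3: "Standard", 4: "Tested"}
    RANKS = ["Advanced+", "Advanced", "Standard", "Tested"]

    def award(cluster, above_average_fps):
        base = AWARD_BY_CLUSTER[cluster]
        if above_average_fps:
            return RANKS[min(RANKS.index(base) + 1, len(RANKS) - 1)]
        return base

    print(award(1, False))  # Advanced+  (e.g. Bitdefender)
    print(award(1, True))   # Advanced   (e.g. Trend Micro, F-Secure)
    print(award(2, True))   # Standard   (e.g. Sophos)
    print(award(4, False))  # Tested     (e.g. Fortinet)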
Expert users who do not care about wrongly blocked files/websites (false alarms) are free to rely on the
protection rates in the summary table above, instead of our award ranking, which takes FPs into
consideration. In future we may further enhance the clustering method for this type of test.
⁸ See the protection score clusters in the summary results table above.