Deciphering Malware’s use of TLS (without
Decryption)
Blake Anderson
Cisco
[email protected]
Subharthi Paul
Cisco
[email protected]
David McGrew
Cisco
[email protected]
Abstract—The use of TLS by malware poses new challenges
to network threat detection because traditional pattern-matching
techniques can no longer be applied to its messages. However,
TLS also introduces a complex set of observable data features
that allow many inferences to be made about both the client
and the server. We show that these features can be used to
detect and understand malware communication, while at the same
time preserving the privacy of benign uses of encryption. These
data features also allow for accurate malware family attribution
of network communication, even when restricted to a single,
encrypted flow.
To demonstrate this, we performed a detailed study of how
TLS is used by malware and enterprise applications. We provide
a general analysis on millions of TLS encrypted flows, and a
targeted study on 18 malware families composed of thousands
of unique malware samples and tens of thousands of malicious
TLS flows. Importantly, we identify and accommodate the bias
introduced by the use of a malware sandbox. The performance
of a malware classifier is correlated with a malware family’s use
of TLS, i.e., malware families that actively evolve their use of
cryptography are more difficult to classify.
We conclude that malware’s usage of TLS is distinct from
benign usage in an enterprise setting, and that these differences
can be effectively used in rules and machine learning classifiers.
I. INTRODUCTION
Encryption is necessary to protect the privacy of end users.
In a network setting, Transport Layer Security (TLS) is the
dominant protocol to provide encryption for network traffic.
While TLS obscures the plaintext, it also introduces a complex
set of observable parameters that allow many inferences to be
made about both the client and the server.
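These observable parameters are concrete protocol fields: for example, the cipher suites a client offers are transmitted in the clear in its ClientHello message, before any encryption begins. The toy Python sketch below (an illustration only, not the tooling used in this work) builds a minimal TLS 1.2 ClientHello and reads the offered suite list back out of the raw bytes, showing that such features require no decryption to extract:

```python
import struct

def build_client_hello(cipher_suites):
    """Build a minimal (toy) TLS 1.2 ClientHello record for illustration."""
    body = b"\x03\x03"                       # client_version: TLS 1.2
    body += bytes(32)                        # random (zeroed in this toy example)
    body += b"\x00"                          # session_id length = 0
    suites = b"".join(struct.pack("!H", cs) for cs in cipher_suites)
    body += struct.pack("!H", len(suites)) + suites
    body += b"\x01\x00"                      # one compression method: null
    hs = b"\x01" + len(body).to_bytes(3, "big") + body     # handshake type 1
    return b"\x16\x03\x03" + struct.pack("!H", len(hs)) + hs  # record header

def offered_cipher_suites(record):
    """Parse the cipher-suite list out of a ClientHello, no decryption needed."""
    assert record[0] == 0x16                 # handshake record
    hs = record[5:]                          # skip the 5-byte record header
    assert hs[0] == 0x01                     # ClientHello
    p = 4 + 2 + 32                           # skip handshake header, version, random
    p += 1 + hs[p]                           # skip session_id
    n = int.from_bytes(hs[p:p + 2], "big"); p += 2
    return [int.from_bytes(hs[p + i:p + i + 2], "big") for i in range(0, n, 2)]

rec = build_client_hello([0x002F, 0xC02B])
print([hex(c) for c in offered_cipher_suites(rec)])  # ['0x2f', '0xc02b']
```

A passive monitor applies the same kind of parsing to the server's messages as well, yielding the selected cipher suite, extensions, and certificate fields as additional unencrypted features.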
Legitimate traffic has seen a rapid adoption of the TLS
standard over the past decade, with some studies stating that
as much as 60% of network traffic uses TLS [1]. Unfortunately,
malware has also adopted TLS to secure its communication. In
our dataset, ∼10% of the malware samples use TLS. This trend
makes threat detection more difficult because it renders the
use of deep packet inspection (DPI) ineffective. It is important
to determine whether encrypted network traffic is benign or
malicious, and do so in a way that preserves the integrity of
the encryption. While 10% of malware samples utilizing
TLS may seem low, we expect this number to increase as the
level of encryption in network traffic increases.
Along these lines, we have seen a slight, but statistically
significant, increase in malicious, encrypted traffic over the
past 12 months.
To further motivate the need for a study exposing mal-
ware's use of TLS, we consider the limitations of a pattern-
matching approach when faced with TLS.
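This limitation is easy to demonstrate. In the sketch below, a hypothetical DPI signature matches a plaintext request but not the same bytes after encryption (simulated here by XOR with a seeded pseudorandom keystream, a stand-in for a real cipher rather than actual TLS); side-channel features such as the entropy of the byte distribution nonetheless remain observable:

```python
import collections
import math
import random
import re

signature = re.compile(rb"GET /admin\.php")       # hypothetical DPI signature

plaintext = b"GET /admin.php HTTP/1.1\r\nHost: c2.example\r\n\r\n" * 20
key = random.Random(0).randbytes(len(plaintext))  # stand-in for a cipher keystream
ciphertext = bytes(a ^ b for a, b in zip(plaintext, key))

print(bool(signature.search(plaintext)))          # True: the pattern matches
print(bool(signature.search(ciphertext)))         # False: encryption defeats DPI

def byte_entropy(data):
    """Shannon entropy of the empirical byte distribution, in bits per byte."""
    counts = collections.Counter(data)
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

# The payload is unreadable, yet features of the traffic itself remain
# observable: encrypted bytes look near-uniform, plaintext does not.
print(byte_entropy(plaintext) < byte_entropy(ciphertext))  # True
```

Features of this kind, computed over packet contents and sizes rather than matched against payload patterns, are the raw material for the detection approach studied in this paper.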
Deadline 6 PM Friday September 27, 201310 Project Management Que.docxedwardmarivel
Deadline 6 PM Friday September 27, 2013
10 Project Management Questions with sub-questions under each question. A word document is provided with all questions and directions.
Problem 1
The following data were obtained from a project to create a new portable electronic.
Activity
Duration
Predecessors
A
5 Days
---
B
6 Days
---
C
8 Days
---
D
4 Days
A, B
E
3 Days
C
F
5 Days
D
G
5 Days
E, F
H
9 Days
D
I
12 Days
G
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
What is the Scheduled Completion of the Project?
b)
What is the Critical Path of the Project?
c)
What is the ES for Activity D?
d)
What is the LS for Activity G?
e)
What is the EF for Activity B?
f)
What is the LF for Activity H?
g)
What is the float for Activity I?
Problem 2
The following data were obtained from a project to build a pressure vessel:
Activity
Duration
Predecessors
A
6 weeks
---
B
6 weeks
---
C
5 weeks
B
D
4 weeks
A, C
E
5 weeks
B
F
7 weeks
D, E, G
G
4 weeks
B
H
8 weeks
F
I
5 weeks
G
J
3 week
I
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
Calculate the scheduled completion time.
b)
Identify the critical path
c)
What is the slack time (float) for activity A?
d)
What is the slack time (float) for activity D?
e) What is the slack time (float) for activity E?
f) What is the slack time (float) for activity G?
Problem 3
The following data were obtained from a project to design a new software package:
Activity
Duration
Predecessors
A
5 Days
---
B
8 Days
---
C
6 Days
A
D
4 Days
C, B
E
5 Days
A
F
4 Days
D, E, G
G
4 Days
B, C
H
3 Day
G
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
Calculate the scheduled completion time.
b)
Identify the critical path(s)
c)
What is the slack time (float) for activity B?
d)
What is the slack time (float) for activity D?
e) What is the slack time (float) for activity E?
f) What is the slack time (float) for activity G?
Problem 4
The following data were obtained from an in-house MIS project:
Activity
Duration
Predecessors
A
5 Days
---
B
8 Days
---
C
5 Days
A
D
4 Days
B
E
5 Days
B
F
3 Day
C, D
G
7 Days
C, D
H
6 Days
E, F, G
I
9 Days
E, F
Step 1: Construct a network diagram for the project.
Step 2: Answer the following questions:
a)
Calculate the scheduled completion time.
b)
Identify the critical path
c)
What is the slack time (float) for activity A?
d)
What is the slack time (float) for activity D?
e)
What is the slack time (float) for activity E?
f)
What is the slack time (float) for activity F?
PROBLEM 5
Use the network diagram below and the additional information provided to answer the corresponding questions.
a) Give the crash cost per day per activity.
b) Which activities should be crash.
http://www.freedigitalphotos.net/images/gallery-thumbnails.php?id=50143103253525199427035558
.
DB Response 1I agree with the decision to search the house. Ther.docxedwardmarivel
DB Response 1
I agree with the decision to search the house. There was reasonable suspicion to believe the fugitive could have been in the home. The homeowner not only consented to the search of the house but requested it for her safety. Complacency kills. In this situation, the officer is very regretful in his decision to conduct a complacent search of the home, and luckily nobody was killed.
My department does not have body cameras, but I still conduct business as if somebody is recording me. We live in a generation of surveillance. You never know when there are hidden cameras, a camera on a business you did not notice, or a cell phone recording from the top floor of a building. We hire police officers with high amounts of integrity because the definition of integrity is doing the right thing even when nobody is looking. I would be lying if I said my grandmother would approve of everything I do on the job. I am most guilty of foul language and it is something that I am working on not doing that. However, I can emphatically say I work with integrity and honesty without a doubt.
I think setting limits on tolerable behavior in regards to sexual and general harassment is appropriate; however, there are too many situations to make a policy for every behavior one could find inappropriate. When it comes to using force again every situation is different but there should be a pretty well laid out policy at departments for when and how an officer should use a certain amount of force. Officers should be trained on de-escalation tactics and alternatives to using force. Tactical training should include strategies to create time, space, and distance, to reduce the likelihood that force will be necessary and should occur in realistic conditions appropriate to the department’s location (U.S. Commission On Civil Rights, 2018).
Philippians 2 verses 3 – 8 is a pretty straightforward verse with great leadership lessons. Be humble, put others before yourself, and be a servant leader.
From the very beginning of any interrogation, the accused has constitutional rights not to speak to police and also to have an attorney present. The Eighth Amendment to the Constitution prohibits cruel and unusual punishments placed upon any persons in the U.S. With these rights in mind I will only go as far as the Constitution allows when interrogating this suspect even if the suspect admits where the child is if the admission was coerced that admission could get thrown out of court. I would never compromise the investigation. There are other ways to find the abducted girl through detective work than just interrogating the suspect. The cost of illegal interrogations is documented in the number of lost prosecutions. Literally, thousands of cases across the country have had to be dismissed because prosecutors could not trust that the evidence provided by police officers was legitimate or the officer had lost credibility as a witness in all cases because of his or her wrongdoing (P.
DB Response prompt ZAKChapter 7, Q1.Customers are expecting.docxedwardmarivel
DB Response prompt ZAK
Chapter 7, Q1.
Customers are expecting more from their service providers. Rather than traditionally accepting boilerplate offerings from service providers, customers desire that service providers cater to their requests. Organizations providing services must keep up with the customer’s demand or risk losing business to others who will. Many service providers have been adopting lean principles to accommodate the needs of their customers in successful attempts to decrease waste, increase efficiency, improve customer service and satisfaction (Daft, 2016, p. 275). From online music providers, customers expect music tracks personalized for their tastes. From airlines, customers can expect preflight seat and meal selections. Amazon.com provides custom personalization to a customers’ home pages by placing personally directed advertisements and products which the customer is more likely to order from the company. Amazon book recommendations are personalized to the specific customer and are provided based upon previous books read. With customers expecting customized and catered experiences, companies need to keep up with this demand and embrace mass customization in order to obtain and retain customers.
Chapter 7, Q2.
While many facets of businesses may involve craft technology, it is still important for business schools to teach management. Some businesses which only expect their leaders to gain knowledge and expertise from experience, may be creating a bureaucratic and restricted model for their business. Companies which rely only on internal training for their leaders can miss opportunities from potential leaders coming in from the outside. Business schools which teach management can provide potential leaders with a foundation to draw from. Teaching management can expose students to issues and opportunities experienced by others, not just ones restricted to one specific company. Teaching management from a textbook is just one method of conveying information. Just as one would not necessarily be proficient in piloting a boat from reading a book, a textbook about doing so would provide the student with underlying concepts which could dramatically increase the success of the student when they move to an actual boat. This textbook based training would be further enhanced with some practical experience.
Chapter 8, Q1.
Technology has progressed allowing real time instant messaging and virtual meetings. High level managers can indeed expect technology to allow them to do their jobs with little face-to-face communication, but they should question if that is something they really want to do. There are currently methods available which could be used effectively to communicate with subordinates, employees and stockholders, such as recorded feeds which would be able to reach every associated individual. These however may not provide a sense of personalization from the managers. Leaders in an organization may resort to using tec.
DB Topic of Discussion Information-related CapabilitiesAnalyze .docxedwardmarivel
DB Topic of Discussion: Information-related Capabilities
Analyze 2 of the 14 information-related capabilities and explain how the joint force can use these capabilities to affect the three dimensions of the information environment. Give examples of real-world or life events for the capabilities and how can you use these concepts as a CSM/SGM.
Consumer Brand Metrics Q3 2015
Eater Archetypes:
Brand usage and preferences by consumer segment
The restaurant industry has long relied on demographic factors to
identify and prioritize consumer groups. For example, many
brands currently obsess over attracting Millennials—some
without pausing to consider the variations among consumers
within this demographic cohort. In addition to life stages,
consumer attitudes about health, value, convenience and the
overall role of foodservice in their lives drive significant
differences in preferences and behavior.
With these distinctions in mind, we have updated the Consumer
Brand Metrics (CBM) survey with questions that allow us to
segment consumers into one of seven Eater Archetypes. Each
segment has a distinct psychographic profile, which is outlined in
our recent Consumer Foodservice Landscape. Accordingly, their
patronage of the segments and brands tracked in CBM varies.
This paper explores some differences we can discern after the
initial quarterly results, including the archetypes’ segment usage,
brand patronage and occasion dynamics. Examining CBM data by
Eater Archetype reveals nuances that complement a demographic
profile of a chain’s guests.
By Colleen Rothman, Manager, Consumer Insights
To learn more about the Consumer Brand Metrics program or to sign up for future
Spotlight by Consumer Brand Metrics white papers, please contact Bart Henyan,
Senior Marketing Manager, at [email protected]
Consumer Brand Metrics Q3 2015
Segmenting consumers by psychographic factors, rather than
just demographic characteristics, can lead to a better
understanding of the consumers that matter to your brand and
how to appeal to them.
Key Takeaways
Busy Balancers and Functional Eaters drive usage across
restaurants and convenience stores. Full-service restaurant
(FSR) operators may also consider targeting Foodservice
Hobbyists and Affluent Socializers, as these archetypes
comprise more than a quarter of FSR patrons, on average.
How does foodservice segment usage vary by archetype?
Driven by unique needs and motivations, Eater Archetypes
gravitate to a wide variety of brands. For example,
McDonald’s, Burger King and Whataburger each
disproportionately attract unique archetypes (Habitual
Matures, Bargain Hunters and Functional Eaters,
respectively).
Which chains do each archetype visit most frequently?
Archetypes that patronize the same restaurant may not use
the brand the same way. For example, usage varies by
daypart, with afternoon snacks skewing to Busy Balancers
and late-night meals d.
DB Instructions Each reply must be 250–300 words with a minim.docxedwardmarivel
DB Instructions:
Each reply must be 250–300 words with a minimum of 1 scholarly source. The scholarly source used for your thread and response should be in addition to the class textbooks.
Reference Book: Young, M. (2017). Learning the Art of Helping. Boston, MA: Pearson. ISBN: 9780134165783.
.
DB Defining White Collar CrimeHow would you define white co.docxedwardmarivel
DB: Defining White Collar Crime
How would you define white collar crime? What are the advantages and disadvantages of the various terms, such as “white collar crime,” “crimes of the powerful,” “elite deviance,” etc., used to describe the type of crimes.
300 Word Minimum
.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
standard over the past decade, with some studies stating that
as much as 60% of network traffic uses TLS [1]. Unfortunately,
malware has also adopted TLS to secure its communication. In
our dataset, ∼10% of the malware samples use TLS. This trend
makes threat detection more difficult because it renders the
use of deep packet inspection (DPI) ineffective. It is important
to determine whether encrypted network traffic is benign or
malicious, and do so in a way that preserves the integrity of
the encryption. While 10% of malware samples using TLS may
seem low, we expect this number to increase as the overall
level of encryption in network traffic increases.
Along these lines, we have seen a slight, but statistically
significant, increase in malicious, encrypted traffic over the
past 12 months.
To further motivate the need for a study exposing malware's
use of TLS, we considered the limitations of a pattern-matching
approach when faced with TLS, and analyzed a popular community
Intrusion Prevention System (IPS) rule set [32]. As of this
writing, there were 3,437 rules in that set,
3,307 of which inspect packet contents. Only 48 rules were
TLS specific, and of those, only 6 detected malware, using
strings in self-signed certificates. Of the remainder, 19 detect
Heartbleed or other overflow attacks against TLS implemen-
tations, and 23 detect plaintext over ports typically assigned
to TLS. These numbers show that traditional signature-based
techniques have not heavily invested in TLS-specific malware
signatures to date. However, the rules that match certificate
strings hint that it is possible to detect malware through the
passive inspection of TLS. Our goal in this paper is to confirm
and substantiate this idea by identifying data features and
illustrating methodologies that allow for the creation of rules
and machine learning classifiers that can detect malicious,
encrypted network communication. For instance, we identify
features of both the TLS client and server gathered from
unencrypted handshake messages that could be used to create
IPS rules.
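The handshake fields that feed such rules appear in the clear before encryption begins. As an illustration only (not the toolchain used in this study), the following sketch parses the offered protocol version and ordered ciphersuite list out of a raw ClientHello record, following the TLS 1.2 wire layout; the sample record is hand-constructed.

```python
import struct

def parse_client_hello(record: bytes):
    """Extract the offered version and ordered ciphersuite list from a
    TLS record carrying a ClientHello. Minimal sketch, not a full
    TLS parser: extensions and malformed input are not handled."""
    assert record[0] == 0x16, "not a TLS handshake record"
    assert record[5] == 0x01, "not a ClientHello message"
    body = record[9:]                        # skip 5 B record + 4 B handshake header
    version = struct.unpack(">H", body[:2])[0]
    sid_len = body[34]                       # after 2 B version + 32 B random
    off = 35 + sid_len
    cs_len = struct.unpack(">H", body[off:off + 2])[0]
    off += 2
    suites = [struct.unpack(">H", body[off + i:off + i + 2])[0]
              for i in range(0, cs_len, 2)]
    return version, suites

# Hand-built minimal ClientHello offering two suites (hypothetical client).
suites = b"\x00\x2f\x00\x35"                 # TLS_RSA_WITH_AES_{128,256}_CBC_SHA
body = (b"\x03\x03" + b"\x00" * 32 + b"\x00"           # version, random, empty sid
        + struct.pack(">H", len(suites)) + suites
        + b"\x01\x00")                                  # null compression only
hs = b"\x01" + len(body).to_bytes(3, "big") + body
record = b"\x16\x03\x01" + struct.pack(">H", len(hs)) + hs

version, offered = parse_client_hello(record)
print(hex(version), [hex(s) for s in offered])
```

An ordered offer list extracted this way is exactly the kind of client fingerprint a string-matching IPS rule could key on.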
In this paper, we provide a comprehensive study of mal-
ware’s use of TLS by observing the unencrypted TLS hand-
shake messages. We give a high-level overview of malware’s
use of TLS compared to what we have observed on an
enterprise network. Enterprise traffic typically uses up-to-date
cryptographic parameters that are indicative of up-to-date TLS
libraries. On the other hand, malware typically uses older and
weaker cryptographic parameters. Malware’s usage of TLS is
distinct from enterprise traffic, and, for most families,
this fact can be leveraged to accurately classify malicious
traffic patterns. We examine these differences from both a TLS
client and a TLS server perspective.
In addition to our in-depth technical analysis, it is inter-
esting to note the general tone that malware authors have
towards encryption. There is an FAQ section in the open-
sourced Zeus/Zbot malware [3] where the following question
and answer occur (content left as is):
Question: Why traffic is encrypted with symmetric
encryption method (RC4), but not asymmetric (RSA)?
Answer: Because, in the use of sophisticated algorithms
it makes no sense, encryption only needs to hide traffic.
In the current privacy climate, this attitude most certainly does
not hold for enterprise network traffic [4], [26]. Again, this
divergence is another tool we can take advantage of to more
accurately classify malicious flows.

arXiv:1607.01639v1 [cs.CR] 6 Jul 2016
When applying machine learning classifiers on a per-family
basis, it is clear that some families/subfamilies are more
difficult to classify. Our goal is not to show optimized machine
learning classifiers, but rather to identify what characteristics
of the specific family make it difficult to classify. For instance,
we can correlate poor classifier performance on encrypted
traffic patterns with one family’s use of strong [33] and varied
cryptography. We also examine additional features extracted
from unencrypted TLS handshake messages that significantly
increase the performance of the classifiers. In general, we have
found this approach to be fruitful: identify weaknesses in the
features used to represent a flow on a per-family basis, and then
augment that representation with more informative features.
Finally, we show how we can perform family attribution
given only network based data. This problem is positioned as a
multi-class classification problem where each malware family
has its own label. We identify families who use identical TLS
parameters, but can still be accurately classified because their
traffic patterns with respect to other flow-based features are
distinct. We also identify subfamilies of malware that cannot be
distinguished from one another with only their network data.
We are able to achieve an accuracy of 90.3% for the family
attribution problem when restricted to a single, encrypted flow,
and an accuracy of 93.2% when we make use of all encrypted
flows within a 5-minute window.
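To make the multi-class framing concrete, the toy sketch below trains a nearest-centroid classifier with one label per family over two illustrative flow features. It is a deliberately simple stand-in for the classifiers evaluated in this study, and the feature values are invented.

```python
from collections import defaultdict
import math

def train_centroids(samples):
    """One centroid per malware family over flow-feature vectors:
    the simplest possible multi-class attribution model."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for feats, family in samples:
        if sums[family] is None:
            sums[family] = [0.0] * len(feats)
        sums[family] = [s + f for s, f in zip(sums[family], feats)]
        counts[family] += 1
    return {fam: [s / counts[fam] for s in vec] for fam, vec in sums.items()}

def attribute(centroids, feats):
    """Assign a flow to the family whose centroid is nearest."""
    return min(centroids, key=lambda fam: math.dist(centroids[fam], feats))

# Toy features, e.g. (mean packet length, number of ciphersuites offered).
train = [([120, 12], "Bergat"), ([130, 12], "Bergat"),
         ([900, 28], "Zbot"), ([880, 30], "Zbot")]
cents = train_centroids(train)
print(attribute(cents, [890, 29]))   # nearest to the Zbot centroid
```

Aggregating all flows in a 5-minute window amounts to attributing a set of such feature vectors at once, which is why accuracy improves over the single-flow case.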
We use a commercial sandbox environment to collect the
first five minutes of a malware sample’s network activity.
We collected tens of thousands of unique malware samples
and hundreds of thousands of malicious, encrypted flows from
these samples. We collected millions of TLS encrypted flows
from an enterprise network to compare against the malware
data. We used an open source project to collect the data
and transform it to a JSON format that contained the typical
network 5-tuple, the sequence of packet lengths and inter-
arrival times, the byte distribution, and the unencrypted TLS
handshake information. All of the analysis done in this paper
uses only network data, and does not assume an endpoint
presence.
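A record of this shape can be sketched as below; the field names are illustrative and do not claim to match the open source tool's exact schema.

```python
import json
from collections import Counter

def flow_record(five_tuple, packets, tls_meta):
    """Assemble a JSON-serializable record with the features described
    above: the network 5-tuple, packet lengths, inter-arrival times,
    the byte distribution, and unencrypted TLS handshake metadata.
    packets is a list of (timestamp_seconds, payload_bytes)."""
    times = [t for t, _ in packets]
    payloads = [p for _, p in packets]
    counts = Counter(b for p in payloads for b in p)
    return {
        "sa": five_tuple[0], "da": five_tuple[1],
        "sp": five_tuple[2], "dp": five_tuple[3], "pr": five_tuple[4],
        "packet_lengths": [len(p) for p in payloads],
        "inter_arrival_ms": [round((b - a) * 1000, 3)
                             for a, b in zip(times, times[1:])],
        "byte_distribution": [counts.get(v, 0) for v in range(256)],
        "tls": tls_meta,
    }

rec = flow_record(("10.0.0.5", "93.184.216.34", 51234, 443, "tcp"),
                  [(0.000, b"\x16\x03\x01"), (0.045, b"\x16\x03\x03\x02")],
                  {"ciphersuites": ["0x002f"], "selected": "0x002f"})
print(json.dumps(rec)[:80])
```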
The remainder of the paper is organized as follows: Section
II outlines some basic assumptions we make with respect to
the data and our methodology, and Section III reviews how
we obtained our data, specifies the datasets we use for each
experiment, and what features we use to classify the network
flows. Section IV gives an overview of how malware’s usage
of TLS differs from that of an enterprise network from both
the perspective of a TLS client and a TLS server. Section V
shows which families are difficult to classify from a network
flow point-of-view, and explains why this is the case, and
Section VI gives results showing how we can attribute a flow
to a particular family. Section VII reviews previous and related
work, Section VIII outlines some limitations of our approach,
and finally, we conclude in Section IX.
II. PRELIMINARIES AND ASSUMPTIONS
Our primary concern in this paper is to categorize and
classify malicious, TLS encrypted flows. While we do use the
serverHello and certificate messages to highlight
Port    Percentage of TLS Flows
443     98.4%
9001     1.2%
80       0.1%
9101     0.1%
9002     0.1%
TABLE I: Based on malware data collected between August
2015 and May 2016, we investigated which ports malware used
the most for TLS-encrypted communication.
some interesting features about the servers that the malware
samples are connecting to, our main focus is client oriented.
The classification algorithms we develop are heavily dependent
on client-based features, which allows our algorithms to cor-
rectly classify a malicious agent connecting to google.com
versus a typical enterprise agent connecting to google.com,
i.e., we can leverage the client’s cryptographic parameters to
differentiate these two events. For this reason, we do not filter
the malware’s TLS traffic to only include command and control
flows, but also allow other types of TLS-encrypted traffic such
as click-fraud.
In this paper, we focus on TLS encrypted flows over
port 443 to make the comparisons between enterprise TLS
and malicious TLS as unbiased as possible. To further
motivate this choice, Table I lists the 5 most used ports for
TLS by the malware samples collected between August 2015
and May 2016. To determine if a flow was TLS, we used
deep packet inspection and a custom signature based on the
TLS versions and message types of the clientHello and
serverHello messages. In total, we found 229,364 TLS
flows across 203 unique ports, and port 443 was by far the
most common port for malicious TLS. Although malware used a
wide diversity of ports, the non-standard ports were each
relatively uncommon.
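A port-independent TLS check of this kind can be keyed on the first handshake bytes of a payload. The sketch below is an illustrative stand-in for the custom signature described above, not its actual implementation.

```python
def looks_like_tls_hello(payload: bytes, hello_type: int) -> bool:
    """Port-independent test that a TCP payload begins with a TLS
    handshake record carrying a ClientHello (0x01) or ServerHello
    (0x02), keyed on record type, version bytes, and handshake
    message type."""
    return (len(payload) >= 6
            and payload[0] == 0x16                      # handshake record type
            and payload[1] == 0x03                      # SSL3/TLS major version
            and payload[2] in (0x00, 0x01, 0x02, 0x03)  # minor: SSLv3-TLS1.2
            and payload[5] == hello_type)               # hello message type

print(looks_like_tls_hello(b"\x16\x03\x01\x00\x2a\x01", 0x01))  # True
print(looks_like_tls_hello(b"GET / HTTP/1.1\r\n", 0x01))        # False
```

Applying a check like this to every flow, regardless of destination port, is what surfaces TLS on the 203 unique ports reported above.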
Given that our non-malware data was collected on an
enterprise network, it naturally follows that the categorization
and classification results presented in this paper are most
applicable to the enterprise setting. We do not claim that these
results hold for the general class of networks, e.g., service
provider data. That being said, we do believe that securing
enterprise networks is an important use case and that the
conclusions presented in this paper offer enterprise network
operators novel and valuable results.
The enterprise network data used in this paper was initially
filtered using a well known IP blacklist [10]. This removed
∼0.05% of the initial traffic. After this filtering stage, we take
the data “as-is”. We are aware that there is most likely more
malicious traffic in this dataset, but for reasons of
practicality we accept this as a baseline assumption.
III. DATA
The data for this paper was collected from a commer-
cial sandbox environment where users can submit suspicious
executables. Each submitted sample is allowed to run for 5
minutes. The full packet capture is collected and stored for
each sample. Due to constraints of the sandbox environment,
Malware Family   Unique Samples   Encrypted Flows
Bergat              192      332
Deshacop             69      129
Dridex               38      103
Dynamer             118      372
Kazy                228    1,152
Parite              111      275
Razy                117      564
Sality              612    1,200
Skeeyah              81      218
Symmi               494    2,618
Tescrypt            137      205
Toga                156      404
Upatre              377      891
Virlock           1,208   12,847
Virtob              115      511
Yakes               100      337
Zbot              1,291    2,902
Zusy                179      733
Total             5,623   25,793
TABLE II: Summary of the malicious families used in our
analysis. We collected 18 malicious families, 5,623 malicious
samples, and 25,793 encrypted flows that successfully
negotiated the TLS handshake and sent application data.
all network traffic observed in the sandbox is considered to be
that of the originally submitted sample. For instance, if sample
A downloads and installs B and C, then the traffic generated
from B and C would be considered A’s.
This method of data collection is straightforward, and while
it ignores some details about what is occurring on the endpoint,
it is consistent with our goal of understanding each sample
based solely on its network communications. Some biases were
introduced with this approach. First, to reduce the number of
false positives, we only considered samples that were known
bad. In this setting, known bad means hitting on four or more
antivirus convictions from unique vendors in VirusTotal [2].
Second, due to hardware constraints, the samples are only
allowed to run for 5 minutes in a Windows XP-based virtual
machine. Any encrypted network traffic that happens after this
initial 5 minute window will not be captured. Similarly, any
samples that are not compatible with Windows XP will not
run in this environment.
The enterprise data was collected from an enterprise net-
work with ∼500 active users and ∼4,000 unique IP addresses.
The majority of the machines on this network run Windows
7, with the second most popular operating system being OS X
El Capitan.
A. Dataset and Sample Selection
The malware traffic used in this paper was collected from
August 2015 to May 2016, and the enterprise traffic was
collected during a 4 day period in May 2016 and a 4 day period
in June 2016. In this work, we performed several experiments
on different subsets of this data.
We first analyze the differences between the TLS pa-
rameters typically seen on an enterprise network versus the
TLS parameters used by the general malware population. To
proceed, we first removed all of the TLS flows that offered
an ordered ciphersuite list that matched a list found in the
default Windows XP SChannel implementation [23]. This
was done to help ensure that the TLS clients we observed
were representative of the malware’s behavior and not that of
the TLS library provided by the underlying operating system.
This removed ∼40% of the malicious TLS flows and ∼0.4% of
the enterprise TLS flows. After this filtering stage, we used all
of the TLS flows in our dataset. From August 2015 until May
2016, we collected 133,744 TLS flows initiated by malicious
programs. During the 4 day periods in May and June 2016,
we collected 1,500,005 TLS flows from an enterprise network.
All of these TLS flows successfully negotiated the full TLS
handshake and sent application data.
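The filtering stage described above can be sketched as follows. This is a minimal illustration: the hex codes in the known-list set are hypothetical placeholders, not the actual Windows XP SChannel offer lists enumerated in [23].

```python
# Illustrative sketch of the SChannel-based filtering step. The ciphersuite
# hex codes below are placeholders, not the real Windows XP offer lists.
KNOWN_SCHANNEL_LISTS = {
    ("0x0004", "0x0005", "0x000a"),  # hypothetical XP SChannel ordered offer
}

def filter_schannel_flows(flows):
    """Drop flows whose ordered ciphersuite offer matches a known SChannel list."""
    return [f for f in flows
            if tuple(f["offered_ciphersuites"]) not in KNOWN_SCHANNEL_LISTS]

flows = [
    {"id": 1, "offered_ciphersuites": ["0x0004", "0x0005", "0x000a"]},
    {"id": 2, "offered_ciphersuites": ["0x002f", "0x0035"]},
]
kept = filter_schannel_flows(flows)  # only flow 2 survives
```

Note that the match is on the exact ordered list, so a client that offers the same ciphersuites in a different order is retained.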
To analyze the differences between the TLS parameters
used by different malware families, we used the malware sam-
ples from October 2015 to May 2016 that had an identifiable
family name. Table II gives a summary of the number of
samples and flows for each malware family. The family name
was generated by a majority vote from the signatures provided
by VirusTotal [2]. Malware samples without a clear family
name were discarded, i.e., any sample without at least four
different antivirus programs using the same name (ignoring
common names such as Trojan). Family names with less
than 100 flows were not used. This process pruned our original
set of 20,548 samples that used TLS to 5,623 unique samples
across 18 families. It is difficult to determine the family, if any,
associated with a malware sample, even with the information
provided through dynamic analysis in a sandbox setting. These
samples generated 25,793 TLS encrypted flows that success-
fully negotiated the TLS handshake and sent application data.
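The majority-vote labeling procedure can be sketched as below. The generic-name list and the antivirus names are illustrative assumptions; the threshold of four agreeing engines follows the text.

```python
# Hedged sketch of the family-labeling rule: a sample keeps a family name only
# if at least four antivirus engines agree on the same non-generic name.
# GENERIC_NAMES is an illustrative stand-in for the ignored common names.
from collections import Counter

GENERIC_NAMES = {"trojan", "malware", "generic"}

def assign_family(av_names, min_agreement=4):
    """Return the majority family name, or None if the sample is discarded."""
    votes = Counter(n.lower() for n in av_names
                    if n.lower() not in GENERIC_NAMES)
    if not votes:
        return None
    name, count = votes.most_common(1)[0]
    return name if count >= min_agreement else None

label = assign_family(["Zbot", "Zbot", "zbot", "Zbot", "Trojan"])  # "zbot"
unlabeled = assign_family(["Zbot", "Razy", "Trojan"])              # None
```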
In this paper, we also make use of machine learning
classifiers in three experiments. The first is to demonstrate the
value of the additional TLS features through 10-fold cross-
validation. For this experiment, we use all of the malicious
TLS flows collected from August 2015 until May 2016, and a
random subset of the May and June 2016 enterprise network’s
TLS flows. In total, there were 225,740 malicious and 225,000
enterprise flows for this experiment. To account for the bias
that the Windows XP-based sandbox could introduce, we also
present results on a dataset composed of only flows that offered
an ordered ciphersuite list that did not match a list found in
the default Windows XP SChannel implementation: 133,744
malicious and 135,000 enterprise TLS flows.
In the next set of experiments, we analyzed how well a
trained classifier is able to detect the TLS flows generated by
the different malware families. To train the classifier, we used
the same 225,000 enterprise flows as above for the negative
class, and 76,760 malicious TLS flows collected during August
and September 2015 for the positive class. The testing data
consisted of the TLS flows from October 2015 to May 2016
that could be assigned a ground truth family as described
above. Again, Table II gives a summary of the number of
samples and flows for each malware family. While we do
not remove flows that offered an ordered ciphersuite list that
matched a list found in the default Windows XP SChannel
implementation in this experiment, we do make explicit the
families that have this bias.
Finally, to assess the malware family attribution potential of
TLS handshake metadata, we used 10-fold cross-validation and
multi-class classification on the data listed in Table II. Again,
we do not remove samples that offered an ordered ciphersuite
list that matched a list found in the default Windows XP
SChannel implementation in this experiment because all of
the samples would have the same bias.
B. Feature Extraction
To extract the data features of interest, we wrote software tools
that operate on live traffic or packet capture files. The open
source project will export
all of the data in a convenient JSON format. The machine
learning classifiers are built using traditional flow features,
traditional “side-channel” features, and features collected from
the unencrypted TLS handshake messages.
1) Flow Metadata: The first set of features investigated are
modeled around traditional flow data that is typically collected
in devices configured to export IPFIX/NetFlow. These features
include the number of inbound bytes, outbound bytes, inbound
packets, outbound packets; the source and destination ports;
and the total duration of the flow in seconds. These features
were normalized to have zero mean and unit variance.
2) Sequence of Packet Lengths and Times: The sequence
of packet lengths and packet inter-arrival times (SPLT) has
been well studied [25], [39]. In our open source imple-
mentation, the SPLT elements are collected for the first 50
packets of a flow. Zero-length payloads (such as ACKs) and
retransmissions are ignored.
A Markov chain representation is used to model the SPLT
data. For both the lengths and times, the values are discretized
into equally sized bins, e.g., for the length data, 150 byte bins
are used where any packet size in the range [0,150) will go
into the first bin, any packet size in the range [150,300) will go
into the second bin, etc. A matrix A is then constructed where
each entry, A[i,j], counts the number of transitions between
the i’th and j’th bin. Finally, the rows of A are normalized to
ensure a proper Markov chain. The entries of A are then used
as features to the machine learning algorithms.
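A minimal sketch of this construction is given below. The 150-byte bin size follows the text; the number of bins is an assumption chosen to cover MTU-sized packets.

```python
# Sketch of the SPLT Markov chain features: discretize packet lengths into
# 150-byte bins, count bin-to-bin transitions, and row-normalize the matrix.
import numpy as np

BIN_SIZE = 150   # bytes per bin, as in the text
NUM_BINS = 10    # assumption: lengths up to ~1500 bytes

def splt_markov_features(packet_lengths):
    """Return the flattened, row-normalized transition matrix A."""
    bins = [min(l // BIN_SIZE, NUM_BINS - 1) for l in packet_lengths]
    A = np.zeros((NUM_BINS, NUM_BINS))
    for i, j in zip(bins, bins[1:]):
        A[i, j] += 1                      # transition from bin i to bin j
    row_sums = A.sum(axis=1, keepdims=True)
    A = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
    return A.flatten()                    # entries of A are the features

feats = splt_markov_features([100, 200, 160, 700])
```

The same binning-and-transition scheme applies to the inter-arrival times, with time bins in place of length bins.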
3) Byte Distribution: The byte distribution is a length-256
array that keeps a count of each byte value encountered in
the packet payloads of the flow. The
byte value probabilities can be easily computed by dividing the
byte distribution counts by the total number of bytes found in
the packets’ payloads. The 256 byte distribution probabilities
are used as features by the machine learning algorithms.
The full byte distribution provides a lot of information about
the encoding of the data. Additionally, the byte distribution
can give information about the header-to-payload ratios, the
composition of the application headers, and if any poorly
implemented padding is added.
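The byte-distribution computation described above can be sketched as:

```python
# Sketch of the byte-distribution feature: a 256-entry count array over all
# payload bytes of a flow, divided by the total byte count to give the
# per-byte-value probabilities used as classifier features.
import numpy as np

def byte_distribution(payloads):
    counts = np.zeros(256)
    for payload in payloads:
        for b in payload:            # each b is an int in [0, 255]
            counts[b] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

probs = byte_distribution([b"\x00\x00", b"\xff\xff"])
```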
4) Unencrypted TLS Header Information: TLS (Transport
Layer Security) is a cryptographic protocol that provides
privacy for applications. TLS is typically layered beneath
application protocols such as HTTP for web browsing or SMTP
for email. HTTPS, i.e., HTTP over TLS, is the most popular
way of securing communication between a web server and
client, and is supported by most major web servers. HTTPS
typically uses port 443.
The TLS version, the ordered list of offered ciphersuites,
and the list of supported TLS extensions are collected from
the client hello message. The selected ciphersuite and
selected TLS extensions are collected from the server
hello message. The server’s certificate is collected from the
certificate message. The client’s public key length is
collected from the client key exchange message, and
is the length of the RSA ciphertext or DH/ECDH public key,
depending on the ciphersuite. Similar to the sequence of packet
lengths and times, the sequence of record lengths, times, and
types is collected from TLS sessions.
In our classification algorithms, the list of offered cipher-
suites, the list of advertised extensions, and the client’s public
key length were used. A total of 176 offered ciphersuite hex
codes were observed in our full dataset, and a binary vector of length
176 was created where a one is assigned to each ciphersuite
in the list of offered ciphersuites. Similarly, we observed 21
unique extensions, and a binary vector of length 21 was created
where a one is assigned to each extension in the list of
advertised extensions. Finally, the client’s public key length
was represented as a single integer value. In total, 198 TLS
client-based features were used in the classification algorithms.
In some experiments, we use an additional TLS server-based
binary feature: whether the certificate was self-signed or not.
IV. MALWARE FAMILIES AND TLS
Although malware uses TLS to secure its communication,
our data suggests that for the majority of the families we
analyzed, malware’s use of TLS is quite distinct from that
of the enterprise network’s traffic. In this section, we highlight
these differences from the perspective of the TLS client and
also from the perspective of the TLS server.
For the comparisons between general malware and enter-
prise traffic, we first removed all of the TLS flows that offered
an ordered ciphersuite list that matched a list found in the
default Windows XP SChannel implementation [22], [29]
from our full dataset. We found that ∼40% of TLS flows
from malware samples offered this list. To help ensure that
our analysis was capturing trends in the malware’s use of
TLS, and not that of the underlying operating system, we
removed all of these flows. After this filtering stage, we used
all of the TLS flows in our dataset. From August 2015 to May
2016, we collected 133,744 TLS flows initiated by malicious
programs that successfully negotiated the full TLS handshake
and sent application data. In May and June 2016, we collected
1,500,005 TLS flows from an enterprise network using the
same criteria.
The malware data collection process can introduce biases
in terms of malware family representation, and the conclusions
that can be made from the TLS parameters collected. To
account for this, we also analyze the TLS clients that malware
uses and the TLS servers that malware connects to on a per-
family basis. In this analysis, we highlight the families that
use the default Windows TLS library, and the families which
include their own TLS client. The data for this experiment is
listed in Table II.
Fig. 1: Malware’s use of TLS versus that of enterprise network
traffic relative to the TLS client features. Some values and the
full ciphersuite names were omitted for clarity of presentation.
Ciphersuites and extensions are represented as hex codes, which
are given in full in Appendix A.
A. TLS Clients
1) Malware versus Enterprise: Figure 1 illustrates the
differences between the malware’s and the enterprise’s us-
age of TLS with respect to the TLS clients after filter-
ing typical Windows XP ciphersuite lists. Nearly 100%
of the enterprise TLS sessions offered the 0x002f
(TLS_RSA_WITH_AES_128_CBC_SHA) ciphersuite and the
0x0035 (TLS_RSA_WITH_AES_256_CBC_SHA) cipher-
suite. On the other hand, nearly 100% of the malicious TLS
sessions observed offered:
• 0x000a
(TLS_RSA_WITH_3DES_EDE_CBC_SHA)
• 0x0005 (TLS_RSA_WITH_RC4_128_SHA)
• 0x0004 (TLS_RSA_WITH_RC4_128_MD5)
These three ciphersuites are considered weak, and although the
enterprise traffic we observed does offer these ciphersuites, it
does not offer them with the same frequency that the malicious
traffic does.
The differences in malware and enterprise’s TLS client
hello messages become more evident when the advertised
TLS extensions are considered. We observed a much greater
diversity in the TLS extensions that enterprise clients adver-
tised. Almost half of the enterprise clients would advertise up
to 9 extensions, but the malicious clients would only consis-
tently advertise one: 0x000d (signature_algorithms),
an RFC MUST in most circumstances [13]. The following four
extensions were observed in ∼50% of the enterprise traffic and
rarely observed in the malicious traffic:
• 0x0005 (status_request)
• 0x0010 (supported_groups)
• 0x3374 (next_protocol_negotiation)
• 0x0017 (extended_master_secret)
The client’s public key length, taken from the client
key exchange message, has discriminatory power. As il-
lustrated in Figure 1, most of the enterprise traffic used a
512-bit (ECDHE_RSA) public key. In contrast, malware almost
exclusively used a 2048-bit (DHE_RSA) public key.
Malware Family | Number of Flows | Most Seen Client | Distinct Clients | Most Seen Extensions | Most Seen Public Key
Virtob | 511 | IE 8* | 4 | None | 2048-bit (RSA)
Yakes | 337 | IE 8* | 3 | None | 2048-bit (RSA)
Zbot | 2,902 | IE 8* | 12 | None | 2048-bit (RSA)
Zusy | 733 | IE 8* | 7 | None | 2048-bit (RSA)
TABLE III: The most popular TLS client configurations for the
18 malicious families. The TLS client was estimated using TLS
fingerprinting techniques [29]. For TLS extensions, in the case
of a tie, all equally probable extensions are listed. (*) indicates
the fingerprint of the TLS client provided by the underlying
sandbox operating system.
Finally, we mapped the TLS client parameters to well
known client programs that use specific TLS libraries and
configurations [29]. This information could be spoofed, but we
feel that this is still a valuable and compact way to represent
a client. As shown in Figure 1, the most popular clients for
malware and enterprise TLS connections are quite distinct. In
the enterprise setting, we found that the four most common
client configurations resembled the most recent releases of
the four most popular browsers: Firefox 47, Chrome 51,
Internet Explorer 11, and Safari 9. On the other
hand, malware most frequently used TLS client parameters
that matched those of Opera 12, Firefox 46, and Tor
0.2.x.
2) Malware Families: Table III gives the most popular
TLS client parameters for each of the 18 malware families we
had access to. The most popular TLS client was Internet
Explorer 8, which was used most frequently by 10 of
the 18 families. These families and client values are listed
for completeness, but should more accurately be read as
utilizing the TLS library provided by the underlying Windows
environment.
The Tor client and browser were very popular among the
malware families, being the most popular with Deshacop,
Dynamer, Razy, Skeeyah, and Toga. Dynamer, Skeeyah, and
Symmi all used a 512-bit (ECDHE_RSA) public key as opposed
to the most popular public key: 2048-bit (RSA), which is most
likely an artifact of the underlying Windows environment.
Table III also lists the number of distinct ciphersuite offer
vectors observed for each malware family. In this context, a
client is taken to be unique if it has a different set of offered ci-
phersuites and advertised extensions. Some families have very
few unique clients, e.g., Bergat. On the other hand, Sality has
a large number of distinct ciphersuite offer vectors. And while
Sality’s most used TLS client offered parameters similar to
Internet Explorer 8, it had hundreds of other unique
combinations of offered ciphersuites and advertised extensions.
B. TLS Servers
1) Malware versus Enterprise: Figure 2 illustrates the
differences between the servers connected to by the malware
and the enterprise TLS clients after filtering clients that used
typical Windows XP ciphersuite lists. The filtering was done
for the server statistics because those clients have a significant
impact on what is sent in the server hello message.
As seen in Figure 2, the selected ciphersuites of the
server hello messages are sharply divided for the major-
ity of enterprise and malicious TLS sessions. The following
four ciphersuites were selected by ∼90% of the servers that
malware communicated with:
• 0x000a
(TLS_RSA_WITH_3DES_EDE_CBC_SHA)
Fig. 2: Malware’s use of TLS versus that of enterprise network
traffic relative to the TLS server features. Some values and the
full ciphersuite names were omitted for clarity of presentation.
Ciphersuites and extensions are represented as hex codes, which
are given in full in Appendix A.
• 0x0004 (TLS_RSA_WITH_RC4_128_MD5)
• 0x006b
(TLS_DHE_RSA_WITH_AES_256_CBC_SHA256)
• 0x0005 (TLS_RSA_WITH_RC4_128_SHA)
These ciphersuites were rarely selected by servers that
enterprise hosts communicated with. TLS_RSA_WITH
_RC4_128_MD5 and TLS_RSA_WITH_RC4_128_SHA are
considered weak.
As one would expect given the lack of advertised TLS
extensions by the malware clients, the servers that malware
communicated with rarely selected TLS extensions. On the
other hand, the servers that the enterprise hosts communicated
with had a much greater diversity in the selected TLS exten-
sions with 0xff01 (renegotiation_info) and 0x000b
(ec_point_formats) being the most frequent.
We also analyzed information from the servers’ certifi-
cates. As anticipated, we found that enterprise endpoints most
frequently connected to servers with the following certificate
subjects:
• *.google.com
• api.twitter.com
• *.icloud.com
• *.g.doubleclick.net
• *.facebook.com
This distribution of certificate subjects was very long-tailed.
The certificate subjects of servers that the malware sam-
ples communicated with also had a long tail. These certifi-
cates were mostly composed of subjects that had charac-
teristics of a domain generation algorithm (DGA) [6], e.g.,
www.33mhwt2j.net. Although malware mostly communi-
cated with servers that had suspicious certificate subjects, it is
also clear that malware communicates with many inherently
benign servers, e.g., google.com for connectivity checks
or twitter.com for command and control. The following
certificate subjects were the most frequent for TLS flows
initiated by malware:
• block.io
• *.wpengine.com
• *.criteo.com
Malware Family | Number of Flows | Unique Server IPs | Number of SS Certs | Selected Ciphersuite | Certificate Subject
Bergat | 332 | 12 | 0 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | www.dropbox.com
Deshacop | 129 | 38 | 0 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | *.onion.to
Dridex | 103 | 10 | 89 | TLS_RSA_WITH_AES_128_CBC_SHA | amthonoup.cy
Dynamer | 372 | 155 | 3 | TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 | www.dropbox.com
Kazy | 1,152 | 225 | 52 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | *.onestore.ms
Parite | 275 | 128 | 0 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | *.google.com
Razy | 564 | 118 | 16 | TLS_RSA_WITH_RC4_128_SHA | baidu.com
Sality | 1,200 | 323 | 4 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | vastusdomains.com
Skeeyah | 218 | 90 | 0 | TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 | www.dropbox.com
Symmi | 2,618 | 700 | 22 | TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA | *.criteo.com
Tescrypt | 205 | 26 | 0 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | *.onion.to
Toga | 404 | 138 | 8 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | www.dropbox.com
Upatre | 891 | 37 | 155 | TLS_RSA_WITH_RC4_128_MD5 | *.b7websites.net
Virlock | 12,847 | 1 | 0 | TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 | block.io
Virtob | 511 | 120 | 0 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | *.g.doubleclick.net
Yakes | 337 | 51 | 0 | TLS_RSA_WITH_RC4_128_SHA | baidu.com
Zbot | 2,902 | 269 | 507 | TLS_RSA_WITH_RC4_128_MD5 | tridayacipta.com
Zusy | 733 | 145 | 14 | TLS_RSA_WITH_3DES_EDE_CBC_SHA | *.criteo.com
TABLE IV: TLS server configurations for the servers most
visited by the 18 malicious families. The certificate subject
typically has a long tail, but only the most frequent is reported.
The reported number of self-signed certificates is not necessarily
related to the most popular certificate subject.
• baidu.com
• *.google.com
Because the DGA-like certificate subjects are counted as
unique, they do not show up in this list.
Figure 2 highlights two other interesting features associated
with server certificates: the validity of the certificate in days
and the number of subjectAltName entries. Interestingly,
the high prevalence of connections to block.io, a Bitcoin
wallet, heavily skewed the validity (375 days) and number of
subjectAltName entries (3) for the certificates of servers
that malware connected to.
It is also interesting to note the frequency of TLS servers
using certificates that are self-signed. In the enterprise data,
1,352 out of the 1,500,005 TLS sessions, or ∼0.09%, used
a self-signed certificate. In the malware data, 947 out of the
133,744 TLS sessions, or ∼0.7%, used a self-signed certificate,
which is roughly an order of magnitude more frequent than the
enterprise case.
2) Malware Families: Table IV lists several interesting
statistics about the servers that malware most often connects
to. Some of the malicious families connect to a large number
of unique IP addresses, e.g., Symmi and Dynamer. The family
with the most flows, Virlock, only connects to 1 unique IP
address owned by block.io.
We observed 10 families that made use of self-signed
certificates. ZBot was the most frequent offender, with the
subject of these certificates being tridayacipta.com, a
domain name that has many detections on VirusTotal [2].
Common certificate subjects also allow one to make infer-
ences about the tools that the malware families use and the
functionality that the families support. For instance, Deshacop
and Tescrypt have *.onion.to as the most common certifi-
cate subject, and, as anticipated, both have many samples that
have TLS client configurations that indicate that they use the
Tor Browser. The Tor Browser is the most prevalent
client for Deshacop, and the second most prevalent client
for Tescrypt. Symmi’s most common certificate subject is
*.criteo.com, an ad service. This could indicate Symmi’s
intent to perform click-fraud.
V. CLASSIFYING ENCRYPTED TRAFFIC
We used a logistic regression classifier with an l1 penalty
[20] for all classification results. For the initial binary-class
classification results, we trained four machine learning classi-
fiers using different subsets of data features we collected. The
first classifier used the flow metadata (Meta), the sequence
of packet lengths and inter-arrival times (SPLT), and the
distribution of bytes (BD). The second classifier only used
the TLS information (TLS). The third classifier was trained
using the same features as the first, with the addition of the
TLS client information, specifically, the offered ciphersuites,
advertised extensions, and the client’s public key length. The
fourth classifier was trained with all data, and an additional,
custom feature: whether the server certificate was self-signed
(SS).
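A minimal sketch of this classifier setup is shown below, using scikit-learn's l1-penalized logistic regression. The feature matrix here is synthetic, standing in for the concatenated Meta/SPLT/BD/TLS views; it is an illustration of the model class, not the paper's trained system.

```python
# Sketch of an l1-penalized logistic regression classifier over a synthetic
# feature matrix standing in for the concatenated data views.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # 200 flows, 50 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic benign/malicious labels

clf = LogisticRegression(penalty="l1", solver="liblinear")
clf.fit(X, y)
train_acc = clf.score(X, y)
n_zero = (clf.coef_ == 0).sum()           # l1 drives many weights to exactly 0
```

The l1 penalty produces a sparse weight vector, which is useful here because most of the hundreds of ciphersuite and extension indicators carry little signal for any given classification task.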
A. Malware versus Enterprise
To demonstrate the value of the additional TLS features in
a classification setting, we use 10-fold cross-validation and all
Dataset | All Data: Total Accuracy | All Data: 0.01% FDR | No SChannel: Total Accuracy | No SChannel: 0.01% FDR
Meta+SPLT+BD+TLS+SS | 99.6% | 92.6% | 99.6% | 87.4%
Meta+SPLT+BD+TLS | 99.6% | 92.8% | 99.6% | 87.2%
TLS | 98.2% | 63.8% | 96.7% | 59.1%
Meta+SPLT+BD | 98.9% | 1.3% | 98.5% | 0.9%
TABLE V: Classifier accuracy for different combinations of
data features, showing the overall accuracy and the accuracy
at a 0.01% FDR.
Malware Family | Meta+SPLT+BD | TLS Only | Meta+SPLT+BD+TLS | All+SS
Bergat* 100.0% 100.0% 100.0% 100.0%
Kazy* 98.5% 99.5% 99.8% 100.0%
Parite* 99.3% 97.8% 99.6% 99.6%
Sality* 95.0% 94.1% 97.7% 98.0%
Tescrypt* 89.8% 95.6% 97.6% 97.6%
Upatre* 99.9% 98.7% 100.0% 100.0%
Virtob* 99.2% 98.8% 99.4% 99.4%
Yakes* 88.7% 98.5% 99.7% 99.7%
Zbot* 98.9% 99.6% 99.7% 100.0%
Zusy* 98.6% 88.7% 99.9% 99.9%
Deshacop 93.0% 63.6% 96.1% 96.1%
Dridex 16.5% 68.7% 78.5% 97.9%
Dynamer 95.4% 78.8% 95.7% 96.5%
Razy 91.5% 77.1% 95.9% 96.8%
Skeeyah 95.9% 82.1% 98.6% 98.6%
Symmi 99.1% 92.4% 99.8% 99.8%
Toga 100.0% 100.0% 100.0% 100.0%
Virlock 100.0% 100.0% 100.0% 100.0%
TABLE VI: Classifier accuracy when separated by family.
Families with an (*) offered an ordered ciphersuite list that
matched a list found in the default Windows XP SChannel
implementation. Malware data from August and September 2015
and enterprise data from May and June 2016 were used for
training; the malware samples used for testing were collected
from October 2015 to May 2016. Results using unencrypted TLS
handshake messages are given in addition to results based on
only standard side-channel features. The two baselines are the
first two data columns: side-channel-only and TLS-only.
of the malicious TLS flows collected from August 2015 until
May 2016, and a random subset of the May and June 2016
enterprise network’s TLS flows. In total, there were 225,740
malicious and 225,000 enterprise flows for this experiment.
To account for the bias that the Windows XP-based sandbox
could introduce, we also present results on a dataset composed
of only flows that did not offer an ordered ciphersuite list that
matched a list found in the default Windows XP SChannel
implementation [29]: 133,744 malicious and 135,000 enter-
prise TLS flows.
The 10-fold cross-validation results for the above problem
is shown in Table V. We see that using all available data
views significantly improves the results. A 1-in-10,000 false
discovery rate (FDR) is defined as the accuracy on the positive
class given that only 1 false positive is allowed for every
10,000 true positives. As these results show, not using TLS
header information leads to significantly worse performance,
especially in the important case of a fixed, 1-in-10,000 FDR.
The removal of the Windows XP SChannel TLS flows had
no effect on the total accuracy of the classifiers based on all
data views, but does reduce the performance at a 1-in-10,000
FDR by ∼5%.
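The fixed-FDR metric can be computed by sweeping the score threshold, as in the hedged sketch below (an illustration of the metric's definition, not the paper's exact evaluation code):

```python
# Sketch of the fixed-FDR metric: report the largest fraction of malicious
# flows detected while admitting at most one false positive per 10,000
# true positives.
def detection_rate_at_fdr(scores, labels, max_fp_per_tp=1e-4):
    """scores: higher = more malicious; labels: 1 = malicious, 0 = benign."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = best_tp = 0
    for i in order:                 # lower the threshold one flow at a time
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        if tp > 0 and fp <= tp * max_fp_per_tp:
            best_tp = tp            # constraint still satisfied at this point
    positives = sum(labels)
    return best_tp / positives if positives else 0.0

rate = detection_rate_at_fdr([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1])
```

With this tiny example, the first benign flow encountered in the sweep immediately violates the 1-in-10,000 constraint, so only the two higher-scoring malicious flows count as detected.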
B. Malware Families
To determine how well a trained classifier is able to detect
the TLS flows generated by the different malware families,
we first trained the four classifiers from Table V on the
same 225,000 enterprise flows as above for the negative class,
and 76,760 malicious TLS flows collected during August and
September 2015 for the positive class. These binary classifiers
were applied to the testing data consisting of the TLS flows
from October 2015 to May 2016, summarized in Table II.
While we do not remove flows that offered an ordered cipher-
suite list that matched a list found in the default Windows XP
SChannel implementation in this experiment, we do make
Fig. 3: Dridex’s use of TLS versus that of Virlock’s. Some
values and the full ciphersuite names were omitted for clarity of
presentation. Ciphersuites and extensions are represented as hex
codes, which are given in full in Appendix A.
explicit the families that have this bias in the majority of their
flows.
Table VI lists the classification accuracy of the four classi-
fiers for each family. Because only malware data was used to
test the trained classifiers, false positives for this experiment
are ill-defined and are therefore not reported. In the August
and September 2015 malware training data, there was strong
representation of the malicious families presented in this paper,
but there were not any exact SHA1 matches. There were four
families that had no representation in August or September:
Bergat, Yakes, Razy, and Dridex.
For the most part, combining traditional flow metadata,
typical side-channel information, and the TLS specific features
led to the best performing machine learning models. Out of all
families, our classifiers with all data views performed the worst
on Deshacop with a 96.1% true positive rate. With respect to
only the malware families that primarily used ciphersuites sim-
ilar to those used by Windows XP SChannel-based clients,
our classifiers with all data views performed the worst on
Tescrypt with a 97.6% true positive rate. Both of these families
most often visited servers with a server certificate subject of
*.onion.to, and use TLS client configurations that indicate
the Tor Browser for some of their TLS connections. This
is particularly interesting because a major goal of the Tor
Browser is to maintain the privacy of its users, which in this
case are the malware families.
The classifier based only on the TLS data was able to
perform quite well on the malware families that used TLS
client configurations that matched those of Windows XP
SChannel-based clients, but this result is not guaranteed
to hold if the malware runs on another operating system.
The TLS-only classifier performed the worst on most of the
families that used TLS client configurations that did not match
those of Windows XP SChannel-based clients, with the
exception of Toga and Virlock. Both of these families did a
poor job at varying the TLS client parameters in our dataset,
and they both used TLS client parameters that indicated older
versions of clients: Toga → Tor 0.2.2 and Virlock →
Opera 12.
The machine learning classifiers were able to perform
reasonably on most malware families, with the exception of
Dridex. Dridex was one of four families that did not have
Fig. 4: Similarity Matrix for the different malware families with
respect to the observed TLS client’s parameters.
any representation in the training data. The classifier on the
other three families, Bergat, Yakes, and Razy, had ∼96-100.0%
total accuracy. In the case of Bergat and Yakes, this good
performance is expected because these families offered an
ordered ciphersuite list that matched a list found in the default
Windows XP SChannel implementation.
Figure 3 shows Dridex’s use of TLS from a client point-
of-view. Unlike most of the other families, Dridex most often
selects:
• 0x002f (TLS_RSA_WITH_AES_128_CBC_SHA)
Figure 2 shows that this ciphersuite is not uncommon for
enterprise TLS sessions. Dridex also advertises several TLS
extensions and offers many current ciphersuites in the client
hello message.
Figure 3 also compares Dridex’s TLS usage with that of
Virlock’s. Virlock is an example of a malicious family that used
the same TLS client for every sample that we observed, and
was easily classified, i.e., all four classifiers achieved
100% accuracy. While Dridex offers a variety of strong
ciphersuites, Virlock offers a smaller set of outdated ciphersuites.
Virlock also only advertises the signature_algorithms
TLS extension. Another significant difference between these
two families is that Virlock did not alter its TLS client’s
behavior once in our entire dataset. Virlock always used the
same client parameters that are similar to those of Opera 12.
Virlock’s lack of adaptation makes it trivial for a machine
learning, or a rule-based, system to classify. Dridex’s use of
multiple TLS clients made a significant difference in terms of
detection efficacy.
As we now show, awareness of self-signed certificates
proved to be crucial. The classification of Dridex using
Meta+SPLT+BD+TLS, 78.5%, does not inspire confidence in
a system designed to detect malicious, encrypted traffic. Our
hypothesis was that, although Dridex varies the behavior of
its TLS clients, there might be an invariant with the servers
that Dridex communicates with that would allow us to more
easily classify these encrypted flows. Upon manual inspection,
this hypothesis was confirmed. We included a binary feature
indicating whether the server certificate was self-signed (de-
noted as SS), and retrained our machine learning classifier
with this new feature. The 10-fold cross-validation results on
the training data were nearly identical. With the self-signed
feature, the new classifier with all data sources achieved an
accuracy of 97.9% on Dridex, a significant improvement.
VI. FAMILY ATTRIBUTION
Being able to accurately attribute malware samples to a
known family is highly valuable. Attribution provides incident
responders with actionable prior information before they begin
to reverse engineer malware samples. From a network point-
of-view, this attribution can help to prioritize the incident
responders time, i.e., available resources should be assigned
to investigate more serious infections. In these results, there
are no enterprise samples; we only consider malicious samples
and their associated families.
To analyze the differences between the TLS parameters
used by different malware families, we used the malware
samples from October 2015 to May 2016 that had an iden-
tifiable family name as described in Section III. This process
pruned our original set of 20,548 samples to 5,623 unique
samples across 18 families. These samples generated 25,793
TLS encrypted flows that successfully negotiated the full TLS
handshake and sent application data.
Fig. 5: Confusion matrix for the 18-class malware family
classifier. The total 10-fold accuracy of the machine learning
model was 90.3%.
A. Similar TLS Usage
Figure 4 shows a similarity matrix for the 18 malware fam-
ilies with respect to their TLS clients. The offered ciphersuites,
advertised extensions, and the client’s public key length were
used as features, and a standard squared exponential similarity
function was used to compute the similarity values:
s_{i,j} = exp(−λ ‖x_i − x_j‖²),
with λ = 1, and xi being the mean of the feature vectors for
the i’th family. The diagonal of this matrix will be 1.0 because
each family will be perfectly self-similar.
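The similarity computation above can be sketched directly; the feature vectors below are hypothetical toy values standing in for each family's mean over offered ciphersuites, advertised extensions, and client public key length.

```python
import math

# Toy mean feature vectors per family (hypothetical values; the paper's
# actual features are the offered ciphersuites, advertised extensions,
# and the client's public key length, averaged over each family's flows).
family_means = {
    "Virlock": [1.0, 0.0, 2.0],
    "Dridex":  [0.9, 0.1, 2.1],
    "Bergat":  [5.0, 3.0, 0.5],
}

def similarity(x_i, x_j, lam=1.0):
    # Squared-exponential similarity: exp(-lambda * ||x_i - x_j||^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    return math.exp(-lam * sq_dist)

# Pairwise similarity matrix; the diagonal is exactly 1.0 because
# each family is perfectly self-similar.
names = sorted(family_means)
matrix = {(a, b): similarity(family_means[a], family_means[b])
          for a in names for b in names}
```

With λ = 1, families whose mean TLS feature vectors nearly coincide score close to 1.0, while distinctive families such as Virlock fall toward 0.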
Figure 4 exhibits considerable structure. The upper left
block consists of families that have some number of flows
that use the default Windows XP TLS library. The group
of Skeeyah, Dynamer, Symmi, and Toga all make heavy use
of offered ciphersuite lists and advertised extensions that
are indicative of Tor 0.2.x. Dridex and Virlock were the
two most dissimilar malware families. And while Dridex was
difficult to accurately classify, Virlock was trivial. Uniqueness
is not always a desirable quality.
B. Multi-Class Classification
Finally, to assess the malware family attribution potential
of TLS flows, we used the data listed in Table II. In this
experiment, we did not remove samples whose ordered ciphersuite
list matched a list found in the default Windows XP SChannel
implementation, because all of the samples share the same
bias. We position the problem of attributing a malicious TLS
flow to a known malware family as a
multi-class classification problem. For this analysis, we use all
of the malware families and data features described in Section
III. Similar to the enterprise versus malware results in Section
V, we used 10-fold cross validation and l1-regularized
multinomial logistic regression [21]. We not only present our results in
terms of overall classification accuracy, but also as a confusion
matrix showing the true positives and false positives broken
down per-family. This was done to illustrate that we were not
simply using a naïve majority-class classifier, but were in fact
making useful inferences.
Using all available data features led to the best cross-
validated performance, with a total accuracy of 90.3% for the
18-class classification problem using a single, encrypted flow.
The confusion matrix for this problem is shown in Figure 5.
For a given row (family) in the confusion matrix, the column
entries represent the percentage of samples identified as that
specific family. A perfect confusion matrix would have all of
its weight focused on the diagonal. As an example, most of
Kazy’s TLS flows, the first row, were identified as Kazy, the
first column. Some of Kazy’s TLS flows were also identified
as Symmi (column: 2), Yakes (column: 4), Razy (column: 5),
and Zbot (column: 8).
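The row-normalized confusion matrix described above can be computed with a few lines of code; this is a generic sketch, with toy labels, rather than the authors' evaluation harness.

```python
from collections import Counter, defaultdict

def confusion_rows(y_true, y_pred, labels):
    """Row-normalized confusion matrix: each row is a true family,
    and each entry is the fraction of that family's flows that the
    classifier assigned to a given label."""
    counts = defaultdict(Counter)
    for t, p in zip(y_true, y_pred):
        counts[t][p] += 1
    rows = {}
    for t in labels:
        total = sum(counts[t].values()) or 1  # avoid division by zero
        rows[t] = {p: counts[t][p] / total for p in labels}
    return rows

# Toy example mirroring the Kazy row: most flows correctly attributed,
# a few confused with other families.
y_true = ["Kazy"] * 10
y_pred = ["Kazy"] * 8 + ["Symmi", "Zbot"]
rows = confusion_rows(y_true, y_pred, ["Kazy", "Symmi", "Zbot"])
```

A perfect classifier would put all of each row's weight on the diagonal entry.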
The majority of the TLS flows were attributed to the
appropriate family with ∼80-90% accuracy. Again, the two
exceptions are Dridex and Virlock. Attribution for these two
families is trivial, in large part because of their distinctive
use of TLS compared to other malicious families.
There were two sets of two families that the multiclass
classification algorithm had problems differentiating. The first
of these was Bergat and Dynamer. Interestingly, Bergat used a
Windows XP SChannel-like TLS client, but Dynamer used
a Tor 0.2.2-like TLS client. The confusion came from the
other data views, specifically the sequence of packet lengths.
Both of these families most often connected to servers at
www.dropbox.com, and had similar communication pat-
terns.
Finally, Yakes and Razy were another two malicious fam-
ilies that the multi-class classifier could not differentiate. Like
Bergat and Dynamer, Yakes and Razy most often connected
to servers at baidu.com. In fact, these two families are
subfamilies of the Ramnit family. Upon manual inspection, the
network behavior of Yakes and Razy looked mostly identical.
Determining the malware family based on a single, en-
crypted flow is an unnecessarily difficult problem. In our
dataset, the malware samples often created many encrypted
flows that can be used for attribution. In this framework, one
could initially classify all of the flows in a 5-minute sliding
window for a given host, and use the suspicious flows to
perform family attribution. We first trained an independent
flow, multi-class classifier. Then, for each window in the
testing set, each flow was classified, and a majority vote was
used to classify all flows within the window. This is similar
to ensemble methods in machine learning [16]. The confusion
matrix resulting from 10-fold cross validation on this problem
looked very similar to that shown in Figure 5. The accuracy
of the multi-class problem increased from 90.3% using single,
encrypted flows to 93.2% using a simplistic multiple flow
algorithm. While there were several families that had improved
performance, this simple, multi-flow scheme increased the
accuracy of Yakes and Razy most notably. This was most likely
because Razy was more promiscuous.
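The window-level majority vote is straightforward to sketch; the family labels below are illustrative, and tie-breaking behavior is an assumption of this sketch rather than something the paper specifies.

```python
from collections import Counter

def window_vote(flow_predictions):
    """Majority vote over per-flow family predictions within one
    host's 5-minute window. Counter.most_common breaks ties by
    first-seen order in this sketch."""
    counts = Counter(flow_predictions)
    winner, _ = counts.most_common(1)[0]
    return winner

# Per-flow predictions from the independent-flow classifier for one window
window = ["Razy", "Yakes", "Razy", "Razy", "Yakes"]
```

Aggregating correlated evidence across a host's flows is what lifted accuracy from 90.3% (single flow) to 93.2% in the paper's experiment.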
VII. RELATED WORK
Identifying threats in encrypted traffic poses significant chal-
lenges. Nevertheless, the security community has put forth two
solutions to this problem. The first involves decrypting
all traffic that flows through a security appliance: Man-in-
the-Middle (MITM) [9]. Once the traffic has been decrypted,
traditional signature-based methods, such as Snort [31], can
be applied. While this approach can be successful at finding
threats, there are several important shortcomings. First, this
method does not respect the privacy of the users on the
network. Second, this method is computationally expensive
and difficult to deploy and maintain. Third, this method relies
on malware clients and servers to not change their behavior
when a MITM interposes itself.
The second method of identifying threats in encrypted
network traffic leverages flow-based metadata. These methods
examine high-level features of a network flow, such as the
number of packets and bytes within a flow. This data is typi-
cally exported and stored as IPFIX [12] or NetFlow [11]. There
have been several papers that push the limits of traditional
flow monitoring systems. For instance, [8] uses NetFlow and
external reputation scores to classify botnet traffic. This work
can also be applied to encrypted network traffic, but does not
take advantage of the TLS-specific data features.
In addition to pure flow-based features to detect malware’s
network traffic, there have been many papers that augment
this data with more detailed features about a flow [14], [18],
[24], [34], [35], [36], [37], [39]. This work can be seen
as utilizing side-channel attacks, such as analyzing the sizes
and inter-arrival times of packets, to learn more information
about a flow. In [27], the authors derive features based on the
packet sizes to perform website fingerprinting attacks against
encrypted traffic. In our work, we are only concerned with
identifying malware communication and we use information
specific to the TLS protocol.
There has been previous work that uses active probing [17]
and passive monitoring to gain visibility into how TLS is used
in the wild [19]. Unlike [19], our results specifically highlight
malware’s use of the TLS protocol, and show how data features
from TLS can be used in rules and classifiers.
Malware clustering and family attribution has had a lot
of exposure in the academic literature [5], [7], [28], [30]. This
work has taken a variety of data sources, e.g., HTTP or dynamic
system call traces, and clustered the samples to attribute a
sample to a malicious family. In contrast, our work gives an
in-depth analysis of how malware uses TLS, and shows how
data features from passive monitoring of TLS can be used for
accurate malware identification and family attribution.
VIII. LIMITATIONS AND FUTURE WORK
Our method for collecting malware data was straightfor-
ward and allowed us to quickly generate a large volume of
network data, but the dependence on Windows XP and 5-minute
runs introduced some biases in our presented results.
We accounted for these biases by specifically considering the
cases in which the TLS features reflected the operating system
and not the malware, and either analyzing the data with those
cases removed, or clearly labeling and analyzing those cases
otherwise. Accounting for the bias caused by the sandbox was
essential to understanding the actual malware use of TLS.
From a practitioner’s point of view, however, it is sometimes
worthwhile to consider the raw, biased data. Malware often
targets obsolete and unpatched software because it is vulnera-
ble, and thus it is biased in the same direction as the sandbox.
We leave running these samples under multiple environments
and collecting the additional results for future work.
After family names were associated with our malware
samples, the original set of 20,548 samples that used TLS was
reduced to a set of 5,623 unique samples across 18 families.
It is difficult to reliably determine the family, if any, asso-
ciated with a malware sample, even in a structured sandbox
setting. While our multi-class, malware family classifier can
reasonably be criticized for failing to provide attribution for
∼3/4 of the malware samples, this fact reflects the difficulty
of family attribution in a dynamic analysis environment, and
not a limitation of the underlying approach. In future work,
the malware families for the training data can be determined
by a robust clustering algorithm [5] instead of relying on a
consensus vote from VirusTotal [2].
Like nearly all other methods of threat detection, a moti-
vated threat actor could attempt to evade detection by mim-
icking the features of enterprise traffic. For instance, in our
case, this could take the form of attempting to offer the
same TLS parameters as a popular Firefox browser and
using a certificate issued by a reputable certificate authority.
But, while evasion is always possible in principle, in practice
it poses challenges for the malware operator. Mimicking a
popular HTTPS client implementation requires an ongoing
and non-trivial software engineering effort; if a client offers
a TLS ciphersuite or extension that it cannot actually support,
the session is unlikely to complete. On the server side, the
certificate must mimic the issuer, subjectAltName, time of
issuance, and validity period of the benign server. In either
case, the detection methods outlined in this paper are not meant
to be exhaustive, and in a robust system, these methods would
only be one facet of the final solution. An example of extending
this methodology for robustness would be to build a profile for
an endpoint based on the user-agent string advertised in
the unencrypted HTTP flows. If the TLS parameters indicate
a user agent that has not been observed on an endpoint, this
could be an interesting indicator of compromise.
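The endpoint-profile idea above can be sketched as a small class; the class name, method names, and user-agent strings here are hypothetical illustrations, not an interface from the paper.

```python
class EndpointProfile:
    """Sketch of an endpoint profile: record user-agent strings seen
    in an endpoint's unencrypted HTTP traffic, then flag TLS flows
    whose inferred client does not match any observed agent."""

    def __init__(self):
        self.seen_agents = set()

    def observe_http(self, user_agent: str):
        # Called for each unencrypted HTTP flow from this endpoint
        self.seen_agents.add(user_agent)

    def tls_client_is_novel(self, inferred_agent: str) -> bool:
        # True when the TLS parameters imply a client never seen
        # in this endpoint's HTTP traffic: a possible indicator
        return inferred_agent not in self.seen_agents

profile = EndpointProfile()
profile.observe_http("Firefox/45.0")
```

A novel inferred client is only an indicator, not proof of compromise; it would be one signal among many in a robust system.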
All of the classification results presented in this paper
used 10-fold cross-validation and l1-logistic regression. We
have found this classifier to be very efficient and to perform
extremely well for network data feature classification. This
model reports a probabilistic output, allowing one to easily
change the threshold of the classifier. We did compare l1-
logistic regression with a support vector machine (Gaussian
kernel, width adjusted through CV), and found no statistically-
significant improvement using a 10-fold paired t-test at a 5%
significance level [15]. Because of the added computational
resources needed to train the SVM and the chosen model’s
robustness against overfitting [38], we only reported the l1-
logistic regression results. We leave examining alternative
models and quantifying their advantages for future work.
IX. CONCLUSIONS
Understanding malware’s use of TLS is imperative for
developing appropriate techniques to identify threats and re-
spond to those threats accordingly. In this paper, we reviewed
what TLS parameters malware typically uses from both the
perspective of the TLS client and the TLS servers that the
samples communicated with. Even when we accounted for the
bias caused by the underlying sandbox’s operating system,
we found that malware generally offers and selects weak
ciphersuites and does not offer the variety of extensions that
we see in enterprise clients.
We also analyzed the TLS usage of malware on a per family
basis. We identified malware families that are most likely
to use TLS client parameters that matched the TLS library
provided by Windows XP, the underlying operating system of
the sandbox, e.g., Bergat and Yakes; malware families that use
TLS client parameters that matched the TLS library provided
by the underlying operating system in addition to hundreds of
other TLS client configurations, e.g., Sality; and families that
exclusively used TLS client configurations that do not match
the TLS libraries supplied by the underlying operating system,
e.g., Virlock. As anticipated, we found that families that
actively evolve their usage of TLS are more difficult to classify.
We also found a malware family that used TLS parameters
that are similar to those found on an enterprise network, and
was difficult to classify: Dridex. But, if we leverage additional,
domain-specific knowledge such as whether the TLS certificate
was self-signed, we can significantly increase the performance
of our classifiers.
We showed that the differences in how malware families
use TLS can be used to attribute malicious, encrypted network
flows to a specific malware family. We also observed some
malware families using TLS in exactly the same way, e.g.,
Yakes and Kazy, which most often offered an ordered ciphersuite
list that matched a list found in the default Windows XP
SChannel implementation. We demonstrated an accuracy of
90.3% for the family attribution problem when restricted to
a single, encrypted flow, and an accuracy of 93.2% when we
made use of all encrypted flows within a 5-minute window.
We conclude that data features that are passively observed
in TLS provide information about both the client and server
software and its configuration. This data can be used to detect
malware and perform family attribution, either through rules
or classifiers. Malware’s TLS data features obtained from
sandboxes are biased, and it is essential to understand and
account for this bias when using these features.
REFERENCES
[1] Most Internet Traffic will be Encrypted by Year End. Here’s Why. http://fortune.com/2015/04/30/netflix-internet-traffic-encrypted/, accessed: 2016-03-23
[2] VirusTotal. https://www.virustotal.com/ (2016)
[3] Zeus Source Code. https://github.com/Visgean/Zeus (2016)
[4] Adrian, D., Bhargavan, K., Durumeric, Z., Gaudry, P., Green, M., Halderman, J.A., Heninger, N., Springall, D., Thomé, E., Valenta, L., VanderSloot, B., Wustrow, E., Zanella-Béguelin, S., Zimmermann, P.: Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice. In: Proceedings of the Conference on Computer and Communications Security (CCS) (2015)
[5] Anderson, B., Storlie, C., Lane, T.: Multiple Kernel Learning Clustering with an Application to Malware. In: 12th International Conference on Data Mining (ICDM). pp. 804–809. IEEE (2012)
[6] Antonakakis, M., Perdisci, R., Nadji, Y., Vasiloglou, N., Abu-Nimeh, S., Lee, W., Dagon, D.: From Throw-Away Traffic to Bots: Detecting the Rise of DGA-Based Malware. In: USENIX Security Symposium. pp. 491–506 (2012)
[7] Bayer, U., Comparetti, P.M., Hlauschek, C., Kruegel, C., Kirda, E.: Scalable, Behavior-Based Malware Clustering. In: Proceedings of the Network and Distributed System Security Symposium (NDSS). vol. 9, pp. 8–11. Citeseer (2009)
[8] Bilge, L., Balzarotti, D., Robertson, W., Kirda, E., Kruegel, C.: Disclosure: Detecting Botnet Command and Control Servers through Large-Scale NetFlow Analysis. In: Proceedings of the 28th Annual Computer Security Applications Conference. pp. 129–138. ACM (2012)
[9] Callegati, F., Cerroni, W., Ramilli, M.: Man-in-the-Middle Attack to the HTTPS Protocol. IEEE Security & Privacy 7(1), 78–81 (2009)
[10] Cisco Talos: IP Blacklist Feed. http://www.talosintel.com/feeds/ip-filter.blf (2016)
[11] Claise, B.: Cisco Systems NetFlow Services Export Version 9 (2004), RFC 3954
[12] Claise, B., Trammell, B., Aitken, P.: Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of Flow Information (2013), RFC 7011
[13] Dierks, T., Rescorla, E.: The Transport Layer Security (TLS) Protocol Version 1.2 (2008), RFC 5246
[14] Dietrich, C.J., Rossow, C., Pohlmann, N.: CoCoSpot: Clustering and Recognizing Botnet Command and Control Channels using Traffic Analysis. Computer Networks 57(2), 475–486 (2013)
[15] Dietterich, T.G.: Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Computation 10(7) (1998)
[16] Dietterich, T.G.: Ensemble Methods in Machine Learning. In: Multiple Classifier Systems, pp. 1–15. Springer (2000)
[17] Durumeric, Z., Wustrow, E., Halderman, J.A.: ZMap: Fast Internet-Wide Scanning and Its Security Applications. In: USENIX Security Symposium. pp. 605–620 (2013)
[18] Gu, G., Perdisci, R., Zhang, J., Lee, W.: BotMiner: Clustering Analysis of Network Traffic for Protocol- and Structure-Independent Botnet Detection. In: USENIX Security Symposium. vol. 5, pp. 139–154 (2008)
[19] Holz, R., Amann, J., Mehani, O., Wachs, M., Kaafar, M.A.: TLS in the Wild: an Internet-Wide Analysis of TLS-Based Protocols for Electronic Communication. In: Proceedings of the Network and Distributed System Security Symposium (NDSS) (2016)
[20] Koh, K., Kim, S.J., Boyd, S.P.: An Interior-Point Method for Large-Scale l1-Regularized Logistic Regression. Journal of Machine Learning Research 8(8), 1519–1555 (2007)
[21] Krishnapuram, B., Carin, L., Figueiredo, M.A., Hartemink, A.J.: Sparse Multinomial Logistic Regression: Fast Algorithms and Generalization Bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(6), 957–968 (2005)
[22] Microsoft: Choose the Right Ciphersuites in SChannel. https://www.ssl.com/how-to/choose-the-right-cipher-suites-in-schannel-dll/ (2016)
[23] Microsoft: SChannel. https://msdn.microsoft.com/en-us/library/windows/desktop/ms678421%28v=vs.85%29.aspx (2016)
[24] Moore, A.W., Zuev, D.: Internet Traffic Classification Using Bayesian Analysis Techniques. In: ACM SIGMETRICS Performance Evaluation Review. vol. 33, pp. 50–60. ACM (2005)
[25] Nguyen, T.T., Armitage, G.: A Survey of Techniques for Internet Traffic Classification using Machine Learning. IEEE Communications Surveys & Tutorials 10(4), 56–76 (2008)
[26] Opderbeck, D.W., Hurwitz, J.G.: Apple v. FBI: Brief in Support of Neither Party in San Bernardino iPhone Case. http://ssrn.com/abstract=2746100 (2016)
[27] Panchenko, A., Lanze, F., Zinnen, A., Henze, M., Pennekamp, J., Wehrle, K., Engel, T.: Website Fingerprinting at Internet Scale. In: Proceedings of the Network and Distributed System Security Symposium (NDSS) (2016)
[28] Perdisci, R., Lee, W., Feamster, N.: Behavioral Clustering of HTTP-Based Malware and Signature Generation using Malicious Network Traces. In: NSDI. pp. 391–404 (2010)
[29] Qualys: Qualys SSL Labs. https://www.ssllabs.com/ssltest/clients.html (2016)
[30] Rieck, K., Holz, T., Willems, C., Düssel, P., Laskov, P.: Learning and Classification of Malware Behavior. In: Detection of Intrusions and Malware, and Vulnerability Assessment, pp. 108–125. Springer (2008)
[31] Roesch, M.: Snort - Lightweight Intrusion Detection for Networks. In: Proceedings of the 13th USENIX Conference on System Administration. pp. 229–238. LISA, USENIX Association (1999)
[32] Snort: Community Rules. https://www.snort.org/downloads/community/community-rules.tar.gz (2016)
[33] Vassilev, A.: Annex A: Approved Security Functions for FIPS PUB 140-2, Security Requirements for Cryptographic Modules. http://csrc.nist.gov/publications/fips/fips140-2/fips1402annexa.pdf (2016)
[34] Wang, K., Cretu, G., Stolfo, S.J.: Anomalous Payload-Based Worm Detection and Signature Generation. In: Recent Advances in Intrusion Detection. pp. 227–246. Springer (2006)
[35] Wang, L., Dyer, K.P., Akella, A., Ristenpart, T., Shrimpton, T.: Seeing through Network-Protocol Obfuscation. In: Proceedings of the Conference on Computer and Communications Security (CCS). pp. 57–69. ACM (2015)
[36] Williams, N., Zander, S., Armitage, G.: A Preliminary Performance Comparison of Five Machine Learning Algorithms for Practical IP Traffic Flow Classification. Computer Communication Review 30 (2006)
[37] Wurzinger, P., Bilge, L., Holz, T., Goebel, J., Kruegel, C., Kirda, E.: Automatically Generating Models for Botnet Detection. In: Computer Security–ESORICS 2009, pp. 232–249. Springer (2009)
[38] Yuan, G.X., Ho, C.H., Lin, C.J.: An Improved GLMNET for L1-Regularized Logistic Regression. Journal of Machine Learning Research 13(Jun), 1999–2030 (2012)
[39] Zander, S., Nguyen, T., Armitage, G.: Automated Traffic Classification and Application Identification using Machine Learning. In: The 30th IEEE Conference on Local Computer Networks. pp. 250–257. IEEE (2005)
APPENDIX A
CIPHERSUITE AND EXTENSION HEX CODES
Hex Code Ciphersuite
0x0004 TLS_RSA_WITH_RC4_128_MD5
0x0005 TLS_RSA_WITH_RC4_128_SHA
0x000a TLS_RSA_WITH_3DES_EDE_CBC_SHA
0x002f TLS_RSA_WITH_AES_128_CBC_SHA
0x0033 TLS_DHE_RSA_WITH_AES_128_CBC_SHA
0x0035 TLS_RSA_WITH_AES_256_CBC_SHA
0x0039 TLS_DHE_RSA_WITH_AES_256_CBC_SHA
0x003c TLS_RSA_WITH_AES_128_CBC_SHA256
0x003d TLS_RSA_WITH_AES_256_CBC_SHA256
0x0067 TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
0x006b TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
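The mapping above can be used directly to decode an offered ciphersuite list from a ClientHello into readable names; this sketch includes only a subset of the table's entries.

```python
# Subset of the appendix mapping from hex code to ciphersuite name
CIPHERSUITES = {
    0x0004: "TLS_RSA_WITH_RC4_128_MD5",
    0x0005: "TLS_RSA_WITH_RC4_128_SHA",
    0x000a: "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
    0x002f: "TLS_RSA_WITH_AES_128_CBC_SHA",
    0x0035: "TLS_RSA_WITH_AES_256_CBC_SHA",
}

def decode_offer(hex_codes):
    """Translate an offered ciphersuite list (the 16-bit codes sent
    in the ClientHello) into human-readable names; codes absent from
    the table are reported as unknown."""
    return [CIPHERSUITES.get(c, f"unknown(0x{c:04x})") for c in hex_codes]
```

The ordered list of decoded names is exactly the client fingerprint compared against known library defaults (e.g., Windows XP SChannel) throughout the paper.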