Despite billions spent on enterprise cybersecurity, breaches from advanced attacks occur daily, each costing millions.
Our Solution: Complete Near Real-time Network Security Visibility and Awareness. If security analysts could see everything occurring on their network in real time, breaches might still happen, but they would never cause catastrophic damage, because reaction would be almost instantaneous. Novetta Cyber Analytics is a linchpin enterprise security solution that gives security analysts, for the first time, a complete, near real-time, uncorrupted picture of their entire network. Analysts can then ask subtle questions and receive answers at the speed of thought, enabling detection, triage, and response to breaches as they occur.
The Benefits: Increase events responded to by an estimated 30x.
Substantially reduce or eliminate damage from breaches.
Create a dramatically more effective and efficient security team.
Maximize current security infrastructure investment.
Be far more confident that your network is actually secure.
OUR DIFFERENTIATORS:
Understands the truth of what is happening on your network.
Detects advanced attacks that have breached perimeter defenses.
Develops a complete, near real-time understanding of suspicious behavior.
Develops a battleground understanding of your entire security situation.
Augments current security solutions.
Proven speed, scale and effectiveness on the largest, most attacked networks on earth.
Burning Down the Haystack to Find the Needle: Security Analytics in Action (Josh Sokol)
Your network is already compromised, but do you know how and by whom? Can you find them, remove them, and prevent them from getting back in again? In this presentation, we will examine actual attacks and indicators of compromise and show how, using some basic network flow pattern analysis, we can detect and prevent contemporary malware, advanced persistent threats (APTs), zero-day exploits and more. In addition, we will discuss how to feed this data into a security analytics program to create a new, broader perspective on the threats that your organization faces.
Over the past four years at National Instruments, we have been collecting tools to work cohesively as part of a larger security analytics platform. The goal of this presentation is to provide the attendee with the basic information that they need in order to build a security analytics program of their own. We will begin by talking about the problem of a lack of visibility within the enterprise environment. From there, we will talk about the traits that characterize a tool as being good for security analytics. Next, we will talk about the types of data that exists in the different tool sets and what types of questions they are good at answering. From there, we will talk about what it means to create patterns and analyze your data to find those specific patterns. Then, we will look at some specific analytics that are useful to run on a regular basis to find malware, misconfigured systems, APTs, and more. Lastly, we will talk about actionable (and even automated) next steps once we discover the patterns that we are looking for.
This talk encourages audience participation, inviting attendees to share what they are doing to perform security analytics, and is appropriate for both novice and experienced security professionals.
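One basic flow-pattern analytic of the kind the talk describes is beacon detection: malware check-ins tend to arrive at near-constant intervals, unlike bursty human-driven traffic. Below is a minimal sketch using hypothetical flow records (this is illustrative, not code from the presentation):

```python
from statistics import mean, stdev

def beacon_score(timestamps):
    """Score how 'beacon-like' a series of connection times is.

    A low coefficient of variation in the gaps between connections
    suggests automated, periodic check-ins rather than human traffic.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0
    cv = stdev(gaps) / mean(gaps)   # coefficient of variation of the gaps
    return max(0.0, 1.0 - cv)       # 1.0 = perfectly regular intervals

# Hypothetical flow records: (src, dst, timestamp in seconds)
flows = [("10.0.0.5", "203.0.113.9", t) for t in range(0, 3600, 300)]  # every 5 min

times_by_pair = {}
for src, dst, ts in flows:
    times_by_pair.setdefault((src, dst), []).append(ts)

for pair, times in times_by_pair.items():
    score = beacon_score(sorted(times))
    if score > 0.9:
        print(pair, "looks periodic, score", round(score, 2))
```

A real deployment would feed this from a flow collector and baseline each host pair, but the core pattern-matching idea is the same.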
Advances in cloud scale machine learning for cyber-defense (Priyanka Aash)
Picking an attacker’s signals out of billions of log events in near real time from petabyte scale storage is a daunting task, but Microsoft has been using security data science at cloud scale to successfully disrupt attackers. This session will present the latest frameworks, techniques and the unconventional machine-learning algorithms that Microsoft uses to protect its infrastructure and customers.
(Source: RSA Conference USA 2017)
Applied cognitive security: complementing the security analyst (Priyanka Aash)
Security incidents are increasing dramatically and becoming more sophisticated, making it almost impossible for security analysts to keep up. A cognitive solution that can learn about security from structured and unstructured information sources is essential. It can be applied to empower security analysts with insights to qualify incidents and investigate risks quickly and accurately.
(Source: RSA Conference 2017)
Confusion and deception: new tools for data protection (Priyanka Aash)
Cyberthreats are asymmetric risks: corporate defenders must secure and detect everything, but the attacker needs to exploit only once. As petabytes of data traverse the ecosystem, legacy data protection methods leave many gaps. By looking through the adversary’s eyes, you can create subterfuges, delay attack progress or reduce the value of any data ultimately accessed—and shift the risk equation.
(Source: RSA Conference USA 2017)
The New Pentest? Rise of the Compromise Assessment (Infocyte)
If an attacker had a foothold in your network today, would you know it?
If they made it past your real-time defense measures (EDR, EPP, AV, UEBA, firewalls, etc.) or an analyst misinterpreted a critical alert, chances are they've entrenched themselves for the long haul. Skilled and organized attackers know long-term persistence in your network is the most critical component to meeting their goal of stealing information, causing damage, or pivoting attacks on other organizations.
Threat hunting is the proactive practice of finding attackers in your environment before they can cause damage (or at least stopping the bleeding from continued exposure). Unfortunately, effective threat hunting remains out of reach for most organizations due to a lack of security infrastructure and of qualified people to manage advanced endpoint security solutions.
One solution to this problem is to hire a third party to conduct a periodic assessment geared toward discovery of unauthorized access and compromised systems. This is called a "compromise assessment," and compromise assessments have recently become one of the most requested services from top security service providers.
Customers don’t want to just know if they can be hacked (a good penetration tester will generally conclude “yes”); they want to know if they ARE hacked—right now—and if so, which endpoints/hosts/servers on their network are compromised.
In this presentation, which was originally prepared for Black Hat 2018, Chris Gerritz outlines the growing practice of compromise assessments and the best practices being utilized by some of the largest and most sophisticated managed security service providers (MSSPs) with this offering.
What approaches are most effective?
What data is being utilized?
What are some of the top challenges?
To request a free 100-node compromise assessment or to learn more about Infocyte HUNT — our comprehensive threat hunting platform — and start a free trial, please visit https://try.infocyte.com.
Talk from the event "Cybersecurity: a nova era em resposta a incidentes e auditoria de dados" ("Cybersecurity: the new era in incident response and data auditing")
Jim Butterworth - Senior Cybersecurity Director Guidance Software Inc.
Brasília, August 4, 2010
Applied machine learning: defeating modern malicious documents (Priyanka Aash)
A common tactic adopted by attackers for initial exploitation is the use of malicious code embedded in Microsoft Office documents. This attack vector is not new, but attackers are still having success. This session will dive into the details of these techniques, introduce some machine learning approaches to analyze and detect these attempts, and explore the output in Elasticsearch and Kibana.
(Source: RSA Conference USA 2017)
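As a rough illustration of the feature-engineering side of such an approach (a toy sketch, not Microsoft's model; the keyword list and sample document are hypothetical), one can extract simple signals such as byte entropy and macro-abuse keywords from a document before handing them to a classifier:

```python
import math

# Strings commonly abused in malicious VBA macros (illustrative list only)
SUSPICIOUS = [b"AutoOpen", b"Document_Open", b"Shell", b"CreateObject",
              b"Wscript", b"powershell"]

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (near 8.0 = packed/encrypted, low = plain text)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def doc_features(data: bytes) -> dict:
    return {
        "entropy": byte_entropy(data),
        "keyword_hits": sum(data.lower().count(k.lower()) for k in SUSPICIOUS),
        "has_ole_magic": data.startswith(b"\xd0\xcf\x11\xe0"),  # legacy OLE2 header
    }

# Hypothetical macro fragment, standing in for an extracted VBA stream
sample = b'Sub AutoOpen()\n  CreateObject("Wscript.Shell").Run "powershell ..."\nEnd Sub'
print(doc_features(sample))
```

A real pipeline would pull these features from parsed OLE/OOXML streams and feed them, alongside many others, into a trained model whose scores are then explored in Elasticsearch and Kibana.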
Investigating, Mitigating and Preventing Cyber Attacks with Security Analytics (IBMGovernmentCA)
Presentation material from Cyber Security Briefing held in Ottawa on June 12, 2013.
- Investigating, Mitigating, and Preventing Cyber Attacks with Security Analytics and Visualization - Presented by: Orion Suydam, Director of Product Management, 21CT
Protecting Financial Networks from Cyber Crime (Lancope, Inc.)
Financial services organizations are prime targets for cyber criminals. They must take extreme care to protect customer data, while also ensuring high levels of network availability to allow for 24/7 access to critical financial information. Additionally, industry consolidation has created large, heterogeneous network environments within large financial institutions, making it difficult to ensure that networks have the necessary visibility and protection to prevent a devastating security breach. By leveraging NetFlow from existing network infrastructure, financial services organizations can achieve comprehensive visibility across even the largest, most complex networks. The ability to quickly detect a wide range of potentially malicious activity helps prevent damaging data breaches and network disruptions. Attend this informational webinar, conducted by Lancope’s Director of Security Research, Tom Cross, to learn:
- How NetFlow can help quickly uncover both internal and external threats
- How pervasive network insight can accelerate incident response and forensic investigations
- How to substantially decrease enterprise risks
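As an illustration of the kind of NetFlow analytic described here (the flow records, addresses, and threshold below are hypothetical, not Lancope's implementation), a collector can flag internal hosts sending unusually large volumes of data to external peers:

```python
from collections import defaultdict
from ipaddress import ip_address

# Hypothetical flow records: (src, dst, bytes_out), as a NetFlow exporter might report
flows = [
    ("10.1.1.20", "8.8.8.8", 120_000_000),   # large upload to an external host
    ("10.1.1.21", "1.1.1.1", 40_000),
    ("10.1.1.21", "9.9.9.9", 35_000),
]

def is_internal(addr: str) -> bool:
    # RFC 1918 and similar ranges count as internal for this sketch
    return ip_address(addr).is_private

bytes_out = defaultdict(int)
peers = defaultdict(set)
for src, dst, nbytes in flows:
    if is_internal(src) and not is_internal(dst):
        bytes_out[src] += nbytes
        peers[src].add(dst)

THRESHOLD = 100_000_000  # 100 MB outbound; tune per environment
for host, total in bytes_out.items():
    if total > THRESHOLD:
        print(f"{host}: {total} bytes to {len(peers[host])} external peers")
```

The same aggregation generalizes to other flow-based detections (port scans, protocol anomalies) by changing what is counted per host.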
Machine learning cybersecurity: boon or boondoggle? (Priyanka Aash)
Machine learning (ML) and artificial intelligence (AI) are the latest “shiny new things” in cybersecurity technology, but while they hold great promise for automating routine processes and tasks and accelerating threat detection, they are not a panacea. This session will demonstrate what they can and can’t do in a cybersecurity program through real-world examples of possibilities and limits.
(Source: RSA Conference USA 2017)
Level Up Your Security Skills in Splunk Enterprise (Splunk)
During this advanced Splunk webinar, Splunk security experts covered the following security scenarios:
- Automated threat intelligence response
- Behavior profiling
- Anomaly detection
- Tracking an attack against the “kill chain”
You can watch a recording of the webinar here: https://splunkevents.webex.com/splunkevents/lsr.php?RCID=8163d71e6fa0646beb8f8354bfac61a1
This is a brief overview of vulnerability assessment and penetration testing in information technology security. The focus is on the issues that commonly arise within a network security program, and on how you, as an IT professional, can identify or notice attack activity from hackers or from unknown individuals posing as clients.
CNIT 121: 4 Getting the Investigation Started on the Right Foot & 5 Initial D... (Sam Bowne)
Slides for a college course based on "Incident Response & Computer Forensics, Third Edition" by Jason Luttgens, Matthew Pepe, and Kevin Mandia.
Teacher: Sam Bowne
Website: https://samsclass.info/121/121_F16.shtml
My slides for PHDays 2018 Threat Hunting Hands-On Lab - https://www.phdays.com/en/program/reports/build-your-own-threat-hunting-based-on-open-source-tools/
Virtual Machines for lab are available here - https://yadi.sk/d/qB1PNBj_3ViWHe
Digital Forensics and Incident Response (DFIR) Training Session - January (Infocyte)
Join Infocyte co-founder and Chief Product Officer, Chris Gerritz, for a two-hour digital forensics and incident response (DFIR) training session.
During this month's session, Chris will lead training focused on artifact triage during IR investigations—reviewing shimcache, amcache, and process event logs.
Though this DFIR training session and its examples are aligned with Infocyte's agentless detection and response platform, the topics, techniques, and principles covered can carry over to your own endpoint security solution, provided it has capabilities similar to Infocyte's.
To learn more about Infocyte, request a cybersecurity compromise assessment, or learn about our managed security services (supported by a global network of partners) like incident response and managed detection and response (MDR) services, please visit our website.
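As a small illustration of artifact triage of the kind this session covers (the events below are hypothetical, and this is not Infocyte's tooling), stack-ranking process-creation events by the rarity of their parent-child pairs quickly surfaces oddities such as Office applications spawning shells:

```python
from collections import Counter

# Hypothetical process-creation events (parent, child), e.g. exported from
# Sysmon Event ID 1 or Windows Security 4688 logs.
events = [
    ("explorer.exe", "chrome.exe"),
    ("explorer.exe", "chrome.exe"),
    ("explorer.exe", "outlook.exe"),
    ("winword.exe", "powershell.exe"),   # Office spawning a shell: classic triage hit
]

pair_counts = Counter(events)
total = sum(pair_counts.values())

# Stack-rank by rarity: the least common parent -> child pairs float to the top.
for (parent, child), n in sorted(pair_counts.items(), key=lambda kv: kv[1]):
    print(f"{n}/{total}  {parent} -> {child}")
```

The same rarity ranking applies to shimcache and amcache entries once they are parsed into comparable records.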
How to Hunt for Lateral Movement on Your Network (Sqrrl)
Once inside your network, most cyber-attacks go sideways. They progressively move deeper into the network, laterally compromising other systems as they search for key assets and data. Would you spot this lateral movement on your enterprise network?
In this training session, we review the various techniques attackers use to spread through a network, which data sets you can use to reliably find them, and how data science techniques can be used to help automate the detection of lateral movement.
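One simple data-science technique for this, sketched below with hypothetical logon events (illustrative only, not Sqrrl's implementation), is to build an account-to-host logon graph and flag accounts whose fan-out exceeds a baseline:

```python
from collections import defaultdict

# Hypothetical remote-logon events (account, source_host, dest_host),
# e.g. derived from Windows 4624 type-3 logons.
logons = [
    ("svc_backup", "hostA", "hostB"),
    ("svc_backup", "hostA", "hostC"),
    ("svc_backup", "hostA", "hostD"),
    ("alice", "wks1", "fileserver"),
]

dests = defaultdict(set)
for account, src, dst in logons:
    dests[account].add(dst)

FANOUT_LIMIT = 2  # in practice, tune to each account's historical baseline
for account, hosts in dests.items():
    if len(hosts) > FANOUT_LIMIT:
        print(account, "reached", len(hosts), "hosts:", sorted(hosts))
```

Richer versions weight edges by time and credential type, but even raw fan-out often exposes an account being used to pivot.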
This presentation covers different techniques for achieving persistence in a Windows environment, as well as how they can be detected. The talk was delivered at the Null Bangalore chapter meetup on 15th February 2019.
This presentation introduces the MITRE ATT&CK Framework, covering different use cases and pitfalls to watch out for. The talk was delivered at the Null Bangalore and OWASP Bangalore chapter meetup on 15th February 2019.
Open source network forensics and advanced pcap analysis (GTKlondike)
Speaker: GTKlondike
There is a lot of information freely available on the internet to get network administrators and security professionals started with network analysis tools such as Wireshark. However, that coverage rarely goes far in depth. This intermediate-level talk aims to bridge the gap between a basic understanding of protocol analyzers (e.g., Wireshark and tcpdump) and practical real-world usage. Topics covered include network file carving, statistical flow analysis, GeoIP, exfiltration, limitations of Wireshark, and other network-based attacks. It is assumed the audience has working knowledge of protocol analysis tools (e.g., Wireshark and tcpdump), the OSI and TCP/IP models, and major protocols (e.g., DNS, HTTP(S), TCP, UDP, DHCP, ARP, IP).
Bio
GTKlondike is a hacker and independent security researcher with a passion for network security, both attack and defense. He has several years' experience working as a network infrastructure and security consultant, mainly dealing with switching, routing, firewalls, and servers. Currently attending graduate school, he is constantly studying and learning new techniques to better defend or bypass network security mechanisms.
Get Real-Time Cyber Threat Protection with Risk Management and SIEM (Rapid7)
The 2012 Verizon Data Breach Investigations Report quantified the sharp increase in cyber threats, noting that 68% were due to malware, up 20% from 2011. Most concerning is that 85% of breaches took weeks or more to discover. Despite the focus on threat prevention, breaches will happen. In this environment, the ability to identify risk, protect vulnerable assets and manage threats becomes critical. Learn how these combined solutions can help your organization identify behavioral anomalies and internal and external threats, and prevent breaches based on accurate enterprise security intelligence.
To download a free Nexpose demo, click here: http://www.rapid7.com/products/nexpose/compare-downloads.jsp
How to protect your corporation from advanced attacks (Microsoft)
Cybersecurity is a top priority for CSOs/CISOs, and the budget allocated to it, especially in large organizations, is growing. The complexity and sophistication of cyber threats are increasing. What are these current threats, and how can Microsoft help your organization in its efforts to eliminate them?
Malware Analysis 101: N00b to Ninja in 60 Minutes at BSidesDC on October 19, ... (grecsl)
Knowing how to perform basic malware analysis can go a long way in helping infosec analysts do some basic triage, to either crush the mundane or recognize when it's time to pass the more serious samples on to the big boys. This presentation covers several analysis environment options and the three quick steps that allow almost anyone with a general technical background to go from n00b to ninja (;)) in no time. Well … maybe not a "ninja" per se, but the closing does address follow-on resources on the cheap for those wanting to dive deeper into the dark world of malware analysis.
SplunkLive! Stockholm 2015 breakout - Analytics-based security (Splunk)
Splunk products provide a flexible and fast security intelligence platform that makes security personnel and processes more efficient by providing quick, flexible access to all of the data and information needed to detect, investigate and remediate threats. This presentation discusses best practices for building out or enhancing an analytics-based security strategy, and how Splunk products can make people, process, and technology work better together. Presented at SplunkLive! Stockholm, October 2015. For more information, please visit http://live.splunk.com/stockholm
A brief introduction to cyber war and its methods, which may be called a "cyber warfare introduction". I have good knowledge of this domain and practically follow these methods. In this presentation I cover about 50% of the reference material; the rest will be completed in my next upload. Please give feedback if you have any suggestions. Thank you.
Hacker Halted 2014 - Why Botnet Takedowns Never Work, Unless It’s a SmackDown! (EC-Council)
Why Botnet Takedowns Never Work, Unless It’s a SmackDown!
If organizations are truly working to limit Internet abuse and protect end users, we need to take a more thoughtful approach to botnet takedowns – or the bots will once again rear their ugly heads.
There are three main causes of ineffective takedowns:
1. The organizations performing botnet takedowns do so in a haphazard manner.
2. They do not account for secondary communication methods, such as peer-to-peer or domain generation algorithms (DGAs), that the malware may use.
3. The takedowns do not result in the arrest of the malware actors.
So what does a successful botnet takedown actually look like? In his presentation on Botnet SmackDowns, Brian Foster, CTO of Damballa, will share with attendees how to effectively take down botnets for good. The only way botnet takedowns will have a lasting impact on end-user safety is if security researchers use a comprehensive and systematic process that renders the botnet inoperable.
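Detecting DGA-generated domains, one of the secondary channels mentioned above, is often approached by scoring how random a domain label looks. A toy heuristic follows (illustrative only; production detectors use trained models over many more features):

```python
import math
from collections import Counter

def dga_score(domain: str) -> float:
    """Crude DGA-likeness: high character entropy, few vowels, long label."""
    label = domain.split(".")[0].lower()
    if not label:
        return 0.0
    counts = Counter(label)
    n = len(label)
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    vowel_ratio = sum(label.count(v) for v in "aeiou") / n
    length_factor = min(n / 15, 1.0)
    # entropy / 5.0 roughly normalizes against log2(26) ~ 4.7 bits max
    return entropy / 5.0 * (1.0 - vowel_ratio) * length_factor

for d in ["google.com", "xkqzjvwpqhrtmbn.net"]:
    print(d, round(dga_score(d), 2))
```

Takedown teams that pre-register or sinkhole high-scoring domains remove the fallback channel that otherwise lets a botnet resurrect itself.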
MMIX Peering Forum and MMNOG 2020: Packet Analysis for Network Security (APNIC)
APNIC Senior Network Analyst/Technical Trainer Warren Finch presents on packet analysis for network security at the MMIX Peering Forum and MMNOG 2020 in Yangon, Myanmar, from 13 to 17 January 2020.
SHOWDOWN: Threat Stack vs. Red Hat AuditD (Threat Stack)
Traditionally, people have used the userland daemon auditd, built by some good Red Hat folks, to collect and consume Linux audit data. However, there are a couple of problems with traditional open source auditd and its libraries that we’ve had to deal with ourselves, especially when trying to run it on performance-sensitive systems and make sense of the sometimes obtuse data that traditional auditd spits out. To that end, we’ve written a custom audit listener from the ground up for the Threat Stack agent (tsauditd).
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
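As a sketch of what such automated checks can look like (the column names and rules are invented for illustration), a small declarative rule table applied at ingest catches bad rows before they propagate downstream:

```python
# Minimal declarative data-quality check, run at ingest time.
# Column names and rules are invented for illustration.
RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "country": lambda v: isinstance(v, str) and len(v) == 2,
    "revenue": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(rows):
    """Return (clean_rows, errors); each error names the row and failing columns."""
    clean, errors = [], []
    for i, row in enumerate(rows):
        bad = [col for col, ok in RULES.items() if not ok(row.get(col))]
        if bad:
            errors.append((i, bad))
        else:
            clean.append(row)
    return clean, errors

rows = [
    {"user_id": 1, "country": "US", "revenue": 9.5},
    {"user_id": -3, "country": "USA", "revenue": 2.0},  # two violations
]
clean, errors = validate(rows)
assert len(clean) == 1 and errors == [(1, ["user_id", "country"])]
```

Rejecting (or quarantining) rows at the source like this is what keeps the downstream lineage trustworthy.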
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that drives data-driven decisions and maximizes the return on their data investment.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... – pchutichetpong
M Capital Group (“MCG”) expects demand to keep rising and supply to evolve, driven by institutional investment rotating out of offices and into work from home (“WFH”) and by the ever-expanding need for data storage as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
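As a quick sanity check on that figure, 13% annual growth compounds to roughly a 63% cumulative increase over the four-year horizon:

```python
def compound(rate: float, years: int) -> float:
    """Cumulative growth factor for a constant annual growth rate."""
    return (1 + rate) ** years

# MCG's expected 13% annual growth over the next 4 years:
factor = compound(0.13, 4)
assert 1.63 < factor < 1.64  # i.e. roughly 63% cumulative growth
```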
While competitive headwinds remain – exemplified by the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services” – the industry has seen key adjustments, and MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment, will drive market momentum forward. The continuous injection of capital by alternative investment firms, as well as growing infrastructure investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x by value in 2026, will likely help propel data center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Adjusting primitives for graph: SHORT REPORT / NOTES – Subhajit Sahu
Graph algorithms, like PageRank, often operate on Compressed Sparse Row (CSR), an adjacency-list-based graph representation.
Multiply with different modes (map)
1. Performance of sequential vs OpenMP-based vector multiply.
2. Comparing various launch configs for CUDA-based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential vs OpenMP-based vector element sum.
2. Performance of memcpy-based vs in-place CUDA vector element sum.
3. Comparing various launch configs for CUDA-based vector element sum (memcpy).
4. Comparing various launch configs for CUDA-based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA-based vector element sum (in-place).
1. Novetta Cyber Analytics
Scott Van Valkenburgh
Manager, Product Marketing
svanvalkenburgh@novetta.com 512.284.4091 11.24.2014
2. Everyone is Being Breached
novetta.com 2
NETWORK BREACHES: 66% go undiscovered for months; 70% are discovered by people outside your network.
3. Why?
IPSs, IDSs, Firewalls: too rigid and have serious blind spots.
Network Capture Tools: too slow and/or don’t make the right data available to analysts.
SIEMs (log book): capture and analyze inherently untrustworthy data.
4. A Complete Picture of the Ground Truth
[Architecture diagram: sensors (including legacy sensors) tap traffic behind the Internet, router, and firewall, alongside the SIEM, IDS/IPS, DLP, and ATP. Packet capture feeds the Cyber Analytics Hub’s Batch Ingest and Pre-Processing Modules and its Network Ingestion and Analytics Engine, which stores the extracted metadata. Analysts work through the web interface, while SIEMs and custom workflows use the API interface; the PCAP archive remains at the sensors.]
* PCAP is stored at sensors and is instantly retrievable when needed for deeper inspection
5. Why We’re Different – A Complete Picture in Near Real-time
See threats as they occur. Choose which ones to go after before the damage is done. Developed for agencies within the US government.
[Chart: Team & Infrastructure Effectiveness against depth of data captured – Common Netflow Based Solutions (Sampled NetFlow: NOT ENOUGH), Novetta Cyber Analytics (Intelligent & Selective Metadata Extraction: OPTIMAL FOR ANALYSIS), Leading Security Analytics Solution, good for forensics (Content Unraveling: TOO MUCH).]
6. How it Works – System Summary
[Diagram callouts: (1) Sensors tap the network behind the Internet, router, and firewall, alongside the SIEM, IDS/IPS, DLP, and ATP; (2) PCAP Data, for preprocessing; (3) Security-specific MetaData, for a clean and consolidated view of the network; (4) Analytics Engine, with 70+ pre-built analytical searches that look for suspicious behaviors – or build your own queries.]
7. How it Works – At the Core
[Diagram callouts: (1) Sensors tap the network (Internet, router, firewall, SIEM, IDS/IPS, DLP, ATP); (2) PCAP Data, for preprocessing; (3) Security-specific MetaData – roughly 1% of total PCAP data – for a clean and consolidated view of the network; (4) Analytics Engine.]
8. How it Works – Contextualization
[Diagram: a session between 1.2.3.4 and 5.6.7.8 as the analyst sees it. Session details: service FTP, protocol TCP, duration 47 sec, bytes to/from server, TCP flags, packet counts. One host: role client, port 4754, domains RuVPS123.com and Private.RuVPS.com, geo Moscow, RU. Other host: role server, port 21, domain ftp-prod2.largeco.com, geo DC, USA. Pivots available: related sessions and IPs (overlapping sessions, common IPs, associated IPs via hopfinder), traffic analysis, searchable content, extract content, export selected PCAP, third-party forensics.]
9. How it Works – Top 10 Analytics
Of 70+ and always growing: Beacon, Distant Admin, HTTP(s) Exfiltration, Protocol Abuse, RDP Keyboard Layout, Relay Finder, Suspicious Admin Toolkits, 2 Degrees of Separation, Unknown Service, Port Scanners.
Analysts get the whole picture.
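As an illustration of the kind of signal a "Beacon" analytic can key on (a generic sketch, not Novetta's implementation): malware check-ins tend to recur at suspiciously regular intervals, so low variance in inter-arrival gaps flags a candidate beacon:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Flag a connection series whose inter-arrival times are suspiciously regular.

    Generic sketch of beacon detection, not Novetta's implementation: compute
    the gaps between consecutive connections and flag the series if the gaps'
    relative standard deviation is below max_jitter.
    """
    if len(timestamps) < 4:
        return False  # too few observations to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps) < max_jitter

# A host checking in every ~300 s vs. ordinary bursty user traffic:
beacon = [0, 300, 601, 899, 1200, 1499]
user = [0, 5, 250, 260, 900, 905]
assert looks_like_beacon(beacon) and not looks_like_beacon(user)
```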
10. Results
See threats as they occur. Choose which ones to go after before the damage is done. Developed for agencies within the US government.
[Chart, repeated from slide 5: Team & Infrastructure Effectiveness against depth of data captured – Common Netflow Based Solutions (Sampled NetFlow: NOT ENOUGH), NOVETTA Cyber Analytics (Intelligent & Selective Metadata Extraction: OPTIMAL FOR ANALYSIS), Leading Security Analytics Solution, good for forensics (Content Unraveling: TOO MUCH).]
11. Results
Estimated 30x gain for incident response
Near real-time ability to respond to attacks
Drastically improved security team effectiveness
13. Proven Effectiveness
DEVELOPED TO SECURE the largest and most attacked networks on earth
14. Case Study – US DOD Agency 1
Problem: Constant Ongoing Breaches
• Wanted to stop attacks.
• Leading security tools could not provide the visibility, speed, and flexibility they needed to respond quickly to incidents or discover malicious behavior.
Solution: Novetta Cyber Analytics
• Uncovered known malicious activity
• Discovered unknown attacks
• Queries that had taken hours were now taking seconds
• Estimated 30x the number of incidents-responded-to
Overview: Sensors: 4; Analytics Hub: 32 nodes; Users: 200+; PCAP Analyzed: 13 TB; Metadata Stored: 1.5 TB
Now the cornerstone tool for their threat response team
15. Case Study – US DOD Agency 2
Problem: Known Breaches
• Wanted to know WHO was attacking their network, WHY, and WHAT methods were used.
• Leading security tools could not provide the visibility, speed, and flexibility they needed to respond quickly to incidents or discover malicious behavior.
Solution: Novetta Cyber Analytics
• Uncovered known malicious activity
• Discovered unknown attacks
• Queries that had taken hours were now taking seconds
Overview: Sensors: 4; Analytics Hub: 32 nodes; Users: 200+; PCAP Analyzed: 13 TB; Metadata Stored: 1.5 TB
Now the cornerstone tool for their threat response team
16. Summary
Novetta Cyber Analytics
The cornerstone tool for the largest and most attacked networks on earth
Near real-time analysis: 30x incident response
Respond to attacks as they occur
Figure out what and why
Dramatically improve overall security team effectiveness
19. A Real World Breach Story
With enough time, an attacker will find a way in—and out
[Diagram spanning Attacker Infrastructure (attacker local machine, compromised Internet hosts, attacker drop sites, anonymous Internet sharing sites), Contractor Infrastructure (contractor laptop), and Enterprise Infrastructure (email server, Windows file server, contractor maintenance web server, database server, internal FTP server, internal server).]
1. Performs active and passive reconnaissance – slow randomized port scanning avoids real-time IDS port-scanning alarms.
2. Spear phishes third-party contractor to steal login credentials.
3. Uses stolen login credentials to access Maintenance Web Server.
4. Executes SQL injection attack to gain admin-level access.
5. Moves laterally to increase privileges and search for valuable data.
6. Uses cracked passwords from Maintenance Server to gain access.
7. Finds database server and dumps sensitive records.
8. Sends stolen data here for staging.
9. Sends stolen data to external drop sites.
10. Anonymously retrieves data from drop sites.
Why nothing stopped it: not covered by the Contractor’s employee training or security technologies; perimeter defenses bypassed with username and password; SIEM alerts dismissed by an overwhelmed security team; low-priority SIEM alerts again ignored; further increase in privileges enabled bypass of the DB perimeter; logs changed to bypass high-priority SIEM alerts; a NetFlow-focused tool triggers alerts, but the analyst doesn’t have enough detail; contents encrypted by the attacker and external sites not blacklisted. In the end, a customer informs the company about the breach, and it becomes a viral news story.
20. Same Story with Novetta Cyber Analytics
Anomalous behavior detected at almost every step
[Same diagram spanning Attacker, Contractor, and Enterprise Infrastructure.]
1. Performs active and passive reconnaissance.
2. Spear phishes third-party contractor to steal login credentials.
3. Uses stolen login credentials to access Maintenance Server.
4. Executes SQL injection attack to gain admin-level access.
5. Moves laterally to increase privileges and search for valuable data.
6. Uses cracked passwords from Maintenance Server to gain access.
7. Finds database server and dumps sensitive records.
8. Sends stolen data here for staging.
9. Sends stolen data to external drop sites.
10. Anonymously retrieves data from drop sites.
Detections along the way: the Port Scanner analytic identifies and tags suspicious IP addresses; the spear phish occurs on the Contractor’s network, outside the end-target enterprise; the Geolocation analytic detects foreign server access or interactions out of subnet; HTTP analysis can reveal attack attempts by volume; the Unknown Service analytic detects anomalous lateral movement; the Protocol Abuse analytic detects anomalous lateral movement and tags the Windows File Server; the Traffic Summary analytic reveals connections between unrelated internal hosts, and again reveals uncommon connections; the HTTP Exfil analytic detects data moving to known anonymous drop sites. The attack would never get this far.
21. Network Security Landscape
(WHERE vs. WHEN)
Network Traffic (e.g. websites and email) – Post-compromise forensics: Forensics, DPI (RSA, Blue Coat); Real-time and near real-time analysis: Netflow analysis (Lancope, Arbor) and security-specific metadata analysis (Novetta).
Traffic Payloads (e.g. attached files) – Sandboxing (FireEye, McAfee, Check Point).
Endpoints (e.g. user machines and servers) – Forensics, host-level change monitoring (Bit9, Carbon Black); Application whitelisting and monitoring (Bromium, Sandboxie).
22. Current Solutions | Incident Response
Reaction, Investigation, Analysis, Conclusion
Tedious, labor-intensive investigation
• Days of wrangling data for multiple people
Has enough been done? Attackers may have covered their tracks
• We don’t know, because of the manual tools used for analysis and the incomplete data
Output
• Best-effort timeline of events
• Incomplete findings report with recommendations
• Partial list of external actors and impacted machines
CISO Confidence: Low
Analyst Job Satisfaction: Low
23. Novetta Cyber Analytics | Incident Response
Reaction, Investigation, Analysis, Conclusion
Thoughtful, interesting investigation
• Handful of hours for a single Tier 1 analyst
Complete high-level visibility
Detailed low-level information on activities
High confidence in analysis
Output
• Complete timeline and full report
• Lists of all external actors
• Complete, exhaustive list of impacted machines
• Full packet capture
• New custom analytics, enhanced tribal knowledge
CISO Confidence: High
Analyst Job Satisfaction: High
Editor's Notes
[Hi, I’m ________, <enter job title>, from Novetta Solutions. Thanks for attending. For over 10 years, Novetta has specialized in applying advanced analytics to solve organizations’ most complex problems. Our customers benefit every day from making better data driven decisions.
In the next few minutes, you’re NOT going to see just another of the latest and greatest cyber security tools. What you will see is a completely unique approach to pro-actively responding to suspicious traffic on your network…a solution that’s already being used on the front lines by many government agencies today.
If you have any questions during the presentation, I’ll be happy to answer them as they arise. But first, let’s get to the real reason we’re all here.]
[Let’s be honest, while the current generation of network security tools have been effective at stopping known threats that are signature based, they have NOT been effective at stopping advanced persistent threats. We see evidence of this in newspaper headlines nearly any given day at this point. And it’s why I am here today. So why is that?
[NEXT SECTION]
To begin, 2/3rds of network breaches still go undiscovered for months!
[NEXT SECTION]
And when they are finally discovered, fully 70% are reported by people outside the network. (Optional) No one wants to find out about a breach from someone else…especially not a customer.
Yet the industry’s response thus far has been to improve on the tools you already have. Has there been success here? Yes. Is there still room for improvement?
“Absolutely”. Current tool sets haven’t worked against the kind of advanced persistent threats commonplace today. Our goal here today is for you to leave with an understanding of why this is the case, and how we’re different.
[Your perimeter defenses – IPSs, IDSs, firewalls – have a job to do. And they do that job well: identify and protect against known threats. But when it comes to the scale and complexity of identifying and protecting against unknown, non-signature-based, advanced persistent threats, these solutions fall short. They are too rigid and have blind spots. Hackers have access to these solutions and simply design ways to get around them.
[NEXT SECTION]
SIEMs can fill some of the gap. Problem is, they capture and analyze only inherently corruptible event and log data. Bad actors simply edit the events and logs to make it look like they were never there. Like a burglar wiping his fingerprints from the door knob.
[NEXT SECTION]
Then there’s network based tools – the ones that capture the ground truth. Basically, the big problem here is twofold. First, most of these tools are really forensics tools. So the data – while extremely useful – takes too long to become available to respond to while an incident is underway. They’re too slow to be useful outside of targeted investigation scenarios.
The second class while fast enough, are too high level to be of significant value to analysts. To fully understand this particular problem, we need to take a closer look at sensor placement: (Intro New Slide)]
[Strategic Sensor Placement is the name of the game. Like I said, to fully understand the problem, we need to jump ahead a minute to show you how we’re doing it.
Novetta Cyber Analytics can be deployed in three different configurations.
The first and most common configuration is for us to deploy Novetta sensors within your organization’s network – these are sensors you see here. They’re made of commodity hardware running Novetta’s packet capture and pre-processing software. They serve to passively collect network traffic, extract metadata from the traffic, and push the metadata to a centralized Analytics Hub. Within the Analytics Hub the system executes analytical searches on the metadata. Analysts interact with the system using our web interface, and external applications such as SIEMs can interact directly with the Analytics Engine via our APIs. When interesting behavior is found within the metadata, the analyst or external application can get immediate and direct access to the original packet capture stored on the sensors.
[CLICK]
The second option is to leverage existing “legacy” packet capture devices that are already in place. Some organizations have existing traffic monitoring tools, both in the purchased and do-it-yourself categories, so it makes sense to reuse this capability. Using our Batch Ingest Module, the Cyber Analytics Hub can bring in live streams of packet capture from other devices. We extract metadata from this traffic and merge it with the other data in the Analytics Engine.
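The kind of compact session record a sensor might push to the hub can be sketched as follows (the field choice, byte counts, and `pcap_ref` pointer are illustrative assumptions, not Novetta's actual schema; the port, protocol, service, and duration values echo the contextualization example elsewhere in the deck):

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    """The kind of session metadata a sensor might push to the hub.

    Field choice is illustrative, not Novetta's actual schema; the point is
    that a compact record replaces the raw packets for large-scale analysis,
    while the PCAP itself stays on the sensor.
    """
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    service: str
    start: float         # epoch seconds
    duration: float      # seconds
    bytes_to_server: int
    bytes_to_client: int
    pcap_ref: str        # pointer back to the full capture on the sensor

rec = SessionRecord("1.2.3.4", "5.6.7.8", 4754, 21, "TCP", "FTP",
                    1416787200.0, 47.0, 18_400, 2_100, "sensor-3:chunk-9912")
assert rec.service == "FTP" and rec.duration == 47.0
```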
[Let me repeat. Today’s tools have some effectiveness:
Mostly Forensics tools on the far side of the curve here. But on the front side of the curve, here, the netflow-based solutions falter. For analyzing network data and network performance issues they’re great. But the biggest problem is that most of these aren’t purpose-built for security. It’s at best a secondary utility.
So, you end up with information that’s inadequate or too high level – they may point in the right direction, but generally just do not provide enough information to actually do complete analysis, forcing an analyst to look to other tools, systems and databases to piece together what is actually happening. So, most analysts and incident responders spend most of their time wrangling data … an extremely time consuming and tedious process.
While the forensics based tools fall here on the other side. Here the issue is that the sheer volume of data they collect is simply too much to do real-time analysis on to find attacks as they’re in progress. They're doing a great job on the whodunit but a not-so-great job on who’s-doing-it-now. If an analyst has to wait hours for a result set, they are nowhere near working at the speed of thought and can’t get the information they need.
In fact, most analysts find themselves spending more time gathering data from different sources than analyzing it. They simply can’t do a whole lot to detect attackers when it matters…while it’s happening.
So where’s that leave you? Well, you’ve read the headlines. It’s leaves you cleaning up the mess. With overwhelmed analysts and incident responders. And the damage already done.
What you need is a tool that directly monitors nearly all of your network traffic – capturing enough data to give you the intelligence to respond while an attack’s underway…forensic-like in its scale and depth but with a netflow tool’s speed.
That’s where Novetta Cyber Analytics fits in. Novetta captures nearly all the information running across a network, so you can see threats in near real-time, and choose which ones to go after before the damage is done.
How do we provide the scale and speed needed? For now, let me explain that fundamentally, Novetta is not trying to force fit yesterday’s solution to today’s problem. Our solution was purpose built to analyze massive quantities of network data. We sessionize, instrument, and create intelligent meta-data from the raw packet capture, maximizing team infrastructure and effectiveness. (We can review our architecture in detail in a follow-up meeting as needed.)
We like to think of it as getting the ground truth.
And this isn’t more pie-in-the-sky promises: it’s already proven. It’s a central tool being used by various departments within the US government. We originally developed this FOR these agencies and it’s made them more effective and efficient as a result.
Now, we’re making it available to private enterprise so you can benefit the same.]
[So what are we doing different? In its simplest form, we put in place sensors that tap the traffic at different locations on your network. These sensors then capture all the PCAP data for preprocessing.
This captured data becomes the foundation of the Novetta Cyber Analytics hub. The raw data feeds a data model that supports high-end analytics.
[CLICK]
Basically, we then extract key metadata and attributes from that capture traffic – the data that will be most valuable for large scale analysis. Analysts receive a clean and consolidated view of all network conversations with key fields that will be most useful to them. It’s really an act of translation where we’re making the conversations between hosts understandable to humans. The original PCAP is retained, indexed, and stored for later analysis.
[CLICK]
The pre-processed information is then made available to analysts using our Analytics Engine, with more than 70 pre-built automatic and manual analytical searches that can be executed to look for suspicious behaviors. It’s worth noting, these searches have been created by some of the top minds in the field today – the people on the frontlines of the industry. ]
So, at the very core of what we do, we are simply taking raw PCAP data and enabling humans to understand it and run queries against it in near real-time. Put more simply, we enable analysts to ask and receive answers to subtle questions at the speed of thought.
[So you can see the important role Novetta Cyber Analytics near real-time analysis plays for analysts and incident responders. Your current infrastructure still provides value. But we provide an added layer of security.]
Need to unhide. Adapt text from the old deck for this slide.
Here’s the text:
The following is an example of how traffic analysis is performed within Novetta Cyber Analytics. We’ll see how Cyber Analytics anticipates the needs of an Incident Responder and provides them the contextual information they need to perform network traffic analysis.
[Step in – show IP addresses]
Traffic analysis starts as a conversation between two IP addresses. In this example we have two example IP addresses, 1.2.3.4 and 5.6.7.8.
[Step in – show ports]
First we show port information. We see here that the host on the right is using the standard File Transfer Protocol (FTP) command port 21 and the host on the left is using a non-standard port 4754.
[Step in – show service]
Next we show the service being used for the session. In this case it is indeed FTP. Cyber Analytics uses proprietary service decoders and parsers, built starting from RFCs, then customized based on real-world observation and adversary-specific traffic patterns.
[Step in – show protocol]
We show the protocol being used (TCP, UDP, etc.). The FTP service would always occur over TCP, but for other services (e.g. DNS) the protocol is more relevant.
[Step in – show session duration]
Next we show the duration of the session in seconds to tell the analyst how long the session between these hosts remained active.
At this point we’ve shown the analyst generally what NetFlow provides – high-level information about the bi-directional conversation. Unfortunately, NetFlow typically only gets an analyst to the point of frustration when they need more information, or even the raw packet capture, to understand the context of the exchange.
[Step in – show client/server designations]
Next we show client/server designations for the hosts. We determine these roles by performing proprietary statistical analysis to decide, based on the communication and behavior, which host is acting more like the client and which is acting more like the server. This distinction helps to immediately orient the analyst to show which host is making requests and which is responding.
How does this simulate what the Incident Responder has to do already? The analyst normally has to review the traffic and make their own determination about client and server based on source and destination information, traffic pattern, the volume of data being exchanged, and the service.
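A crude stand-in for that determination (not Novetta's proprietary statistical analysis) might weigh well-known ports and data volume; the endpoint dicts and byte counts below are invented for illustration:

```python
def guess_roles(a, b):
    """Crude stand-in for client/server determination (not Novetta's method):
    prefer the endpoint on a well-known port, breaking ties by who sent
    more data, since servers usually send the bulk of it."""
    def score(endpoint):
        return (endpoint["port"] < 1024, endpoint["bytes_sent"])
    server = a if score(a) > score(b) else b
    client = b if server is a else a
    return {"server": server, "client": client}

# Invented endpoints echoing the FTP example: port 21 vs ephemeral 4754.
ftp = {"ip": "1.2.3.4", "port": 21, "bytes_sent": 18_400}
vps = {"ip": "5.6.7.8", "port": 4754, "bytes_sent": 2_100}
roles = guess_roles(vps, ftp)
assert roles["server"] is ftp and roles["client"] is vps
```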
[Step in – show domain names]
Next we show the domains that have been linked to these IP addresses. Cyber Analytics links IPs to both passively collected DNS and subscription domain information to make the picture as complete as possible for the analyst. This passive DNS capability is very powerful and not commonly found in network security solutions.
How does this simulate what the Incident Responder has to do already? The analyst would normally have to manually map IP addresses to domain names, either via ‘nslookup’ or by using a DNS look-up utility. They would miss the 1-to-N DNS mappings that we provide by passively collecting DNS. So for example, if the external host IP address changes domains frequently, we would show the analyst that the IP address is associated with multiple domains, which would provide additional context for their investigation.
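A toy passive-DNS store (illustrative only, not the product's implementation) shows that 1-to-N mapping in action: observations accumulate over time, so a single IP can surface every domain that has ever resolved to it, which a live lookup cannot:

```python
from collections import defaultdict

class PassiveDNS:
    """Toy passive-DNS store, illustrative only.

    Every observed DNS answer is recorded, so one IP accumulates all the
    domains that have ever resolved to it: the 1-to-N mapping a one-shot
    nslookup can't show.
    """
    def __init__(self):
        self._domains_for_ip = defaultdict(set)

    def observe(self, domain: str, ip: str):
        self._domains_for_ip[ip].add(domain)

    def domains(self, ip: str) -> set:
        return set(self._domains_for_ip[ip])

pdns = PassiveDNS()
pdns.observe("RuVPS123.com", "5.6.7.8")       # answer seen last week
pdns.observe("Private.RuVPS.com", "5.6.7.8")  # answer seen today
assert pdns.domains("5.6.7.8") == {"RuVPS123.com", "Private.RuVPS.com"}
```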
[Step in – show Geolocation]
Another augmentation source we add is Geolocation data, which enables the system to add city, state, and country location data to domains and IP addresses. So we learn that our server is in Washington, DC, and our client is in the Moscow district of Russia.
How does this simulate what the Incident Responder has to do already? Incident Responders don’t normally do this, but if they do they are likely using a free online utility for performing IP geolocation. By automatically adding this to the interface we reduce that manual look-up that they would have to perform.
[Step in – show distance in nautical miles]
Geolocation data in the form of latitude and longitude also enables the calculation of distance between the client and server. This is very useful for analytical queries, such as finding all high privileged or administrative traffic where the client and server are further apart geographically than one would expect.
Why nautical miles? In order to provide a universal distance measure between two points on the globe, we chose to measure in nautical miles as the crow flies.
How does this simulate what the Incident Responder has to do already? Incident Responders don’t normally have this capability. If they wanted to calculate distance between two geographic points they would likely have to use Wolfram Alpha or a similar online tool. By automatically adding this to the interface we reduce that manual look-up that they would have to perform.
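The distance itself can be computed from latitude and longitude with the standard haversine formula; this sketch returns nautical miles, and the Washington, DC and Moscow coordinates are approximate:

```python
from math import radians, sin, cos, asin, sqrt

def distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles via the haversine formula."""
    EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_NM * asin(sqrt(a))

# Washington, DC to Moscow (coordinates approximate):
d = distance_nm(38.9, -77.0, 55.75, 37.6)
assert 4100 < d < 4400  # roughly 4,200 nautical miles
```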
[Step in – show IP block owners]
Another source of augmentation data is IP block owners, which in addition to domain names helps nail down the owner of IP addresses.
How does this simulate what the Incident Responder has to do already? An Incident Responder would have to run a manual command line ‘whois’ query to get this information. They would likely then copy and paste the information they find into their scratchpad for the investigation.
[Step in – show Threat Lists]
If an IP address or domain is identified as a known threat or is on one of our threat lists, the interface will reveal this to the analyst. This would be an immediate indication that this traffic merits investigation.
How does this simulate what the Incident Responder has to do already? An Incident Responder might search their internal spreadsheets of known threats or might search through free online open source databases. Since neither of these are complete sources of threat information (versus a paid subscription to a threat list) that manual look-up will only yield partial information at best.
[Step in – show custom tags]
Through the use of custom data tags, sessions and IP addresses (and more data elements later) can be tagged with custom tags or labels that persist in the Analytics Hub. The web interface shows these tags, which in this example would reveal that the server is a website FTP server that belongs to the IT department.
How does this simulate what the Incident Responder has to do already? For enterprise assets, the Incident Responder likely has a separate asset inventory list or a spreadsheet of information that they reference during the day. By tagging IP addresses with labels the system brings that asset inventory into the analytical system so they don’t need to manually look up the subnet or department for a particular host.
For threats, the Incident Responder likely has a spreadsheet or shared wiki page where known threats are tracked. By tagging known bad IP addresses the system fuses that intelligence into the network traffic, both enabling greater sharing of that information and empowering analysts to easily execute searches on categories of known bad actors.
[Pause and summarize]
So pausing here for a moment, if we zoom out a bit and look at the current session details and augmentation data, we see a corporate FTP server in the US communicating with what appears to be a Russian Virtual Private Server client that is on an Emerging Threat list. This is turning into an alarming scenario, but we’re not done with our analysis yet.
[Step in – show session details]
We provide session-level details such as bytes transferred, exact TCP flags used, packet counts, and more. This allows for deeper analysis and greater awareness of what occurred during the session.
How does this simulate what the Incident Responder has to do already? The Incident Responder normally doesn’t have access to this level of information. It may be available in a NetFlow collection tool, but is rarely meaningful without any other contextual information.
[Step in – show related sessions and IPs]
The analyst is also able to pivot in multiple ways as they investigate. Network traffic analysis often branches in multiple directions as leads are followed, and Cyber Analytics anticipates this need by making it easy to bring up overlapping sessions, common IP addresses between the client and server, and associated IPs that may have been hops from one host to the other.
How does this simulate what the Incident Responder has to do already? The Incident Responder normally doesn’t have access to this level of information.
[Step in – show packet capture]
In addition to all these details, Cyber Analytics retains the original packet capture indexed and compressed on the sensors. So when the analyst finds something of interest they can reach back to the network edge to get the PCAP with a single click on the interface. Then they can review the PCAP in Wireshark or forensic analysis tools.
How does this simulate what the Incident Responder has to do already? The Incident Responder normally doesn’t have access to this level of information. Or if they do, it takes a very long time to find the relevant packet capture because their existing system does not perform well at scale.
[Step in – show traffic analysis]
When there is a large timespan of traffic to analyze the interface provides a visualization for network traffic over time. This means that analysts can quickly identify spikes in traffic, outliers, and suspicious patterns of behavior just by looking at traffic volume.
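The spike-spotting idea behind that visualization can be made concrete with a small sketch. This is not Novetta code – the record format (timestamp, byte count) and the mean-plus-three-sigma threshold are illustrative assumptions:

```python
from collections import Counter

def traffic_spikes(flows, bucket_secs=300, sigma=3.0):
    """Bin flow records into fixed time buckets and flag volume outliers.

    `flows` is assumed to be an iterable of (timestamp, byte_count) pairs.
    Returns the start times of buckets whose volume is unusually high.
    """
    buckets = Counter()
    for ts, nbytes in flows:
        buckets[int(ts) // bucket_secs] += nbytes

    volumes = list(buckets.values())
    mean = sum(volumes) / len(volumes)
    var = sum((v - mean) ** 2 for v in volumes) / len(volumes)
    threshold = mean + sigma * var ** 0.5

    # Any bucket well above the average volume is a candidate spike.
    return sorted(b * bucket_secs for b, v in buckets.items() if v > threshold)
```

A real system would of course do this over indexed session metadata rather than an in-memory list, but the principle – aggregate by time, then look for outliers – is the same.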
[Step in – show searchable content]
If the analyst is interested in finding files within the session data, such as executables, documents, or images, they can use tools provided in the web interface to extract this content and search through it. They could then move these files to a sandbox or forensic analysis tool for deeper investigation.
How does this simulate what the Incident Responder has to do already? The Incident Responder normally doesn’t have access to this type of capability. If they did it would likely be on a separate access-restricted machine that they would need to access. Cyber Analytics brings this capability to them.
[Step in – show export selected PCAP]
Finally, when the analyst has found activity that definitely merits further investigation they can export the packet capture and have it sent to third-party forensics tools. Alternatively, third-party tools could integrate with Cyber Analytics using one of our APIs to access information from the Analytics Hub directly.
How does this simulate what the Incident Responder has to do already? The Incident Responder normally doesn’t have access to packet capture.
[Conclusion]
So looking at the scenario as a whole we can see how the analyst is able to move beyond NetFlow-level information and gain insight into network traffic by analyzing session metadata and the related contextual information added by augmentation data sources. An analyst would really struggle to pull together all this information on their own, so Cyber Analytics anticipates this need and brings the contextual information and traffic visibility to the analyst.
[Through the interface, analysts get the whole picture. Here’s a sample of what these queries can do:
1. Find beacons from infected hosts. Beaconing is the practice of sending short and regular communications to an external host to inform the external host that the client is alive, functioning, and ready for instructions. This analytic is useful because beaconing behavior is one of the first network-related indications of a malware infection.
2. Uncover remote, unauthorized ‘admin-like’ or Distant Admin access. This analytic finds network sessions between two endpoints where (a) the service/application being used is administrative in nature and (b) the endpoints are geographically far apart. (optional) The purpose of the analytic is to uncover remote unauthorized access to enterprise servers and workstations. An example of network behavior found by this analytic is Remote Desktop Protocol (RDP) traffic between a client in Japan and a server in Canada. If there is no administrator living or traveling in Japan, then there should be no remote access from that location.
3. Retrace an attacker’s path between hosts and relays. With Hop Finder, you can find internal and external hosts that were used by attackers while attacking a network. (optional) It takes as input a known hop point and finds other hop points based on the assumption that hop points are used concurrently. The purpose of the analytic is to retrace the path an attacker took by analyzing relationships between hosts in network traffic.
4. Find large uploads to remote servers, including data exfiltration. Normal web browsing sends more data from the server to the client than vice versa – large uploads to servers are uncommon. The HTTP(S) Exfiltration analytic finds unencrypted (HTTP) and encrypted (HTTPS) web traffic where the traffic ratio between the client and the server indicates a data upload to the server. An example of network behavior found by the analytic is stealth data theft using internet file-sharing sites as drop points. Attackers commonly use free file-sharing or dump sites such as Dropbox to anonymously transfer stolen files out of corporate networks.
5. Find slow, randomized port scans. The purpose of the analytic is to identify network scanning, which is part of an attacker's active reconnaissance activities. (optional) Attackers look for open ports and exposed/vulnerable services that they can exploit. If an analyst is able to identify port scanning early they will benefit by (a) identifying potential attackers as early as possible and (b) seeing what responses are sent back to the scanning attempts as this will help the analyst identify weaknesses.
6. Discover Protocol Abuse from traffic utilizing backdoor access/pathways. The purpose of the analytic is to uncover covert communication channels created by attackers. (optional) After a successful intrusion into a machine, attackers routinely set up backdoors or hidden access paths that give them direct and undetected access. A common technique is to tunnel communication through a common service port, such as port 80 (HTTP), because these ports are allowed by firewalls and other network security devices. An example of network behavior found by the analytic is reverse shell activity. A reverse shell is created when an attacker opens a command line shell connection from the victim machine to the attacking machine. It is called a reverse shell because the normal direction is usually the opposite – the client creates a connection to the server. This is effective because firewalls typically focus on blocking incoming traffic and allow all outbound traffic. If an attacker manages to compromise a machine and starts a reverse shell, especially on a common port (port 80 for web traffic), this activity often goes unnoticed since it is lost within the network noise.
7. Then there are sessions run from unexpected keyboard layouts. Administrators of corporate resources typically use keyboard layouts (e.g. US English) that are consistent with the primary locations for the enterprise. If non-standard keyboard layouts are observed, this could indicate unauthorized access to infrastructure by a foreign attacker. The RDP Keyboard Layout analytic summarizes Remote Desktop Protocol (RDP) sessions by the layout of the keyboard being used by the client.
8. Find suspicious sessions where the client is using a Remote Administration Toolkit (RAT) to interact with the server. There are many RATs, and they are often used by attackers to streamline or automate malicious actions. (optional) An example of activity found by the analytic is traffic related to the Poison Ivy RAT. Poison Ivy bypasses normal security mechanisms to secretly control programs, computers, and network connections. It gives an attacker nearly complete control over the infected computer and enables the following functionality: file upload and modifications, Windows registry changes, current process control, service control, remote shell execution, keylogging, screen grabbing, and password dumping. The tool is popular because it makes controlling a compromised machine easy.
9. Or find out more about suspicious behavior, such as clients using unknown services against servers that respond with known services. (optional) This type of behavior is suspicious because within a single session clients and servers typically interact using the same service, such as HTTP (web browsing) or FTP (file transfers). If a server uses a known service to respond to a client's unknown service request, this merits investigation.
10. And finally, find unknown services. An unknown service means that an application-specific service is being used or the traffic is abnormal and doesn't match a known application. The purpose of the analytic is to give the analyst visibility of network traffic that is uncommon, suspicious, and potentially malicious.
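To illustrate the first analytic in that list, the core of beacon detection is a regularity test on connection times. This is a minimal sketch under assumed inputs (sorted connection start times for one client/server pair), not the product's actual implementation:

```python
def looks_like_beacon(timestamps, min_events=6, max_jitter=0.1):
    """Heuristic: many connections with nearly constant inter-arrival
    times suggest automated check-ins rather than human activity.

    `timestamps` is assumed to hold connection start times (in seconds)
    for a single client->server pair, in ascending order.
    """
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap <= 0:
        return False
    # Relative spread of the gaps: low jitter means machine-like regularity.
    spread = max(gaps) - min(gaps)
    return spread / mean_gap <= max_jitter
```

Real malware often adds deliberate jitter to its check-in interval, which is why the tolerance is a tunable parameter rather than an exact-match test.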
Remember, this is just a small sample of the queries you’ll have access to. Your analysts can even run their own queries using our powerful Query Builder. Analysts can tag sessions, IP addresses, and domains with free-text tags and have those persistently stored in the system. This empowers analysts to share and augment their team’s collective tribal knowledge.
They’ll finally have a complete near real-time toolset to respond to threats as they occur.
That’s what we mean by getting the ground truth.]
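The HTTP(S) Exfiltration analytic mentioned above boils down to a byte-ratio test over session metadata. A hedged sketch, where the dict field names, ratio, and size floor are assumptions for illustration:

```python
def find_uploads(sessions, ratio=5.0, min_client_bytes=1_000_000):
    """Flag web sessions where the client sent far more data than it
    received -- the inverse of normal browsing and a possible sign of
    data exfiltration.

    Each session is assumed to be a dict carrying 'client_bytes' and
    'server_bytes' counters taken from session metadata.
    """
    flagged = []
    for s in sessions:
        sent, received = s["client_bytes"], s["server_bytes"]
        # max(received, 1) avoids division-by-zero-style edge cases when
        # the server sent nothing back at all.
        if sent >= min_client_bytes and sent > ratio * max(received, 1):
            flagged.append(s)
    return flagged
```

The size floor matters in practice: plenty of small POST requests invert the byte ratio harmlessly, so only large uploads are worth an analyst's attention.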
[But let’s get back to the big picture for a moment. So what do you get that other security systems on the bell curve can’t offer?
A 30X gain in efficiency for your analysts. The near real-time ability to respond to attacks. And drastically improved security team effectiveness. So you’re finally at the peak of the efficiency and analysis curve.]
[Now let’s take a direct look via this demonstration…]
[So we mentioned earlier this was developed for the US government. Here’s an example of the types of departments we support. Within each of these, we support various three-to-four-letter alphabet-soup agencies, including some with the most attacked networks on the planet. We can’t tell you which ones. But our contracts are public record, so if you’re so inclined you could look up all the different agencies we’re doing business with – though that wouldn’t tell you specifically what we’re doing with them. What I can tell you is that, beginning with a core DoD agency, Novetta Cyber Analytics is now, due to its effectiveness, being used as the main cyber defense tool in multiple DoD agencies, most if not all of which you would know if I were able to name them.]
[So, let’s talk a bit about the experience one agency within the DoD has had with our solution. Basically, in 2007, an agency was getting extremely frustrated with all of their tools after purchasing the leading SIEM, leading IPS, and leading analytics tools.
After constant breaches, they knew they needed something new. The major theme? Scale and Speed with raw PCAP data. They presented this problem to Novetta, and we developed and deployed Novetta Cyber Analytics.
[CLICK]
Within days of implementation, they discovered breaches they didn’t know existed, and within minutes they were able to triage and remediate these breaches.
The solution dramatically reduced their team’s time to investigate incidents. Queries that had taken hours (if they completed at all) were now taking seconds. They estimate that their teams are now handling up to 30 TIMES the number of incidents per analyst, significantly improving their overall security posture.
It has worked so well that multiple other DoD agencies have since deployed our solution, improving the cyber security efforts of the DoD as a whole.]
[Remember how we mentioned Novetta Cyber Analytics utility as a forensics tool as well? Here’s an example.
Another ABC agency wanted to know who was attacking them and why. They had years of PCAP data available, but traditional tools left them with more questions than answers.
[CLICK]
Within the first week of deploying Novetta Cyber Analytics, they uncovered the known cases of malicious activity…and many previously unknown attacks.
[CLICK]
The tool is now the cornerstone of their threat response team. (Can’t share more details – it’s secret.)]
[So, as you can see, there’s nothing in the cyber security space like Novetta Cyber Analytics. It’s the best way to dramatically increase events responded to by analysts by as much as 30x. Substantially reduce or eliminate damage from breaches. And create an overall far more effective and efficient security team.
Which brings us to our next step. Let us prove how well this works with your own data. Let us demo our solution using our test data or yours – we’ll even be happy to deploy a proof-of-concept system within your network. The choice is yours.
Slow & randomized port scanning and banner grabbing avoids automated IDS port scanning alarms
Not covered by employee training or security technologies
With a username and password, any and all perimeter defenses are bypassed.
Low priority SIEM alerts ignored.
Changed logs that would have triggered high priority SIEM alerts.
Low priority SIEM alerts again ignored.
Further increase in privileges enabled bypass of db perimeter
NetFlow-based and leading PCAP security-analytics queries trigger alerts, but the analyst can’t complete the investigation.
Overall defense assumed perimeter solid. Nothing monitoring for exfiltration.
A customer tells the company they’ve been breached. Then it hits the press.
Which brings us to our next step. Let us prove how well this works with your own data. Let us demo our solution using our test data or yours – we’ll even be happy to deploy a proof-of-concept system within your network. The choice is yours.
Port Scanner query identifies & tags suspicious IP addresses before breach
Nothing even we can do about this – it is outside the enterprise. Rely on contractor’s security systems and user training.
Geolocation query detects foreign server access OR interactions outside the Contractor’s defined subnets
HTTP analysis can reveal attack attempts by volume
Protocol Abuse query detects anomalous lateral movement AND tags show uncommon connections between unrelated internal hosts
Protocol Abuse query or Unknown Service query detects anomalous lateral movement
Traffic Summary query reveals uncommon connections between unrelated internal hosts
Traffic Summary query again reveals uncommon connections between unrelated internal hosts
HTTP Exfil query detects data moving to known anonymous drop sites
They never would have gotten here
Which brings us to our next step. Let us prove how well this works with your own data. Let us demo our solution using our test data or yours – we’ll even be happy to deploy a proof-of-concept system within your network. The choice is yours.
First, let me say that this is a single summary chart taken from a much longer deck on a before-Novetta/after-Novetta analysis. I’ll just briefly summarize here, but if you’re interested in the details, I’d be happy to take you through the other deck another time. Or, you can see a video version of it online at novetta.com/cyber-analytics.
This is the typical result for a scenario where a CISO has asked an analyst whether the network has been breached by a recently publicized zero-day attack.
This analysis would generally take both a senior and a junior analyst about 3 days to complete. Their findings would be based on incomplete information, because their data sources did not have enough logging to paint a complete picture of what happened on the network. And since this zero-day attack did not trip any monitoring or alerting on their existing tools, they have no intelligence to gather from those tools. If they had been breached, the threat would have gotten past all of their existing defenses.
The output of their efforts was a best-effort timeline of events, a report containing everything they could find, a recommendation for creating a new signature based on the ISAC alert, a list of external actors that connected to their network, and a list of impacted machines that may or may not be exhaustive.
Now with Cyber Analytics, analysts can gain both complete high-level visibility and detailed low-level information about the activities on a network.
Analysts can be much more efficient with incident response activities, running analytic queries, pivoting to other queries, running quick traffic intersections, saving results, and exporting packet capture for later analysis.
Full investigations – investigate, analyze, report, then move on to containment and recovery – can take a few hours, not a few days.
Output now takes the form of the following:
- An exhaustive and detailed timeline of events
- A complete list of external bad actors
- A complete list of affected organization machines
- Information that can be used to generate new signatures for their signature-based tools
- The full packet capture for all network activity related to the attacks
- New custom queries that can be used to identify this type of behavior in the future
It is now much easier to find all the relevant information because the data is all in one place and all the tools are there to perform analysis. Analysts have much higher confidence because Cyber Analytics operates at the network traffic level, so attackers have nowhere to hide.
This new capability increases the satisfaction of analysts and CISOs alike.