Knowledge of cyber threats is a key focus in cyber security. In this talk, we present TypeDB CTI, an open-source threat intelligence platform for storing and managing such knowledge. It enables Cyber Threat Intelligence (CTI) professionals to bring their disparate CTI information together in one platform, making it easier to manage that data and discover new insights about cyber threats.
We will describe how we use TypeDB to represent STIX 2.1, the most widely used language and serialization format for exchanging cyber threat intelligence. We cover how we leverage TypeDB's modelling constructs, such as type hierarchies, nested relations, hyper-relations, unique attributes, and logical inference, to build this threat intelligence platform.
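For readers who have not worked with STIX 2.1, a rough sketch of the data being modelled may help. Below is a tiny, hypothetical STIX 2.1 bundle written as plain Python dicts (no STIX library assumed), showing two domain objects tied together by a relationship object; this relationship-centric shape is what makes the format a natural fit for the modelling constructs described above. All identifiers, names, and timestamps are invented.

```python
# A minimal, hypothetical STIX 2.1 fragment as plain Python dicts.
# All IDs, names, and timestamps are invented for illustration.

intrusion_set = {
    "type": "intrusion-set",
    "spec_version": "2.1",
    "id": "intrusion-set--0c7e22ad-b099-4dc3-b0df-2ea3f49ae2e6",
    "created": "2022-01-01T00:00:00.000Z",
    "modified": "2022-01-01T00:00:00.000Z",
    "name": "Example Group",
}

malware = {
    "type": "malware",
    "spec_version": "2.1",
    "id": "malware--6b616fc1-1505-48e3-8b2c-0d19337bff38",
    "created": "2022-01-01T00:00:00.000Z",
    "modified": "2022-01-01T00:00:00.000Z",
    "name": "example-backdoor",
    "is_family": True,
}

# STIX relationship objects make connections like "uses" first-class data,
# which maps naturally onto a database built around typed relations.
uses = {
    "type": "relationship",
    "spec_version": "2.1",
    "id": "relationship--44298a74-ba52-4f0c-87a3-1824e67d7fad",
    "created": "2022-01-01T00:00:00.000Z",
    "modified": "2022-01-01T00:00:00.000Z",
    "relationship_type": "uses",
    "source_ref": intrusion_set["id"],
    "target_ref": malware["id"],
}

bundle = {
    "type": "bundle",
    "id": "bundle--5d0092c5-5f74-4287-9642-33f4c354e56d",
    "objects": [intrusion_set, malware, uses],
}
```

A platform like TypeDB CTI ingests structures of this shape and maps the object types onto a schema's type hierarchy and the relationship objects onto relations.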
Speaker: Tomás Sabat
Tomás is the Chief Operating Officer at Vaticle. He works closely with TypeDB's open source and enterprise users who use TypeDB to build applications in a wide number of industries including financial services, life sciences, cyber security and supply chain management. A graduate of the University of Cambridge, Tomás has spent the last seven years founding and building businesses in the technology industry.
From ATT&CKcon 3.0
By Fred Frey and Jonathan Mulholland, SnapAttack
Atomic Red Team and Sigma are the largest open-source attack simulation and analytic projects. Many organizations utilize one or both internally for security controls validation or for supplementing their detections and alerts. Building on the work from these two great communities, we smashed (a scientific term) the attacks and analytics together and applied data science to analyze the results. We'll describe our methodology and testing framework, show the real-world MITRE ATT&CK coverage and gaps, and discuss our algorithms for calculating analytic similarity, identifying log sources for a technique, and determining the best analytics to deploy to maximize ATT&CK coverage.
This project aims to:
- Bring a measurable testing rigor to community analytics to improve adoption
- Test every analytic against every attack, validating the true positive detection
- Understand the log sources required to detect specific attack techniques
- Apply data science to identify analytic similarity (reduce community duplication)
- Identify gaps between the projects: analytics without attack simulations, attack simulations without detections, missing or incorrect MITRE ATT&CK labels, etc.
- Automate the process so insights can stay up to date with new attack/analytic contributions over time
- Share our analysis back to the community to improve these projects
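The analytic-similarity goal above can be illustrated with a small sketch: if each analytic is mapped to a set of ATT&CK technique IDs, one simple similarity measure is the Jaccard index of those sets. The analytic names, mappings, and 0.5 threshold below are invented for illustration and are not SnapAttack's actual algorithm.

```python
# Hypothetical sketch: scoring analytic similarity by comparing the sets of
# ATT&CK technique IDs each analytic maps to (Jaccard index). The analytic
# names and mappings are illustrative, not real project data.

def jaccard(a: set, b: set) -> float:
    """Jaccard index of two technique-ID sets (0.0 = disjoint, 1.0 = identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

analytics = {
    "suspicious_powershell": {"T1059.001", "T1027"},
    "encoded_powershell":    {"T1059.001", "T1027", "T1140"},
    "lsass_dump":            {"T1003.001"},
}

# Flag likely duplicates: pairs whose technique overlap exceeds a threshold.
names = sorted(analytics)
pairs = [
    (x, y, jaccard(analytics[x], analytics[y]))
    for i, x in enumerate(names) for y in names[i + 1:]
]
duplicates = [(x, y, s) for x, y, s in pairs if s >= 0.5]
```

High-scoring pairs are candidates for deduplication within or across the community projects.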
Threat Modelling - It's not just for developers
From ATT&CKcon 3.0
By Tim Wadhwa-Brown, Cisco
The purpose of this session is to look at how you can take public information about threat actors, vulnerabilities, and incidents and use it to build better defenses, utilizing ATT&CK along the way to align your security organization to the people and assets that matter.
Stories are critical to how humans learn, so this session will take a storybook approach to give the audience some ideas they could use. Tim will walk the audience through three real-world examples where he has leveraged ATT&CK to drive operational improvement. The premise of each story is real, although some of the details will be apocryphal to protect the innocent.
One story focuses on defending a network, one looks at adversary detection, and the final one looks at responding to an active attack. In each case, Tim will guide the audience to think about the kinds of data sources ATT&CK tracks that they might call upon to achieve a successful outcome.
From Theory to Practice: How My ATT&CK Perspectives Have Changed
From MITRE ATT&CKcon Power Hour December 2020
By Katie Nickels, Director of Intelligence, Red Canary
Good analysts (and good human beings) change their minds based on new information. In this presentation, Katie will share how her perspectives on ATT&CK have changed since moving from ATT&CK team member to ATT&CK end-user. She will discuss how her ideas about coverage, procedures, and detection creation have evolved and why those perspectives matter. Katie will also share practical examples from observed threats to help explain the nuances of her perspectives. Attendees should expect to leave this presentation with a better understanding of how to handle challenges they’re likely to face when navigating their own ATT&CK journey.
It's just a jump to the left (of boom): Prioritizing detection implementation...
From ATT&CKcon 3.0
By Lindsay Kaye and Scott Small, Recorded Future
Many organizations ask, "Where do I start, and where do I go next?" when prioritizing the implementation of behavior-based detections. We often hear "use threat intelligence!", but your goals must be qualified and quantified in order to properly prioritize the most relevant TTPs. A wealth of open-source, ATT&CK-mapped resources now exists, giving security teams greater access to both detections and red team tests they can implement, but intelligence (also aligned with ATT&CK) is essential to provide the context needed to ensure that detection efforts are focused effectively.
This session will discuss a new approach to the prioritization challenge, starting with an analysis of the current defensive landscape, as measured by ATT&CK coverage for more than a dozen detection repositories and technologies, and guidance on sourcing TTP intelligence. The team will then show how real-world defensive strategies can be strengthened by encompassing a full-spectrum view of threat detection, including the implementation of YARA, Sigma, and Snort in security appliances. Critically, alignment of both intelligence and defenses with ATT&CK enables defenders to move the focus of detection efforts to indications of malicious behavior before the final payload is deployed, where controls are most effective at preventing serious damage to the organization.
Knowledge for the masses: Storytelling with ATT&CK
From ATT&CKcon 3.0
By Ismael Valenzuela and Jose Luis Sanchez Martinez, Trellix
The Trellix team believes that creating and sharing compelling stories about cyber threats, with ATT&CK, is a powerful way to raise awareness and enable action against those threats.
In this talk the team will share their experiences leveraging ATT&CK to disseminate threat knowledge to different audiences (software development teams, managers, threat detection engineers, threat hunters, cyber threat analysts, support engineers, upper management, etc.).
They will show concrete examples and representations created with ATT&CK to describe threats at different levels, including: 1) an attack-path graph that shows the overall flow of the attack; 2) tactic-specific TTP summary tables and graphs; and 3) a very detailed, step-by-step description of the attacker's behaviors.
Mapping to MITRE ATT&CK: Enhancing Operations Through the Tracking of Interac...
From ATT&CKcon 3.0
By Jason Wood and Justin Swisher, CrowdStrike
When it comes to understanding and tracking intrusion tradecraft, security teams must have the tools and processes that allow the mapping of hands-on adversary tradecraft. Doing this enables your team both to understand the adversaries and attacks you currently see and to observe how these adversaries and attacks evolve over time. This session will explore how a threat hunting team uses MITRE ATT&CK to understand and categorize adversary activity. The team will demonstrate how threat hunters map ATT&CK TTPs by showcasing a recent interactive intrusion against a Linux endpoint and how the framework allowed for granular tracking of tradecraft and enhanced security operations. They will also look at the changes in the Linux activity they have observed over time, using the ATT&CK Navigator to compare and contrast technique usage. This session will provide insights into how they use MITRE ATT&CK as a powerful resource to track intrusion tradecraft, identify adversary trends, and prepare for attacks of the future.
From ATT&CKcon 3.0
By Jared Stroud, Lacework
Adversaries target common cloud misconfigurations in container-focused workflows for initial access. Whether in Docker or Kubernetes environments, Lacework Labs has identified adversaries attempting to deploy malicious container images (T1610), mine cryptocurrency (T1496), and deploy C2 agents. Defenders new to the container space may be unaware of the built-in capabilities of popular container runtime engines that can help defend against rogue containers being deployed into their environment. Attendees will walk away with an understanding of what these attack patterns look like, based on honeypot data Lacework has gathered over the past year, as well as techniques for defending their own container-focused workloads.
FIRST CTI Symposium: Turning intelligence into action with MITRE ATT&CK™
Katie Nickels and Adam Pennington presented "Turning intelligence into action with MITRE ATT&CK™" at the FIRST CTI Symposium in London on 20 March 2019.
MITRE ATT&CKcon 2018: Hunters ATT&CKing with the Data, Roberto Rodriguez, Spe...
With the development of the MITRE ATT&CK framework and its categorization of adversary activity during the attack cycle, understanding what to hunt for has become easier and more efficient than ever. However, organizations are still struggling to understand how they can prioritize the development of hunt hypotheses, assess their current security posture, and develop the right analytics with the help of ATT&CK. Even though there are several ways to utilize ATT&CK to accomplish those goals, only a few focus primarily on the data that is currently being collected to drive the success of a hunt program.
This presentation shows how organizations can benefit from mapping their current visibility, from a data perspective, to the ATT&CK framework. It focuses on how to identify, document, standardize, and model currently available data to enhance a hunt program. It presents an updated ThreatHunter-Playbook, a Kibana ATT&CK dashboard, and a new project named Open Source Security Events Metadata (OSSEM), and expands on the "data sources" section already provided by ATT&CK for most of the documented adversarial techniques.
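The data-source mapping idea can be sketched in a few lines: given a map from techniques to the data sources that can reveal them, intersect it with what an organization actually collects. The technique-to-data-source map below is a tiny invented sample, not OSSEM data.

```python
# Sketch of data-source-driven visibility: given the log sources an
# organization actually collects, which ATT&CK techniques are at least
# partially visible? The map below is a small invented sample.

TECHNIQUE_DATA_SOURCES = {
    "T1059": {"Command Execution", "Process Creation"},
    "T1003": {"Process Access", "Process Creation"},
    "T1021": {"Network Traffic", "Logon Session"},
}

# Log sources this hypothetical organization currently collects.
collected = {"Process Creation", "Command Execution"}

# A technique is "visible" if any of its data sources is being collected.
visible = {
    tid for tid, sources in TECHNIQUE_DATA_SOURCES.items()
    if sources & collected
}
```

The same intersection, run over the full ATT&CK data-source mappings, gives a first-pass view of where a hunt program does and does not have telemetry.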
The ATT&CK Latin American APT Playbook
From ATT&CKcon 3.0
By Santiago Pontiroli and Dmitry Bestuzhev, Kaspersky
Financially motivated cyber-attacks thrive in emerging Latin American markets. However, there's room for locally grown threat actors operating in the cyber espionage field as well. During the last decade, this includes, but is not limited to, Blind Eagle, Puppeteer, Machete, Poseidon, and others. We also saw foreign operations targeting specific assets in Latin America, still connected to certain regional sources.
Since the threat actors' origins, cultures, and languages often differ, it's not uncommon for their tactics, techniques, and procedures (TTPs) to show marked differences. Drawing on our regional expertise and experience, we created MITRE ATT&CK play-by-play mappings to help other analysts understand regional actors. If you are interested in threat intelligence and what's going on in Latin America, this presentation is for you. Our work is based only on real-world attackers and their operations, including those not publicly known, such as Machete's COVID-19 targeted campaign.
Using ATT&CK to Create Cyber DBTs for Nuclear Power Plants
From MITRE ATT&CKcon Power Hour December 2020
By Jacob Benjamin, Principal Industrial Consultant Dragos, INL, & University of Idaho
Design Basis Threat (DBT) is a concept introduced by the Nuclear Regulatory Commission (NRC). It is a profile of the type, composition, and capabilities of an adversary. The DBT is the key input nuclear power plants use for the design of systems against acts of radiological sabotage and theft of special nuclear material. The NRC expects its licensees, the nuclear power plants, to demonstrate that they can defend against the DBT. Currently, cyber is included in DBTs simply as a prescribed list of IT-centric security controls. Using MITRE's ATT&CK framework, cyber DBTs can be created that are specific to the facility, its material, or adversary activities.
My slides for PHDays 2018 Threat Hunting Hands-On Lab - https://www.phdays.com/en/program/reports/build-your-own-threat-hunting-based-on-open-source-tools/
Virtual Machines for lab are available here - https://yadi.sk/d/qB1PNBj_3ViWHe
Landing on Jupyter: The transformative power of data-driven storytelling for ...
From ATT&CKcon 3.0
By Jose Barajas and Stephan Chenette, AttackIQ
Every cybersecurity leader wants visibility into the health of their security program. Yet teams suffer from disparate data streams (CTI teams and the SOC often use separate Excel spreadsheets, an anachronistic practice) and silos constrain their ability to operate effectively. Enter the Jupyter notebook, an open-source computational notebook that researchers use to combine code, computing output, text, and media into a single interface. In this talk, we share three stories of how organizations use Jupyter notebooks to align ATT&CK-based attack flows to the security program, generating data about detection and prevention failures, defensive gaps, and longitudinal performance. By using Jupyter notebooks in this way, teams can better leverage ATT&CK for security effectiveness. It becomes less of a bingo card and more of a strategic tool for understanding the health of the program against big tactics (e.g., lateral movement), defensive gaps (e.g., micro-segmentation), and the team's performance.
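The kind of rollup such a notebook might compute can be sketched simply: per-tactic detection coverage from a table of test outcomes. The tactics, technique IDs, and results below are invented sample data, not AttackIQ's schema.

```python
# Illustrative sketch of a notebook-style rollup: ATT&CK detection coverage
# per tactic from simulated test outcomes. All records here are invented.

from collections import defaultdict

# Each record: (tactic, technique ID, whether the control detected the test).
results = [
    ("lateral-movement", "T1021.001", True),
    ("lateral-movement", "T1021.002", False),
    ("credential-access", "T1003.001", True),
    ("credential-access", "T1110", True),
]

totals = defaultdict(int)
detected = defaultdict(int)
for tactic, _technique, hit in results:
    totals[tactic] += 1
    detected[tactic] += hit  # bool counts as 0 or 1

# Fraction of tests detected, per tactic.
coverage = {tactic: detected[tactic] / totals[tactic] for tactic in totals}
```

Tracked over time, the same computation yields the longitudinal performance view the talk describes.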
Would you Rather Have Telemetry into 2 Attacks or 20? An Insight Into Highly ...
From ATT&CKcon 3.0
By Jonny Johnson, Red Canary and Olaf Hartong, FalconForce
As defenders, we often find ourselves wanting "more" data. But why? Will this new data provide a lot of value, or does it serve only a very niche circumstance? How many attacks does it apply to? Are we leveraging previous data sources to their full capability? In this talk, Olaf and Jonny will walk through the data sources they lean on more than most when analyzing environments, why they do, and the value these data sources do and can provide to a defender.
Recently, NTT published the Global Threat Intelligence Report 2016 (GTIR). This year’s report focused both on the changes in threat trends and on how security organizations around the world can use the kill chain to help defend the enterprise.
Turning threat intelligence data from multiple sources into actionable, contextual information is a challenge faced by many organizations today. The Global Threat Intelligence Platform increases efficiency, reduces risk, and delivers global coverage with accurate, up-to-date threat intelligence.
This presentation was given at Carnegie Mellon University by Kenji Takahashi, VP of Product Management, Security at NTT Innovation Institute.
Tracking Noisy Behavior and Risk-Based Alerting with ATT&CK
From ATT&CKcon 3.0
By Haylee Mills, Splunk
Having ATT&CK to identify threats, prioritize data sources, and improve security posture has been a huge step forward for our industry, but how do we actualize those insights for better detection and alerting? By shifting to observations of behavior rather than one-to-one direct alerts, noisy datasets become valuable treasure troves when enriched with ATT&CK metadata. Additionally, we can begin to base detection and threat hunting on behavior instead of individual users or systems. In this presentation, Haylee will discuss the shift in mindset and the nuts and bolts of detections that leverage this metadata in Splunk, though the concept can be applied with custom tools to any valuable security dataset.
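The risk-based alerting idea can be sketched outside Splunk in a few lines: accumulate per-entity risk scores weighted by the observed ATT&CK technique, and alert only when a threshold is crossed. The weights, threshold, and events below are invented for illustration.

```python
# Minimal sketch of risk-based alerting: instead of firing on every noisy
# observation, accumulate per-entity risk scores (weighted by the observed
# ATT&CK technique) and alert only past a threshold. Weights are invented.

from collections import defaultdict

# Hypothetical per-technique risk weights.
RISK_WEIGHTS = {"T1059.001": 30, "T1003.001": 60, "T1082": 5}
ALERT_THRESHOLD = 80

def score_events(events):
    """events: iterable of (entity, technique_id). Returns entities to alert on."""
    scores = defaultdict(int)
    for entity, technique in events:
        scores[entity] += RISK_WEIGHTS.get(technique, 10)  # default low weight
    return {entity for entity, score in scores.items() if score >= ALERT_THRESHOLD}

events = [
    ("host-a", "T1082"),      # low-risk discovery: no alert on its own
    ("host-b", "T1059.001"),
    ("host-b", "T1003.001"),  # combined behavior pushes host-b over threshold
]
```

The ATT&CK technique ID on each observation is exactly the metadata that makes this aggregation possible.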
Mapping ATT&CK Techniques to ENGAGE Activities
From ATT&CKcon 3.0
By David Barroso, CounterCraft
When an adversary engages in a specific behavior, they risk exposing an unintended weakness. By looking at each ATT&CK technique, we can examine the weaknesses it reveals and identify one or more engagement activities to exploit them.
During the presentation we will walk through real examples of how different ATT&CK techniques can be used to plan adversary engagement activities.
From ATT&CKcon 3.0
By Ivan Ninichuck and Andy Shepard, Siemplify
The MITRE ATT&CK framework has improved many areas of the infosec workflow. But many of these areas are relatively isolated from the tactical operations faced every day by lower- and mid-tier analysts. When faced with alert fatigue and an ever-growing number of data sources, the impact of ATT&CK can range from esoteric to non-existent. In this presentation, experts from Siemplify propose looking at the problem like an orchestra with its dozens of instrument types: without a conductor to guide each section there would only be noise, but with the conductor leading, beautiful symphonies can be played. The Siemplify team will show how a SOAR platform can be that conductor, using the ATT&CK framework as its sheet music, and turn the constant noise into a threat-intel-driven security program.
Measure What Matters: How to Use MITRE ATT&CK to do the Right Things in the R...
From MITRE ATT&CKcon Power Hour January 2021
By Daniel Wyleczuk-Stern, Senior Security Engineer, Snowflake
Cyber security is inherently a function of risk management. Risk management is the identification, evaluation, and prioritization of risks, followed by the effort to reduce those risks in a coordinated and economical manner (thanks, Wikipedia!). In this talk, Daniel will go over some strategies for measuring and prioritizing your cyber risks using MITRE ATT&CK. He'll discuss some lessons learned from atomic testing of techniques versus attack chaining, as well as what to measure and how to make decisions with that data.
Automating the mundanity of technique IDs with ATT&CK Detections Collector
From ATT&CKcon 3.0
By Marcus LaFerrera and Ryan Kovar, Splunk
Since the release of MITRE ATT&CK, vendors and governmental bodies have begun mapping their security blogs, whitepapers, and threat intel reports to ATT&CK TTPs, which is incredible! Vendors have then begun mapping their detections to those mapped TTPs, which is even more awesome! What is not awesome is dissecting a piece of prose for all of the specific embedded ATT&CK technique IDs and then mapping them to your detections to determine coverage. Over the last year, the team at Splunk has spent more time doing this than they would like to admit, so they wrote a tool to do it for them and want to share it with the world. Join the Splunk team as they tell the world about ATT&CK Detections Collector (ADC). ADC is an open-source Python tool that extracts MITRE technique IDs from third-party URLs and outputs them into a file. If you use Splunk, the team even maps them to their existing (previously mapped) detection corpus. They even added the ability to export them into a Navigator JSON for fun, profit, or (at least) better visualization!
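The core extraction step of a tool like ADC can be approximated with a short regular expression. The sketch below is an illustration of the idea, not ADC's actual code.

```python
# Hedged sketch of the heart of a technique-ID collector: pulling ATT&CK
# technique IDs (including sub-techniques such as T1059.001) out of report
# prose with a regular expression. Illustration only, not ADC's code.

import re

# Txxxx with an optional .xxx sub-technique suffix, on word boundaries.
TECHNIQUE_ID = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

def extract_technique_ids(text: str) -> list[str]:
    """Return the unique technique IDs found, in first-seen order."""
    seen = []
    for match in TECHNIQUE_ID.findall(text):
        if match not in seen:
            seen.append(match)
    return seen

report = ("The actor used PowerShell (T1059.001) for execution and "
          "dumped credentials via LSASS (T1003.001, T1003).")
```

Run over a fetched third-party page, the resulting ID list can then be intersected with an existing detection corpus to measure coverage.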
ATTACKers Think in Graphs: Building Graphs for Threat Intelligence
From MITRE ATT&CKcon Power Hour January 2021
By Valentine Mairet, Security Researcher, McAfee
The MITRE ATT&CK framework is the industry standard for dissecting cyberattacks into the techniques used. At McAfee, all attack information is disseminated into different categories, including ATT&CK techniques. What results from this exercise is an extensive repository of techniques used in cyberattacks that goes back many years. Much can be learned from looking at historical attack data, but how can we piece all this information together to identify new relationships between threats and attacks? In her recent efforts, Valentine has embraced analyzing ATT&CK data in graphical representations. One lesson learned is that the strength lies not merely in mapping attacks and the techniques used into graphs, but in applying different algorithms to answer specific questions. In this presentation, Valentine will showcase the results and techniques from her research journey using graphs and graph algorithms.
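One simple graph question of this kind can be sketched directly: treat techniques as nodes and weight each edge by how many recorded attacks the pair of techniques co-occurs in. The attack data below is invented sample data, not McAfee's repository.

```python
# Illustrative sketch of one graph question: which ATT&CK techniques
# co-occur across recorded attacks? Each attack is a set of technique IDs;
# edge weights count co-occurrences. The attack data is invented.

from collections import Counter
from itertools import combinations

attacks = [
    {"T1566", "T1059", "T1003"},
    {"T1566", "T1059", "T1021"},
    {"T1190", "T1059"},
]

# Edge weights: number of attacks in which a technique pair co-occurs.
edges = Counter()
for techniques in attacks:
    for pair in combinations(sorted(techniques), 2):
        edges[pair] += 1
```

On a real repository, this weighted graph is the input on which algorithms such as community detection or centrality can answer more specific questions about related threats.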
RDBMSs have their own data modeling methodologies and diagrams. What about Cassandra? Let's discover the key principles of Cassandra data modeling with the Chebotko methodology. Have a look at KDM, a Chebotko modeling tool. And finally, let's talk about the time dimension in Cassandra.
This presentation was made for the Lyon Cassandra Users meetup (France).
Mapping to MITRE ATT&CK: Enhancing Operations Through the Tracking of Interac...MITRE ATT&CK
From ATT&CKcon 3.0
By Jason Wood and Justin Swisher, CrowdStrike
When it comes to understanding and tracking intrusion tradecraft, security teams must have the tools and processes that allow the mapping of hands-on adversary tradecraft. Doing this enables your team to both understand the adversaries and attacks you currently see and observe how these adversaries and attacks evolve over time. This session will explore how a threat hunting team uses MITRE ATT&CK to understand and categorize adversary activity. The team will demonstrate how threat hunters map ATT&CK TTPs by showcasing a recent interactive intrusion against a Linux endpoint and how the framework allowed for granular tracking of tradecraft and enhanced security operations. They will also take a look into the changes in the Linux activity they have observed over time, using the ATT&CK navigator to compare and contrast technique usage. This session will provide insights into how we use MITRE ATT&CK as a powerful resource to track intrusion tradecraft, identify adversary trends, and prepare for attacks of the future.
From ATT&CKcon 3.0
By Jared Stroud, Lacework
Adversaries target common cloud misconfigurations in container-focused workflows for initial access. Whether this is Docker or Kubernetes environments, Lacework Labs has identified adversaries attempting to deploy malicious container images (T1610) , mine Cryptocurrency (T1496), and deploy C2 agents. Defenders new to the container space may be unaware of the built-in capabilities popular container runtime engines have that can help defend against rogue containers being deployed into their environment. Attendees will walk away with an understanding of what these attack patterns look like based on honeypot data Lacework has gathered over the past year, as well as techniques on how to defend their own container focused workloads.
FIRST CTI Symposium: Turning intelligence into action with MITRE ATT&CK™Katie Nickels
Katie Nickels and Adam Pennington presented "Turning intelligence into action with MITRE ATT&CK™" at the FIRST CTI Symposium in London on 20 March 2019.
MITRE ATT&CKcon 2018: Hunters ATT&CKing with the Data, Roberto Rodriguez, Spe...MITRE - ATT&CKcon
With the development of the MITRE ATT&CK framework and its categorization of adversary activity during the attack cycle, understanding what to hunt for has become easier and more efficient than ever. However, organizations are still struggling to understand how they can prioritize the development of hunt hypothesis, assess their current security posture, and develop the right analytics with the help of ATT&CK. Even though there are several ways to utilize ATT&CK to accomplish those goals, there are only a few that are focusing primarily on the data that is currently being collected to drive the success of a hunt program.
This presentation shows how organizations can benefit from mapping their current visibility from a data perspective to the ATT&CK framework. It focuses on how to identify, document, standardize and model current available data to enhance a hunt program. It presents an updated ThreatHunter-Playbook, a Kibana ATT&CK dashboard, a new project named Open Source Security Events Metadata known as OSSEM and expands on the “data sources” section already provided by ATT&CK on most of the documented adversarial techniques.
The ATT&CK Latin American APT PlaybookMITRE ATT&CK
From ATT&CKcon 3.0
By Santiago Pontiroli and Dmitry Bestuzhev, Kaspersky
Financially motivated cyber-attacks thrive in emerging Latin American markets. However, there's room for locally grown threat actors operating in the cyber espionage field as well. During the last decade, this includes but is not limited to Blind Eagle, Puppeteer, Machete, Poseidon, and others. We also saw foreign operations targeting specific assets in Latin America, still connected to certain regional sources.
Since the threat actors' origin, culture, and language is often different, it's not uncommon for tactics, techniques, and procedures (TTPs) to present marked differences. As a result of our regional expertise and experience, we created MITRE's ATT&CK play-by-play mappings to help other analysts understand regional actors. If you are interested in threat intelligence and what's going on in Latin America, this presentation is for you. Our work is based only on real-world attackers and their operations, including those not publicly known, such as COVID-19 Machete's targeted campaign.
Using ATTACK to Create Cyber DBTS for Nuclear Power PlantsMITRE - ATT&CKcon
From MITRE ATT&CKcon Power Hour December 2020
By Jacob Benjamin, Principal Industrial Consultant Dragos, INL, & University of Idaho
Design Basis Threat (DBT) is concept introduced by the Nuclear Regulatory Commission (NRC). It is a profile of the type, composition, and capabilities of an adversary. DBT is the key input nuclear power plants use for the design of systems against acts of radiological sabotage and theft of special nuclear material. The NRC expects its licensees, nuclear power plants, to demonstrate that they can defend against the DBT. Currently, cyber is included in DBTs simply as a prescribed list of IT centric security controls. Using MITRE’s ATT&CK framework, Cyber DBTs can be created that are specific to the facility, its material, or adversary activities.
My slides for PHDays 2018 Threat Hunting Hands-On Lab - https://www.phdays.com/en/program/reports/build-your-own-threat-hunting-based-on-open-source-tools/
Virtual Machines for lab are available here - https://yadi.sk/d/qB1PNBj_3ViWHe
Landing on Jupyter: The transformative power of data-driven storytelling for ... (MITRE ATT&CK)
From ATT&CKcon 3.0
By Jose Barajas and Stephan Chenette, AttackIQ
Every cybersecurity leader wants visibility into the health of their security program. Yet teams suffer from disparate data streams - CTI teams and the SOC often use separate Excel spreadsheets, an anachronistic practice - and silos constrain their ability to operate effectively. Enter the Jupyter notebook, an open-source computational notebook that researchers use to combine code, computing output, text, and media into a single interface. In this talk, we share three stories of how organizations use Jupyter notebooks to align ATT&CK-based attack flows to the security program, generating data about detection and prevention failures, defensive gaps, and longitudinal performance. By using Jupyter notebooks in this way, teams can better leverage ATT&CK for security effectiveness. It becomes less of a bingo card and more of a strategic tool for understanding the health of the program against big tactics (e.g., lateral movement), defensive gaps (e.g., micro-segmentation), and the team's performance.
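As a hedged sketch of the kind of notebook cell the talk describes (all names and data below are hypothetical, not from the talk), coverage per ATT&CK tactic can be aggregated from attack-emulation results:

```python
# Hypothetical sketch: summarising attack-emulation results by ATT&CK tactic,
# the sort of aggregation a Jupyter notebook cell might perform.
from collections import defaultdict

# Hypothetical test results: (technique_id, tactic, outcome)
results = [
    ("T1021", "lateral-movement", "detected"),
    ("T1570", "lateral-movement", "missed"),
    ("T1059", "execution", "detected"),
    ("T1105", "command-and-control", "missed"),
]

def coverage_by_tactic(results):
    """Return the fraction of techniques detected per tactic."""
    totals, hits = defaultdict(int), defaultdict(int)
    for _, tactic, outcome in results:
        totals[tactic] += 1
        if outcome == "detected":
            hits[tactic] += 1
    return {tactic: hits[tactic] / totals[tactic] for tactic in totals}

print(coverage_by_tactic(results))
```

From here, a real notebook would chart the fractions over time to show the longitudinal performance the talk mentions.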
Would you Rather Have Telemetry into 2 Attacks or 20? An Insight Into Highly ... (MITRE ATT&CK)
From ATT&CKcon 3.0
By Jonny Johnson, Red Canary and Olaf Hartong, FalconForce
As defenders, we often find ourselves wanting "more" data. But why? Will this new data provide a lot of value, or is it for a very niche circumstance? How many attacks does it apply to? Are we leveraging previous data sources to their full capability? In this talk, Olaf and Jonny will walk through the data sources they leverage most when analyzing environments, why they do so, and what value these data sources can provide to a defender.
Recently, NTT published the Global Threat Intelligence Report 2016 (GTIR). This year’s report focused both on the changes in threat trends and on how security organizations around the world can use the kill chain to help defend the enterprise.
Turning threat intelligence data from multiple sources into actionable, contextual information is a challenge faced by many organizations today. The Global Threat Intelligence Platform provides increased efficiency, reduces risks and focuses on global coverage with accurate and up-to-date threat intelligence.
This presentation was given at Carnegie Mellon University by Kenji Takahashi, VP of Product Management, Security at NTT Innovation Institute.
Tracking Noisy Behavior and Risk-Based Alerting with ATT&CK (MITRE ATT&CK)
From ATT&CKcon 3.0
By Haylee Mills, Splunk
Having ATT&CK to identify threats, prioritize data sources, and improve security posture has been a huge step forward for our industry, but how do we actualize those insights for better detection and alerting? By shifting to observations of behavior over one-to-one direct alerts, noisy datasets become valuable treasure troves with ATT&CK metadata. Additionally, we can begin to look at detection and threat hunting on behavior instead of users or systems. In this presentation, Haylee will discuss the shift in mindset and the nuts and bolts of detections that leverage this metadata in Splunk, but the concept can be applied with custom tools to any valuable security dataset.
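A minimal sketch of the risk-based-alerting shift described above, assuming hypothetical event data and an illustrative threshold (neither is from the talk): instead of firing on every noisy event, risk accumulates per entity and only the total triggers an alert.

```python
# Hypothetical sketch of risk-based alerting: accumulate risk scores per
# entity and alert on the total, rather than alerting one-to-one per event.
from collections import defaultdict

# Each observation carries a risk score and ATT&CK technique metadata.
observations = [
    {"user": "alice", "technique": "T1059.001", "risk": 20},
    {"user": "alice", "technique": "T1003", "risk": 60},
    {"user": "bob",   "technique": "T1059.001", "risk": 20},
]

ALERT_THRESHOLD = 70  # illustrative value; tune per environment

def users_to_alert(observations, threshold=ALERT_THRESHOLD):
    """Return users whose accumulated risk crosses the threshold."""
    scores = defaultdict(int)
    for obs in observations:
        scores[obs["user"]] += obs["risk"]
    return [user for user, score in scores.items() if score >= threshold]

print(users_to_alert(observations))  # only alice crosses the threshold
```

The technique metadata attached to each observation is what lets analysts later pivot from a risk alert back to ATT&CK behaviors.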
Mapping ATT&CK Techniques to ENGAGE Activities (MITRE ATT&CK)
From ATT&CKcon 3.0
By David Barroso, CounterCraft
When an adversary engages in a specific behavior, they may expose an unintended weakness. By looking at each ATT&CK technique, we can examine the weaknesses revealed and identify an engagement activity or activities to exploit them.
During the presentation we will see some real examples of how we can use different ATT&CK techniques in order to plan different adversary engagement activities.
From ATT&CKcon 3.0
By Ivan Ninichuck and Andy Shepard, Siemplify
The MITRE ATT&CK framework has improved many areas within the infosec workflow. But many of these select areas are those that are relatively isolated from the tactical operations faced every day by lower or mid-tier analysts. When faced with alert fatigue and an ever-growing number of data sources, the impact of ATT&CK can become esoteric to non-existent. In this presentation experts from Siemplify propose the problem be looked at like an orchestra with its dozens of instrument types. Without a conductor to guide each section there would only be noise, but with the conductor leading, beautiful symphonies can now be played. The Siemplify team plan to show how a SOAR platform can be that conductor using the ATT&CK framework as its sheet music, and turn the constant noise into a threat intel driven security program.
Measure What Matters: How to Use MITRE ATT&CK to do the Right Things in the R... (MITRE ATT&CKcon)
From MITRE ATT&CKcon Power Hour January 2021
By Daniel Wyleczuk-Stern, Senior Security Engineer, Snowflake
Cyber security is inherently a function of risk management. Risk management is the identification, evaluation, and prioritization of risks followed by the effort to reduce those risks in a coordinated and economical manner (thanks wikipedia!). In this talk, Daniel will be going over some strategies for measuring and prioritizing your cyber risks using MITRE ATT&CK. He'll discuss some lessons learned in atomic testing of techniques vs attack chaining as well as what to measure and how to make decisions with that data.
Automating the mundanity of technique IDs with ATT&CK Detections Collector (MITRE ATT&CK)
From ATT&CKcon 3.0
By Marcus LaFerrera and Ryan Kovar, Splunk
Since the release of MITRE ATT&CK, vendors and governmental bodies have begun mapping their security blogs, whitepapers, and threat intel reports to ATT&CK TTPs, which is incredible! Vendors have then begun mapping their detections to those mapped TTPs, which is even more awesome! What is not awesome is dissecting a piece of prose for all of the specific embedded ATT&CK technique IDs and then mapping them to your detections to determine coverage. Over the last year, the team at Splunk has spent more time doing this than they would like to admit, so they wrote a tool to do it for them and want to share it with the world. Join the Splunk team as they tell the world about ATT&CK Detections Collector (ADC). ADC is an open-source Python tool that will allow you to extract MITRE technique IDs from third-party URLs and output them into a file. If you use Splunk, the team even maps them to their existing (previously mapped) detection corpus. They even added the ability to export them into a Navigator JSON for fun, profit, or (at least) better visualization!
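The core extraction step such a tool needs can be sketched with a regular expression over ATT&CK technique IDs. This is only an illustration of the idea, not ADC's actual implementation:

```python
# Sketch: pull ATT&CK technique IDs (including sub-techniques like
# T1059.001) out of a blob of report text. Report text is hypothetical.
import re

TECHNIQUE_ID = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

report = (
    "The actor used PowerShell (T1059.001) for execution and "
    "dumped credentials via LSASS (T1003.001), then moved "
    "laterally over SMB (T1021.002). See also T1059.001."
)

def extract_technique_ids(text):
    """Return the unique technique IDs in order of first appearance."""
    return list(dict.fromkeys(TECHNIQUE_ID.findall(text)))

print(extract_technique_ids(report))
# ['T1059.001', 'T1003.001', 'T1021.002']
```

The deduplicated list is what would then be matched against a detection corpus to compute coverage.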
ATT&CKers Think in Graphs: Building Graphs for Threat Intelligence (MITRE ATT&CKcon)
From MITRE ATT&CKcon Power Hour January 2021
By Valentine Mairet, Security Researcher, McAfee
The MITRE ATT&CK framework is the industry standard for dissecting cyberattacks into the techniques used. At McAfee, all attack information is disseminated into different categories, including ATT&CK techniques. What results from this exercise is an extensive repository of techniques used in cyberattacks that goes back many years. Much can be learned from looking at historical attack data, but how can we piece all this information together to identify new relationships between threats and attacks? In her recent efforts, Valentine has embraced analyzing ATT&CK data in graphical representations. One lesson learned is that it is not just about merely mapping out attacks and techniques into graphs; the strength lies in applying different algorithms to answer specific questions. In this presentation, Valentine will showcase the results and techniques from her research journey using graphs and graph algorithms.
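One of the "specific questions" an algorithm over such a graph can answer, "which historical attacks are most related?", can be sketched with a simple set-similarity measure over each attack's techniques (campaign names and technique sets below are hypothetical):

```python
# Sketch: rank attacks by Jaccard similarity of their ATT&CK technique sets,
# a simple stand-in for a graph similarity algorithm. Data is hypothetical.
attacks = {
    "campaign-a": {"T1566", "T1059", "T1003"},
    "campaign-b": {"T1566", "T1059", "T1021"},
    "campaign-c": {"T1190", "T1505"},
}

def jaccard(a, b):
    """Ratio of shared techniques to all techniques across both attacks."""
    return len(a & b) / len(a | b)

def most_similar(name, attacks):
    """Return the other attack with the highest technique overlap."""
    others = {k: v for k, v in attacks.items() if k != name}
    return max(others, key=lambda k: jaccard(attacks[name], others[k]))

print(most_similar("campaign-a", attacks))  # campaign-b shares 2 of 4 techniques
```

Real graph tooling would add edge weights, community detection, and so on, but the principle is the same: the question chooses the algorithm.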
RDBMSs have their own data modeling methodology and diagrams. What about Cassandra? Let's discover the key principles of Cassandra data modeling with the Chebotko methodology, have a look at KDM, a Chebotko modeling tool, and finally talk about the time dimension in Cassandra.
This presentation was made for the Lyon Cassandra Users meetup (France).
After establishing itself in 2015 as a serious alternative among Big Data frameworks and overtaking Hadoop MapReduce, Apache Spark now faces unexpected competition from Berlin in the form of Apache Flink.
Video for the slides:
https://www.youtube.com/watch?v=-MmX44pjJ9s&list=PL6ceXNIVUaAKIxQO_aBLlWpp48x-cRzOE&index=2
Spark & Hadoop User Group:
http://www.meetup.com/Hadoop-User-Group-Munich/
IDSs are great tools for blue teams and a resource for network forensics; however, they can also be a great resource for red teams as part of a penetration testing exercise.
Querying Riak Just Got Easier - Introducing Secondary Indexes (Rusty Klophaus)
This presentation introduces new Riak KV functionality called Secondary Indexes. Secondary Indexes allow a developer to retrieve data by attribute value, rather than by primary key.
Currently, a developer coding outside of Riak’s key/value based access must maintain their own indexes into the data using links, other Riak objects, or external systems. This is straightforward for simple use cases, but can add substantial coding and data modeling for complex applications. By formalizing an approach and building index support directly into Riak KV, we remove this burden from the application developer while preserving Riak’s core benefits, including scalability and tolerance against hardware failure and network partitions.
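The burden described above can be illustrated with a toy key/value store (all names and data here are hypothetical, not Riak's API): every write must touch both the primary store and the hand-maintained index, and a missed update silently corrupts queries.

```python
# Toy illustration: without native secondary indexes, the application must
# keep an attribute -> keys mapping consistent with the key/value store.
from collections import defaultdict

store = {}                        # primary key -> object
index_by_city = defaultdict(set)  # hand-maintained secondary index

def put(key, obj):
    """Write an object, keeping the secondary index consistent."""
    old = store.get(key)
    if old is not None:
        index_by_city[old["city"]].discard(key)  # remove stale index entry
    store[key] = obj
    index_by_city[obj["city"]].add(key)

put("u1", {"name": "Ana", "city": "Lyon"})
put("u2", {"name": "Ben", "city": "Paris"})
put("u1", {"name": "Ana", "city": "Paris"})  # update must touch both structures

print(sorted(index_by_city["Paris"]))  # ['u1', 'u2']
```

Building index support into Riak KV moves exactly this bookkeeping out of application code.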
The presentation covers usage, capabilities, limitations, and lessons learned.
Dragos S4X20: Mapping ICS Incidents to the MITRE ATT&CK Framework (Dragos, Inc.)
Principal Industrial Pentester, Austin Scott, presents at S4x20 on how to map ICS incidents to the MITRE ATT&CK Framework.
View the webinar here: https://dragos.com/resource/introducing-mitre-attck-for-ics-and-why-it-matters/
Redis Streams plus Spark Structured Streaming (Dave Nielsen)
Continuous applications have three things in common: they collect data from sources (e.g., IoT devices), process it in real time (e.g., ETL), and deliver it to a machine learning serving layer for decision making. Continuous applications face many challenges as they grow to production. Often, due to the rapid increase in the number of devices, end users, or other data sources, the size of the data set grows exponentially. This results in a backlog of data to be processed, and the data is no longer processed in near real time.
Redis Streams enables you to collect both binary and text data in the time series format. The consumer groups of Redis Stream help you match the data processing rate of your continuous application with the rate of data arrival from various sources.
Apache Spark’s Structured Streaming API enables real-time decision making for Continuous Applications.
In this session, Dave will perform a live demonstration of how to integrate open source Redis with Apache Spark's Structured Streaming API using the Spark-Redis library. He will also walk through the code and run a live continuous application.
Data Privacy with Apache Spark: Defensive and Offensive Approaches (Databricks)
In this talk, we’ll compare different data privacy techniques & protection of personally identifiable information and their effects on statistical usefulness, re-identification risks, data schema, format preservation, read & write performance.
We’ll cover different offensive and defensive techniques. You’ll learn what k-anonymity and quasi-identifiers are. Discover the world of suppression, perturbation, obfuscation, encryption, tokenization, and watermarking with elementary code examples, in case third-party products cannot be used. We’ll see what approaches might be adopted to minimize the risks of data exfiltration.
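The k-anonymity notion mentioned above can be illustrated in a few lines (the records are hypothetical; "zip" and "age_band" play the role of quasi-identifiers): a dataset is k-anonymous when every combination of quasi-identifier values is shared by at least k records.

```python
# Elementary sketch of k-anonymity: the smallest equivalence-class size
# over the quasi-identifier columns. Records below are hypothetical.
from collections import Counter

records = [
    {"zip": "75001", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "75001", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "75002", "age_band": "40-49", "diagnosis": "A"},
]
QUASI_IDENTIFIERS = ("zip", "age_band")

def k_anonymity(records, quasi_ids):
    """Return k: the size of the smallest group sharing quasi-id values."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(counts.values())

print(k_anonymity(records, QUASI_IDENTIFIERS))  # 1: the third record is unique
```

A k of 1 means some record is uniquely re-identifiable by its quasi-identifiers, which is exactly what suppression and perturbation aim to prevent.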
Creating Your Own Threat Intel Through Hunting & Visualization (Raffael Marty)
The security industry is talking a lot about threat intelligence: external information that a company can leverage to understand where potential threats are knocking on the door and might have already penetrated the network boundaries. Conversations with many CERTs have shown that we have to stop relying on knowledge about how attacks have been conducted in the past and start 'hunting' for signs of compromises and anomalies in our own environments.
In this presentation we explore how the decade-old field of security visualization has evolved. We show how we have applied advanced analytics and visualization to create our own threat intelligence and investigate lateral movement in a Fortune 50 company.
Visualization. Data science. No machine learning. But pretty pictures.
Here is a blog post I wrote a bit ago about the general theme of internal threat intelligence:
http://www.darkreading.com/analytics/creating-your-own-threat-intel-through-hunting-and-visualization/a/d-id/1321225?
Another compromised hostname, “https://xxx.com”, is acting as a drop zone for data stolen from eight different Italian banks. Analysis of this drop zone reveals a custom web application focused on info stealing. It harvests credit card details from infected users via a phishing attack.
Using Anaconda to light up dark data. My talk given to the Berkeley Institute of Data Science describing Anaconda and the Blaze ecosystem for bringing a virtual analytical database to your data.
[Webinar] Scientific Computation and Data Visualization with Ruby (Srijan Technologies)
Speaker: Sameer Deshmukh
Fast and easy analysis, visualization of data, and scientific computing in general has been an uphill task for Rubyists due to the lack of suitable libraries. The good news is, the Ruby Science Foundation has introduced several tools that make Ruby a viable and highly approachable programming language for Scientific Computing, Data Analysis and Visualization.
In this webinar we take a look at some innovative ways in which data analysis and visualization can be easily performed in Ruby. We also explore how you can leverage Ruby’s programmer friendly approach to create fast and intuitive scientific libraries.
Building Biomedical Knowledge Graphs for In-Silico Drug Discovery (Vaticle)
The rapid development and spread of analytical tools in the biomedical sciences has produced a variety of information about all sorts of biological components and their functions. Though important individually, their biological characteristics need to be understood in relation to the interactions they have with other biological components, which requires the integration of vast amounts of complex, semantically-rich, heterogenous data.
Traditional systems are inadequate at accurately modelling and handling data at this scale and complexity, making solutions that speed up the integration and querying of such data a necessity.
In this talk, we present various approaches being used in organisations to build biomedical computational pipelines to address these problems using tools such as Machine Learning and TypeDB. In particular, we discuss how to create an accurate and scalable semantic representation of molecular level biomedical data by presenting examples from drug discovery, precision medicine and competitive intelligence.
Speaker: Tomás Sabat
Tomás is the Chief Operating Officer at Vaticle, dedicated to building a strongly-typed database for intelligent systems. He works directly with TypeDB's open source and enterprise users so they can fulfil their potential with TypeDB and change the world. He focuses mainly on life sciences, cyber security, finance and robotics.
Loading a lot of data into a graph database is not a trivial exercise. TypeDB Loader (formerly known as GraMi) was developed to allow large-scale data import into TypeDB, a strongly-typed database. Recent improvements have immensely simplified the configuration interface to allow for easier data importing, while maintaining features and the promise of loading huge amounts of data into TypeDB as fast as possible.
Natural Language Interface to Knowledge Graph (Vaticle)
Natural language interfaces (NLI) offer end-users an easy and convenient way to query ontology-based knowledge graphs. They automatically generate database queries based on their natural language inputs, avoiding the need for the end user to learn different query languages. NLIs can be used with REST APIs to facilitate and enrich the interactions with knowledge graphs, in domains such as interactive root cause analysis (RCA), dynamic dashboard generation, and Online Transactional Processing (OLTP).
In this talk, you'll learn about a natural language interface built with a TypeDB server running on Raspberry Pi4. This application offers a conversational bot assistant with Cisco Webex for an efficient and flexible way to facilitate human-machine interactions. In particular, this talk will demonstrate how natural language inputs are translated into TypeQL queries using Abstract Syntax Trees that represent the syntactic structure discovered during the Named Entity Recognition (NER) analysis of the textual inputs provided by Rasa 2.X running on an Intel Celeron J3455 miniPC.
A Data Modelling Framework to Unify Cyber Security Knowledge (Vaticle)
Cyber security companies collect massive amounts of heterogenous data coming from a huge number of sources. These describe hundreds of different data types, such as vulnerabilities, observables, incidents, and malwares. While this data is highly complex (with many types of relations, type hierarchies, and rules), its structure doesn't significantly change between organisations. However, without a publicly available data model, organisations end up modelling the same data in different ways: in other words, reinventing the wheel, and wasting their resources. This modelling complexity makes scaling cyber security applications extremely difficult.
That's why efforts are underway to provide ready-made solutions for typical cyber security use cases, with the flexibility to expand for the specific requirements of individual setups. The combination of these efforts has created a lot of inter-related knowledge silos (e.g. CVE, CAPEC, CWE, CVSS, Cocoa, MITRE, VERIS, STIX, MAEC). To unify these silos, various ontologies have been proposed by researchers, with different levels of granularity: from specific use cases like defence exercises, to more comprehensive cases like the UCO project.
During this talk, you’ll learn about the OmnibusCyber Project, an open-source, ready-made solution that aggregates cyber security knowledge silos, based on TypeDB. TypeDB’s framework offers the expressivity, safety, and inference properties required to implement a knowledge graph without the complexity associated with the OWL/RDF semantic frameworks.
Unifying Space Mission Knowledge with NLP & Knowledge Graph (Vaticle)
Synopsis
The number of space missions being designed and launched worldwide is growing exponentially. Information on these missions, such as their objectives, orbit, or payload, is disseminated across various documents and datasets. Facilitating access to this information is key to accelerating the design of future missions, enabling experts to link an application to a mission, and following various stakeholders' activities.
This presentation introduces recent research done at the ESA to combine the latest Language Models with Knowledge Graphs, unifying our knowledge on space missions. Language Models such as GPT-3 and BERT are trained to understand the patterns of human (natural) language. These models have revolutionised the field of NLP, the branch of AI enabling machines to understand human language in all its complexity. In this work, key information on a mission is parsed from documents with the GPT-3 model, and the parsed data is then migrated to a TypeDB Knowledge Graph to be easily queried. Although this work focuses on an application in the space sector, the method can be transferred to other engineering fields.
Presenters
Dr. Audrey Berquand is a Research Fellow at the ESA. Her research aims at enhancing space mission design and knowledge management with text mining, NLP, and Knowledge Graphs. She was awarded her PhD in 2021 from the University of Strathclyde (Scotland) for her thesis on “Text Mining and Natural Language Processing for the Early Stages of Space Mission Design”. Audrey has a background in space systems engineering, she holds an MSc in Aerospace Engineering from the Royal Institute of Technology KTH (Sweden), and a diplôme d'ingénieur from the EPF Graduate School of Engineering (France). Before diving into the world of AI, she spent 3 years at ESA being involved in the early design phases of future Earth Observation missions.
Ana Victória Ladeira works with Knowledge Management at the ESA, using automated methods to exploit the information contained in the piles and piles of documents that ESA generates every day. With a Masters degree in Data Science from Maastricht University, Ana is particularly excited about how NLP methods can help large organizations connect different documents and highlight the bigger picture over a big universe of data sources, as well as using Knowledge Graphs to help connect people to the expertise and information they need.
Talk Summary:
State of the art AI approaches can struggle to create solutions which provide accurate results that stand the test of time. They are also plagued by problems such as bias and a lack of explainability. Causal AI addresses these key problems and is at the center of the Geminos Causeway platform, which is built on TypeDB.
This webinar will give you an introduction to why causal AI is so important, and how you can start to use it to drive more value for your organisation.
Speaker: Stuart Frost
Stu is the CEO and founder of Geminos. Their focus is on building AI-driven solutions for mid-sized Smart Manufacturing and Logistics companies, that are frustrated by their inability to digitalize their operations at sensible cost. Stu has 30 years’ experience in founding and leading successful data management and analytics startups, starting at 26 when he founded SELECT Software Tools, and led the company to a NASDAQ IPO in 1996. He then founded DATAllegro in 2003 which was acquired by Microsoft.
Knowledge Graphs for Supply Chain Operations (Vaticle)
Agility in supply chain operations has never been so important, especially with today's nonlinear and complex world. That is why companies with supply chains need knowledge graphs.
So how do enterprises unleash the power of their own supply chain data to make smarter decisions? This is where bops comes into play. Bops activates supply chain data from existing operating systems (ERPs, POS, OMS, etc.), simplifying how operators optimize working capital in every decision.
In this session, bops will showcase a few use cases that portray the power of a knowledge graph to represent a supply chain network composed of an end-to-end product flow driven by actions among plants, customers and suppliers.
Supply chain operations visibility:
- Story of a Product and an SKU: from raw material to finished goods track trace & bill of material deviations
- Story of a Supplier – risk assessments – “the most influential supplier”
- Story of a Process – anomaly detection – “what went wrong?”
Join us for a lively discussion to learn how using knowledge graphs is already helping supply chain companies to better collect, unify, and activate their data.
Speaker: Jorge Risquez
Jorge is the Co-founder and CEO of bops, a headless supply chain intelligence platform helping manufacturers and distributors source, make, and deliver their products, and unlock working capital. Previously, Jorge spent a decade as a Supply Chain Consultant for Deloitte, where he worked with Fortune 500 companies such as Tyson and Cargill. In his spare time, he enjoys going for a run in Central Park and spending time with family and friends.
Building a Distributed Database with Raft (Vaticle)
Applications running in production have much higher requirements. Not only do they need to be correct, they also need to be "always on", handle a much bigger user load, and be secure.
Meet TypeDB Cluster, the TypeDB database for production-scale, built using the Raft replication algorithm. Join us for a walk through the underlying architecture and what value it brings to developers running an application at scale.
Speaker: Ganeshwara Henanda
Ganesh leads the development of TypeDB Cluster while also managing other aspects such as infrastructure and project management. His day-to-day work involves building concurrent and distributed algorithms such as Raft and the Actor Model.
He graduated with an MSc in Grid Computing from the University of Amsterdam, and has built several large-scale distributed and real-time systems throughout his career.
Enabling the Computational Future of Biology (Vaticle)
Computational biology has revolutionised biomedicine. The volume of data it is generating is growing exponentially. This requires tools that enable computational and non-computational biologists to collaborate and derive meaningful insights. However, traditional systems are inadequate to accurately model and handle data at this scale and complexity.
In this talk, we discuss how TypeDB enables biologists to build a deeper understanding of life, and increase the probability of groundbreaking discoveries, across the life sciences.
Speaker: Tomás Sabat
Tomás is the Chief Operating Officer at Vaticle. He works closely with TypeDB's open source and enterprise users who use TypeDB to build applications in a wide number of industries including financial services, life sciences, cybersecurity and supply chain management. A graduate of the University of Cambridge, Tomás has spent the last seven years founding and building businesses in the technology industry.
Build your skills and learn how TypeDB's native inference engine works.
Good for:
- Beginners to TypeDB and TypeQL
- Those who have been using TypeDB and want a refresher on inference in TypeDB
- Experienced software engineers
- Those who want to better represent their domain in a model that allows for logical reasoning at the database level
Description:
TypeDB is capable of reasoning over data via pre-defined rules. TypeQL rules look for a given pattern in the database and when found, infer the given queryable fact. The inference provided by rules is performed at query (run) time. Rules not only allow shortening and simplifying of commonly-used queries, but also enable knowledge discovery and implementation of business logic at the database level.
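The rule mechanism described above can be sketched in TypeQL. The schema names here (`location-hierarchy` and its roles) are illustrative, not from the training; the pattern in the `when` block is matched at query time, and the `then` block's fact becomes queryable as if it were inserted data:

```typeql
define

# Hypothetical rule: if x sits under y, and y under z, then x sits under z.
rule transitive-location:
when {
    (subordinate: $x, superior: $y) isa location-hierarchy;
    (subordinate: $y, superior: $z) isa location-hierarchy;
} then {
    (subordinate: $x, superior: $z) isa location-hierarchy;
};
```

A single match query over `location-hierarchy` then returns both stored and inferred relations, which is what makes commonly-used queries shorter.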
Takeaways:
- Understanding of fundamental components of TypeDB's inference engine and how to write rules for your domain
- Write at least 1 rule for your use case
- Utilise the rule you wrote in a query
Tomás Sabat:
Tomás is the Chief Operating Officer at Vaticle, dedicated to building a strongly-typed database for intelligent systems. He works directly with TypeDB's open source and enterprise users so they can fulfil their potential with TypeDB and change the world. He focuses mainly on life sciences, cyber security, finance and robotics.
Join the TypeDB community to learn how we think about data modelling, and how TypeDB's expressivity allows you to model your domain based on logical and object-oriented programming principles.
Good for:
- Engineers, scientists, and technical executives
- Those in a technical field working with complex datasets, and building intelligent systems
- Anyone curious to learn about the expressive power of TypeDB's data model
Description:
We open this training with an exploration into what a schema looks like in TypeDB, starting with clarifying the motivation for the conceptual model in TypeDB, and its relationship to the Enhanced Entity-Relationship model.
Then we break things down a bit more philosophically, delving into: what does it mean to represent data in TypeDB, and how TypeDB allows you to think higher-level, as opposed to join-tables, columns, documents, vertices, edges, and properties.
Takeaways:
- Be able to articulate why TypeDB's data model is so beneficial for complex data, and why we use it to build intelligent systems
- Write a TypeDB schema in TypeQL
- Practice modelling one of your own domains
Tomás Sabat:
Tomás is the Chief Operating Officer at Vaticle, dedicated to building a strongly-typed database for intelligent systems. He works directly with TypeDB's open source and enterprise users so they can fulfil their potential with TypeDB and change the world. He focuses mainly on life sciences, cyber security, finance and robotics.
Using SQL to query relational databases is easy. As a declarative language, it makes it straightforward to write queries and build powerful applications. However, relational databases struggle when working with complex data, and challenges arise especially in modelling and querying that data in SQL.
For example, the large number of necessary JOINs forces us to write long and verbose queries. Such queries are difficult to write and prone to mistakes.
TypeQL is the query language used in TypeDB. Just as SQL is the standard query language in relational databases, TypeQL is TypeDB's query language. It’s a declarative language, and allows us to model, query and reason over our data.
In this talk, we will look at how TypeQL compares to SQL. Why and when should you use TypeQL over SQL? How do we do outer/inner joins in TypeQL? We'll look at the common concepts, but mostly talk about the differences between the two.
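To make the contrast concrete, here is a small sketch using a hypothetical employment schema (not from the talk). The same question that needs two JOINs in SQL is a single pattern in TypeQL:

```typeql
# SQL equivalent (illustrative):
#   SELECT p.name, c.name
#   FROM person p
#   JOIN employment e ON e.person_id = p.id
#   JOIN company c ON c.id = e.company_id;

match
  $p isa person, has name $pn;
  $c isa company, has name $cn;
  (employee: $p, employer: $c) isa employment;
get $pn, $cn;
```

The relation is addressed by its roles rather than reconstructed through foreign keys, which is where the verbosity savings come from.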
Speaker: Tomás Sabat
Tomás is the Chief Operating Officer at Vaticle. He works closely with TypeDB's open source and enterprise users who use TypeDB to build applications in a wide number of industries including financial services, life sciences, cybersecurity and supply chain management. A graduate of the University of Cambridge, Tomás has spent the last seven years founding and building businesses in the technology industry.
TypeDB Academy: Getting Started with Schema Design (Vaticle)
In this TypeDB Academy, we start by gaining an understanding of the fundamental components of TypeDB's type system and what makes it unique. We will see how we can download, install, and run TypeDB, and learn to perform basic database operations.
We'll then explore what a schema looks like in TypeDB, starting with clarifying the motivation for schema, the conceptual schema of TypeDB, and its relationship to the Enhanced Entity-Relationship model.
Good for:
- Beginners to TypeDB and TypeQL
- Those who have been using TypeDB and want a refresher on schema and TypeQL
- Experienced database administrators and software engineers
Takeaways:
- Understanding of fundamental components of TypeDB
- How to download, install, and run TypeDB on your computer
- Be able to articulate why schema is so beneficial when using TypeDB, why we use one, and how it enables a more expressive model
- Write a TypeDB schema in TypeQL
Comparing Semantic Web Technologies to TypeDB (Vaticle)
Semantic Web technologies enable us to represent and query for very complex and heterogeneous datasets. We can add semantics and reason over large bodies of data on the web. However, despite a lot of educational material available, they have failed to achieve mass adoption outside academia.
TypeDB works at a higher level of abstraction and enables developers to be more productive when working with complex data. TypeDB is easier to learn, reducing the barrier to entry and enabling more developers to access semantic technologies. Instead of using a myriad of standards and technologies, we just use one language - TypeQL.
In this talk we will:
- look at how TypeQL compares to Semantic Web standards, specifically RDF, SPARQL, RDFS, OWL and SHACL.
- cover questions such as: how do we represent hyper-relations in TypeDB? How does one use rdfs:domain and rdfs:range in TypeDB? And how do the modelling philosophies compare?
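As a taste of the hyper-relation question in the list above, here is a sketch with hypothetical schema names (not from the talk): a relation with three role players is declared directly, where binary RDF triples would need reification.

```typeql
define

# A hypothetical three-way relation: a contract involving a supplier,
# a customer, and a mediating broker in a single relation instance.
contract sub relation,
    relates supplier,
    relates customer,
    relates mediator;

company sub entity,
    plays contract:supplier,
    plays contract:customer;

broker sub entity,
    plays contract:mediator;
```

Adding a fourth role later is a schema change to one relation type, rather than a restructuring of reified triples.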
Speaker: Tomás Sabat
Tomás is the Chief Operating Officer at Vaticle. He works closely with TypeDB's open source and enterprise users who use TypeDB to build applications in a wide number of industries including financial services, life sciences, cyber security and supply chain management. A graduate of the University of Cambridge, Tomás has spent the last seven years founding and building businesses in the technology industry.
How might we utilise an actor-based execution model to build a powerful yet elegant reasoning engine?
Actors are an asynchronous, inherently parallel framework that form the basis of some of the most computationally heavy systems in the world. By leveraging this in an event-driven model, we can build an execution engine that makes efficient use of all available hardware resources to answer your reasoning queries.
We'll visit the key ideas behind actors, and then walk through how we break reasoning into neat, actor-sized building blocks. As we do this, it will become clear how our marriage of reasoning and actors naturally produces a scalable and elegant execution engine. By examining the problem of reasoning from an actor-based lens, we'll be able to better understand the complexities of reasoning and visualise bottlenecks and optimisations.
Intro to TypeDB and TypeQL | A strongly-typed database | Vaticle
TypeDB is a strongly-typed database. It provides a rich and logical type system which breaks down complex problems into meaningful and logical systems, using TypeQL as its query language.
TypeDB allows you to model your domain based on logical and object-oriented principles. Composed of entity, relationship, and attribute types, as well as type hierarchies, roles, and rules, TypeDB allows you to think higher-level, as opposed to join-tables, columns, documents, vertices, and edges.
Types describe the logical structures of your data, allowing TypeDB to validate that your code inserts and queries data correctly. Query validation goes beyond static type-checking to include logical validation that rejects meaningless queries. With strict type-checking errors, you have a dataset that you can trust.
Finally, TypeDB encodes your data for logical interpretation by its reasoning engine. It enables type-inference and rule-inference, which create logical abstractions of data. This allows for the discovery of facts and patterns that would otherwise be too hard to find.
With these abstractions, queries in the tens to hundreds of lines in SQL or NoSQL databases can be written in just a few lines in TypeQL – collapsing code complexity by orders of magnitude.
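To give a concrete sense of rule-inference, a rule declares new facts wherever a pattern holds, so queries never need to spell the derivation out. A hedged sketch (the location types are a hypothetical example, not from the talk):

```typeql
define
# Whenever x is located in y, and y in z, infer x is located in z
rule transitive-location:
    when {
        (located: $x, locating: $y) isa location;
        (located: $y, locating: $z) isa location;
    } then {
        (located: $x, locating: $z) isa location;
    };
```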
Join Tomás from the Vaticle team where he'll discuss the origins of TypeDB, the impetus for inventing a new query language, TypeQL, and why we are so excited about the future of software and intelligent systems.
Tomás Sabat:
Tomás is the Chief Operating Officer at Vaticle, dedicated to building a strongly typed database for intelligent systems. He works directly with TypeDB's open source and enterprise users so they can fulfil their potential with TypeDB and change the world. He focuses mainly on life sciences, cyber security, finance and robotics.
Graph Databases vs TypeDB | What you can't do with graphs | Vaticle
Developing with graph databases has a number of challenges, such as the modelling of complex schemas, and maintaining data consistency in your database.
In this talk, we discuss how TypeDB addresses these challenges, as well as how it compares to property graph databases. We’ll look at how to read and write data, how to model complex domains, and TypeDB’s ability to infer new data.
The main differences between TypeDB and graph databases can be summarised as:
1. TypeDB provides a concept-level schema with a type system that fully implements the Entity-Relationship (ER) model. Graph databases, on the other hand, use vertices and edges without integrity constraints imposed in the form of a schema
2. TypeDB contains a built-in inference engine - graph databases don’t provide native inferencing capabilities
3. TypeDB is an abstraction over a graph, and leverages a graph database under the hood to create a higher-level model, while graph databases work at different levels of abstraction
Tomás Sabat
Tomás is the Chief Operating Officer at Vaticle. He works closely with TypeDB's open source and enterprise users who use TypeDB to build applications in a wide number of industries including financial services, life sciences, cyber security and supply chain management. A graduate of the University of Cambridge, Tomás has spent the last seven years founding and building businesses in the technology industry.
In this seminar we use TypeDB to open a window on the Pandora Papers, a massive 'data tsunami' based on 11.9 million leaked source documents obtained by the International Consortium of Investigative Journalists (ICIJ).
We will use an automated query builder to get an initial set of results, and then hop from node to node, exploring neighbours and mapping out a suspicious-looking network of offshore shell companies, officers and intermediaries.
Speaker: Jon Thompson
Jon has an MSc in Applied Mathematics and has worked for several years as a Data Scientist in high-throughput biological sequencing. He is the founder of Nodelab, which is on a mission to provide a fully-featured graphical user interface experience for TypeDB.
Heterogeneous data holds significant inherent context. We would like our machine learning models to understand this context, and utilise this ancillary but critical information to improve the accuracy and versatility of our models.
How can we systematically make use of context in Machine Learning?
We delve in and investigate knowledge modelling techniques which, applied with the right ML strategies, give us a promising approach for robustly handling heterogeneous data in large knowledge models. We aim to do this in a way that allows us to build any Machine Learning model, including graph learning models like our KGCN.
Speaker: James Fletcher, Vaticle
James comes from a background of Computer Vision, specialising in automated diagnostics. As Principal Scientist at Vaticle, his mission is to demonstrate to the world how traditional symbolic approaches to AI, built-in to TypeDB, can be combined with present-day research in machine learning.
AI offers enormous potential in terms of improving the effectiveness and efficiency of robots. In recent years, data-driven AI has achieved remarkable success in specialised tasks such as speech recognition, machine translation and object detection. Despite these successes, there are also some clear signs of the limitations.
On finding a solution to these limitations, we study the following three challenges:
1) How may robots operate under real-world conditions, which are dynamic and packed with unknown objects and situations?
2) How may robots be able to execute multiple tasks, instead of just one?
3) How can robots cooperate with other robots and with human team-mates?
In this talk the first two challenges will be addressed. Also, we will show how the knowledge-base of TypeDB enables us to tackle such challenges.
Speaker: Joris Sijs, Scientist @ TNO
Joris is a team-lead at TNO, where he develops and integrates software modules for the perception, awareness and planning of autonomous systems and autonomous robots. He recently started to extend this work with the development of knowledge-graphs (or cognitive databases), and how to combine this type of AI with the machine- and deep-learning solutions in AI.
Kubernetes & AI - Beauty and the Beast !?! @ KCD Istanbul 2024 | Tobias Schneck
As AI technology pushes into IT, I asked myself, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality | Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Smart TV Buyer Insights Survey 2024 | 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... | UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
JMeter webinar - integration with InfluxDB and Grafana | RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Key Trends Shaping the Future of Infrastructure | Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic* | Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Essentials of Automations: Optimizing FME Workflows with Parameters | Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... | BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
"Impact of front-end architecture on development cost", Viktor Turskyi | Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Search and Society: Reimagining Information Access for Radical Futures | Bhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
4. TypeDB Data - CTI
A knowledge graph to manage CTI knowledge and discover new insights about cyber threats.
- Database: TypeDB is a strongly-typed database, and TypeQL is its query language.
- Data Model: Based on STIX 2.1 to enable ingestion of heterogeneous CTI data.
- Datasets: Includes a connector to load MITRE ATT&CK (other datasets to follow).
5. Why TypeDB?
- Expressivity: an expressive data model to represent complex data and enable contextualization of CTI data.
- Logical Inference: discover previously unknown insights from existing data to detect and respond to new threats.
6. Structured Threat Information Expression (STIX™)
[Diagram: an Intrusion Set entity connected to a Malware entity by a "uses" relationship, shown as an embedded relationship between the two objects.]
17. Modelling STIX in TypeDB

define
stix-object sub entity;
stix-core-object sub stix-object;
stix-domain-object sub stix-core-object;
stix-cyber-observable-object sub stix-core-object;
stix-meta-object sub stix-object;
marking-definition sub stix-meta-object;
external-reference sub stix-meta-object;
kill-chain-phase sub stix-meta-object;
18. Modelling STIX in TypeDB

define
stix-object sub entity;
stix-core-object sub stix-object;
stix-domain-object sub stix-core-object;
stix-cyber-observable-object sub stix-core-object;
stix-meta-object sub stix-object;
marking-definition sub stix-meta-object;
statement-marking sub marking-definition;
tlp-marking sub marking-definition;
external-reference sub stix-meta-object;
kill-chain-phase sub stix-meta-object;
19. Modelling STIX in TypeDB

define
stix-object sub entity;
stix-core-object sub stix-object;
stix-domain-object sub stix-core-object;
stix-cyber-observable-object sub stix-core-object;
stix-meta-object sub stix-object;
marking-definition sub stix-meta-object;
statement-marking sub marking-definition;
tlp-marking sub marking-definition;
tlp-white sub tlp-marking;
tlp-green sub tlp-marking;
tlp-red sub tlp-marking;
tlp-amber sub tlp-marking;
external-reference sub stix-meta-object;
kill-chain-phase sub stix-meta-object;
20. Modelling STIX in TypeDB - Nested Relations
[Diagram: a Threat Actor entity connected by an Indication relation, with three Embedded relations attached to the objects.]
21. Modelling STIX in TypeDB - Nested Relations
[Diagram: an indication relation links an indicator (role: indicating) to a threat-actor (role: indicated); the indication relation itself plays the marked role in an object-marking relation, whose marking role is played by a tlp-red marking.]
22. Modelling STIX in TypeDB - Nested Relations

match
$indicator isa indicator, has name "Fake email address";
$threat-actor isa threat-actor, has name "The Joker";
$tlp isa tlp-red, has stix-id "marking-definition--5e57c739-391a-4eb3-b6be-7d15ca92d5ed";
$ind (indicating: $indicator, indicated: $threat-actor) isa indication;
$mark (marking: $tlp, marked: $ind) isa object-marking;
26. Modelling STIX in TypeDB - Type Hierarchies
[Diagram: network-traffic with subtypes http-request, icmp, socket, tcp]

define
network-traffic sub entity;
http-request sub network-traffic;
icmp sub network-traffic;
socket sub network-traffic;
tcp sub network-traffic;
27. Modelling STIX in TypeDB - Type Hierarchies
[Diagram: network-traffic with subtypes http-request, icmp, socket, tcp]

define
network-traffic sub entity,
    owns start,
    owns end,
    owns is-active,
    owns protocol;
http-request sub network-traffic;
icmp sub network-traffic;
socket sub network-traffic;
tcp sub network-traffic;
28. Modelling STIX in TypeDB - Type Hierarchies
[Diagram: network-traffic with subtypes http-request, icmp, socket, tcp]

define
network-traffic sub entity,
    owns start,
    owns end,
    owns is-active,
    owns protocol;
http-request sub network-traffic,
    owns request-method;
icmp sub network-traffic,
    owns icmp-type-hex;
socket sub network-traffic,
    owns address-family;
tcp sub network-traffic,
    owns src-flags-hex;
29. Modelling STIX in TypeDB - What are all the network traffic entities with protocol "ip"?
Without type inference, every subtype must be queried explicitly:

match
$network-traffic isa network-traffic, has protocol "ip";
$http-request isa http-request, has protocol "ip";
$icmp isa icmp, has protocol "ip";
$socket isa socket, has protocol "ip";
$tcp isa tcp, has protocol "ip";
30. Modelling STIX in TypeDB - What are all the network traffic entities with protocol "ip"?
With type inference, one query over the supertype suffices:

match
$network-traffic isa network-traffic, has protocol "ip";
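The single supertype query works because TypeDB resolves subtypes automatically: any instance of http-request, icmp, socket or tcp is also a network-traffic. A small illustration against the schema above (the inserted instance and its attribute values are invented for the example):

```typeql
insert
$req isa http-request,
    has protocol "ip",
    has request-method "GET";

# The supertype query now returns $req via type inference,
# even though it was inserted as an http-request:
match
$network-traffic isa network-traffic, has protocol "ip";
```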
31. Inferring New Insights
match
$course isa course-of-action, has name "Restrict File and Directory Permissions";
$in isa intrusion-set, has name "BlackTech";
$mit (mitigating: $course, mitigated: $in) isa mitigation;
33. Inferring New Insights

rule mitigating-course-of-action-with-intrusion-set:
when {
    (mitigating: $course-of-action, mitigated: $sdo) isa mitigation;
    (used: $sdo, used-by: $intrusion-set) isa use;
} then {
    (mitigating: $course-of-action, mitigated: $intrusion-set) isa inferred-mitigation;
};
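With inference enabled at query time, the derived relation can be matched like any stored one. A sketch of such a query, assuming the inferred-mitigation relation type is defined in the schema with the same roles as mitigation:

```typeql
# Returns courses of action that mitigate anything BlackTech uses,
# even though no inferred-mitigation was ever inserted
match
$course isa course-of-action;
$in isa intrusion-set, has name "BlackTech";
(mitigating: $course, mitigated: $in) isa inferred-mitigation;
```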
36. Inferring New Insights
rule transitive-use:
when {
(used-by: $sdo-1, used: $sdo-2) isa use;
(used-by: $sdo-2, used: $sdo-3) isa use;
} then {
(used-by: $sdo-1, used: $sdo-3) isa use;
};
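Concretely, if an intrusion set uses a malware and that malware uses an attack pattern, the transitive rule makes the following query succeed even though no direct use relation between the two was ever inserted (the names are illustrative):

```typeql
match
$in isa intrusion-set, has name "BlackTech";
$ap isa attack-pattern;
# Matched via the transitive-use rule through the intermediate malware
(used-by: $in, used: $ap) isa use;
```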
37. Why TypeDB?
- Expressivity: an expressive data model to represent complex data and enable contextualization of CTI data.
- Logical Inference: discover previously unknown insights from existing data to detect and respond to new threats.
38. Usage Flow of TypeDB CTI
- Downloader.py: download the latest or a specific version of the STIX packages.
- Migrate.py: ingest the data into TypeDB.
- Explorer.py: run a stat check on object counts (explorer.py --stat).
- Explorer.py: input your techniques or sub-techniques and retrieve the list of groups (explorer.py --infer_group --ttp T1189 T1068).
40. Discord is the community's home for learning, ideation and collaboration.
and collaboration.