Slides of a talk given to the Seattle Chapter of the Cloud Security Alliance. The talk looks briefly at architectures, sources of log data, behavioral signatures in the data, and issues and observations around using Big Data products for security.
Is Your Data Secure?
Odds are good that your data is extremely important to you. Now consider how one secures that data. Typical approaches address access, authentication, integrity, non-repudiation and confidentiality concerns at the domain and link layers, implicitly securing the data. The challenge and need is to move these security specifications to the data itself, and provide explicit security policies on each element of system-identified data.
Why is this level of finesse needed? As you build out your systems, and systems of systems, how do you manage security when individual elements of data, the communication links, and domain boundaries all have different behaviors? With this level of complexity and risk, it's critical to have awareness at the level that matters – the data level – so you can make the right design and implementation decisions.
At this webinar, learn how to achieve an assured and predictable security footprint by minimizing information leakage and the exploitation of data through unintended consequences. Secure DDS offers data-centric configuration policies for content and behaviors. Recognizing that security isn't one-size-fits-all, a standards-based optional plugin SDK allows developers to create custom security plugins.
Connext Secure DDS is the world's first turnkey DDS security solution that conforms to the OMG specification and provides an essential security infrastructure that is data-focused for DDS and legacy systems.
Watch On-Demand: http://ecast.opensystemsmedia.com/478
The document discusses security for the Industrial Internet of Things (IIoT) and Connext DDS Secure. It provides an overview of security frameworks from the Industrial Internet Consortium, including how they address threats in publish-subscribe systems. It then describes the key features of Connext DDS Secure, which is based on the DDS Security specification and provides authentication, access control, and encryption without a broker. The document demonstrates how to configure QoS profiles and permission files to set up secure domains for a Connext DDS shapes demo.
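The permissions files mentioned above follow the XML structure defined by the OMG DDS Security specification. A skeletal grant might look like the following sketch; the subject name, validity dates, domain ID, and topic are illustrative, and a real file must also carry the spec's schema declaration and be signed by the permissions CA:

```xml
<dds>
  <permissions>
    <grant name="ShapesPublisherGrant">
      <!-- Must match the distinguished name in the participant's identity certificate -->
      <subject_name>CN=shapes_pub,O=ExampleOrg</subject_name>
      <validity>
        <not_before>2024-01-01T00:00:00</not_before>
        <not_after>2026-01-01T00:00:00</not_after>
      </validity>
      <allow_rule>
        <domains><id>0</id></domains>
        <publish>
          <topics><topic>Square</topic></topics>
        </publish>
      </allow_rule>
      <default>DENY</default>
    </grant>
  </permissions>
</dds>
```

With `<default>DENY</default>`, this participant may publish only the `Square` topic on domain 0; everything else is refused.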
How to Build Your Own Physical Pentesting Go-bag – Beau Bullock
Whenever an attacker decides to attempt to compromise an organization they have a few options. They can try to send phishing emails, attempt to break in through an externally facing system, or if those two fail, an attacker may have to resort to attacks that require physical access. Having the right tools in the toolkit can determine whether a physical attacker is successful or not. In this talk we will discuss a number of different physical devices that should be in every physical pentester’s go-bag.
Stealing credentials from a locked computer, getting command and control access out of a network, installing your own unauthorized devices, and cloning access badges are some of the topics we will highlight. We will demo these devices from our own personal go-bags live. Specific use cases for each of the various devices will be discussed including build lists for some custom hardware devices.
Pentest Apocalypse: that's when you hire a pentester, and they walk all over your network. To avoid this, organizations need to be prepared before the first packet is sent in order to get the most value from the tester. There is no excuse for pentesters to find critical vulnerabilities that are six years old on an assessment. And who needs a zero-day when employees leave credentials on wide-open shares? Just like how Doomsday Preppers helps you prepare for the apocalypse, this presentation will help you prepare for, and avoid, a pentest apocalypse by describing common vulnerabilities found on many assessments. Being prepared for common pentester activities will not only help add value to a pentest but will also help prevent attackers from using the same tactics to compromise your organization.
For more information, visit http://bsidestampa.net
http://www.irongeek.com/i.php?page=videos/bsidestampa2015/104-pentest-apocalypse-beau-bullock
NetFlow provides visibility into network traffic by capturing metadata on network flows. It identifies the source and destination IP addresses and ports, protocol, start and end times, byte and packet counts for each flow. This flow data is exported from routers and switches to a collector, where NetFlow analyzers aggregate and analyze the data to provide insights into network usage, applications in use, traffic trends, and potential security issues.
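The flow metadata described above can be pictured as a simple record type. The following Python sketch is a simplified stand-in (not the actual NetFlow v5/v9 binary format) that shows the kind of per-source aggregation a collector-side analyzer performs:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class FlowRecord:
    # Simplified stand-in for the metadata NetFlow exports per flow;
    # real v5/v9/IPFIX records are binary and carry additional fields.
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    start: float      # epoch seconds
    end: float
    bytes: int
    packets: int

def top_talkers(flows, n=3):
    """Aggregate bytes per source IP, as a collector-side analyzer would."""
    totals = defaultdict(int)
    for f in flows:
        totals[f.src_ip] += f.bytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical flows exported from a router:
flows = [
    FlowRecord("10.0.0.5", "8.8.8.8", 51000, 53, "UDP", 0, 1, 120, 2),
    FlowRecord("10.0.0.5", "93.184.216.34", 51001, 443, "TCP", 0, 9, 50_000, 60),
    FlowRecord("10.0.0.9", "93.184.216.34", 51002, 443, "TCP", 2, 5, 7_000, 12),
]
print(top_talkers(flows))  # → [('10.0.0.5', 50120), ('10.0.0.9', 7000)]
```

The same record shape supports the other analyses mentioned above (protocol mix, traffic trends) by grouping on different fields.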
As more businesses migrate their employee email and data into collaborative cloud platforms, default configurations, even in a secured environment, could leave them susceptible to attacks. While these platforms create a centralized way to collaborate, manage access and view the world from a single pane of glass -- they also create unique attack paths that attackers can leverage using built-in APIs.
In this presentation, we will explore an innovative approach to red teaming organizations that use Google Suite as their main cloud provider. We will walk through leveraging built-in features to inject calendar events, phish credentials, capture 2-factor tokens, backdoor accounts, and finally pilfer secrets. Techniques presented will also be incorporated and released as modules within MailSniper.
TABLETOP SCENARIO: Your organization regularly patches, uses application whitelisting, has NextGen-NG™ firewalls/IDS’s, and has the latest Cyber-APT-Trapping-Blinky-Box™. You were just made aware that your entire customer database was found being sold on the dark web. Go.
Putting too much trust in security products alone can be the downfall of an organization. In the 2015 BSides Tampa talk “Pentest Apocalypse” Beau discussed 10 different pentesting techniques that allow attackers to easily compromise an organization. These techniques still work for many organizations but occasionally more advanced tactics and techniques are required. This talk will continue where “Pentest Apocalypse” left off and demonstrate a number of red team techniques that organizations need to be aware of in order to prevent a “Red Team Apocalypse” as described in the tabletop scenario above.
Identifying and Correlating Internet-wide Scan Traffic to Newsworthy Security... – Andrew Morris
In this presentation, we will discuss using GreyNoise, a geographically and logically distributed system of passive Internet scan traffic collector nodes, to identify statistical anomalies in global opportunistic Internet scan traffic and correlate these anomalies with publicly disclosed vulnerabilities, large-scale DDoS attacks, and other newsworthy events. We will discuss establishing (and identifying any deviations away from) a “standard” baseline of Internet scan traffic. We will discuss successes and failures of different methods employed over the past six months. We will explore open questions and future work on automated anomaly detection of Internet scan traffic. Finally, we will provide raw data and a challenge as an exercise to the attendees.
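One simple way to flag deviations from a scan-traffic baseline is a z-score test over per-day counts. This toy Python sketch illustrates the idea; the talk's actual methods are not specified here, and the counts are hypothetical:

```python
import statistics

def anomalies(counts, threshold=2.0):
    """Flag indices whose scan volume deviates from the baseline mean by
    more than `threshold` sample standard deviations (toy z-score test)."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [i for i, c in enumerate(counts)
            if stdev and abs(c - mean) / stdev > threshold]

# Hypothetical daily counts of packets hitting port 23 across collector nodes:
daily_telnet_scans = [980, 1010, 995, 1023, 990, 1005, 9800, 1001]
print(anomalies(daily_telnet_scans))  # → [6], the day the spike occurs
```

A production baseline would use a rolling window and be per-port/per-protocol, but the shape of the signal (a sudden spike against a flat baseline, as with a worm outbreak) is the same.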
This document provides an overview of a presentation about Splunk for IT operations. The presentation includes an introduction to Splunk for ITOps and Splunk apps. It discusses how increasing IT complexity is plaguing operations and how Splunk's machine data platform can provide operational intelligence. The presentation also covers Splunk IT Service Intelligence for monitoring IT services and key performance indicators. It provides examples of how customers are using Splunk to increase uptime, reduce mean time to resolution for issues, and improve margins. The presentation concludes with information on an upcoming Splunk user conference.
Cloud Security - A Visibility Challenge – Raffael Marty
Cloud security really boils down to a visibility challenge. I show why companies are moving to the cloud and what the security implications are. The security challenge boils down to visibility, which in turn is a big data challenge. Loggly, a logging-as-a-service provider, addresses this visibility challenge by providing a big data, cloud logging platform. The presentation outlines some visualization use cases that can be built on top of the Loggly platform to support visibility into cloud operations.
Security Visualization - State of 2010 and 2011 Predictions – Raffael Marty
The document discusses current trends in data visualization. It notes that data collection is important but often lacking. The cloud enables open standards and tools for visualization. However, security visualization remains an afterthought, with few examples and small individual projects, as most organizations do not collect enough security data to visualize. Standards and general purpose visualization tools are still needed to help users understand security data.
The last few years have seen a dramatic increase in the number of PowerShell-based penetration testing tools. A benefit of tools written in PowerShell is that PowerShell is installed by default on every Windows system. This allows us as attackers to “live off the land”. It also has built-in functionality to run in memory, bypassing most security products.
I will walk through various methodologies I use surrounding popular PowerShell tools. Details on attacking an organization remotely, establishing command and control, and escalating privileges within an environment all with PowerShell will be discussed. You say you’ve blocked PowerShell? Techniques for running PowerShell in locked down environments that block PowerShell will be highlighted as well.
These are the slides for the presentation that I gave at ICMEAE in Cuernavaca, Mexico on November 20th, 2014. This includes an example using Spark Core.
KiZAN will bring 25 Raspberry Pi starter kits that run Windows 10 IoT Core. This will enable participants to build a really compelling IoT/Azure/Power BI story in a single day!
We’ll start off the day with an introduction to IoT and build IoT devices (hands on). Next, we’ll build a simple temperature sensor, collecting ambient temperature readings, and stream the data to an Azure IoT Hub.
Once the data is in Azure, we’ll analyze it with Azure Stream Analytics, and ship it to an Azure SQL Database.
Finally, we’ll report on the data and build dashboards of our temperature readings using Power BI.
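The device-side step of the pipeline above can be sketched as follows. This Python snippet only simulates the sensor and builds an illustrative JSON payload; the field names and device ID are assumptions, and the real send would go through the Azure IoT device SDK:

```python
import json
import random
import time

def read_temperature():
    # Stand-in for a real sensor read on the Raspberry Pi (hypothetical values).
    return round(20.0 + random.uniform(-2.0, 2.0), 2)

def build_telemetry(device_id):
    """Build the JSON payload a device might stream to an IoT hub.
    The field names here are illustrative, not a fixed Azure schema."""
    return json.dumps({
        "deviceId": device_id,
        "temperatureC": read_temperature(),
        "timestamp": int(time.time()),
    })

payload = build_telemetry("pi-kit-07")
print(payload)
# Shipping the payload would then use the device SDK for your hub, e.g.
# the azure-iot-device package's IoTHubDeviceClient.send_message(payload).
```

Downstream, Azure Stream Analytics can query these JSON fields directly, which is why a flat, consistently named payload keeps the rest of the pipeline simple.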
Xerox uses Splunk to monitor its electronic payment processing systems. Some key benefits of Splunk include huge time savings over its previous Tivoli platform, increased efficiencies across the business from automated features, and improved visibility into transaction processing through Splunk dashboards. Splunk helps with IT operations monitoring, compliance activities, fraud management, and SSL certificate management. Xerox is able to track $90 billion in payments annually and monitor fraud in real-time using Splunk.
This document discusses how visualization can help make IT security possible when dealing with large-scale infrastructure. It describes some common security tools like antivirus software, firewalls, and intrusion detection systems that are used but have limitations. The document then introduces parallel coordinate visualization as a tool for analyzing large volumes of log and event data to detect unknown attacks and behaviors. It provides examples of how parallel coordinates could be used to visualize and analyze Squid proxy logs, Apache logs, and OpenVPN tunnels. The conclusion is that traditional visualization often fails at scale but parallel coordinates enables analyzing large datasets to find the unknown and understand logs better.
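Before drawing a parallel-coordinates plot, each log field must be scaled onto a shared vertical axis. This Python sketch shows that min-max normalization step on hypothetical proxy-log rows (the plotting itself would then be one polyline per row across the axes):

```python
def normalize_columns(rows):
    """Min-max scale each numeric column to [0, 1] so every field can
    share a vertical axis in a parallel-coordinates plot."""
    cols = list(zip(*rows))          # transpose rows into columns
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1        # avoid division by zero on constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled_cols)]  # transpose back

# Hypothetical proxy-log rows: (bytes_sent, duration_ms, status_code)
rows = [(500, 120, 200), (120_000, 4000, 200), (300, 90, 403)]
print(normalize_columns(rows))
```

After this step, an outlier such as the 120 KB transfer stands out as the one polyline pinned to the top of the bytes axis, which is how parallel coordinates surface anomalous log entries at scale.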
As more corporations adopt Google for providing cloud services they are also inheriting the security risks associated with centralized computing, email and data storage outside the perimeter. In order for pentesters and red teamers to remain effective in analyzing security risks, they must adapt techniques in a way that brings value to the customer.
In this presentation we will begin by demonstrating adaptive techniques to crack the perimeter of Google Suite customers. Next, we will show how evasion can be accomplished by hiding in plain-sight due to failures in incident response plans. Finally, we will also show how a simple compromise could mean collateral damage for customers who are not carefully monitoring these cloud environments.
SplunkLive! Wien 2016 - Splunk für IT Operations – Splunk
This document discusses Splunk software for IT operations. It notes that IT environments have become increasingly complex with many different technologies, applications, and data sources. This makes it difficult for IT teams to maintain systems and innovate. Splunk provides a platform to integrate data from all these different sources for real-time search, monitoring, and analytics. It allows organizations to gain insights from their machine data to more quickly resolve issues and improve IT operations and services. The document highlights how Splunk apps can provide deep insights into specific technologies and roles. It also discusses how Splunk can provide visibility into cloud environments like AWS.
Here’s your chance to get hands-on with Splunk for the first time! Bring your modern Mac, Windows, or Linux laptop and we’ll go through a simple install of Splunk. Then, we’ll load some sample data, and see Splunk in action – we’ll cover searching, pivot, reporting, alerting, and dashboard creation. At the end of this session you’ll have a hands-on understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll experience practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Delivering a New Architecture for Security: Blockchain + Trusted Computing – Rivetz
Rivetz aims to deliver a new architecture for security by combining blockchain and trusted computing technologies. This will allow instructions executed on devices to be provably secure through the use of a trusted execution environment (TEE) isolated from the main operating system. Rivetz tokens (RvT) can enable multifactor authentication, policy-controlled spending, and automated compliance for utilities through the verification of a device's integrity at the transaction level. The goal is to provide on-demand security controls for machines that are assured through attestation and recorded on the blockchain.
If you’re just getting started with Splunk, this session will help you understand how to use Splunk software to turn your silos of data into insights that are actionable. In this session, we’ll dive right into a Splunk environment and show you how to use the simple Splunk search interface to quickly find the needle-in-the-haystack or multiple needles in multiple haystacks. We’ll demonstrate how to perform rapid ad-hoc searches to conduct routine investigations across your entire IT infrastructure in one place, whether physical, virtual or in the cloud. We’ll show you how to then convert these searches into real time alerts and dashboards, so you can proactively monitor for problems before they impact your end user. We’ll demonstrate how you can use Splunk to connect the dots across heterogeneous systems in your environment for cross-tier, cross-silo visibility. You’ll have access to a demo environment. So, don’t forget to bring your laptop and follow along for a hands-on experience.
A session in the DevNet Zone at Cisco Live, Berlin. Analytics of network telemetry data (such as flow records, IPSLA measurements, and time series of MIB data) helps address many important operational problems. Traditional Big Data approaches run into limitations even as they push scale boundaries for processing data further. One reason for this is the fact that in many cases, the bottleneck for analytics is not analytics processing itself but the generation and export of the data on which analytics depends. Data does not come for free. The amount of data that can be reasonably collected from the network runs into inherent limitations due to bandwidth and processing constraints in the network itself. In addition, management tasks related to determining and configuring which data to generate lead to significant deployment challenges.
This presentation provides an overview of DNA (Distributed Network Analytics), a novel technology to analyze network telemetry data in distributed fashion at the network edge, allowing users to detect changes, predict trends, recognize anomalies, and identify hotspots in their network. Analytics processing occurs at the source of the data using an embedded DNA Agent App that dynamically configures data sources as needed and analyzes the data using an embedded analytics engine. This provides DNA with superior scaling characteristics while avoiding the significant operational and bandwidth overhead that is associated with centralized analytics solutions. An ODL-based SDN controller application orchestrates network analytics tasks across the network, providing a network analytics service that allows users to interact with the network as a whole instead of individual devices one at a time. DNA is enabled by the IOx App Hosting Framework and integrated with lightweight embedded analytics engines, CSA (Connected Service Analytics) and DMO (Data in Motion).
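The bandwidth argument above can be illustrated with a toy sketch: an edge agent reduces a window of raw samples to one compact summary record, so only the summary crosses the network. This is illustrative Python, not the DNA agent's actual API:

```python
import statistics

def edge_summarize(samples, device_id):
    """Reduce a window of raw telemetry readings to a compact summary,
    as an edge analytics agent might, so only the summary (not every
    sample) is exported to the collector."""
    return {
        "device": device_id,
        "count": len(samples),
        "mean": statistics.mean(samples),
        "max": max(samples),
    }

# 1,000 raw interface-utilization samples collapse to one record
# (hypothetical device name and readings):
raw = [40 + (i % 7) for i in range(1000)]
summary = edge_summarize(raw, "rtr-berlin-01")
print(summary)
```

Exporting this one record instead of 1,000 samples is the scaling win the presentation describes; the controller then only needs to orchestrate which summaries each agent computes.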
The document provides an overview of software defined networking (SDN) and future-proofing data center networks. It discusses the evolution of networks since 2000 and current challenges around inflexibility. SDN is defined as separating the network control and forwarding functions, making the network programmable. Benefits include direct programmability, agility, centralized management, and open standards. SDN uses the OpenFlow protocol and virtualization. The future of SDN includes integration of LAN and WAN SDN, lower MPLS costs, and controlling networks through devices.
Large enterprise SIEM: get ready for oversize – Mona Arkhipova
This document discusses large enterprise security information and event management (SIEM) systems. It begins by distinguishing SIEM from simple log collection and systems monitoring. It then discusses the IBM QRadar SIEM platform and some of its architecture and performance challenges. The remainder of the document addresses challenges around log collection from various sources like Windows, Unix, databases, and custom applications. It provides best-practice guides and discusses normalization, indexing, and storage of large volumes of log data. Specific metrics are given for the large QIWI SIEM installation handling millions of events per day. The document concludes by discussing automation of security monitoring and response.
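The normalization step mentioned above amounts to mapping unlike log formats onto one common schema before indexing. The regex patterns in this Python sketch are illustrative, not QRadar's actual DSM rules:

```python
import re

# Toy normalizers mapping two unlike log formats onto one schema, the
# kind of step a SIEM performs before indexing (patterns are illustrative).
PATTERNS = [
    # e.g. "sshd[912]: Failed password for root from 10.1.2.3"
    (re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)"),
     "auth_failure"),
    # e.g. "EventID=4625 Account=alice Source=10.9.8.7"
    (re.compile(r"EventID=4625 Account=(?P<user>\S+) Source=(?P<src>\S+)"),
     "auth_failure"),
]

def normalize(line):
    """Return a common-schema event dict, or None if no pattern matches."""
    for pattern, event_type in PATTERNS:
        m = pattern.search(line)
        if m:
            return {"type": event_type, **m.groupdict()}
    return None

print(normalize("sshd[912]: Failed password for root from 10.1.2.3"))
print(normalize("EventID=4625 Account=alice Source=10.9.8.7"))
```

Once both a Unix auth log and a Windows security event land in the same `{type, user, src}` shape, one correlation rule (e.g. "N auth failures from one source") covers every source, which is exactly why normalization quality dominates SIEM effectiveness at scale.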
CYBER INTELLIGENCE & RESPONSE TECHNOLOGY – jmical
This document provides an overview of AccessData's Cyber Intelligence Response Technology (CIRT) platform. CIRT offers an integrated suite of digital forensics and incident response capabilities including network forensics, host-based forensics, data auditing, and malware analysis. Key features include an agent that can independently collect and store data from endpoints, a Cerberus module that analyzes files for malicious behaviors without signatures or prior knowledge, and modules for analyzing removable media, volatile memory, and network packet captures. The platform allows multiple teams such as incident response, computer forensics, and compliance to collaborate on investigations.
Iaetsd secure data storage against attacks in cloud – Iaetsd
The document proposes solutions for securing data storage in the cloud against attacks. It discusses threats and attacks like incorrect data storage, data modification, and perimeter defense weaknesses. It then proposes a defense in depth approach with multiple layers of security controls at the storage devices, network, and management access layers. Specific controls are suggested like authentication, authorization, encryption, firewalls, intrusion detection, and logging. The paper also addresses issues like data correctness verification, error localization, and reliability of the security strategy through techniques like challenge-response protocols and redundant storage across multiple locations.
Deep-Dive into Big Data ETL with ODI12c and Oracle Big Data Connectors – Mark Rittman
- The document discusses Oracle tools for extracting, transforming, and loading (ETL) big data from Hadoop into Oracle databases, including Oracle Data Integrator 12c, Oracle Loader for Hadoop, and Oracle Direct Connector for HDFS.
- It provides an overview of using Hadoop for ETL tasks like data loading, processing, and exporting data to structured databases, as well as tools like Hive, Pig, and Spark for these functions.
- Key benefits of the Oracle Hadoop connectors include pushing data transformations to Hadoop clusters for scale and leveraging SQL interfaces to access Hadoop data for business intelligence.
This document provides an overview of a presentation about Splunk for IT operations. The presentation includes an introduction to Splunk for ITOps and Splunk apps. It discusses how increasing IT complexity is plaguing operations and how Splunk's machine data platform can provide operational intelligence. The presentation also covers Splunk IT Service Intelligence for monitoring IT services and key performance indicators. It provides examples of how customers are using Splunk to increase uptime, reduce mean time to resolution for issues, and improve margins. The presentation concludes with information on an upcoming Splunk user conference.
Cloud Security - A Visibility ChallengeRaffael Marty
Cloud security really boils down to a visibility challenge. I am showing why companies are moving to the cloud and what the security implications are. The security challenges boil down to a visibility, which in turn is a big data challenge. Loggly, a logging as a service provider, addresses this visibility challenge by providing a big data, cloud logging platform. The presentation outlines some visualization use-cases that can be built on top of the Loggly platform to support visibility into cloud operations.
Security Visualization - State of 2010 and 2011 PredictionsRaffael Marty
The document discusses current trends in data visualization. It notes that data collection is important but often lacking. The cloud enables open standards and tools for visualization. However, security visualization remains an afterthought, with few examples and small individual projects, as most organizations do not collect enough security data to visualize. Standards and general purpose visualization tools are still needed to help users understand security data.
The last few years have seen a dramatic increase in the number of PowerShell-based penetration testing tools. A benefit of tools written in PowerShell is that it is installed by default on every Windows system. This allows us as attackers to “”live off the land””. It also has built-in functionality to run in memory bypassing most security products.
I will walk through various methodologies I use surrounding popular PowerShell tools. Details on attacking an organization remotely, establishing command and control, and escalating privileges within an environment all with PowerShell will be discussed. You say you’ve blocked PowerShell? Techniques for running PowerShell in locked down environments that block PowerShell will be highlighted as well.
These are the slides for the presentation that I gave at ICMEAE in Cuernavaca, Mexico on November 20th, 2014. This includes an example using Spark Core.
KiZAN will bring 25 Raspberry Pi starter kits that run Windows 10 IoT Core. This will enable participants to build a really compelling IoT/Azure/Power BI story in a single day! Interet of Things (IoT) Raspberry Pi starter kit
We’ll start off the day with an introduction to IoT and build IoT devices (hands-on). Next, we’ll build a simple temperature sensor that collects ambient temperature readings and streams the data to an Azure IoT Hub.
Once the data is in Azure, we’ll analyze it with Azure Stream Analytics, and ship it to an Azure SQL Database.
Finally, we’ll report on the data and build dashboards of our temperature readings using Power BI.
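The first hop of the pipeline above is the device formatting each reading as a JSON telemetry message. A minimal Python sketch of that step, assuming illustrative device-ID and field names; a real workshop build would send the message through the Azure IoT device SDK rather than print it:

```python
import json
import random
import time

def read_temperature():
    """Stand-in for a real sensor read; returns degrees Celsius."""
    return round(20.0 + random.uniform(-2.0, 2.0), 2)

def make_telemetry(device_id):
    """Build one JSON telemetry message in the shape a hub-side
    stream processor (e.g. Azure Stream Analytics) could query."""
    return json.dumps({
        "deviceId": device_id,
        "temperature": read_temperature(),
        "timestamp": time.time(),
    })

if __name__ == "__main__":
    for _ in range(3):
        print(make_telemetry("pi-temp-01"))
```

Keeping the payload flat like this makes the downstream Stream Analytics query and the SQL table schema straightforward.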
Xerox uses Splunk to monitor its electronic payment processing systems. Some key benefits of Splunk include huge time savings over its previous Tivoli platform, increased efficiencies across the business from automated features, and improved visibility into transaction processing through Splunk dashboards. Splunk helps with IT operations monitoring, compliance activities, fraud management, and SSL certificate management. Xerox is able to track $90 billion in payments annually and monitor fraud in real-time using Splunk.
This document discusses how visualization can make IT security possible when dealing with large-scale infrastructure. It describes common security tools, such as antivirus software, firewalls, and intrusion detection systems, that are widely used but have limitations. It then introduces parallel coordinate visualization as a technique for analyzing large volumes of log and event data to detect unknown attacks and behaviors. It provides examples of how parallel coordinates could be used to visualize and analyze Squid proxy logs, Apache logs, and OpenVPN tunnels. The conclusion is that traditional visualization often fails at scale, while parallel coordinates enable analyzing large datasets to find the unknown and understand logs better.
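The core of a parallel-coordinates view is a simple transform: min-max scale every field so each log event becomes a polyline across the axes, which is what makes outliers visually pop out. A minimal Python sketch of just that coordinate math, using invented toy proxy-log fields and no plotting library:

```python
def parallel_coordinates(events, fields):
    """Min-max scale each field to [0, 1] so every event becomes a
    polyline across the parallel axes."""
    lo = {f: min(e[f] for e in events) for f in fields}
    hi = {f: max(e[f] for e in events) for f in fields}

    def scale(f, v):
        span = hi[f] - lo[f]
        return 0.0 if span == 0 else (v - lo[f]) / span

    return [[scale(f, e[f]) for f in fields] for e in events]

# Toy proxy-log records: source port, bytes transferred, response code.
events = [
    {"sport": 50123, "bytes": 512,    "status": 200},
    {"sport": 50124, "bytes": 900000, "status": 200},  # outlier polyline
    {"sport": 50125, "bytes": 420,    "status": 404},
]
lines = parallel_coordinates(events, ["sport", "bytes", "status"])
```

Each inner list is one polyline; the huge transfer in the second record produces a line that spikes to 1.0 on the `bytes` axis while staying ordinary elsewhere.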
As more corporations adopt Google for providing cloud services they are also inheriting the security risks associated with centralized computing, email and data storage outside the perimeter. In order for pentesters and red teamers to remain effective in analyzing security risks, they must adapt techniques in a way that brings value to the customer.
In this presentation we will begin by demonstrating adaptive techniques to crack the perimeter of Google Suite customers. Next, we will show how evasion can be accomplished by hiding in plain-sight due to failures in incident response plans. Finally, we will also show how a simple compromise could mean collateral damage for customers who are not carefully monitoring these cloud environments.
SplunkLive! Wien 2016 - Splunk für IT Operations (Splunk)
This document discusses Splunk software for IT operations. It notes that IT environments have become increasingly complex with many different technologies, applications, and data sources. This makes it difficult for IT teams to maintain systems and innovate. Splunk provides a platform to integrate data from all these different sources for real-time search, monitoring, and analytics. It allows organizations to gain insights from their machine data to more quickly resolve issues and improve IT operations and services. The document highlights how Splunk apps can provide deep insights into specific technologies and roles. It also discusses how Splunk can provide visibility into cloud environments like AWS.
Here’s your chance to get hands-on with Splunk for the first time! Bring your modern Mac, Windows, or Linux laptop and we’ll go through a simple install of Splunk. Then, we’ll load some sample data, and see Splunk in action – we’ll cover searching, pivot, reporting, alerting, and dashboard creation. At the end of this session you’ll have a hands-on understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll experience practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Pentest Apocalypse: that’s when you hire a pentester and they walk all over your network. To avoid this, organizations need to be prepared before the first packet is sent in order to get the most value from the tester. There is no excuse for pentesters to find critical vulnerabilities that are six years old on an assessment. And who needs a zero-day when employees leave credentials on wide-open shares? Just like Doomsday Preppers helps you prepare for the apocalypse, this presentation will help you prepare for, and avoid, a pentest apocalypse by describing common vulnerabilities found on many assessments. Being prepared for common pentester activities will not only add value to a pentest but will also help prevent attackers from using the same tactics to compromise your organization.
For more information, please visit http://bsidestampa.net
http://www.irongeek.com/i.php?page=videos/bsidestampa2015/104-pentest-apocalypse-beau-bullock
Delivering a New Architecture for Security: Blockchain + Trusted Computing (Rivetz)
Rivetz aims to deliver a new architecture for security by combining blockchain and trusted computing technologies. This will allow instructions executed on devices to be provably secure through the use of a trusted execution environment (TEE) isolated from the main operating system. Rivetz tokens (RvT) can enable multifactor authentication, policy-controlled spending, and automated compliance for utilities through the verification of a device's integrity at the transaction level. The goal is to provide on-demand security controls for machines that are assured through attestation and recorded on the blockchain.
If you’re just getting started with Splunk, this session will help you understand how to use Splunk software to turn your silos of data into insights that are actionable. In this session, we’ll dive right into a Splunk environment and show you how to use the simple Splunk search interface to quickly find the needle-in-the-haystack or multiple needles in multiple haystacks. We’ll demonstrate how to perform rapid ad-hoc searches to conduct routine investigations across your entire IT infrastructure in one place, whether physical, virtual or in the cloud. We’ll show you how to then convert these searches into real time alerts and dashboards, so you can proactively monitor for problems before they impact your end user. We’ll demonstrate how you can use Splunk to connect the dots across heterogeneous systems in your environment for cross-tier, cross-silo visibility. You’ll have access to a demo environment. So, don’t forget to bring your laptop and follow along for a hands-on experience.
A session in the DevNet Zone at Cisco Live, Berlin. Analytics of network telemetry data (such as flow records, IPSLA measurements, and time series of MIB data) helps address many important operational problems. Traditional Big Data approaches run into limitations even as they push scale boundaries for processing data further. One reason for this is the fact that in many cases, the bottleneck for analytics is not analytics processing itself but the generation and export of the data on which analytics depends. Data does not come for free. The amount of data that can be reasonably collected from the network runs into inherent limitations due to bandwidth and processing constraints in the network itself. In addition, management tasks related to determining and configuring which data to generate lead to significant deployment challenges.
This presentation provides an overview of DNA (Distributed Network Analytics), a novel technology to analyze network telemetry data in distributed fashion at the network edge, allowing users to detect changes, predict trends, recognize anomalies, and identify hotspots in their network. Analytics processing occurs at the source of the data using an embedded DNA Agent App that dynamically configures data sources as needed and analyzes the data using an embedded analytics engine. This provides DNA with superior scaling characteristics while avoiding the significant operational and bandwidth overhead that is associated with centralized analytics solutions. An ODL-based SDN controller application orchestrates network analytics tasks across the network, providing a network analytics service that allows users to interact with the network as a whole instead of individual devices one at a time. DNA is enabled by the IOx App Hosting Framework and integrated with light-weight embedded analytics engines, CSA (Connected Service Analytics) and DMO (Data in Motion).
The document provides an overview of software defined networking (SDN) and future-proofing data center networks. It discusses the evolution of networks since 2000 and current challenges around inflexibility. SDN is defined as separating the network control and forwarding functions, making the network programmable. Benefits include direct programmability, agility, centralized management, and open standards. SDN uses the OpenFlow protocol and virtualization. The future of SDN includes integration of LAN and WAN SDN, lower MPLS costs, and controlling networks through devices.
Large enterprise SIEM: get ready for oversize (Mona Arkhipova)
This document discusses large enterprise security information and event management (SIEM) systems. It begins by distinguishing SIEM from simple log collection and systems monitoring. It then discusses the IBM QRadar SIEM platform and some of its architecture and performance challenges. The remainder of the document addresses challenges around log collection from various sources such as Windows, Unix, databases, and custom applications. It provides best-practice guides and discusses normalization, indexing, and storage of large volumes of log data. Specific metrics are given for the large QIWI SIEM installation handling millions of events per day. The document concludes by discussing automation of security monitoring and response.
CYBER INTELLIGENCE & RESPONSE TECHNOLOGY (jmical)
This document provides an overview of AccessData's Cyber Intelligence Response Technology (CIRT) platform. CIRT offers an integrated suite of digital forensics and incident response capabilities including network forensics, host-based forensics, data auditing, and malware analysis. Key features include an agent that can independently collect and store data from endpoints, a Cerberus module that analyzes files for malicious behaviors without signatures or prior knowledge, and modules for analyzing removable media, volatile memory, and network packet captures. The platform allows multiple teams such as incident response, computer forensics, and compliance to collaborate on investigations.
Iaetsd secure data storage against attacks in cloud (Iaetsd Iaetsd)
The document proposes solutions for securing data storage in the cloud against attacks. It discusses threats and attacks like incorrect data storage, data modification, and perimeter defense weaknesses. It then proposes a defense in depth approach with multiple layers of security controls at the storage devices, network, and management access layers. Specific controls are suggested like authentication, authorization, encryption, firewalls, intrusion detection, and logging. The paper also addresses issues like data correctness verification, error localization, and reliability of the security strategy through techniques like challenge-response protocols and redundant storage across multiple locations.
Deep-Dive into Big Data ETL with ODI12c and Oracle Big Data Connectors (Mark Rittman)
- The document discusses Oracle tools for extracting, transforming, and loading (ETL) big data from Hadoop into Oracle databases, including Oracle Data Integrator 12c, Oracle Loader for Hadoop, and Oracle Direct Connector for HDFS.
- It provides an overview of using Hadoop for ETL tasks like data loading, processing, and exporting data to structured databases, as well as tools like Hive, Pig, and Spark for these functions.
- Key benefits of the Oracle Hadoop connectors include pushing data transformations to Hadoop clusters for scale and leveraging SQL interfaces to access Hadoop data for business intelligence.
This document provides a whirlwind tour of big data, security, and cloud computing. It begins by looking back at where technology has been, from mainframes to client-server models to virtualization. It then examines the present state of early decentralization and a focus on cost-cutting and flexibility. Looking ahead, it discusses the future of commodity-based computing and storage and the need to revise governance. The document emphasizes that security is not one-size-fits-all and should be tied to risk tolerance policies. It stresses the importance of standards, privacy, and continual adaptation to vulnerabilities. In the end, it summarizes that cloud, big data, and security require balancing tolerance to risk with strong governance and adaptability.
Cloud computing provides opportunities for operational efficiency and scalability but also poses security risks. Key security concerns include weak trust boundaries as data is shared, lack of transparency around data collection and monitoring, and inadequate identity and access management. Privacy is also a concern as data moves outside an organization's control and across legal jurisdictions with inconsistent privacy laws. Customers should ask cloud providers detailed questions about their security certifications, data protection practices, key management, and control monitoring to understand how the provider addresses these risks.
This document discusses the intersection of cloud computing, big data, and security. It explains how cloud computing has enabled big data by providing large amounts of cheap storage and on-demand computing power. This has allowed companies to analyze larger datasets than ever before to gain insights. However, big data also presents security challenges as more data is stored remotely in the cloud. The document outlines both the benefits and risks to security from adopting cloud computing and discusses how big data analytics could also be used to enhance cyber security.
This document presents an agenda for discussing identity-based secure distributed data storage schemes. The agenda includes sections on an abstract, introduction, existing systems, objectives, proposed systems, literature survey, system requirements, system design including data flow diagrams and class diagrams, testing, results and performance evaluation, and conclusions. The introduction discusses cloud computing services models. The existing systems section discusses database-as-a-service and its disadvantages. The proposed systems would provide two identity-based secure distributed data storage schemes with properties like file-based access control and protection against collusion attacks.
Testing Big Data: Automated ETL Testing of Hadoop (Bill Hayduk)
Learn why testing your enterprise's data is pivotal for success with Big Data and Hadoop. See how to increase your testing speed, boost your testing coverage (up to 100%), and improve the level of quality within your data warehouse - all with one ETL testing tool.
The document discusses why network security is important and outlines common security threats and network attacks. It notes that as networks have grown in size and importance, security compromises could have serious consequences. It describes various types of threats like hackers, crackers, viruses and malware that target network vulnerabilities. It also provides examples of reconnaissance attacks, denial of service attacks, and different strategies that can be used to mitigate security risks.
Data storage security in cloud computing (Sonali Jain)
The document discusses cloud computing and ensuring data security in cloud storage. It defines cloud computing as internet-based computing using shared resources provided on demand. It then lists advantages and disadvantages of cloud storage. The document proposes using distributed verification protocols and homomorphic tokens to ensure data integrity, error detection, and dependability while supporting dynamic operations like updates, deletes and appends. The goal is to address security threats to confidentiality, integrity and availability of data stored in the cloud.
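The challenge-response idea behind such verification protocols can be sketched in a few lines: before uploading, the client precomputes nonce/answer pairs over its blocks, then later challenges the provider to recompute an answer it could only produce if the block is still intact. This is a simplified stand-in for the homomorphic-token scheme the document describes, not its actual protocol:

```python
import hashlib
import os

def respond(nonce, block):
    """Server-side proof that a block is still held intact."""
    return hashlib.sha256(nonce + block).hexdigest()

class Verifier:
    """Client keeps only precomputed (nonce, expected-answer) pairs,
    not the data itself; each pair supports one challenge."""
    def __init__(self, blocks, rounds=4):
        self.answers = {}
        for r in range(rounds):
            for i, block in enumerate(blocks):
                nonce = os.urandom(16)
                self.answers[(r, i)] = (nonce, respond(nonce, block))

    def challenge(self, server, round_no, block_id):
        nonce, expected = self.answers[(round_no, block_id)]
        return server.prove(nonce, block_id) == expected

class CloudStorage:
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def prove(self, nonce, block_id):
        return respond(nonce, self.blocks[block_id])

blocks = [os.urandom(64) for _ in range(4)]
verifier = Verifier(blocks)
server = CloudStorage(blocks)

ok_before = verifier.challenge(server, 0, 2)  # intact block verifies
server.blocks[2] = b"x" * 64                  # simulate corruption
ok_after = verifier.challenge(server, 1, 2)   # failure localizes the error
```

Because each challenge names a specific block, a failed response localizes the error to that block, which is the error-localization property the document mentions.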
This document discusses basic concepts in computer security. It defines computer security as techniques for ensuring data cannot be read or compromised without authorization, usually through encryption and passwords. The three main goals of computer security are confidentiality, integrity, and availability. Vulnerabilities are weaknesses that can be exploited, and threats are circumstances with potential to cause harm. Common threats include interception, interruption, modification, and fabrication. Controls are protective measures used to reduce vulnerabilities, and physical security and security methods like antivirus software and firewalls can help secure computers.
F. Questier, Computer security, workshop for Lib@web international training program 'Management of Electronic Information and Digital Libraries', university of Antwerp, October 2015
This document provides an overview of information security. It defines information and discusses its lifecycle and types. It then defines information security and its key components - people, processes, and technology. It discusses threats to information security and introduces ISO 27001, the international standard for information security management. The document outlines ISO 27001's history, features, PDCA process, domains, and some key control clauses around information security policy, organization of information security, asset management, and human resources security.
Preventing The Next Data Breach Through Log Management (Novell)
The document discusses how log management can be used for prevention, detection, and investigation of security incidents and data breaches. It explains that log management provides transparency by collecting logs from across an organization's IT infrastructure in a central location. This allows security teams to discover misconfigurations, unauthorized access attempts, and other anomalies that could indicate potential threats or actual security breaches. The document advocates for taking a preventative approach to security by using log data to monitor user activity and identity risks. It also promotes investing in security intelligence capabilities like security monitoring, analytics, and automated remediation.
PXL Data Engineering Workshop By Selligent (Jonny Daenen)
On 2020-12-09 Laurens Vijnck and Jonny Daenen gave a workshop at PXL.
During this session, we collectively provisioned a streaming ingestion pipeline in mere minutes. The technology stack included Pub/Sub, Dataflow, and BigQuery. Hereafter, students had the opportunity to perform interactive queries on their own real-time data to answer a series of business questions. These questions were borrowed from real-life cases that we encountered at Selligent Marketing Cloud.
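The Dataflow step in such a pipeline is essentially a per-message transform from a Pub/Sub payload to a BigQuery row. A plain-Python sketch of that transform, with field names and schema invented for illustration rather than taken from Selligent's actual pipeline:

```python
import json

def to_bq_row(pubsub_message: bytes) -> dict:
    """Parse one Pub/Sub-style JSON payload into the flat dict that a
    streaming insert into a BigQuery table expects."""
    event = json.loads(pubsub_message.decode("utf-8"))
    return {
        "event_type": event["type"],
        "user_id": event["user"]["id"],
        "occurred_at": event["ts"],
    }

# One marketing-style event as it might arrive on the subscription.
msg = json.dumps(
    {"type": "email_open", "user": {"id": 42}, "ts": "2020-12-09T10:00:00Z"}
).encode("utf-8")
row = to_bq_row(msg)
```

In an actual Dataflow job this function would sit inside a Beam `ParDo` between the Pub/Sub source and the BigQuery sink; flattening nested fields here is what makes the interactive SQL queries in the workshop simple.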
Google Colab (Free Jupyter Notebooks) and Google Data Studio have proven to be excellent tools to facilitate these kinds of interactive sessions.
This document provides an overview of architectural considerations for smart object networking. It discusses the history behind the document and parallel work done in other standards bodies. It then covers four common communication patterns for smart objects (device-to-device, device-to-cloud, device-to-application layer gateway, and back-end data sharing). The document summarizes key areas that lack standardization and discusses security recommendations from the IETF.
Setting Up InfluxDB for IoT by David G Simmons (InfluxData)
David will be walking you through a typical data architecture for an IoT device. Then, it will be a hands-on workshop to gather data from the device, display it on a dashboard and trigger alerts based on thresholds that you set. View this InfluxDays NYC 2019 presentation to learn about setting up InfluxDB for IoT.
Device to IoT solution – a blueprint (Guy Vinograd ☁)
Creating an IoT cloud service for a connected product presents a huge challenge. Why? Because the tasks of serving millions, responding to events in near real time, securing the solution from ambitious IoT hackers, AND generating a monthly bill that doesn't collapse the business model resemble attempts to solve a Rubik's Cube, but are far more difficult. Commercial IoT platforms are irrelevant because of vendor lock-in, so we must use basic building blocks to accomplish all this. This session will illustrate the architecture of an IoT service on top of the AWS Cloud.
Presentation used during SAP Tech days 2018 in Tokyo, a joint presentation between Hortonworks and Vupico (represented by myself), on what to consider when implementing an IoT strategy and why to use fog/edge computing, showcased in a fun use case that I built: a cocktail machine made from Raspberry Pis with Android Things, cameras, TensorFlow Lite, MobileNet 1.0, and peristaltic pumps, orchestrated by NiFi.
In the Internet of things, data and commands between things and servers are sent as streams of events, which are often aggregated and processed to provide up to date information to end users. Because of this, CQRS and Event Sourcing patterns are a natural fit for IoT applications. In this presentation we provide an overview of these patterns, how they apply to IoT applications and their benefits. A prototype application of Event Sourcing is then demonstrated using the Sense Tecnic FRED platform based on Node-RED - a data flow programming tool for wiring up the internet of things
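A minimal event-sourcing sketch shows the pattern's core as it applies to IoT telemetry: commands append immutable events to a log, and queries derive views by folding over that log rather than reading mutable state (class and field names here are illustrative, not from the FRED platform):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(frozen=True)
class ReadingRecorded:
    """An immutable event: something that happened, never updated."""
    device: str
    value: float

@dataclass
class SensorStore:
    """Write side appends events; read side derives views from the log."""
    log: List[ReadingRecorded] = field(default_factory=list)

    def record(self, event: ReadingRecorded) -> None:
        self.log.append(event)          # append-only, no in-place updates

    def latest(self, device: str) -> Optional[float]:
        values = [e.value for e in self.log if e.device == device]
        return values[-1] if values else None

store = SensorStore()
store.record(ReadingRecorded("thermo-1", 21.5))
store.record(ReadingRecorded("thermo-1", 22.0))
```

Because the log is never rewritten, new aggregations (averages, alerts, dashboards) can be added later and replayed over the full event history, which is the benefit the presentation highlights for IoT.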
How Will Going Virtual Impact Your Search Performance? (IdeaEng)
Mark Bennett from New Idea Engineering will be giving a presentation on SharePoint Saturday about search and virtual platforms. The agenda covers business drivers, search performance under virtualization, test results, and trends. New Idea Engineering provides vendor-neutral search expertise and consulting. They will discuss benefits of virtualization such as standardized environments and flexibility, but note that virtualization may not provide short-term cost savings. Performance test results showed virtualization overhead averaged 10%, within their predicted 3-20% range. The presentation will take questions at the end.
This document discusses intrusion detection systems (IDS). It defines intrusion, intrusion detection, and intrusion prevention. It explains the components of an IDS including audit data, detection models, and detection and decision engines. It describes misuse detection using signatures and anomaly detection using statistical analysis. It also discusses host-based and network-based IDS, their advantages and disadvantages, and limitations of exploit-based signatures. The document emphasizes the importance of selecting and properly deploying the right IDS for an organization's needs.
Distributed Sensor Data Contextualization for Threat Intelligence Analysis (Jason Trost)
As organizations operationalize diverse network sensors of various types, from passive sensors to DNS sinkholes to honeypots, there are many opportunities to combine this data for increased contextual awareness for network defense and threat intelligence analysis. In this presentation, we discuss our experiences by analyzing data collected from distributed honeypot sensors, p0f, snort/suricata, and botnet sinkholes as well as enrichments from PDNS and malware sandboxing. We talk through how we can answer the following questions in an automated fashion: What is the profile of the attacking system? Is the host scanning/attacking my network an infected workstation, an ephemeral scanning/exploitation box, or a compromised web server? If it is a compromised server, what are some possible vulnerabilities exploited by the attacker? What vulnerabilities (CVEs) has this attacker been seen exploiting in the wild and what tools do they drop? Is this attack part of a distributed campaign or is it limited to my network?
Developing IoT with Zephyr is a journey from hardware all the way to application. It involves multiple teams and expertise, from hardware to cloud and application development. This talk will cover the options for getting a Zephyr app connected (WiFi, Ethernet, Cellular), selecting the right data encoding (JSON/CBOR), securing the data transfer (DTLS/TLS), and choosing a protocol (HTTP/MQTT/CoAP). But that’s not the end of the story: the cloud needs to manage devices allowed to connect, consume the data being received, open up options for using that data, and be aware of the continued state of the hardware. And once you have the data, you need to build a user-facing application on top of it. Understanding this lifecycle will help us as developers make good choices about what Zephyr provides, helping ensure successful IoT projects.
The document discusses Cisco's vision for the Internet of Everything (IoE) and how it applies to the manufacturing sector. It addresses some of the challenges manufacturers face, such as disconnected systems and technology silos. It then presents Cisco's proposed architecture for industrial networks, including security frameworks, edge computing, and fog computing to enable distributed data processing at the network edge. This architecture is meant to help manufacturers overcome challenges and leverage IoT/IoE for operational improvements and business benefits.
Coding Secure Infrastructure in the Cloud using the PIE framework (James Wickett)
At National Instruments, we have developed an automation and provisioning framework called PIE (Programmable Infrastructure Environment) that we use daily on our DevOps team. Similar tools are available, such as Chef or Puppet, but what makes PIE unique is its ability to work in multi-cloud deployments (Azure and AWS) along with multiple node OS types (Linux and Windows). It uses ZooKeeper to keep state and track dependencies across nodes and services.
When building PIE we actively considered how to implement it in a Rugged way for a DevOps team. As noted in the deck on slide 68, we are Rugged by Design and Devops by Culture. We see these as intersecting domains that have the ability to impact each other. For more info see ruggeddevops.org
John Hugg presented on building an operational database for high-performance applications. Some key points:
- He set out to reinvent OLTP databases to be 10x faster by leveraging multicore CPUs and partitioning data across cores.
- The database, called VoltDB, uses Java for transaction management and networking while storing data in C++ for better performance.
- It partitions data and transactions across server cores for parallelism. Global transactions can access all partitions transactionally.
- VoltDB is well-suited for fast data applications like IoT, gaming, ad tech which require high write throughput, low latency, and global understanding of live data.
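The partitioning idea in the bullets above can be sketched abstractly: each key routes to exactly one partition, single-partition operations touch only one core's data, and only "global" transactions coordinate across all partitions. A toy Python sketch of the routing scheme, not VoltDB's actual implementation:

```python
NUM_PARTITIONS = 4  # stand-in for one partition per CPU core

# Each partition owns a disjoint slice of the data.
partitions = [dict() for _ in range(NUM_PARTITIONS)]

def partition_of(key) -> int:
    """Route a row, and any single-partition transaction on it,
    to exactly one partition (simplified hash routing)."""
    return hash(key) % NUM_PARTITIONS

def insert(key, row) -> None:
    """Single-partition write: runs serially within one partition,
    so it needs no locks against the other partitions."""
    partitions[partition_of(key)][key] = row

def count_all() -> int:
    """A 'global' transaction: must touch every partition."""
    return sum(len(p) for p in partitions)

for k in ("a", "b", "c"):
    insert(k, {"key": k})
```

The design trade-off mirrors the one in the bullets: single-partition transactions scale with core count because they never coordinate, while global transactions pay a cross-partition cost.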
Fixing Twitter Improving The Performance And Scalability Of The Worlds Most ... (smallerror)
Twitter's operations team manages software performance, availability, capacity planning, and configuration management for Twitter. They use metrics, logs, and analysis to find weak points and take corrective action. Some techniques include caching everything possible, moving operations to asynchronous daemons, and optimizing databases to reduce replication delay and locks. The team also created several open source projects like CacheMoney for caching and Kestrel for asynchronous messaging.
Fixing Twitter Improving The Performance And Scalability Of The Worlds Most ... (xlight)
Fixing Twitter and Finding your own Fail Whale document discusses Twitter operations. The operations team manages software performance, availability, capacity planning, and configuration management using metrics, logs, and data-driven analysis to find weak points and take corrective action. They use managed services for infrastructure to focus on computer science problems. The document outlines Twitter's rapid growth and challenges in maintaining performance as traffic increases. It provides recommendations around caching, databases, asynchronous processing, and other techniques Twitter uses to optimize performance under heavy load.
Fixing Twitter and Finding your own Fail Whale document discusses Twitter operations. The Twitter operations team focuses on software performance, availability, capacity planning, and configuration management using metrics, logs, and science. They use a dedicated managed services team and run their own servers instead of cloud services. The document outlines Twitter's rapid growth and challenges in maintaining performance. It discusses strategies for monitoring, analyzing metrics to find weak points, deploying changes, and improving processes through configuration management and peer reviews.
Twitter's operations team manages software performance, availability, capacity planning, and configuration management. They use metrics, logs, and analysis to find weak points and take corrective action. Some techniques include caching everything possible, moving operations to asynchronous daemons, optimizing databases, and instrumenting all systems. Their goal is to process requests asynchronously when possible and avoid overloading relational databases.
Similar to Big Data Approaches to Cloud Security (20)
Some brief thoughts on Microsoft 365 Enterprise offerings combined with Extended Use Rights, Azure, and Azure Stack HCI for data center consolidation, modernization, transformation, and cost savings.
The document proposes a concept called the Billion Node Cloud, which would incorporate existing server and cloud computing technologies on a massive scale. It envisions using low-power microservers installed in vehicles, buildings, and other locations to create a globally distributed cloud network. By taking advantage of unused computing resources everywhere, from idling vehicles to solar-powered huts, the Billion Node Cloud could generate billions of dollars per day in revenue while promoting environmentally friendly "super green computing". It argues this vision is technically feasible with current technologies and could have widespread economic and social benefits by distributing cloud revenues more broadly.
Local Media MicroServers are one application of a new breed of MicroServers. Powerful, compact and low-wattage, these new MicroServers support hundreds to thousands of users and are completely portable. How would you use them?
CriKit is a desktop cluster solution that provides a low-cost way for organizations to test Hadoop and other big data technologies. It contains 4 compute nodes, a 1Gb Ethernet switch, keyboard/video/mouse switch, and an optional management workstation. Each node has an Intel processor, 16GB RAM, and up to 1.2TB of SSD storage, providing enough power for moderate data testing of Hadoop. CriKit's small size and ability to reuse components makes it cheaper than large hardware solutions for evaluating new technologies.
The Era of MicroServers is upon us. MicroServers will radically change the computing landscape much like mini-computers did when they first came on the scene. People and organizations will be able to do more computing while consuming less energy in very small form factors. It is only up to the imagination what can be accomplished with these new systems.
Decision Making with Cost, Value and Risk (Paul Morse)
Quick overview of the concepts behind CVR. Please contact me if you would like to put the concepts into action! I can guarantee a coaching session with me will change the way your managers and executives view decision making!
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Big Data Approaches to Cloud Security
1. BIG DATA APPROACHES
TO CLOUD SECURITY
Paul Morse – President, WebMall Ventures
Cloud Security Alliance, Seattle Chapter 3/28/2013
2. “BIG DATA IS NOT JUST ABOUT LOTS OF DATA, IT IS ABOUT
HAVING THE ABILITY TO EXTRACT MEANING; TO SORT
THROUGH THE MASSES OF DATA ELEMENTS TO DISCOVER THE
HIDDEN PATTERN, THE UNEXPECTED CORRELATION,”
Art Coviello, executive chairman of RSA
ON THE SURFACE, BIG DATA SEEMS TO BE ALL ABOUT BUSINESS
INTELLIGENCE AND ANALYTICS, BUT IT ALSO AFFECTS THE
NITTY-GRITTY OF POWER AND COOLING, NETWORKING, STORAGE
AND DATA CENTER EXPANSION.
3. AGENDA
• Observations
• Cloud Architectures/Components
• Machine-Generated Data
• Sources of Data
• Time Sequencing of Events
• Searching for Behavior
• Recent Hack Examples
4. OBSERVATIONS
• Big Data solutions are changing the game for security practitioners and execs
• Provide the ability to look at discovery, detection and remediation across large portions
of the organization in entirely new ways
• Correlation between seemingly unrelated events in near real time is now relatively easy
• Growing range of solution types – simple to highly complex
• From roll-your-own to pre-packaged solutions
• On-prem, Public Cloud-based and Hybrid
• Simple Log search to Predictive Analysis with complex dashboards and reporting
• Some solutions have extremely short “time to value” propositions
• “Big Data Washing” like “Cloud Washing” is showing up
• Prices vary – Free to mondo
• It is NOT the holy grail for security but has many advantages over traditional SIEM
products – real time, large amounts of data, broad event correlation, etc.
5. SET THE STAGE
• Many perspectives on Cloud Computing
• Main focus for this talk is as a Public Cloud Provider
• You are the “owner” of the facility – all of it.
• Infrastructure-centric discussion
• How do Big Data solutions improve Security?
9. SCADA DATA SOURCES
[Diagram – this is your attack surface: backup generators, backup batteries, power distribution systems, card key systems, door sensors, wireless devices, RFID, PCs, tablets, printers, phones(?), storage, servers, temp sensors, water system sensors, lighting controls, routers/switches]
I want all the data in one searchable repository and available in near real time
10. SECURE? THINK AGAIN.
• Internet Mapping Project
• “Harmless” port ping and bot install
• 660 million IPs with 71 billion ports tested
• 460 million devices responded
• Resulted in 420 thousand bots
• Stupid uid/pwd combos
• Admin/admin, Admin/no pwd,
root/root, root/no pwd
• What’s on your network?
http://internetcensus2012.bitbucket.org/paper.html
11. CAUSE FOR PAUSE
“ We hope other researchers will find the data we
have collected useful and that this publication will
help raise some awareness that, while everybody is
talking about high class exploits and cyberwar, four
simple stupid default telnet passwords can give you
access to hundreds of thousands of consumer as well
as tens of thousands of industrial devices all over the
world.”
12. MACHINE DATA
• Isn’t it really all machine data?
• Machine-generated data (MGD) is the generic term for information which was
automatically created from a computer process, application, or other machine
without the intervention of a human.
• Network Device Log files
• Event logs
• Application logs
• RFID logs
• Storage logs
• HVAC Logs
• Sensor data
• Etc.
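Most of these sources emit semi-structured text that has to be parsed before it can be searched or correlated. A minimal sketch of that first step, assuming a classic BSD-syslog-style line (the field names are illustrative, not any vendor's schema):

```python
import re

# Illustrative pattern for a BSD-syslog-style line such as:
# "Mar 28 14:02:07 fw01 sshd[2112]: Failed password for root from 10.0.0.5"
LOG_PATTERN = re.compile(
    r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) "   # timestamp
    r"(?P<host>\S+) "                           # emitting device
    r"(?P<proc>\w+)\[(?P<pid>\d+)\]: "          # process and PID
    r"(?P<msg>.*)"                              # free-text message
)

def parse_line(line):
    """Turn one raw log line into a dict of fields, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

event = parse_line("Mar 28 14:02:07 fw01 sshd[2112]: Failed password for root from 10.0.0.5")
print(event["host"], event["proc"], event["msg"])
```

In a real pipeline this parsing is what the vendors' connectors and transforms do; the point here is only that every source type needs its own extraction rules before the data lands in one repository.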
14. TIME SEQUENCE OF EVENTS
[Timeline chart: events per IP address/packet plotted across tiers (front end, LB, TOR, server) over T0–T19 – login attempts (fail, then pass), command, upload of a small file, installer runs, logs deleted, session terminated, outbound traffic]
15. TIME SEQUENCE OF EVENTS
[Timeline chart: a similar sequence against a device (front end, LB, TOR) over T0–T18 – login attempts (fail, then pass), command, upload of a small file, update, logs deleted, session terminated]
16. TIME SEQUENCE OF EVENTS
[Timeline chart: the same device sequence over T0–T18, overlaid with badge events at Doors 1–5 from T-30 to T45 – correlating physical access with the network activity]
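A behavioral signature like the ones charted above can be checked in code once the events are in one repository: look for one source performing a known-bad sequence of actions, in order, within a time window. A minimal sketch (the event tuples, action names, and window are all illustrative assumptions):

```python
# Hypothetical pre-parsed events: (timestamp, source_ip, action).
# The signature mirrors the sequence on the slides; names are illustrative.
SIGNATURE = ["login_pass", "upload_file", "delete_logs", "terminate_session"]

def matches_signature(events, ip, window=20):
    """True if `ip` performs the signature actions in order within `window` ticks.

    Simplified: the sequence is not restarted if the window is exceeded.
    """
    actions = [(t, a) for t, src, a in sorted(events) if src == ip]
    i, start = 0, None
    for t, a in actions:
        if a == SIGNATURE[i]:
            start = t if start is None else start
            if t - start > window:
                return False
            i += 1
            if i == len(SIGNATURE):
                return True
    return False

# Events merged from two sources, as the slides suggest.
front_end = [(1, "10.0.0.5", "login_fail"), (2, "10.0.0.5", "login_pass")]
server    = [(5, "10.0.0.5", "upload_file"), (9, "10.0.0.5", "delete_logs"),
             (11, "10.0.0.5", "terminate_session")]
print(matches_signature(front_end + server, "10.0.0.5"))  # → True
```

The same shape of check extends to the door-badge overlay on slide 16: merge the physical-access events into the same stream and widen the window.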
17. SOME AREAS TO CONSIDER
• Ingesting various data formats
• Many vendors claim it is easy, when it may not be
• Transforms and connectors may be required (affect performance)
• Device companies create add-ons, connectors, dashboards, transforms, queries, etc
• Speed of indexing determines “real time” abilities
• Do you need to index ALL machine data?
• Vendor-specific Query languages
• No standard, some commonality
• Learning curve for seriously complex queries and operationalizing environment
• Dashboards and Visualizations Vary
• Large number of simultaneous queries is required
• Workflow is critical – what happens when you find something?
• Implementation architecture – lots of hardware? Bandwidth? Security? Users?
• Data Governance – You found what?
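The "transforms and connectors" point above is concrete: different devices emit different formats, and each needs a small adapter into a common schema before cross-source queries work. A sketch, assuming two hypothetical feeds and an illustrative common schema of (ts, host, event) — not any vendor's standard:

```python
import json

# Two hypothetical device formats feeding one repository.
def from_csv(line):
    """Adapter for a CSV feed, e.g. '2013-03-28T14:02:07,fw01,port_scan'."""
    ts, host, event = line.strip().split(",")
    return {"ts": ts, "host": host, "event": event}

def from_json(line):
    """Adapter for a JSON feed, e.g. '{"time": ..., "device": ..., "alert": ...}'."""
    rec = json.loads(line)
    return {"ts": rec["time"], "host": rec["device"], "event": rec["alert"]}

rows = [
    from_csv("2013-03-28T14:02:07,fw01,port_scan"),
    from_json('{"time": "2013-03-28T14:02:09", "device": "hvac02", "alert": "temp_high"}'),
]
# Once normalized, events from unrelated devices sort and query together.
for r in sorted(rows, key=lambda r: r["ts"]):
    print(r["ts"], r["host"], r["event"])
```

Every adapter like this costs ingest-time CPU, which is why the slide flags transforms as affecting performance and "real time" indexing speed.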
18. [image-only slide]
19. HACK EXAMPLES
• DOJ in January
• Defacement
• What specific behavior happened and what did they do?
• Log in Remotely
• Completely replace Index.*
• Solution – monitor index.* and set up a parsing stream and search for a code in
the html. Call a workflow if the file changes or the code doesn’t match.
• DDoS
• Overwhelm Website
• Solution – compare the request rate of increase to a previous “norm”. If the disparity
is great enough, call a workflow to check the source IP address(es). Depending
on the results, do nothing, or script a filter or block.
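The defacement check described above can be sketched in a few lines: fingerprint the index file, look for a known marker code in the HTML, and trigger a workflow if either test fails. The marker string and workflow hook are hypothetical; in practice this would run on a schedule against the real document root:

```python
import hashlib

# Known code embedded in the legitimate page (hypothetical marker).
EXPECTED_MARKER = "<!-- site-id:12345 -->"

def page_fingerprint(html: str) -> str:
    """SHA-256 hash of the page contents, stored once as a baseline."""
    return hashlib.sha256(html.encode()).hexdigest()

def check_page(html: str, baseline_hash: str):
    """Return an alert string if the page changed or the marker is gone, else None."""
    if page_fingerprint(html) != baseline_hash:
        return "index changed: trigger workflow"
    if EXPECTED_MARKER not in html:
        return "marker missing: trigger workflow"
    return None

good = "<html>" + EXPECTED_MARKER + "ok</html>"
baseline = page_fingerprint(good)
print(check_page(good, baseline))                   # → None
print(check_page("<html>HACKED</html>", baseline))  # → index changed: trigger workflow
```

The alert string stands in for the "call a workflow" step on the slide; a real deployment would page someone or kick off remediation instead of returning text.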
20. VENDORS AND GETTING STARTED
• Vendors
  • Hadoop with Flume
  • HP ArcSight
  • Loggly
  • LogRhythm
  • SumoLogic
  • LogScape
  • LogStash
  • Sawmill
  • Splunk
  • Splunk Storm
• Getting Started
  • Easiest – Cloud Based
    • Sumo Logic
    • Splunk Storm
  • Download and Install
    • Loggly
    • LogRhythm
    • LogScape
    • LogStash
    • Sawmill
    • Splunk
    • Hadoop/Flume/Pig