This presentation for Inside Analysis' Briefing Room explains the ExtraHop architecture for stream analytics. This concept enables you to mine all your wire data, which is all the data in motion in your environment.
The ExtraHop wire data analytics platform enables IT teams to answer questions they hadn't known to ask before, such as "Which SSL servers are receiving heartbeats?" and "Where are heartbeat messages coming from?"
A concise report on the state of your environment, with prescriptive recommendations for how you can improve performance, efficiency and security in up to 14 vital IT domains.
EMA Presentation: Driving Business Value with Continuous Operational Intellig... (ExtraHop Networks)
In this presentation, EMA Vice President of Research Jim Frey and ExtraHop SVP Erik Giesa explain how IT organizations can derive real-time IT and business insights from their wire data, as well as the unique capabilities included in the fourth-generation ExtraHop platform that make this continuous operational intelligence possible. For more information, visit www.extrahop.com
With stream analytics for your data in motion from ExtraHop, you can confidently migrate applications to virtualized environments and manage their performance.
By passively analyzing your wire data, ExtraHop provides deep visibility into HL7 messages, Citrix performance, EHR behavior, ICD-10 conversion, and more.
Proactive monitoring and remediation
Optimization and continuous improvement
Pervasive security monitoring and compliance
Clinical and operations analytics
Democratising Security: Update Your Policies or Update Your CV (ExtraHop Networks)
Security is everyone's responsibility. That’s the lesson learned as enterprises seek to improve their detection and response for cyber incidents. This session introduces a new model where InfoSec sets the policies and delegates monitoring to application teams.
Health IT has a Big Data opportunity with HL7 analytics. Learn about what is possible from Wes Wright, CIO at Seattle Children's Hospital, and Erik Giesa, SVP of Marketing and Business Development at ExtraHop.
Affecto Informatica World Tour 2015: The Age of Engagement (Affecto)
We are moving from an era of cost optimisation and productivity to an age of engagement. This engagement is with customers, partners, third parties, and machines. At the center of every winning business strategy is data. Learn how to accelerate your organisation's results by using the Intelligent Data Platform to manage data of any type, from any source, to drive better business outcomes.
A presentation by Greg Hanson, Vice President Business Operations EMEA, Informatica
Operational Analytics at Credit Suisse from ThousandEyes Connect (ThousandEyes)
Darrell Westbury, Director of Operational Analytics at Credit Suisse, explains how the global bank collects five types of IT operations data, analyzes them, and uses the results to derive insights.
How to Design, Build and Map IT and Business Services in Splunk (Splunk)
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand. We will design a sample service model and map it to performance indicators to track operational and business objectives. We will also show you how to make Splunk service-aware with Splunk IT Service Intelligence (ITSI).
The process of streaming real-time data from a wide variety of machine data sources and entities can be very complex and unwieldy. Using an agent-based approach, Informatica has invented a new technique and open access product that makes this process much more user friendly and efficient, even when dealing with multiple environments such as Hadoop, Cassandra, Storm, Amazon Kinesis and Complex Event Processing.
Power of Splunk Search Processing Language (SPL) (Splunk)
This session will unveil the power of the Splunk Search Processing Language (SPL). See how to use Splunk's simple search language for searching and filtering through data, charting statistics and predicting values, converging data sources and grouping transactions, and finally data science and exploration. We'll begin with basic search commands and build up to more powerful advanced tactics to help you harness your Splunk Fu!
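As a rough illustration of the filter-then-aggregate pattern at the heart of SPL (something along the lines of `search status=500 | stats count by host`), here is a plain-Python sketch; the event records are hypothetical sample data, not Splunk output:

```python
events = [
    {"host": "web-1", "status": 500},
    {"host": "web-2", "status": 200},
    {"host": "web-1", "status": 500},
    {"host": "web-2", "status": 500},
]

# Roughly: search status=500 | stats count by host
matches = [e for e in events if e["status"] == 500]
counts = {}
for e in matches:
    counts[e["host"]] = counts.get(e["host"], 0) + 1

print(counts)  # -> {'web-1': 2, 'web-2': 1}
```

SPL pipelines chain many such stages (filter, stats, transactions) over indexed events; the sketch only shows the shape of one filter-plus-count step.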
Make Streaming IoT Analytics Work for You (Hortonworks)
Download Hortonworks DataFlow (HDF™) here - http://hortonworks.com/downloads/#dataflow. Making Streaming IoT Analytics Work For You With Apache NiFi, Storm, Raspberry Pi and more.
In addition to seeing the latest features in Splunk Enterprise, learn some of the top commands that will solve most search and analytics needs. Ninjas can use these blindfolded. New features will be demonstrated in the following areas: TCO and performance improvements, platform management, and new interactive visualizations.
New Splunk Management Solutions Update: Splunk MINT and Splunk App for Stream (Splunk)
Learn what is new in Splunk App for Stream and how it can help you utilize wire/network data analytics to proactively resolve applications and IT operational issues and to efficiently analyze security threats in real-time, across your cloud and on-premises infrastructures. Additionally, you will learn about Splunk MINT, which allows you to gain operational intelligence on the availability, performance, and usage of your mobile apps. You’ll learn how to instrument your mobile apps for operational insight, and how you can build the dashboards, alerts, and searches you need to gain real-time insight on your mobile apps.
Kafka and Stream Processing, Taking Analytics Real-time, Mike Spicer (Confluent)
Do you think that analytics can only be run on stored data sets? Think again: the combination of Apache Kafka and stream processing enables analytics on real-time data streams.
First, we give a brief overview of stream processing and how it differs from the request response model of analytics on stored data. Next, we cover the characteristics of Kafka which make it such a good fit for Stream processing and why they matter. Finally, we show a number of use cases which highlight how stream processing is being used to do real-time analytics at scale with very low latency.
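Setting the Kafka APIs aside, the shift from request/response to streaming can be sketched in a few lines of plain Python: an aggregate that refreshes as each event arrives, rather than a query run once over data at rest. The event names are made-up sample data.

```python
def running_count(events):
    """Continuously updated per-key counts over an event stream.
    Each incoming event immediately refreshes the aggregate, the way
    a stream processor keeps a materialized view current."""
    counts = {}
    for key in events:
        counts[key] = counts.get(key, 0) + 1
        yield dict(counts)  # snapshot after every event

clicks = ["home", "cart", "home", "checkout"]
for snapshot in running_count(clicks):
    print(snapshot)
```

In a real deployment the `for` loop would be a Kafka consumer poll loop (or a Kafka Streams topology), but the essential inversion is the same: the computation waits on data, not the other way around.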
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder, DataTorrent - ... (Dataconomy Media)
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder of DataTorrent presented "Streaming Analytics with Apache Apex" as part of the Big Data, Berlin v 8.0 meetup organised on the 14th of July 2016 at the WeWork headquarters.
Assessing New Databases: Translytical Use Cases (DATAVERSITY)
Organizations run their day-in-and-day-out businesses with transactional applications and databases. On the other hand, organizations glean insights and make critical decisions using analytical databases and business intelligence tools.
The transactional workloads are relegated to database engines designed and tuned for transactional high throughput. Meanwhile, the big data generated by all the transactions require analytics platforms to load, store, and analyze volumes of data at high speed, providing timely insights to businesses.
Thus, in conventional information architectures, this requires two different database architectures and platforms: online transactional processing (OLTP) platforms to handle transactional workloads and online analytical processing (OLAP) engines to perform analytics and reporting.
Today, a particular focus and interest of operational analytics includes streaming data ingest and analysis in real time. Some refer to operational analytics as hybrid transaction/analytical processing (HTAP), translytical, or hybrid operational analytic processing (HOAP). We'll address whether this model is a way to create efficiencies in our environments.
The Internet of Analytics – Discovering actionable insights from high-velocity streams of real-time IoT data
IoT devices generate high volume, continuous streams of data that must be analyzed in-memory – before they land on disk – to identify potential outliers/failures or business opportunities. Companies need to build robust yet flexible applications that can instantly act on the information derived from analyzing their IoT data. Attend this session to learn how you can easily handle real-time data acquisition across structured and semi-structured data, as well as windowing, fast in-memory streaming analytics, event correlation, visualization, alerts, workflows and smart data storage.
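As a minimal sketch of the in-memory windowed analysis described above (window size and threshold are illustrative choices, not anything from WebAction's product):

```python
from collections import deque

def detect_outliers(stream, window=5, threshold=2.0):
    """Flag readings that deviate from the mean of a sliding window
    by more than `threshold` standard deviations, without ever
    storing the full stream on disk."""
    recent = deque(maxlen=window)
    flagged = []
    for value in stream:
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((x - mean) ** 2 for x in recent) / window
            std = var ** 0.5
            if std > 0 and abs(value - mean) > threshold * std:
                flagged.append(value)
        recent.append(value)
    return flagged

readings = [10, 11, 10, 12, 11, 50, 10, 11]  # 50 is a spiking sensor
print(detect_outliers(readings))  # -> [50]
```

A production streaming engine adds time-based windows, event correlation across streams, and alerting, but the core pattern is the same: bounded state, updated per event.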
WebAction Founder and EVP, Sami Akbay
Wikibon #IoT #HyperConvergence Presentation via @theCUBE (John Furrier)
SiliconANGLE Media Research team at Wikibon prepared this presentation to share their findings on a new category called #IoT #HyperConvergence Analytics
Crowd Chat Conversation here:
https://www.crowdchat.net/chat/c3BvdF9vYmpfMTg4Mg==
More and more data is streaming in from many sources in order to drive operations in real-time.
When driving decisions with speed at scale is the norm, the traditional trade-off in analytics between simple but fast and slow but sophisticated has to give way.
Traditionally, fast data comes to rest in a database after the simpler in-flight analytics. Only after it comes to rest can a database perform sophisticated analytics. But in-flight and at-rest analytics have to come together in a single, hyper-converged analytic platform.
Gain New Insights by Analyzing Machine Logs using Machine Data Analytics and BigInsights.
Half of Fortune 500 companies experience more than 80 hours of system downtime annually. Spread evenly over a year, that amounts to approximately 13 minutes every day. As a consumer, the thought of online bank operations being inaccessible so frequently is disturbing. As a business owner, when systems go down, all processes come to a stop. Work in progress is destroyed, and failure to meet SLAs and contractual obligations can result in expensive fees, adverse publicity, and loss of current and potential future customers. Ultimately, the inability to provide a reliable and stable system results in lost revenue. While the failure of these systems is inevitable, the ability to predict failures in a timely manner and intercept them before they occur is now a requirement.
A possible solution to the problem can be found in the huge volumes of diagnostic big data generated at the hardware, firmware, middleware, application, storage, and management layers, indicating failures or errors. Machine analysis and understanding of this data is becoming an important part of debugging, performance analysis, root cause analysis, and business analysis. In addition to preventing outages, machine data analysis can also provide insights for fraud detection, customer retention, and other important use cases.
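A toy version of this kind of machine-log analysis, counting ERROR entries per component to surface likely trouble spots; the log format and threshold are made up for illustration:

```python
import re
from collections import Counter

LOG_PATTERN = re.compile(r"(?P<level>INFO|WARN|ERROR)\s+(?P<component>\w+)")

def error_hotspots(lines, min_errors=2):
    """Count ERROR entries per component across layers of log data
    and surface components that are trending toward failure."""
    errors = Counter()
    for line in lines:
        m = LOG_PATTERN.search(line)
        if m and m.group("level") == "ERROR":
            errors[m.group("component")] += 1
    return {c: n for c, n in errors.items() if n >= min_errors}

logs = [
    "2015-06-01 INFO storage volume mounted",
    "2015-06-01 ERROR firmware checksum mismatch",
    "2015-06-02 ERROR firmware checksum mismatch",
    "2015-06-02 WARN middleware slow response",
]
print(error_hotspots(logs))  # -> {'firmware': 2}
```

Real systems replace the regex with per-source parsers and the count threshold with trained failure-prediction models, but the ingest-parse-aggregate skeleton is the same.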
Day 5 - Real-time Data Processing/Internet of Things (IoT) with Amazon KinesisAmazon Web Services
Amazon Kinesis is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can collect and process hundreds of terabytes of data per hour from hundreds of thousands of sources, allowing you to easily write applications that process information in real-time, from sources such as web site click-streams, marketing and financial information, manufacturing instrumentation and social media, and operational logs and metering data.
Reasons to attend:
- This session will provide you with an overview of Amazon Kinesis.
- Learn about sample use cases and real life case studies.
- Learn how Amazon Kinesis can be integrated into your own applications.
This introductory webinar, presented by Adi Krishnan, Senior Product Manager for Amazon Kinesis, will provide you with an overview of the service, sample use cases, and some examples of customer experiences with the service so you can better understand its capabilities and see how it might be integrated into your own applications.
AWS APAC Webinar Week - Real Time Data Processing with KinesisAmazon Web Services
Extracting real-time information from streaming data generated by mobile devices, sensors, and servers used to require distributed systems skills and writing custom code. This presentation will introduce Kinesis Streams and Kinesis Firehose, the AWS services for real-time streaming big data ingestion and processing.
We’ll provide an overview of the key scenarios and business use cases suitable for real-time processing, and how Kinesis can help customers shift from a traditional batch-oriented processing of data to a continual real-time processing model. We’ll explore the key concepts, attributes, APIs and features of the service, and discuss building a Kinesis-enabled application for real-time processing. This talk will also include key lessons learnt, architectural tips and design considerations in working with Kinesis and building real-time processing applications.
In this webinar, we will also provide an overview of Amazon Kinesis Firehose. We will then walk through a demo showing how to create an Amazon Kinesis Firehose delivery stream, send data to the stream, and configure it to load the data automatically into Amazon S3 and Amazon Redshift.
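Under the hood, Kinesis routes each record to a shard by taking an MD5 hash of its partition key and mapping it into per-shard hash-key ranges. A simplified sketch of that routing idea (modulo arithmetic stands in for explicit hash ranges, and the device names are hypothetical):

```python
import hashlib

def shard_for(partition_key: str, num_shards: int) -> int:
    """Route a record to a shard by hashing its partition key --
    a simplified stand-in for Kinesis's MD5 hash-range routing."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

for device in ("sensor-1", "sensor-2", "sensor-3"):
    print(device, "-> shard", shard_for(device, 4))
```

The practical consequence is the same as in the real service: records with the same partition key always land on the same shard, so per-key ordering is preserved while load spreads across shards.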
Ransomware: Hard to Stop for Enterprises, Highly Profitable for Criminals (ExtraHop Networks)
Ransomware attacks doubled in 2015 and the trend is sure to continue. To meet this growing threat, enterprises must gain real-time visibility into anomalous behaviour. This session explains how organisations can detect and mitigate ransomware attacks using wire data.
With insight from ExtraHop, the six-person IT team at Geel has correlated, cross-tier visibility across all applications and systems, both on-premises and in the cloud.
The IT team at Zonar is leveraging wire data from ExtraHop to streamline their own operations and ensure better performance across the infrastructure. In addition to a large-scale infrastructure mapping initiative, the team is also using wire data to troubleshoot issues from code-level errors to machines throwing millions of DNS requests.
Managed Services Provider Serves Customers Better with Wire Data (ExtraHop Networks)
ACS Solutions GmbH (ACS) is a managed services provider, delivering hosting, application and infrastructure, and cloud computing services. Lack of visibility into Citrix performance problems meant not only unhappy customers, but failure to satisfy SLAs. Analysis of ACS' wire data delivered critical insight into performance across the entire infrastructure, including the Citrix environment.
Conga case study: Application visibility in AWS with ExtraHop (ExtraHop Networks)
Conga is a leading Salesforce application partner. They use the ExtraHop platform to gain new insights into their application performance in AWS as well as the real-time activities of their users.
Learn more at http://www.extrahop.com.
This troubleshooting guide shows you how to identify and troubleshoot common web application performance problems using the ExtraHop Discovery Edition, a free virtual appliance for wire data analytics.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
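DIAR's actual heuristics are more sophisticated, but the core idea, dropping seed bytes whose removal leaves observed program behavior unchanged, can be sketched as a greedy minimizer; the `behavior` callback here stands in for a coverage fingerprint, and the toy "program" is invented for illustration:

```python
def shrink_seed(seed: bytes, behavior) -> bytes:
    """Greedily remove bytes whose deletion leaves the observed
    behavior (e.g., a coverage fingerprint) unchanged, yielding a
    leaner seed for the fuzzer to mutate."""
    baseline = behavior(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if behavior(candidate) == baseline:
            seed = candidate  # byte was uninteresting; drop it
        else:
            i += 1  # byte matters; keep it
    return seed

# Toy "program": only the digit bytes in the input affect behavior.
digits_only = lambda data: bytes(b for b in data if 48 <= b <= 57)
print(shrink_seed(b"a1b2c3", digits_only))  # -> b'123'
```

With bloated real-world seeds (XML documents, ELF binaries), every dropped byte removes a family of pointless mutations from the fuzzing campaign, which is where the speedup comes from.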
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Stream Analytics for Data in Motion
1.
2. Host: Eric Kavanagh, CEO, The Bloor Group; Presenter: Erik Giesa, SVP, Marketing and Business Development, ExtraHop Networks; Analyst: Mark Madsen, Research Analyst, Third Nature
3.
4.
5. Analysis volume by sustained throughput: 432 TB of analysis per day @ 40 Gbps; 216 TB per day @ 20 Gbps; 108 TB per day @ 10 Gbps; 11 TB per day @ 1 Gbps.
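These volumes fall straight out of the throughput arithmetic: bits per second × seconds per day, divided by 8 for bytes and by 10^12 for terabytes. A quick standalone sanity check (the function name is illustrative):

```python
def tb_per_day(gbps: float) -> float:
    """Convert sustained throughput in Gbps to terabytes analyzed per day.

    Gbps -> bits/s, times 86,400 s/day, divided by 8 (bits -> bytes)
    and by 1e12 (bytes -> TB, decimal units as on the slide).
    """
    bits_per_day = gbps * 1e9 * 86_400
    return bits_per_day / 8 / 1e12

for rate in (40, 20, 10, 1):
    print(f"{rate:>2} Gbps -> {tb_per_day(rate):.1f} TB/day")
# 40 Gbps -> 432.0 TB/day; 1 Gbps -> 10.8 TB/day (the slide rounds to 11)
```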
8. 1) Data Collection
• Unmatched scalability – up to 40 Gbps sustained throughput; bulk SSL decryption at line rate, up to 64,000 SSL TPS using 2048-bit keys @ 40 Gbps.
2) StreamOS
• Full-stream reassembly – requisite for true application fluency; understand sessions, flows, and transactions.
• Broad protocol support – 40+ wire protocols supported out of the box, including storage and all major databases.
3) Trigger Engine
• Automatically executes custom logic on system events through the ExtraHop trigger API.
4) Streaming Datastore
• More than 3,000 metrics that populate customizable, real-time dashboards.
5) Full Transaction Records
• Rich transaction, message, and flow data continuously gathered from across tiers, in a consistent format.
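The Trigger Engine described above is, at its core, an event dispatcher: user-defined logic is registered against system events and runs as transactions stream past. Actual ExtraHop triggers are written in JavaScript against the trigger API; the sketch below is only a conceptual Python analogue, and every name in it (the class, the event string, the record fields) is hypothetical:

```python
from collections import defaultdict
from typing import Callable

class TriggerEngine:
    """Toy event dispatcher: handlers registered per event name fire
    whenever the stream processor emits that event (conceptual only)."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, record: dict) -> None:
        for handler in self._handlers[event]:
            handler(record)

engine = TriggerEngine()
slow: list[dict] = []

# Hypothetical trigger: flag HTTP responses slower than 500 ms.
engine.on("HTTP_RESPONSE", lambda r: slow.append(r) if r["ms"] > 500 else None)

engine.emit("HTTP_RESPONSE", {"uri": "/cart", "ms": 742})
engine.emit("HTTP_RESPONSE", {"uri": "/home", "ms": 38})
print(slow)  # -> [{'uri': '/cart', 'ms': 742}]
```

Dispatching per event keeps each trigger small and independent, which is what lets this style of engine run custom logic inline at stream rates.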
9. Wire Data Example (a small subset)
Unlike logs, machine data, or APM agents, this approach requires zero modifications to applications or infrastructure.
All data is processed, indexed, and stored in real time from live data streams off the wire.
Behavior / Action:
• Customer adds products to an ecommerce shopping cart. All page objects and user interactions are measured and recorded in real time. Order is placed and confirmed.
• Customer order and payment are received and approved, confirming the order above.
• Application selects from and writes to the database. Every individual database method, statement, and associated contextual data is measured and recorded.
Real-Time Business and IT Intelligence:
• Correlate end-user performance with purchasing patterns
• Drive DevOps website optimization
• Invest in IT based on observed fact
• Guarantee SLAs
• Rapid triage and troubleshooting
• Proactively alert and warn
• Track product and customer demand
• Top sellers by location, time, and offers
• Multi-dimensional business analysis and correlation
• Business process monitoring
• Security analytics
• Tune applications and databases
• Manage application lifecycles
• Perform root cause analysis
• Detect and prevent data exfiltration
• Enable smart capacity planning
ExtraHop is the only vendor that can transform all network packets into structured wire data, as in this example.
14. It’s an anomaly. We’ve only seen it once. We can work with the merchant to understand why it happened and attempt to resolve it.
Editor's Notes
Out-of-the-box, the ExtraHop platform delivers more functionality than any other comparable product on the market. At the core of our Discover appliance, we have the real-time stream processor, which transforms raw unstructured packets into structured wire data. It takes packets off the wire and reassembles them into full streams. This is what enables ExtraHop to understand application behavior. Unlike other products that claim to be application-aware, this capability makes ExtraHop truly application fluent.
Our platform offers broad protocol support, including for important storage protocols and all major databases. If you have Citrix in your environment, ExtraHop is the only vendor to license the ICA protocol for real-time analysis.
We analyze all communications on the wire to record more than 3,400 metrics out of the box. Other products record only hundreds, and for only a few protocols. This means that ExtraHop delivers immediate value as soon as you start sending it traffic.
Finally, we do all of this at tremendous scale. A single 2U appliance can handle up to a sustained 40 Gigabits per second. If your traffic is encrypted, we also offer SSL decryption capabilities so that you can see all of your wire data. This bulk decryption can scale to 64,000 SSL transactions per second using 2048-bit keys.
This is a sample of the top-level dashboard for the service provider. It shows high-level business information about the health of the application, such as the number of transactions, which types of credit cards are in use, revenue, and which states the cards are coming from.
This shows the most recent transactions for the entire application without any filtering.
Users are calling to report that they are being double charged.
The same data, but grouped by the orderid attribute. The “Group By” operation analyzes all records during the selected time frame and counts how many times each value of the selected attribute occurs within the dataset. Any entry that occurs more than once will have a count greater than 1, indicating an impacted customer.
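The Group By counting described here amounts to tallying occurrences of one attribute and keeping the values that appear more than once. A minimal sketch; the record fields and values below are illustrative, not taken from the product:

```python
from collections import Counter

# Sample transaction records; in practice these come from the record store.
records = [
    {"orderid": "A100", "amount": 25.00},
    {"orderid": "A101", "amount": 60.00},
    {"orderid": "A100", "amount": 25.00},  # same order charged twice
]

# Count occurrences of each orderid, then keep any that appear > 1 time.
counts = Counter(r["orderid"] for r in records)
duplicates = [oid for oid, n in counts.items() if n > 1]
print(duplicates)  # -> ['A100']
```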
We have access to every single transaction – filtering by OrderID, any order that appears more than once is a double charge.
Filtering for just the one orderid that was shown to be a duplicate provides all of the details for those transactions, such as the merchant.
We’ve found the needle in the haystack – we know who was affected, and we validated the charges and identified the merchant who was charging. We’ve solved the issue in a few clicks.