Seth Eliot is a senior knowledge engineer at Test Excellence Services and Cloud. He has over [NUMBER] years of experience in software. His responsibilities include A/B testing of services, processing petabytes of data per month, and ensuring data-driven decision making. He advocates for testing systems in real production environments with real users to get the most accurate observations about quality and to uncover hard-to-replicate issues.
1. Autonomous analytics enables analyzing past, real-time, and predictive analytics on large amounts of data with minimal configuration.
2. An example application is given of a mobile app developer who wants to measure app usage and performance to understand why users started uninstalling the app.
3. Automated anomaly detection learns normal behavioral patterns and can detect and classify different types of anomalies, helping app developers identify and address issues quickly to improve the user experience.
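The idea in point 3 can be sketched with a simple statistical baseline: learn what "normal" looks like from history, then flag and classify deviations. This is a minimal z-score sketch, not the product's actual algorithm, and the uninstall counts are invented for illustration.

```python
from statistics import mean, stdev

def detect_anomalies(history, current, threshold=3.0):
    """Flag readings in `current` whose z-score against `history` exceeds
    `threshold`, and classify each anomaly as a spike or a drop."""
    mu, sigma = mean(history), stdev(history)
    anomalies = []
    for i, value in enumerate(current):
        z = (value - mu) / sigma
        if abs(z) > threshold:
            kind = "spike" if z > 0 else "drop"
            anomalies.append((i, value, kind))
    return anomalies

# A week of "normal" daily uninstall counts, then new observations.
normal = [20, 22, 19, 21, 20, 23, 21]
recent = [21, 20, 75, 22, 2]
print(detect_anomalies(normal, recent))  # → [(2, 75, 'spike'), (4, 2, 'drop')]
```

Real systems replace the static mean/stdev with models that track seasonality and drift, but the detect-then-classify shape is the same.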
The purpose of our study is to report findings on fingerprint interaction consistency for the thumb and four-finger prints on both hands. We analyzed videos and compared them against the results recorded in an Excel sheet. We examined individuals' fingerprints to better understand how a fingerprint system collects data, so that we can build better fingerprint systems in the future. We also analyzed the thumb prints to see whether there were more readings on the middle-left or middle-right bars; it is believed that many thumb prints tend to lean to one side, which impacts system readings. We gathered these results to show consistency in the placement of the captured print at different force levels: 5N, 7N, 9N, 11N and 13N. The higher the force-level setting of the ten-print system, the more force is required for the system to capture a read.
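The force-level comparison described above amounts to grouping capture attempts by force and computing a capture rate per level. A minimal sketch, with a hypothetical trial log (the field layout and the numbers are invented for illustration, not the study's data):

```python
from collections import defaultdict

def capture_rate_by_force(trials):
    """Group (force_N, captured) trials and return the capture rate per
    force level, e.g. to compare consistency at 5N versus 13N."""
    counts = defaultdict(lambda: [0, 0])  # force -> [captures, total]
    for force, captured in trials:
        counts[force][0] += int(captured)
        counts[force][1] += 1
    return {f: caps / total for f, (caps, total) in sorted(counts.items())}

# Hypothetical trial log: (force level in newtons, was a print captured?)
trials = [(5, False), (5, True), (7, True), (7, True),
          (13, True), (13, True), (13, True), (13, False)]
print(capture_rate_by_force(trials))  # → {5: 0.5, 7: 1.0, 13: 0.75}
```

The same grouping can carry a lean measurement (middle-left vs. middle-right bar) instead of a boolean capture flag.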
Can automated feature engineering prevent target leaks? - Meir Maor
In this talk we will review common and subtle ways of how problem definitions can go wrong. Exemplified by cases we encounter in the field, we will discuss target leaks (the use of information which cannot be available at prediction time), address sampling bias and consider ways to identify & tackle them.
You'll hear many real-life examples of how these issues manifested and see how introducing automated feature engineering can change the way data scientists discover and treat them.
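A target leak of the kind the talk describes can be made concrete in a few lines. This is a toy illustration with invented field names, not one of the speaker's cases: a field populated only after the outcome is known makes offline evaluation look perfect while the signal vanishes at prediction time.

```python
# Toy target leak: "days_since_cancellation" is only populated AFTER a
# customer churns, so it cannot be available at prediction time.

def build_features(customer, at_prediction_time):
    features = {"tenure_months": customer["tenure_months"]}
    if not at_prediction_time:
        # The leak: this field encodes the label itself.
        features["days_since_cancellation"] = customer.get(
            "days_since_cancellation", -1)
    return features

def leaky_predict(features):
    # A "model" that latches onto the leaked field is perfect offline...
    return features.get("days_since_cancellation", -1) >= 0

customers = [
    {"tenure_months": 3, "days_since_cancellation": 10, "churned": True},
    {"tenure_months": 24, "churned": False},
]
offline = [leaky_predict(build_features(c, at_prediction_time=False)) == c["churned"]
           for c in customers]
online = [leaky_predict(build_features(c, at_prediction_time=True))
          for c in customers]
print(offline, online)  # → [True, True] [False, False]
```

Automated feature engineering can help surface such fields by flagging features with implausibly high predictive power or timestamps later than the prediction point.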
Security metrics are often about the performance of information security professionals - traditional ones are centered around vulnerability close rates, timelines, or criticality ratings. But how does one measure whether those metrics are the right ones? How does one measure risk reduction, or how successful your metrics program is at operationalizing that which is necessary to prevent a breach? The data we'll explore defined the 2016 Verizon DBIR Vulnerabilities section.
This talk will borrow concepts from epidemiology, repeated game theory, and classical and causal probability theory in order to demonstrate some inventive metrics for evaluating vulnerability management strategies. Not all vulnerabilities are at risk of being breached. Not all people are at risk of catching the flu. By analogy, we are trying to be effective at catching the "disease" of vulnerabilities which are susceptible to breaches, and not all are. How do we determine what is truly critical? How do we determine if we are effective at remediating what is truly critical? Because the incidence of the disease is unknown, the absolute risk cannot be calculated. This talk will introduce some concepts from other fields for dealing with infosec uncertainty.
Attackers are human too - and currently available data allows us to make some predictions about how they'll behave. And to predict is to prevent.
RSA 2017 - Predicting Exploitability - With Predictions - Michael Roytman
Data driven decision making can be retrospective, real-time, or predictive. We use Amazon Machine Learning to predict the probability that a vulnerability will become exploited, using only the data available when a vulnerability is released.
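The constraint in the abstract - score a vulnerability using only features known at release time - can be sketched as a simple probabilistic scoring function. The features and weights below are invented for illustration; they are not the talk's model, and a real pipeline would learn the weights from labeled exploitation data.

```python
import math

# Illustrative release-time features: CVSS score, remotely exploitable flag,
# and whether a public proof-of-concept exists. Weights are made up.
WEIGHTS = {"cvss": 0.6, "remote": 1.5, "poc_published": 2.0}
BIAS = -6.0

def exploit_probability(vuln):
    """Logistic score over release-time features only: no post-release
    signals (exploit kit sightings, breach reports) are allowed in."""
    score = BIAS + sum(WEIGHTS[k] * vuln[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

low = {"cvss": 3.0, "remote": 0, "poc_published": 0}
high = {"cvss": 9.8, "remote": 1, "poc_published": 1}
print(exploit_probability(low), exploit_probability(high))
```

The point of restricting inputs to release-time data is that the score is usable on day zero, when remediation prioritization decisions actually have to be made.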
The document discusses Harlan Mills's proposal for a surgical team approach to software development. Mills proposes that a chief programmer or "surgeon" leads a small team to develop software. The surgeon defines specifications, designs the program, codes it, tests it, and documents it. The surgeon is supported by a "copilot" to discuss and evaluate the design, an administrator to handle resources, an editor, secretaries, a program clerk, testers, and a "language lawyer" with expertise in the programming language being used. The surgical team approach aims to develop software in a timely manner while maintaining quality through close collaboration between specialized roles.
I will be sharing illusions and realities that I have observed as a veteran FBI agent, who has worked hundreds of cyber incidents, and what I see today having assimilated into the innovative world of Silicon Valley tech. We all know that cybersecurity threats are evolving faster than the world can consume them and that requires passionate and dedicated people to help advance us forward and protect our assets. The reality is government alone cannot move at the pace that is needed to protect their constituents. Often there is a disconnect from what government perceives as a problem versus what private industry categorizes as a risk. Government and technology companies must work together to solve the breach pandemic we have today. I will be highlighting how enterprises are truly preparing their security teams, what valuable metrics they are capturing, what tools are most useful, and what government best practices and standards have been the most sticky. I will be covering the realities of applying threat intelligence, big data analytics and artificial intelligence at scale. Then we will take a step forward and think about what new security problems might be awaiting us in the near future. My goal is to expose the facts of what organizations are actually experiencing, which should help government focus their efforts in the areas that will be most effective at combating the threats that face us daily.
This document outlines a presentation given by Simón Roses Femerling on software security verification tools. It discusses BinSecSweeper, an open source tool created by VulnEx to scan binaries and check that security best practices were followed in development. The presentation covers using BinSecSweeper to verify in-house software, assess a company's software security posture, and compare the security of popular browsers. Examples of plugin checks and reports generated by BinSecSweeper are also provided.
This document provides a summary of a presentation by Robert Hansen on the future of browser security. Hansen argues that while browser developers want to improve security and privacy, their companies' business models focused on advertising revenue prohibit them from doing so. He outlines various techniques used by advertisers and browser companies to track users against their preferences. Hansen advocates for technical controls that allow users to opt out of tracking through a "can not track" approach, rather than relying on ineffective "do not track" policies. He concludes by discussing WhiteHat Security's focus on privacy and their plans to add more security and privacy features to their Aviator browser.
This document provides information about Olivier Duchenne and his experience and qualifications. It summarizes his educational background, which includes a Ph.D. in Computer Science from ENS Paris/INRIA and a postdoctoral fellowship at Carnegie Mellon University. It also lists his professional experience, which includes positions at NEC Labs and Intel and as a co-founder of Solidware. The document then provides guidelines for machine learning and discusses challenges such as having enough data and handling changing data. It explores the history of, and reasons for, the increased use of machine learning in computer vision.
Velocity Conference: Building a Scalable, Global SaaS Offering: Lessons from ... - Intuit Inc.
QuickBooks Online is the no. 1 small business cloud accounting solution worldwide. In this session we discussed how we built a highly scalable, global SaaS offering and the lessons learnt along the way.
This talk gives an introduction to healthcare use cases, the AI ladder, and Lifestyle AI at Scale themes. It discusses the iterative nature of the workflow and some of the important components to be aware of when developing AI healthcare solutions, covers the different types of algorithms and when machine learning might be more appropriate than deep learning (or the other way around), and shares example use cases as part of the presentation.
This document summarizes a keynote presentation about challenges in bioinformatics software development and proposed solutions. Some of the key points made include: 1) bioinformatics software development involves multiple disciplines including computer science, software engineering, statistics, and biology, each with different priorities; 2) there is a massive proliferation of bioinformatics software packages that leads to many difficult choices for researchers; 3) proposed solutions include developing software in a more modular and automated way, using common benchmarks and protocols to evaluate tools, and focusing on reproducibility and usability.
Challenges in Building Operational AI - Daniel Skantze at Jfokus 2019 - Peltarion
This document discusses challenges in building operational AI systems and provides guidance on how to approach AI development. It notes that while AI will significantly impact many aspects of life, building reliable operational AI systems faces hurdles such as a lack of high-quality data, problems being underspecified, tools not being designed for real-world use, and proof-of-concept systems failing to scale. The document offers design guidelines for AI such as ensuring ecological validity, being data-driven, making all aspects immutable and version-controlled, considering scaling from the start, and avoiding strong dependencies. It also provides advice on getting started with AI through self-study, using existing solutions, implementing non-critical test cases, and growing knowledge and processes.
Filip Maertens - AI, Machine Learning and Chatbots: Think AI-first - Patrick Van Renterghem
Filip Maertens presented "AI, Machine Learning and Chatbots" at the "Future of IT" seminar on 20 September 2017 in Brussels. Twitter: @fmaertens. Email: filip@faction.xyz
InfoSphere Streams Toolkits: Real-Time Analytics on Data in Motion - Avadhoot Patwardhan
InfoSphere Streams comes standard with several real-time analytic toolkits to help provide quicker time to value. These include telecommunications event data, time series, text, messaging, database, geospatial, and more. Many of these toolkits are part of the InfoSphere Streams Open Source Project.
A confluence of events is accelerating the growth of AI in the enterprise: (i) the COVID pandemic is accelerating the digital transformation of enterprises, (ii) increased digital sales and digital interaction are fueling interest in operationalizing AI to drive revenue and cost efficiencies, and (iii) enterprise databases and enterprise apps are infusing AI to transparently augment predictive capabilities for clients. Enterprise Power Systems are pillars of the global economy, hosting our trinity of operating systems.
Introduction to Artificial Intelligence. Not complex, and should be relatively easy to follow. Be aware that, due to its high-level nature (and lack of voice-over), some care should be taken with the simplified examples used.
Deep Learning Image Processing Applications in the Enterprise - Ganesan Narayanasamy
The presentation covers many use cases, including the following. Image classification: "The process of identifying and detecting an object or a feature in a digital image or video," the report states. In retail, deep learning models "quickly scan and analyze in-store imagery to intuitively determine inventory movement."
Voice recognition: "The ability to receive and interpret dictation or to understand and carry out spoken commands. Models are able to convert captured voice commands to text and then use natural language processing to understand what is being said and in what context." In transportation, deep learning "uses voice commands to enable drivers to make phone calls and adjust internal controls - all without taking their hands off the steering wheel."
Anomaly detection: "Deep learning technique strives to recognize abnormal patterns which don't match the behaviors expected for a particular system, out of millions of different transactions. These applications can lead to the discovery of an attack on financial networks, fraud detection in insurance filings or credit card purchases, even isolating sensor data in industrial facilities signifying a safety issue."
Recommendation engines: "Analyze user actions in order to provide recommendations based on user behavior."
Sentiment analysis: "Leverages deep learning-heavy techniques such as natural language processing, text analysis, and computational linguistics to gain clear insight into customer opinion, understanding of consumer sentiment, and measuring the impact of marketing strategies."
Video analysis: "Process and evaluate vast streams of video footage for a range of tasks including threat detection, which can be used in airport security, banks, and sporting events."
ITCamp 2018 - Magnus Mårtensson - Azure Global Application Perspectives - ITCamp
Building and running a service for a truly global audience has always been the ultimate challenge for any business and for any application developer. In this session, we will discuss global perspectives on running your application tier in a scalable way – WebApps/APIs, Traffic Manager and Serverless. We will discuss the new Cosmos DB service offering in Azure and its built-in global sync, enabled with little more than a press of a button on your end – data was always the final frontier of globalization of your app. We will look at what it takes to monitor this kind of an environment. Naturally this is a very big set of topics, which means this session is aimed to give an overview, spark a discussion, and provide some directional and inspirational input.
apidays LIVE New York 2021 - Solving API security through holistic observability - apidays
apidays LIVE New York 2021 - API-driven Regulations for Finance, Insurance, and Healthcare
July 28 & 29, 2021
Solving API security through holistic observability
Jean-Baptiste Aviat, AppSec Staff Engineer at Datadog
Webinar: Machine learning analytics for immediate resolution to the most chal... - Melina Black
The document discusses challenges with troubleshooting performance issues in virtualized environments. It notes the complexity of these infrastructures with limited visibility into relationships between virtual and physical components. Current tools provide isolated views and make root cause analysis difficult. Machine learning analytics that learn patterns and behaviors can help overcome these challenges by providing a unified view, automatically detecting issues, and identifying the true root causes of problems in one touch. The document demonstrates SIOS iQ, a machine learning platform that aims to provide these benefits for virtualized environments.
Democratization - New Wave of Data Science (홍운표, Managing Director, DataRobot) :: AWS Techfor... - Amazon Web Services Korea
This document discusses the democratization of data science and machine learning using automated machine learning tools. It provides examples of how DataRobot has helped customers in various industries build predictive models faster and with less coding than traditional approaches. Specifically, it summarizes how DataRobot has helped customers in banking, insurance, retail, and other industries with use cases like predictive maintenance, sales forecasting, fraud detection, customer churn prediction, and insurance underwriting.
This document discusses big data use cases that use Amazon Elastic MapReduce (EMR) and Hadoop. It provides examples of companies like a big box retailer, Etsy, Yelp, and Foursquare using EMR to perform clickstream analysis, recommendations, process large amounts of log data, and analyze user check-in patterns at massive scales impossible without these cloud big data tools. EMR allows these companies to easily experiment and process massive datasets across large clusters of servers in an on-demand, flexible, and cost-effective manner in the cloud.
Big data use cases in the cloud presentationTUSHAR GARG
This document discusses big data use cases that use Amazon Elastic MapReduce (EMR) and Hadoop on the cloud. It provides examples of companies like a big box retailer, Etsy, Yelp, and Foursquare using EMR to perform clickstream analysis, recommendations, process large amounts of log data, and analyze user check-in patterns from terabytes of data. EMR allows these companies to easily run Hadoop jobs and experiments on large, scalable cloud clusters and lower the costs of operating distributed data systems.
Staring with an brief overview of the changing role of the CIO between 2018 and 2020, then moving into the technology landscape, here are 10 use cases across the new three: AI, IoT and Blockchain (and in many cases an overlap of them)
Cristene Gonzalez-Wertz is the Leader for the IBM Institute for Business Value in Electronics as well as an alumni of IBM's Watson Group. She speaks on the intersection of technology, software, offerings, platforms and new business models.
The document discusses best practices for AI/ML projects based on past failures to understand disruptive technologies. It recommends (1) setting clear expectations and metrics, (2) assessing skills needed, (3) choosing the right tools based on cost, time and accuracy tradeoffs, (4) using best practices like iterative development, and (5) repeating until gains become irrelevant before moving to the next project.
19. - Provides insight into real usage
- Reproducible and well-understood scenarios
- Covers a vast variety of environments
- Requires proper handling of Personally Identifiable Information (PII)
- May adversely alter production and production data
20. “To have a great idea, have a lot of them” – Thomas Edison
28. “We know we can't anticipate the 101 things that will go wrong. The only thing we can control is ensuring our team responds appropriately to those situations.” – Jerry Hook, Executive Producer, Halo
…Hundreds of thousands of requests per second
Chronologically left to right. Experience is in software services.

Testing Planet links:
- The future of software testing, Part Three – Cloud. July 10, 2012. http://www.thetestingplanet.com/2012/07/july-2012-issue-8/
- The future of software testing, Part Two – TestOps. http://www.thetestingplanet.com/2012/03/march-2012-issue-7/
- The future of software testing, Part One – Testing in production. The Testing Planet, November 2011. http://www.thetestingplanet.com/2011/11/the-future-of-software-testing-part-one-testing-in-production/

I also did a mind map: Testing in Production Mindmap, August 6, 2012, for Ministry of Testing (Software Testing Club). http://www.ministryoftesting.com/2012/08/mindmap-testing-in-production/
Good book – I recommend it. Is there something the assembled crowd here might be interested in measuring? CLICK: yes, Quality! So this is how I define Testing. This INCLUDES classic pre-prod test case execution, and it will necessarily include more than the classic test case execution.
Data Driven Decision Making (D3M) is about the first definition: measurement. Data-Driven Validation is about the second definition: testing. This talk is about TiP, but TiP is only one form of Data-Driven Validation. CLICK: TiP leverages real users, because we cannot know what all users will do. CLICK: and actual production, because production is a dangerous and chaotic place… all in a risk-mitigated way, to reduce uncertainty about the quality of your software.
Let’s dive in with an example. Ben was not someone I had followed. CLICK: show re-tweet. The Tweet: being from MSFT, this caught my attention. Likely IE6… even MSFT is running away from IE6. Is it cost-effective to keep that XP environment around? With IE6? And how about every other OS and browser in the world? The matrix gets huge. Wouldn’t it be great to answer the question: what are your users actually using? …and to understand how your product works with them.
Instead of a huge matrix, you can use production to get the data you need on end-to-end performance under real operating conditions. In this case, PLT (page load time) for Outlook.com (Hotmail at the time) – from millions of actual users. Get data on every OS, browser, geographic location, or data center used, instead of testing a huge matrix in the lab. They identified and remedied performance bottlenecks. They use JSI. This is Big Data. **** Who’s heard of Big Data? (Transitions to the definition on the next slide.)

This is not just a PLT, but a round trip for everything – data you can’t get in a lab: the public internet, load balancers, LAN switches, partner services. This is (old) data from Hotmail (now Outlook.com). Based on this and similar measurements they identified and remedied performance bottlenecks, such as upstream bandwidth constraints, by using more caching and static images.
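The aggregation step the notes describe – millions of per-user page-load-time beacons rolled up by OS and browser – can be sketched as below. This is a minimal illustration, not the actual Hotmail/Outlook.com pipeline; the beacon field names (`os`, `browser`, `plt_ms`) and the nearest-rank percentile method are assumptions for the example.

```python
from collections import defaultdict

def percentile(sorted_samples, p):
    """Nearest-rank percentile over an already-sorted list of samples."""
    rank = round(p / 100 * len(sorted_samples)) - 1
    rank = max(0, min(len(sorted_samples) - 1, rank))
    return sorted_samples[rank]

def summarize_plt(beacons):
    """Group real-user PLT beacons by (os, browser) and report tail latencies.

    Each beacon is a dict like {"os": "XP", "browser": "IE6", "plt_ms": 2300}.
    """
    groups = defaultdict(list)
    for b in beacons:
        groups[(b["os"], b["browser"])].append(b["plt_ms"])
    summary = {}
    for key, samples in groups.items():
        samples.sort()
        summary[key] = {
            "count": len(samples),
            "p50": percentile(samples, 50),
            "p95": percentile(samples, 95),
            "p99": percentile(samples, 99),
        }
    return summary
```

Each (OS, browser) cell of the "huge matrix" is populated for free by whatever real users actually run, instead of being provisioned in a lab.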
The previous example makes use of Big Data. While not all of our Data-Driven Validation needs to be Big Data, it is worthwhile understanding what Big Data is.

The 3 V’s (http://radar.oreilly.com/2012/01/what-is-big-data.html):
- Volume: cannot be handled by a conventional RDBMS. SQL Server maxes out at 16 TB; the entire web was 0.5 ZB in 2009 (Richard Wray, "Internet data heads for 500bn gigabytes", The Guardian, 2009-05-18, http://www.guardian.co.uk/business/2009/may/18/digital-content-expansion), probably about 1–2 ZB today.
- Velocity: everything’s instrumented, and speed of feedback is important. IBM, "The Road": could you cross a busy road with just a snapshot (not live data)? http://vimeo.com/20718357. Batch vs. stream; partial analysis: http://research.microsoft.com/apps/video/default.aspx?id=163222
- Variety: structured (databases) vs. unstructured (tweets). How about XML? One good rule of thumb: if the data structure (or lack thereof) is not sufficient for the processing task at hand, then it is unstructured.

There is a 4th V – Value. What’s the value? Efficient quality assessment. Ultimately it is about big insights. Again, Hubbard: when you have high uncertainty, you need very little data to make an impactful reduction in it.
I mentioned Twitter in the previous slide; here is how Twitter data can be used. This is an internal Microsoft tool; public tools exist that do similar things. ******* It turns the Tweet stream into actionable metrics. Sentiment is positive 2:1, and there was a spike in certain topics around TechFest (the Microsoft R&D showcase, held in March). It can be used to find bugs too, such as version-over-version issues.

The ambient data of the web and social media can be used. Data sources: Twitter, blogs, news, forums, Facebook… but mostly Twitter. Signals include sentiment, timeliness, and quality. Note on sentiment: almost everything has a large neutral frequency; positive outnumbering negative by more than 2:1 is good. The SDK and Kinect for Windows had a boost in early March – Microsoft TechFest. “Kinect Fusion” creates a detailed 3-D rendering of your environment. This technology may help you find bugs: certain phrases may indicate one, as may a rapid change in sentiment with a new release. Other technologies that mine Microsoft’s customer support data can also be used to find issues with a released product.
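The "positive 2:1 over negative" signal above is simple to compute once tweets carry sentiment labels. A minimal sketch, assuming tweets arrive as (text, label) pairs from some upstream classifier (the tweet texts and the `sentiment_ratio` helper are hypothetical, not part of the Microsoft tool):

```python
from collections import Counter

def sentiment_ratio(labeled_tweets):
    """Count sentiment labels and return (counts, positive:negative ratio).

    Each item is a (text, label) pair with label in
    {"positive", "negative", "neutral"}; as the notes say, most
    tweets land in a large neutral bucket.
    """
    counts = Counter(label for _, label in labeled_tweets)
    negatives = counts["negative"]
    ratio = counts["positive"] / negatives if negatives else float("inf")
    return counts, ratio

tweets = [
    ("Kinect Fusion demo was amazing", "positive"),
    ("new SDK build crashes on launch", "negative"),
    ("installing the update now", "neutral"),
    ("love the 3-D rendering", "positive"),
]
counts, ratio = sentiment_ratio(tweets)
```

A ratio holding above roughly 2:1 reads as healthy per the notes; a sudden drop right after a release is the kind of quality signal that may point at a bug.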
Data-Driven Validation is bigger than just TiP; there is lots of good Data-Driven Validation prior to production too. But for any system of sufficient scale, only production looks like production. Data center pics: ideal (lab) versus reality (production).
You cannot find this bug pre-prod. Would you test walking directions between A and B for every combination in the world? Yet it is trivial to find in production, with the right telemetry: Google can know when this happens in prod and report it. CLICK: Google knows that this route may be missing sidewalks! Remember: only in production do you find the true diversity of real users and usage, and the true complexity of the production environment. (“Find this with a unit test…” – James Whittaker, http://www.youtube.com/watch?v=cqwXUTjcabs&feature=BF&list=PL1242F05D3EA83AB1&index=16)
Let's look at an example from Facebook.
Facebook uses open-source monitoring software like Ganglia.
Using Hadoop, which we will talk about later, they developed an internally produced ODS – persistent and accurate:
- System metrics (CPU, Memory, I/O, Network)
- Application metrics (Web, DB, Caches)
- Facebook metrics (Usage, Revenue)
They claim to collect 5 million metrics. They are a bit vague about what this specifically means, but it is Passive Validation at scale.
------------------------------------
Nagios: ping testing, ssh testing – is Active Validation
Refs:
Ganglia, ODS: Cook, Tom. A Day in the Life of Facebook Operations. Velocity 2010. [Online] June 2010. http://www.youtube.com/watch?v=T-Xr_PJdNmQ
Picture: FB Prineville Datacenter: http://www.facebook.com/prinevilleDataCenter/
But 5 million metrics is a bit ambiguous – I understand it to mean the number of different metrics collected × the number of servers they collect them on.
Cook, Tom. A Day in the Life of Facebook Operations. Velocity 2010. [Online] June 2010. http://www.youtube.com/watch?v=T-Xr_PJdNmQ
So how does Facebook use their 5 million metrics to assess quality?
Let's refer to a Quora answer and blog post from an FB engineer that discusses this.
CLICK: How is FB like Gondor?
Boromir: "Gondor has no king, Gondor needs no king." "Facebook has no testers, Facebook needs no testers"
CLICK: What does FB actually do then? (refer to slide)
So am I mocking this or promoting it as a valid practice?
CLICK: Well, both really… it depends on your business requirements.
The FB engineer in question said… (refer to slide)
****** FB uses TiP only; they just throw it in production.
We're all pretty familiar with FB's "quality" – if your quality needs to be higher, then this approach does not work.
-----------------------------------
"A lot of cross talk between Dev and QA… it's pretty slow… let's get rid of it"
"Our engineers write, debug, and test their own code"
"We expose real traffic to these services"
Engineers need to be there every step of the way:
- On the IRC channel when deploying
- Aggressively log and audit
"5 million metrics" can find:
- Problems at scale
- Broken features for a significant percent of users
Refs:
Cook, Tom. A Day in the Life of Facebook Operations. Velocity 2010. [Online] June 2010. http://www.youtube.com/watch?v=T-Xr_PJdNmQ
http://www.zdnet.com/blog/facebook/why-facebook-doesnt-have-or-need-testers/7191
http://www.quora.com/Is-it-true-that-Facebook-has-no-testers - Evan Priestley, Facebook engineer from 2007-2011
Blue = Developer; Purple = Tester
We've seen FB just "throw it in Production," and that is part of their business decision. But most teams will not choose to do this.
This is a simplified model of the test life cycle. I call this the BUFT model (Big Up-Front Testing). I presume this looks familiar to most of you.
CLICK: So then maybe we add TiP. We still have BUFT, and now the Testers have that much more to do!
CLICK: So we need to adjust the model. This is just one possible way to do it:
- Devs take on more UFT testing – focus on functional & code quality at the COMPONENT level (Test can help with strategy)
- Test focuses on integrated service quality (Dev can help with implementation – Testability in Production)
****** Rule of thumb: you should not find bugs that could have been found in an earlier stage
---------------------------
Other notes:
"Instrument Everything" is from FB - http://www.youtube.com/watch?v=T-Xr_PJdNmQ
Metrics and Optics give you access to the data stream
TDD is a better way to build in quality
No, do NOT just throw it in production; it should be part of a continuous test strategy
But you may want to reduce UFT (Up-Front Testing): from BUFT to UFT + TiP
The examples I have shown thus far are types of Passive Validation. Passive Validation is very valuable – do not be fooled by the name.
Another type of Data-Driven Validation is Active Validation. Synthetic Transactions will be very familiar – Test Cases are Synthetic Txs. Let's look at some examples.
-----------------------------------------------------------
Passive Validation
- Looks a lot like what we would call monitoring
- Operational intelligence, like availability and performance
- Business Intelligence tells us where the user is going – crucial knowledge for a quality strategy. We always have to make hard decisions on what to test; BI answers that.
- BI can also indicate bugs: if usage drops off when no user-facing change has been made
Active Validation
- This looks a lot like the testing we do today
- Synthetic Transactions = Test Cases
- If we do this in production, Testing becomes Active Monitoring
- Availability = Is it there? = successful Tx, regardless of result
- Reliability = Does it work? = Tx without error
- Performance = How long does it take?
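As a sketch of how repeated synthetic transactions might be rolled up into these three metrics, here is a minimal illustration; the data, record layout, and function name are all hypothetical, not from any real monitoring framework:

```python
from statistics import median

# Each synthetic transaction record: (responded, had_error, latency_ms).
# responded=False means the service could not be reached at all.
def summarize(transactions):
    total = len(transactions)
    responded = [t for t in transactions if t[0]]
    availability = len(responded) / total                        # Is it there?
    reliability = sum(1 for t in responded if not t[1]) / total  # Does it work?
    latencies = [t[2] for t in responded]
    performance = median(latencies) if latencies else None       # How long does it take?
    return availability, reliability, performance

txs = [(True, False, 120), (True, False, 130), (True, True, 900), (False, None, None)]
avail, rel, perf = summarize(txs)   # 0.75, 0.5, 130
```

The point is simply that a single test case run once gives pass/fail, but the same case run continuously in production yields availability, reliability, and performance trends.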
Visuals are from "Office Service Pulse" – a dashboard of many metrics from active validation.
A specific example is Exchange Online, a hosted service that provides email, calendar, and contacts management.
They wanted to re-use existing on-prem tests, so they developed an execution framework running from Azure, Microsoft's cloud platform.
Availability = Is it there? = successful Tx, regardless of result
- Run repeatedly to turn pass/fail into availability/non-availability
Performance = How long does it take?
- Run repeatedly and time each Tx to trend historically over time
- Especially useful release over release
-----------------------------------------------
We quite simply had to figure out how to simultaneously test a server and a service. How do we pull our existing rich cache of test automation into the services space?
For server (on-prem):
- 5000 machines in test labs
- 70,000 automated test cases run multiple times a day on these machines
- Reuse and extend our existing infrastructure
- Exchange will remain one codebase
- We are one team and will not have a separate service engineering team or service operations team
Solution: TiP – run tests from Azure. You get Availability and Performance.
Ref: Experiences of Test Automation; Dorothy Graham; Jan 2012; ISBN 0321754069; Chapter: "Moving to the Cloud: The Evolution of TiP, Continuous Regression Testing in Production"; Ken Johnston, Felix Deschamps
Another example, one that does not quite look like test cases: Operational Fault Injection.
This is another type of Active Validation. It injects Synthetic Faults:
- To disrupt service operation
- To test system fault tolerance – assuming the system was designed to be fault tolerant!
Chaos Monkey – April 2011 example
Simian Army – June 2012 example
Amazon Game Day
--------------------------------------
Other Notes:
Netflix is a streaming video service hosted on the Amazon AWS Cloud. Available in both North and South America, the Caribbean, United Kingdom, Ireland, Sweden, Denmark, Norway, Finland.
Chaos Monkey / Simian Army
It started with their "Chaos Monkey", a script deployed to randomly kill instances and services within their production architecture. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables. Then they took the concept further with other jobs with similar goals: Latency Monkey induces artificial delays, Conformity Monkey finds instances that don't adhere to best practices and shuts them down, Janitor Monkey searches for unused resources and disposes of them.
April 2011 outage – stayed up
June 2012 outage – Chaos Gorilla should have prepared them to survive an outage, but did not
Chaos Gorilla, the Simian Army member tasked with simulating the loss of an availability zone, was built for exactly this purpose. This outage highlighted the need for additional tools and use cases for both Chaos Gorilla and other parts of the Simian Army.
Amazon Game Day
- Entire DC taken down
- Announced in advance, but few services opt out
- Service owners are alert, but mostly not worried – Amazon services are designed for this
Refs for Chaos Monkey:
http://techblog.netflix.com/2010/12/5-lessons-weve-learned-using-aws.html
http://techblog.netflix.com/2011/07/netflix-simian-army.html
June 2012 outage: http://techblog.netflix.com/2012/07/lessons-netflix-learned-from-aws-storm.html
Refs for Amazon Game Day – there really aren't any, but this post mentions it: http://devops.com/2011/03/08/
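The Chaos Monkey idea can be sketched in a few lines; this is a toy model for illustration, not Netflix's actual tool, and all the names (Fleet, chaos_monkey, the instance ids) are invented:

```python
import random

class Fleet:
    """Toy model of a redundant service fleet (illustrative only)."""
    def __init__(self, instances):
        self.instances = set(instances)

    def terminate(self, instance):
        self.instances.discard(instance)

    def is_serving(self):
        # "Fault tolerant" here simply means at least one instance is still up.
        return len(self.instances) > 0

def chaos_monkey(fleet, rng):
    """Randomly kill one production instance, Chaos Monkey style."""
    victim = rng.choice(sorted(fleet.instances))
    fleet.terminate(victim)
    return victim

fleet = Fleet(["i-1", "i-2", "i-3"])
killed = chaos_monkey(fleet, random.Random(42))
# If the system was really designed to be fault tolerant, it survives the kill.
assert fleet.is_serving()
```

The value of the technique is exactly this assertion: you prove the fault-tolerance design works by exercising it continuously, rather than discovering during a real outage that it does not.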
Continuing our theme of monkeys…
Fault Injection has some obvious risks, but even less intrusive synthetic transactions carry risks.
The monkey story (below) illustrates some risks of synthetics on:
- Business Metrics and Reporting
- Changing "the shape" of production data
-----------------------------------
Other risks of synthetics:
- Service Operation
- Direct User Experience
- Partner Services
- Security
- Cost
http://thedailywtf.com/Articles/Ive-Got-The-Monkey-Now.aspx
1999 was a big year for Harvard Business School Publishing. In the past few years, they had seen their business model – selling books, journals, articles, case studies, and so forth – transform from being entirely catalogue-based to largely web-based, and it had finally come time for a major re-launch of their website. HBRP's new website was slick. On top of a fairly advanced search system, the re-designed site also featured community forums and a section called "Ideas @ Work", which let users download audio broadcasts from influential business thinkers from around the world. And best of all, despite the rapid development schedule, scope creep, and all of the new bells and whistles, the new site actually worked. In the height of the dot-com era, not too many other sites could claim the same. One key contributor to the success of Harvard Business School Publishing's new website was its extensive testing and QA. Analysts developed all sorts of test cases to cover virtually every aspect of the site. They worked closely with HBSP's logistics department to make sure the tests – searching, fulfillment, account management, etc. – were run. And not just run, but run often. This aggressive testing strategy ensured that the site would function as intended for years to come. That is, until that one day in 2002. On that day, one of the test cases failed: the "Single Result Search." The "Single Result Search" test case was part of a trio of cases designed to test the system's search logic.
Like the "Zero Result Search" case, which had the tester enter a term like "asdfasdf" to produce no results, and the "Many Results Search" case, which had the tester enter a term like "management" to produce pages of results, the "Single Result Search" case had the tester enter a term – specifically, "monkey" – to verify that the system would return exactly one result. And for three years, "monkey" returned exactly one result: Who's Got the Monkey? (full article text) by William Oncken Jr. Written in 1974, Oncken's article is for managers who "find themselves running out of time while their subordinates are running out of work." As for the monkeys, they're just an analogy for work, not who managers should outsource work to. Apparently, Oncken wasn't that ahead of his time. In any case, on that day in 2002, the "monkey" search returned two results. The first, as expected, was Who's Got the Monkey?. The second result was something to the effect of Who's Got The Monkey Now?, which was an update to HBSP's run-away best seller, Oncken's 1974 Who's Got the Monkey?. It seemed obvious: the "Single Result Search" test case just needed to be updated. But then they looked into the matter a bit further. As part of the aggressive testing strategy mentioned earlier, the HBSP logistics team would fill their down time by executing test cases. First they'd run through the "Zero Result Search" test, then the "Many Result Search" test, then the "Single Result Search". Then they'd add that single result – Who's Got the Monkey? – to their shopping cart, create a new account, submit the order, and then fulfill it. Of course, they didn't actually fulfill it – everyone knew that orders for "Mr. Test Test" and "123 Test St." were not to be filled. That is, everyone except the marketing department. When HBSP's marketing department analyzed the sales trends, they noticed a rather interesting trend. Oncken's 1974 Who's Got the Monkey? was a run-away best seller!
And like any marketing department would, they took the story and ran. HBSP created pamphlets and other distillations of the paper. They even repackaged those little plastic cocktail monkeys as official “Who’s Got the Monkey monkeys”. And finally, sometime in 2002, the updated version of Who’s Got the Monkey? was posted to HBSP, which was then picked up by the searching system, which, in turn, caused the “Single Result Search” test case to fail. Of course, by this point, there was little anyone could do. The fictional success of Who’s Got the Monkey had already been widely publicized as reality. And with all the subsequent write-ups (many of which are still around to this day), it may have very well become a best-seller. Needless to say, HBSP has since changed their aggressive testing policy. Some details of the story have been redacted to protect the guilty. Thanks to the two anonymous sources working at HBSP for the inside scoop, and news archives for the rest.
More risks…
The Xbox story and the Amazon story.
Mitigations: data tagging, filtering, and clean-up.
Xbox
- Obviously a negative experience, as the user is confused and may think they have been charged (they were not)
- This was only a handful of users. Xbox has implemented clever mitigations, such as only using UUIDs outside the range used by valid Xbox users.
Amazon
- This is a negative user experience because the user is trying to find actual items to purchase, an intent not served by exposing test data as shown in this example. The exposure of such data ironically creates a sense of "immaturity" or lack of quality. The poor experience becomes worse if a user purchases such an item. It may be reasonable to have such test data on the site transiently, but it should be removed after testing is complete.
Mitigations include:
- Data Tagging
- Data Cleaning
- Data Filtering
- Transaction Stubbing
- Transaction Pre-validation
- Transaction Throttling
A quiz: A = Active, P = Passive.
Answers can be subject to argument; there are gray areas.
To understand the power of TiP, it is illustrative to understand the power of experimentation.
Experimentation is a passive validation methodology:
- Try new things… in production
- Build on successes
- Cut your losses… before they get expensive
A/B testing: users are assigned to one of multiple experiences and compared.
DF and Beta are a bit different: users opt in to trying a not-yet-released version.
Both use Exposure Control, which limits who sees the new code – mitigating risk by limiting exposure of the new code.
Controlled experimentation vs. un-controlled experimentation.
One way FB experiments: three concentric push phases:
- p1 = internal release
- p2 = small external release
- p3 = full external release
Ref: http://framethink.blogspot.com/2011/01/how-facebook-ships-code.html
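One common way to implement exposure control is sticky bucketing: hash the user id into a bucket and expose only the low buckets to the new code. This is a generic sketch under my own assumptions, not FB's or Google's actual mechanism:

```python
import hashlib

def variant(user_id, treatment_pct=1):
    """Sticky assignment: hash the user id into one of 100 buckets.
    The same user always lands in the same bucket, so they always
    see the same experience across sessions."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < treatment_pct else "control"

# A "1% launch": roughly 1 in 100 users sees the new code.
exposed = sum(variant(f"user-{i}") == "treatment" for i in range(10_000))
```

Dialing treatment_pct up over time (1% → 5% → 50% → 100%) is one way to realize concentric push phases like the p1/p2/p3 scheme above.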
1% launches – Eric Schmidt. Slice and dice is about the data: design decisions and also service quality.
Shadow Launches: status packets, billions of packets per day – a launched service, but users could not see it.
At Microsoft we used experimentation to assess how often decisions were good. The decision makers were experts.
CLICK
- 1/3 achieved some degree of the desired goal
- 1/3 had no significant effect – this is an important result that many do not consider
- 1/3 had the opposite of the desired effect
Experimentation lets you quantify the good ones and weed out the bad ones.
--------------------------------------
1% launches… "…dice and slice in any way you can possibly fathom" – Eric Schmidt
Ref: How Google Fuels Its Idea Factory, BusinessWeek, April 29, 2008; http://www.businessweek.com/magazine/content/08_19/b4083054277984.htm
Somewhat famously this is used for design decisions: "design philosophy was governed by data and data exclusively" – Douglas Bowman, Former Visual Design Lead - http://stopdesign.com/archive/2009/03/20/goodbye-google.html
Slice and dice what? The data… it's a data-driven decision.
Shadow Launches
Ref: Seattle Conference on Scalability: Lessons In Building Scalable Systems, Reza Behforooz; http://video.google.com/videoplay?docid=6202268628085731280 @6:55
Google Talk presence packets: ConnectedUsers × BuddylistSize × OnlineStateChanges = billions of packets per day. Everything was happening, but nothing was displayed to users.
At Microsoft, an evaluation of decisions tested with experimentation found the 1/3 – 1/3 – 1/3 split above.
Ref: http://blog.clicksnconversions.com/intuition-sucks-%e2%80%93-that%e2%80%99s-why-we-test/
Let's look at an example of experimentation more directly tied to traditional software quality assessment.
Netflix is a streaming video service hosted on the Amazon AWS Cloud, available in both North and South America, the Caribbean, United Kingdom, Ireland, Sweden, Denmark, Norway, Finland.
1B API requests = Big Data
Blue is Vcurr – the smiley face represents customer traffic carried on that (virtual) server. Red is Vnext.
[click] Netflix spins up Vnext in the cloud carrying no user traffic.
[click] They then put one red/Vnext server live carrying user traffic and let it run to test code quality.
[click] They then switch user traffic to red/Vnext servers but keep blue/Vcurr ones around while they run overnight and check for problems.
[click] Finally, if all is well with Vnext, they release the Vcurr resources.
Typical problem found: memory leak.
Move all users to Vnext and let it bake – that is big data. Although not truly random and unbiased, there is still value here, especially for seeing large changes.
http://perfcap.blogspot.com/2012/03/ops-devops-and-noops-at-netflix.html
Joe Sondow, Building Cloud Tools for Netflix
Slides: http://www.slideshare.net/joesondow/building-cloudtoolsfornetflix-9419504
Talk: http://blip.tv/silicon-valley-cloud-computing-group/building-cloud-tools-for-netflix-5754984
Data Science is becoming more important for testers to know. (Tester as Data Scientist)
Not going to spend a lot of time on basics like median, mean, standard deviation, or linear regression – I assume you know those, or you can look them up later.
Here we will cover some of the more interesting TECHNIQUES and GOTCHAS – ones you won't find in a beginner stats course. I won't explain them here; I will illustrate them on the following slides.
Plus the tools of Big Data.
This is one of the first computers I ever used. I have been working in software for 19 years.
Survey the audience; get 5 answers (samples). This is to illustrate the Rule of 5.
The median is the point with equal population above and below it.
The median years in software has a 93.75% chance of being between the Min and Max of the 5 samples surveyed.
***** Power of small data sets ******
Explain sample bias
The median is the point with equal population above and below it. The median years in software has a 93.75% chance of being between the Min and Max of the 5 samples surveyed.
Explain why the Rule of 5 works; explain sample bias.
Why the Rule of 5 works:
- A value has a 50% chance of being above the median, the same as the chance of heads on a coin flip
- All 5 values above the median? 5 heads, or 3.125%
- Neither all 5 values above the median nor all 5 below it: 100 – (2 × 3.125) = 93.75%
Sample bias:
1. Median years in software among Testers
2. Median years in software among Testers attending Test Bash – would be the same as 1 if we knew Test Bash attendees were a representative sample
3. Median years in software among Testers attending Test Bash who are willing to volunteer such info [self-selection bias]
Modeling: this makes no assumption about the model. By definition a single observation has a 50% chance of being over or under the median.
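The 93.75% figure can be checked both analytically and with a quick Monte Carlo simulation (illustrative Python; the choice of a Gaussian population is arbitrary, since the rule holds for any distribution):

```python
import random

# Analytic answer: P(all 5 samples above the median) = 0.5**5 = 3.125%,
# likewise for all 5 below, so P(median falls inside the sample range):
analytic = 1 - 2 * 0.5 ** 5      # 0.9375

# Monte Carlo check against a population whose median is known to be 0.
rng = random.Random(0)
trials = 100_000
hits = 0
for _ in range(trials):
    sample = [rng.gauss(0, 1) for _ in range(5)]
    if min(sample) < 0 < max(sample):   # true median (0) is inside the range
        hits += 1
simulated = hits / trials               # ~0.9375
```

Note the simulation makes no use of the population's shape beyond its median, which is exactly why the rule is distribution-free.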
Averaging is a form of lossy data compression… it destroys information!
Take-away: you need to understand your population.
The probability density function above contains samples from two distinct populations. For example, these could be different versions of the software, or different user populations: testers vs. real users, or different geographic regions.
Same as previous… just more complex example – 5 distinct populations
Averaging is a form of lossy data compression… it destroys information! Other statistics can be lossy too.
Take-away: you need to understand your data model.
R² is the Coefficient of Determination; closer to 1 indicates that a regression line fits the data well.
SD is the Standard Deviation. A low standard deviation indicates that the data points tend to be very close to the mean; a high standard deviation indicates that the data points are spread out over a large range of values.
1 SD covers 68.27% of the set; 2 SD, 95.45%; 3 SD, 99.73%.
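A tiny demonstration of how lossy these summaries are: two clearly different datasets (made-up numbers) that share both mean and standard deviation, so neither statistic can tell them apart:

```python
from statistics import mean, pstdev

a = [2, 4, 4, 4, 5, 5, 7, 9]   # values clustered around the centre
b = [1, 5, 5, 5, 5, 5, 5, 9]   # nearly constant, with two outliers

# Identical summary statistics, very different data:
assert mean(a) == mean(b) == 5
assert pstdev(a) == pstdev(b) == 2
```

This is the same lesson as the bimodal density on the earlier slide: look at the distribution, not just the summary numbers.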
Hadoop is a tool for processing large data sets. Processing = what you might do with a SQL SELECT – combine, sort, count.
Imagine this data set of 9 chars is actually tens of trillions of chars.
First we need to store massive amounts of data:
- Distributed storage: HDFS = Hadoop Distributed File System
- Break the file into pieces; each piece is stored multiple times (3×) for redundancy
Then we need to process massive amounts of data:
- Distributed computing: Map-Reduce and similar algorithms (Cosmos uses Dryad)
- Bring the compute to the data in its split-up form
- Map-Reduce can operate on the pieces: the processing is MAPped to the smaller subsets
- The output of these many operations is then re-combined (REDUCEd) into a single answer
- (Remembering the input is tens of trillions of chars) the output is a much smaller file than the input
------------------------------------------
Hadoop is part of a rich eco-system of tools:
- Hive – data warehouse for Hadoop - http://hive.apache.org/ – query the data using a SQL-like language called HiveQL
- Pig - http://pig.apache.org/ – a high-level language for expressing data analysis… a compiler that produces sequences of Map-Reduce programs
- Mahout – machine learning library - http://mahout.apache.org/
- Scribe: log aggregation
HDInsight is Hadoop running on Microsoft Azure.
Refs:
http://www.windowsazure.com/en-us/manage/services/hdinsight/
http://hadoop.apache.org/
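The Map-Reduce idea on this slide can be sketched in a few lines of plain Python (a toy model, not actual Hadoop code): count the characters in a "file" that has been split into blocks.

```python
from collections import Counter
from functools import reduce

def map_phase(chunk):
    """MAP: each worker counts characters in its own block of the file."""
    return Counter(chunk)

def reduce_phase(left, right):
    """REDUCE: merge partial counts into one small combined result."""
    return left + right

# Pretend this 9-character "file" is split across three HDFS blocks.
chunks = ["aab", "bac", "cab"]
partials = [map_phase(c) for c in chunks]          # would run in parallel, one per block
total = reduce(reduce_phase, partials, Counter())  # {'a': 4, 'b': 3, 'c': 2}
```

The key property is visible even in the toy: the map step runs independently on each piece (compute brought to the data), and the reduce step collapses trillions of inputs into a small output.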
Cosmos is similar to Hadoop. It is Microsoft-internal. The numbers are impressive.
-----------------------------------------------
Data drives search, advertising, and all of Microsoft:
- Web pages: links, text, titles, etc.
- Search logs: what people searched for, what they clicked, etc.
- IE logs: what sites people visit, the browsing order, etc.
- Advertising logs: what ads people click on, what was shown, etc.
- Social feeds from Twitter & Facebook
- Service telemetry: Office 365, Hotmail (not emails), MSN
Picture is a modularized "Container" of servers used in Microsoft Data Centers.
Refs:
http://blogs.msdn.com/b/pathelland/archive/2011/09/30/leaving-microsoft-and-moving-to-san-francisco.aspx – "It stores hundreds of petabytes of data on tens of thousands of computers. Large scale batch processing using Dryad with a high-level language called SCOPE on top of it."
The Bing Big Data Platform - Ken Johnston; Big Data Innovation Summit 2013, Las Vegas: Process 2PB per Day
http://research.microsoft.com/en-us/events/fs2011/helland_cosmos_big_data_and_big_challenges.pdf
These are Spartans from Halo. A headless Spartan cannot be killed. They should not exist, but they did. How did the Halo team find this bug and eliminate it?
CLICK: HDInsight – Hadoop running on Azure – **** Hadoop as a service
CLICK: Halo had the data, but it was overwhelming
CLICK: Using the data and HDInsight they found the bug in production, and eliminated it
Headless Spartan: an unofficial mod, which can only be applied using a modified Xbox 360. Almost impossible to find pre-release, but in production they could find and eliminate it.
CLICK: Here is a brief overview of what they did
From hundreds of low-wage, low-skilled testers to millions of free, highly skilled customers
CLICK: They also found less obvious bugs and cheats; it is all hidden in the data
Tell the Target story
CLICK: Reveal quote – Target statistician Andrew Pole
---------------------------------------------
Target story:
Target, a store in the US like Tesco in the UK
http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/
- Target sent a teenage daughter baby-supply coupons
- Target apologized, then called weeks later to apologize again
- The father admitted that, unbeknownst to him at the time, his daughter was pregnant
The following purchases may indicate a woman is pregnant with a boy: cocoa butter lotion, a large purse, zinc and magnesium supplements, a bright blue rug.
"But even if you're following the law, you can do things where people get queasy." – Target statistician Andrew Pole. They started mixing in ads for things they knew pregnant women would never buy, so the baby ads looked random: "We'd put an ad for a lawn mower next to diapers."
Refs:
http://www.microsoft.com/en-us/news/features/2012/oct12/10-31halo4.aspx
http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=710000002102
Another Big Data example is Microsoft Exchange Online.
***** They can predict 75% of availability issues ahead of time.
Big Data from over 8000 servers, instrumented to collect 1000 metrics, processed by COSMOS.
CLICK ***** Using ML they can *PREDICT* 75% of outages ahead of time.
---------------------------------------------------------
PBs of data collected, such as: availability, latency, errors, perf counters (CPU, memory, etc.)
Lots of servers; instrument them all and you get lots of data. PBs – how can we process all that? Cosmos.
Machine Learning – it's about fitting your data to a model. Think about simple linear regression y = mx + b; it is like that, but it can get much more advanced.
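To make the y = mx + b remark concrete, here is a minimal ordinary-least-squares fit in plain Python. The data and the "error metric predicts time to outage" framing are invented for illustration; the real Exchange Online models are far more advanced:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + b, the simplest model to fit."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Toy data: x might be an error-rate metric, y some availability-related outcome.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # lies exactly on y = 2x + 1
m, b = fit_line(xs, ys)        # m == 2.0, b == 1.0
```

"Fitting your data to a model" means exactly this: choose a model family (here a line), then pick the parameters that best explain the observed metrics, and use the fitted model to predict outcomes (such as outages) from new data.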
Testing is… obtaining observations about the quality of the product.
We can use Passive and/or Active techniques to get those observations, and production is where we can find some of the best ones – using either Passive or Active Validation.
We obtain data, which we use to calculate metrics, which are used to drive actions.