Estimating the Number of Clusters in Big Data with the Aligned Box Criterion: Finding the number, k, of clusters in a dataset is a fundamental problem in unsupervised learning. It is also an important business problem, e.g. in market segmentation. Existing approaches include the silhouette measure, the gap statistic and Dirichlet process clustering. For thirty years SAS procedures have included the option of using the cubic clustering criterion (CCC) to estimate k. While CCC remains competitive, we propose a significant and original improvement, referred to herein as the aligned box criterion (ABC). Like CCC, ABC is based on a hypothesis-testing framework, but instead of a heuristic measure we use data-adaptive reference distributions to generate more realistic null hypotheses in a scalable and easily parallelizable manner. We have implemented ABC using SAS’ High Performance Analytics platform, and achieve state-of-the-art accuracy in the estimation of k.
Use the k corresponding to the "elbow": the last k that yields a substantial improvement in goodness-of-fit; beyond it, additional clusters improve the fit only marginally.
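As a concrete illustration of the elbow heuristic, here is a minimal sketch (not the SAS implementation): it assumes k-means as the clustering algorithm, scikit-learn as the library, and a synthetic three-cluster dataset, and simply reports how much log Wk drops at each step.

```python
# Minimal elbow sketch: compute W_k (within-cluster sum of squares) for a
# range of k with k-means and report the drop in log W_k at each step.
# Synthetic three-cluster data is used purely for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=3, cluster_std=1.0, random_state=0)

ks = range(1, 11)
W = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

# The elbow is the last k with a large drop; after it the drops collapse.
drops = -np.diff(np.log(W))
for k, d in zip(list(ks)[1:], drops):
    print(f"k={k:2d}  drop in log W_k: {d:.3f}")
```

With well-separated blobs, the drops up to k=3 should be large and the later ones small, putting the elbow at k=3.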
SAS put forth the proprietary Cubic Clustering Criterion (CCC) as its primary method for estimating k. CCC uses the difference between the actual (Wk) and expected (Wk*) within-cluster sum of squares over a range of candidate values of k to suggest a best k. While Wk is calculated from a k-cluster solution in the training dataset, Wk* must be calculated from a k-cluster solution in a generated reference distribution.
As direct simulation was computationally expensive when CCC was first developed, the technique employs a heuristic formula, derived from numerous Monte Carlo simulations, to generate a single hyper-cube reference distribution, based on the dimensions of the given training dataset, that is used to test all values of k of interest. Despite the intrinsic shortcomings of heuristic approximation, CCC remains perhaps the best method for estimating k, with only one meaningful improvement proposed since its introduction: using Monte Carlo simulation directly, rather than heuristically, to generate a hyper-cube reference distribution (Tibshirani, R., Walther, G. and Hastie, T., "Estimating the number of clusters in a data set via the gap statistic," J. R. Statist. Soc. B, 63(2), pp. 411-423, 2001).
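The direct-simulation alternative cited above (the gap statistic) can be sketched roughly as follows. This is an illustrative sketch under stated assumptions, not CCC or SAS code: it assumes k-means, scikit-learn, and a single axis-aligned hyper-cube reference spanning the range of the training data.

```python
# Sketch of a direct Monte Carlo hyper-cube reference (gap-statistic style).
# W_k* is estimated by actually clustering uniform data drawn from the
# bounding hyper-cube of the training data, rather than by a heuristic formula.
import numpy as np
from sklearn.cluster import KMeans

def wk(D, k, seed=0):
    """Within-cluster sum of squares of a k-means solution on D."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(D).inertia_

def gap(X, k, n_refs=10, seed=0):
    """log(W_k*) - log(W_k), averaging W_k* over n_refs uniform references."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)          # bounding hyper-cube
    log_wk_star = []
    for b in range(n_refs):
        ref = rng.uniform(lo, hi, size=X.shape)    # one uniform reference set
        log_wk_star.append(np.log(wk(ref, k, seed=b)))
    return np.mean(log_wk_star) - np.log(wk(X, k, seed=seed))
```

Larger values of log Wk* - log Wk indicate that the training data clusters better than the uniform null; Tibshirani et al. choose the smallest k whose gap is within one standard error of the gap at k+1.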
Null hypothesis: reference distribution
Normalize the curve of log Wk vs. k
Error-tolerant normalized elbow!
To generate more realistic null hypothesis values
ABC performs a more precise test at each k than does CCC. Instead of comparing a k-cluster solution in the training data to a k-cluster solution in an approximated hyper-cube, ABC compares a k-cluster solution in the training data to a k-cluster solution in a data-adaptive reference distribution composed of k hyper-cubes whose dimensions change with the training data, the clustering algorithm, and k. Such descriptive reference distributions allow for enhanced detection of differences between Wk and Wk*, which in turn leads to more accurate determinations of k.
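The exact ABC procedure is not spelled out in these notes, but the idea of a reference distribution built from k aligned boxes can be sketched roughly as below. This is only an interpretation of the description above, assuming k-means and scikit-learn, with one axis-aligned box per training cluster, each filled with as many uniform points as the cluster it mimics; it is not the SAS implementation.

```python
# Rough sketch of a data-adaptive "aligned box" reference, as described above:
# one axis-aligned box per training cluster, each filled with uniform points,
# then compare log W_k in the training data to log W_k* in the reference.
# An interpretation of the slide text, not the proprietary ABC code.
import numpy as np
from sklearn.cluster import KMeans

def wk(D, k, seed=0):
    """Within-cluster sum of squares of a k-means solution on D."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(D).inertia_

def aligned_box_reference(X, labels, k, rng):
    """Uniform points drawn from the bounding box of each training cluster."""
    parts = []
    for c in range(k):
        Xc = X[labels == c]
        lo, hi = Xc.min(axis=0), Xc.max(axis=0)    # box aligned to cluster c
        parts.append(rng.uniform(lo, hi, size=Xc.shape))
    return np.vstack(parts)

def abc_gap(X, k, n_refs=10, seed=0):
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    log_wk_star = [np.log(wk(aligned_box_reference(X, labels, k, rng), k, seed=b))
                   for b in range(n_refs)]
    return np.mean(log_wk_star) - np.log(wk(X, k, seed=seed))
```

Because each box hugs its own cluster, this null is harder to beat than a single global hyper-cube, which is the point made in the notes that follow.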
Discuss reference distributions first, then an example.
Harder to reject the null hypothesis of no clusters: the clustering solution in the training data has to be better than the clustering solution in this reference distribution. It is, but only slightly, probably because of the boxy shape of the reference distribution.
Error-tolerant normalized elbow!
Wk* is calculated from a clustering solution in the reference distribution.
The difference between ABC and competing techniques is the reference distribution.
Makes for an easier-to-interpret solution
Now we discuss an example.
Mention data prep
Show R code in EM
Point out 3 and 9
Point out 2, 4 and 9
So something between 2, 3, and 4, and also at 9