Note: these are the slides from a presentation at Lexis Nexis in Alpharetta, GA, on 2014-01-08 as part of the DataScienceATL Meetup. A video of this talk from Dec 2013 is available on vimeo at http://bit.ly/1aJ6xlt
Note: Slideshare mis-converted the images in slides 16-17. Expect a fix in the next couple of days.
---
Deep learning is a hot area of machine learning named one of the "Breakthrough Technologies of 2013" by MIT Technology Review. The basic ideas extend neural network research from past decades and incorporate new discoveries in statistical machine learning and neuroscience. The results are new learning architectures and algorithms that promise disruptive advances in automatic feature engineering, pattern discovery, data modeling and artificial intelligence. Empirical results from real world applications and benchmarking routinely demonstrate state-of-the-art performance across diverse problems including: speech recognition, object detection, image understanding and machine translation. The technology is employed commercially today, notably in many popular Google products such as Street View, Google+ Image Search and Android Voice Recognition.
In this talk, we will present an overview of deep learning for data scientists: what it is, how it works, what it can do, and why it is important. We will review several real world applications and discuss some of the key hurdles to mainstream adoption. We will conclude by discussing our experiences implementing and running deep learning experiments on our own hardware data science appliance.
Yurii Pashchenko: Zero-shot learning capabilities of CLIP model from OpenAI - Lviv Startup Club
AI & BigData Online Day 2021
Website - https://aiconf.com.ua/
Youtube - https://www.youtube.com/startuplviv
FB - https://www.facebook.com/aiconf
The Transformer is an established deep learning architecture in natural language processing, built around a self-attention mechanism.
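As a minimal sketch of that self-attention mechanism (illustrative, not taken from any of these decks; the sequence length, embedding size, and random weights are arbitrary assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token affinities
    return softmax(scores, axis=-1) @ V       # weighted mix of value vectors

rng = np.random.default_rng(0)
n, d = 4, 8                                   # 4 tokens, 8-dim embeddings
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value vectors, weighted by how strongly that token attends to every other token.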
This presentation was delivered under the mentorship of Mr. Mukunthan Tharmakulasingam (University of Surrey, UK), as a part of the ScholarX program from Sustainable Education Foundation.
Tijmen Blankenvoort, co-founder of Scyfer BV, presentation at the Artificial Intelligence Meetup, 15-1-2014. Introduction to Neural Networks and Deep Learning.
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
State of transformers in Computer Vision - Deep Kayal
Transformers have rapidly emerged as a challenger to traditional convnets as a network architecture in computer vision. Here is a quick landscape analysis of the state of transformers in vision, as of 2021.
Artificial Neural Network Tutorial | Deep Learning With Neural Networks | Edu... - Edureka!
This Edureka "Neural Network Tutorial" will help you understand the basics of neural networks and how to use them for deep learning. It explains the single-layer and multi-layer perceptron in detail.
Below are the topics covered in this tutorial:
1. Why Neural Networks?
2. Motivation Behind Neural Networks
3. What is Neural Network?
4. Single Layer Perceptron
5. Multi Layer Perceptron
6. Use-Case
7. Applications of Neural Networks
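The single-layer perceptron the tutorial covers can be sketched in a few lines; the AND-gate dataset, learning rate, and epoch count below are illustrative choices, not taken from the tutorial:

```python
import numpy as np

# Single-layer perceptron learning the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                      # a few epochs suffice: AND is linearly separable
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = yi - pred                  # classic perceptron update rule
        w += lr * err * xi
        b += lr * err

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

The same loop fails on XOR, which is the classic motivation for the multi-layer perceptron.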
This talk is about how we applied deep learning techniques to achieve state-of-the-art results in various NLP tasks like sentiment analysis and aspect identification, and how we deployed these models at Flipkart.
Using synthetic data for computer vision model training - Unity Technologies
During this webinar Unity’s computer vision team provides an overview of computer vision, walks through current real-world data workflows, and explains why companies are moving toward synthetically generated data as an alternate data source for model training.
Watch the webinar: https://resources.unity.com/ai-ml/cv-webinar-dec-2021
"Attention Is All You Need." Thanks to these simple words, Deep Learning underwent a profound change in 2017. Transformers, initially introduced in the field of Natural Language Processing, have recently proven extremely effective outside that field as well, achieving enormous, and perhaps unexpected, success in Computer Vision. Vision Transformers and their many variants are today redefining the state of the art on many computer vision tasks, from image classification to vision systems for autonomous driving. But what are Transformers? What is the self-attention mechanism at the heart of how they work? What are its limits? Will they be able to replace the famous convolutional networks that, in their time, revolutionized Computer Vision? In this talk we will try to answer all these questions, offering a broad overview of the founding ideas, the most widely used Transformer architectures, and the most promising applications.
Knowledge distillation aims at transferring “knowledge” acquired in one model (teacher) to another model (student) that is typically smaller.
Previous approaches can be expressed as a form of training the student with output activations of data examples represented by the teacher.
We introduce a novel approach, dubbed relational knowledge distillation (Relational KD), that transfers relations among data examples represented by the teacher.
As concrete realizations of Relational KD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations.
Experiments conducted on different benchmark tasks show that Relational KD improves the performance of the educated student networks by a significant margin, and even outperforms the teacher's performance.
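A minimal sketch of the distance-wise loss described above, assuming random embeddings and substituting a simple squared error where the paper uses a Huber loss:

```python
import numpy as np

def pairwise_dist(E):
    """Euclidean distances between all rows of an embedding matrix E (n, d)."""
    sq = (E ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * E @ E.T, 0.0)
    return np.sqrt(d2)

def distance_wise_rkd(teacher_emb, student_emb):
    """Distance-wise relational KD: penalize differences between the
    mean-normalized pairwise-distance structures of teacher and student.
    (Squared error here for simplicity; the paper uses a Huber loss.)"""
    t = pairwise_dist(teacher_emb)
    s = pairwise_dist(student_emb)
    mask = ~np.eye(len(t), dtype=bool)       # ignore self-distances
    t = t / t[mask].mean()                   # normalization makes relations scale-invariant
    s = s / s[mask].mean()
    return ((t - s)[mask] ** 2).mean()

rng = np.random.default_rng(1)
teacher = rng.normal(size=(6, 32))
student = 3.0 * teacher                      # same relations, different scale
loss = distance_wise_rkd(teacher, student)   # ~0: relational structure is preserved
```

Because only relations between examples are compared, the student is free to use a different embedding dimension or scale than the teacher.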
[Paper] Multiscale Vision Transformers (MViT) - Susang Kim
MViT builds multiscale feature hierarchies into the transformer model: a multiscale pyramid of features in which early layers operate at high spatial resolution to model simple low-level visual information, mirroring the hierarchical structure of the visual pathway.
Activation Functions and Training Algorithms for Deep Neural Networks - Gayatri Khanvilkar
Training a deep neural network is a difficult task. Deep neural networks are trained with the help of training algorithms and activation functions. This is an overview of the activation functions and training algorithms used for deep neural networks, with a brief comparative study of both.
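For concreteness, a minimal sketch of a few activation functions such an overview typically compares (the function list and sample inputs here are illustrative):

```python
import numpy as np

# Common activation functions compared in overviews like this one.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # squashes to (0, 1); saturates for large |x|

def tanh(x):
    return np.tanh(x)                        # squashes to (-1, 1); zero-centered

def relu(x):
    return np.maximum(0.0, x)                # sparse; no saturation for x > 0

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)         # keeps a small gradient for x < 0

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))        # [0. 0. 2.]
print(sigmoid(0.0))   # 0.5
```

The saturation behavior of sigmoid and tanh is what motivates ReLU-style activations in deep networks: saturated units pass almost no gradient during training.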
On-device machine learning: TensorFlow on Android - Yufeng Guo
Machine learning has traditionally been performed solely on servers and high-performance machines. But there is great value in having on-device machine learning for mobile devices. Doing ML inference on mobile devices has huge potential and is still in its early stages. However, it's already more powerful than most realize.
In this demo-oriented talk, you will see some examples of deep learning models used for local prediction on mobile devices. Learn how to use TensorFlow to implement a machine learning model that is tailored to a custom dataset, and start making delightful experiences today!
Introducing TensorFlow: The game changer in building "intelligent" applications - Rokesh Jankie
This is the slide deck used for the presentation at the Amsterdam Pipeline of Data Science, held in December 2016. TensorFlow is the open source library from Google for implementing deep learning with neural networks. This is an introduction to TensorFlow.
Note: Videos are not included (which were shown during the presentation)
These slides explain how Convolutional Neural Networks can be coded using Google TensorFlow.
Video available at : https://www.youtube.com/watch?v=EoysuTMmmMc
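The slides themselves are TensorFlow-specific; as a framework-agnostic sketch of the core operation, here is a 'valid'-mode 2-D convolution (implemented as cross-correlation, as deep learning frameworks do) in plain NumPy, with an illustrative image and kernel:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with each sliding window of the image.
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])          # horizontal difference kernel
feat = conv2d(image, edge)
print(feat.shape)  # (4, 3)
```

A real convolutional layer adds learned kernels, multiple channels, a bias, and a nonlinearity, but the sliding-window dot product above is the operation that frameworks like TensorFlow accelerate.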
Deep Learning Models for Question Answering - Sujit Pal
Talk about a hobby project to apply Deep Learning models to predict answers to 8th grade science multiple choice questions for the Allen AI challenge on Kaggle.
Artificial Intelligence, Machine Learning and Deep Learning - Sujit Pal
Slides for talk Abhishek Sharma and I gave at the Gennovation tech talks (https://gennovationtalks.com/) at Genesis. The talk was part of outreach for the Deep Learning Enthusiasts meetup group at San Francisco. My part of the talk is covered from slides 19-34.
Suggestions:
1) For best quality, download the PDF before viewing.
2) Open at least two windows: One for the Youtube video, one for the screencast (link below), and optionally one for the slides themselves.
3) The Youtube video is shown on the first page of the slide deck, for slides, just skip to page 2.
Screencast: http://youtu.be/VoL7JKJmr2I
Video recording: http://youtu.be/CJRvb8zxRdE (Thanks to Al Friedrich!)
In this talk, we take Deep Learning to task with real world data puzzles to solve.
Data:
- Higgs binary classification dataset (10M rows, 29 cols)
- MNIST 10-class dataset
- Weather categorical dataset
- eBay text classification dataset (8500 cols, 500k rows, 467 classes)
- ECG heartbeat anomaly detection
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Transform your Business with AI, Deep Learning and Machine Learning - Sri Ambati
Video: https://www.youtube.com/watch?v=R3IXd1iwqjc
Meetup: http://www.meetup.com/SF-Bay-ACM/events/231709894/
In this talk, Arno Candel presents a brief history of AI and how Deep Learning and Machine Learning techniques are transforming our everyday lives. Arno will introduce H2O, a scalable open-source machine learning platform, and show live demos on how to train sophisticated machine learning models on large distributed datasets. He will show how data scientists and application developers can use the Flow GUI, R, Python, Java, Scala, JavaScript and JSON to build smarter applications, and how to take them to production. He will present customer use cases from verticals including insurance, fraud, churn, fintech, and marketing.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Deep Learning is an area of machine learning and one of the most talked-about trends in business and computer science today.
In this talk, I will give a review of Deep Learning explaining what it is, what kinds of tasks it can do today, and what it probably could do in the future.
The Unreasonable Benefits of Deep Learning - indico data
Dan Kuster led a talk at Sentiment Analysis Symposium discussing why businesses should consider adopting deep learning solutions. Key takeaways include simplicity, accuracy, flexibility, and some hacks for working with the tech.
About the Session:
Machine learning is becoming the tool of choice for analyzing text and image data. While traditional text processing solutions rely on the ability of experts to encode domain knowledge, machine learning models learn this directly from the data. Deep learning is a branch of machine learning that, like the human brain, quickly learns hierarchical representations of concepts, and it has been key to unlocking state-of-the-art results on a range of text and image classification tasks such as sentiment analysis and beyond.
In this session, we will show the impact of a deep learning based approach over traditional NLP and machine learning based methods for text analysis across key dimensions such as accuracy, flexibility, and the amount of required training data. Specifically, we will discuss how deep learning models are now setting the records for state-of-the-art accuracy in sentiment analysis. We will also demonstrate the flexibility of this approach by showing how the features learned by one model can be easily reused in different domains (e.g., handling additional languages, or predicting new categories) to drastically reduce the time to deployment. Finally, we will touch on the ability of this method to handle additional types of data beyond text, e.g., images, for maximum insight.
Deep neural networks have revolutionized the data analytics scene by improving results on diverse benchmarks with the same recipe: learning feature representations from data. These achievements have raised interest across multiple scientific fields, especially those where large amounts of data and computation are available. This change of paradigm in data analytics has several ethical and economic implications that are driving large investments, political debates and extensive press coverage under the generic label of artificial intelligence (AI). This talk will present the fundamentals of deep learning through the classic example of image classification, and show how the same principle has been adopted for several other tasks. Finally, some of the forthcoming potentials and risks for AI will be pointed out.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2015-embedded-vision-summit-baidu
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Dr. Ren Wu, former distinguished scientist at Baidu's Institute of Deep Learning (IDL), presents the keynote talk, "Enabling Ubiquitous Visual Intelligence Through Deep Learning," at the May 2015 Embedded Vision Summit.
Deep learning techniques have been making headlines lately in computer vision research. Using techniques inspired by the human brain, deep learning employs massive replication of simple algorithms which learn to distinguish objects through training on vast numbers of examples. Neural networks trained in this way are gaining the ability to recognize objects as accurately as humans.
Some experts believe that deep learning will transform the field of vision, enabling the widespread deployment of visual intelligence in many types of systems and applications. But there are many practical problems to be solved before this goal can be reached. For example, how can we create the massive sets of real-world images required to train neural networks? And given their massive computational requirements, how can we deploy neural networks into applications like mobile and wearable devices with tight cost and power consumption constraints?
In this talk, Ren shares an insider’s perspective on these and other critical questions related to the practical use of neural networks for vision, based on the pioneering work being conducted by his former team at Baidu.
Note 1: Regarding the ImageNet results included in this presentation, the organizers of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) have said: “Because of the violation of the regulations of the test server, these results may not be directly comparable to results obtained and reported by other teams.” (http://www.image-net.org/challenges/LSVRC/announcement-June-2-2015)
Note 2: The presenter, Ren Wu, has told the Embedded Vision Alliance that “There was some ambiguity with the rules. According to the ‘official’ interpretation of the rules, there should be no more than 52 submissions within a half year. For us, we achieved the reported results after 200 tests total within a half year. We believe there is no way to obtain any measurable gains, nor did we try to obtain any gains, from an 'extra' hundred tests as our networks have billions of parameters and are trained by tens of billions of training samples.”
An Introduction to Deep Learning | AWS Dev Day 2018 - AWS Germany
Through Amazon AI services, we learn how to add powerful features like natural language understanding (NLU), automatic speech recognition (ASR), visual search and image recognition, text-to-speech (TTS), and machine learning (ML) technologies to applications. https://aws.amazon.com/machine-learning/
Artificial Intelligence (A.I.) is a multidisciplinary field whose goal is to automate activities that presently require human intelligence. Recent successes in A.I. include computerized medical diagnosticians and systems that automatically customize hardware to particular user requirements. The major problem areas addressed in A.I. can be summarized as Perception, Manipulation, Reasoning, Communication, and Learning. Perception is concerned with building models of the physical world from sensory input (visual, audio, etc.). Manipulation is concerned with articulating appendages (e.g., mechanical arms, locomotion devices) in order to effect a desired state in the physical world. Reasoning is concerned with higher-level cognitive functions such as planning, drawing inferential conclusions from a world model, diagnosing, designing, etc. Communication treats the problem of understanding and conveying information through the use of language. Finally, Learning treats the problem of automatically improving system performance over time based on the system's experience. Many important technical concepts have arisen from A.I. that unify these diverse problem areas and that form the foundation of the scientific discipline. Generally, A.I. systems function based on a Knowledge Base of facts and rules that characterize the system's domain of proficiency. The elements of a Knowledge Base consist of independently valid (or at least plausible) chunks of information. The system must automatically organize and utilize this information to solve the specific problems that it encounters. This organization process can be generally characterized as a Search directed toward specific goals. The search is made complex by the need to determine the relevance of information and by the frequent occurrence of uncertain and ambiguous data. Heuristics provide the A.I. system with a mechanism for focusing its attention and controlling its searching processes. The necessarily adaptive organization of A.I. systems yields the requirement for A.I. computational Architectures. All knowledge utilized by the system must be represented within such an architecture. The acquisition and encoding of real-world knowledge into A.I. architectures comprises the subfield of Knowledge Engineering.
KEYWORDS – Artificial Intelligence, Machine Learning, Deep Learning, Encoding, Subfield, Perception, Manipulation, Reasoning, Communication, and Learning.
Bringing Machine Learning and Knowledge Graphs Together
Six Core Aspects of Semantic AI:
- Hybrid Approach
- Data Quality
- Data as a Service
- Structured Data Meets Text
- No Black-box
- Towards Self-optimizing Machines
AI&BigData Lab. Artem Chernodub, "Image Recognition with the Lazy Deep Learning Method" - GeeksLab Odessa
23.05.15, Odessa. Impact Hub Odessa. AI&BigData Lab conference.
Artem Chernodub (Computer Vision Team, ZZ Wolf)
"Image Recognition with the Lazy Deep Learning Method in the ZZ Photo Organizer"
The talk examines the problem of image recognition using computer vision methods. It gives a brief overview of the existing subtasks in this area (object detection, scene classification, associative search in image databases, face recognition, etc.) and of modern methods for solving them, with an emphasis on Deep Learning.
More details:
http://geekslab.co/
https://www.facebook.com/GeeksLab.co
https://www.youtube.com/user/GeeksLabVideo
Similar to Deep Learning for Data Scientists - Data Science ATL Meetup Presentation, 2014-01-08 (20)
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GraphRAG is All You need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Deep Learning for Data Scientists - Data Science ATL Meetup Presentation, 2014-01-08
1. Deep Learning
for Data Scientists
Andrew B. Gardner
agardner@momentics.com
http://linkd.in/1byADxC
www.momentics.com/deep-learning
2.
3. Deep Learning in the Press…
Ng · Hinton · LeCun · Zuckerberg · Kurzweil
“The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI.” Wired 5/2013.
“Google Hires Brains that Helped Supercharge Machine Learning.” Wired 3/2013.
“Facebook Taps ‘Deep Learning’ Giant for New AI Lab.” Wired 12/2013.
“Is ‘Deep Learning’ a Revolution in Artificial Intelligence?” New Yorker 11/2012.
“New Techniques from Google and Ray Kurzweil Are Taking Artificial Intelligence to Another Level.” MIT Technology Review 5/2013.
4. … Publication & Search Trends …
[Chart: Google Scholar citations, ’06–’11 — “deep learning” + “neural network”, rising from 0 toward ~600 per year]
[Chart: Google Trends, ’06–’11 — big data, data science, deep learning, machine learning]
Domains: computer vision, speech & audio, bioinformatics, etc.
Conferences: NIPS, ICLR, ICML, …
5. … Industry & Products
• Google
  – Android Voice Recognition
  – Maps
  – Image+
• Microsoft Real-time English-Chinese Translation
• SIRI
• Translation
• Documents
• …
https://www.youtube.com/watch?v=Nu-nlQqFCKg
Microsoft Chief Research Officer Rick Rashid, 11/2012
6. Deep Learning Epicenters (North America)
[Map: de Freitas (UBC), Microsoft, Bengio (U Montreal), Hinton (U Toronto), Facebook, Ng (Stanford), Google, Yahoo, LeCun (NYU)]
12. How Good is “More Data?”
• Labels are expensive → less data
• More data dominates* better techniques
• Often have lots of data…
• … we just don’t have lots of labels
• What if there was a way to use unlabeled data?
[Figure 1: Learning curves for confusion set disambiguation, e.g. {to, two, too} — test accuracy (0.70–1.00) vs. millions of words (0.1–1000) for Memory-Based, Winnow, Perceptron, and Naïve Bayes learners. The memory-based learner used only the word before and word after as features; the training corpus was 1 billion words drawn from a variety of English texts.]
“Scaling to Very Very Large Corpora for Natural Language Disambiguation,” Banko and Brill, 2001.
13. The Impact of Features
Intuitively: better features are good
• Critical to success – even more than data!
• How to create / engineer features?
  – Typically shallow
  – Domain-specific
• What if there was a way to automatically learn features?
14. Machine Learning (What We Want)
Building a Cat Detector 2.0
[Diagram: bountiful data → Features + Detector (Classifier), learned end-to-end; the features are the important* part]
15. Building an Object Recognition System
Deep Nets Intuition
[Diagram: input image of a car → FEATURE EXTRACTOR → intermediate representations → CLASSIFIER → label “CAR”]
IDEA: Use data to optimize features for the given task.
Lee et al., “Convolutional DBN’s for scalable unsup. learning…” ICML 2009
(slide credit: Ranzato)
16. Another Example of Hierarchy
Hierarchical Learning
• Natural progression from low to high level structure, as in natural complexity
• Easier to monitor what is being learned and to guide the machine to better subspaces
• A good lower level representation can be used for many distinct tasks
[Image: feature hierarchy — edges → parts → faces]
17. Hierarchy Reusability?
• A good lower level representation can be used for distinct tasks
[Image: shared low-level features reused across faces, cars, elephants, chairs]
18. A Breakthrough
G. E. Hinton, S. Osindero, and Y. Teh, “A fast learning
algorithm for deep belief nets,” Neural
Computation, vol. 18, pp. 1527–1554, 2006.
G. E. Hinton and R. R. Salakhutdinov, “Reducing the
dimensionality of data with neural networks,”
Science, vol. 313, no. 5786, pp. 504-507, July 2006.
before
after
20. MNIST Sample Errors
Ciresan et al. “Deep Big Simple Neural Networks Excel on
Handwritten Digit Recognition,” 2010
21. Key Ideas
• Learn features from data
– Use all data
• Deep architecture
– Representation
– Computational efficiency
– Shared statistics
• Practical training
• State-of-the-art (it worked)
22. After: Cat Detector
[Diagram: unlabeled images (millions) + labeled images (few) → deep learning network → more data, automatic (deep) features]
24. This Is A Neuron
1. Sum all inputs (weighted): x = w0 + w1·z1 + w2·z2 + w3·z3
2. Nonlinearly transform: y = f(x)
[Diagram: inputs z1, z2, z3 plus a bias input of 1; weights w0…w3; activation function f, e.g. sigmoid or tanh; output y = f(x)]
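The two steps above can be sketched directly in code. This is a minimal sketch; the particular weights and inputs are made-up numbers, not values from the slide:

```python
import numpy as np

def sigmoid(x):
    # one common choice of activation function f
    return 1.0 / (1.0 + np.exp(-x))

def neuron(z, w, w0):
    # 1. sum all inputs, weighted, plus the bias w0
    x = w0 + np.dot(w, z)
    # 2. nonlinearly transform
    return sigmoid(x)

y = neuron(z=np.array([1.0, 2.0, 3.0]),
           w=np.array([0.5, -0.25, 0.1]),
           w0=0.1)
# x = 0.1 + 0.5 - 0.5 + 0.3 = 0.4, so y = sigmoid(0.4) ≈ 0.599
```

Swapping `sigmoid` for `np.tanh` gives the other activation shown on the slide.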
25. A Neural Network
Forward propagation: weighted sum inputs, produce activation, feed forward.
[Diagram: inputs (the features) weight = 13.5, n_teeth = 21, n_whiskers = 16 → hidden layer → output layer → cat / dog]
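Forward propagation through such a network is just the neuron computation repeated layer by layer. A minimal sketch, using the slide's example features; the layer sizes and random weights are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    # hidden layer: weighted sums, then nonlinear activation
    h = np.tanh(W1 @ x + b1)
    # output layer: one score per class (cat, dog)
    return sigmoid(W2 @ h + b2)

# the slide's example inputs: weight, n_teeth, n_whiskers
x = np.array([13.5, 21.0, 16.0])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(0, 0.1, (2, 4)), np.zeros(2)  # 4 hidden units -> 2 outputs

scores = forward(x, W1, b1, W2, b2)  # untrained, so the scores are arbitrary
```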
26. Training
Back propagation of error.
[Diagram: target outputs cat = 1, dog = 0; total error at top; proportional contributions going backwards through the network; inputs weight = 13.5, n_teeth = 21, n_whiskers = 16]
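Back propagation can be sketched end to end on a toy version of this network. The data rows and labels below are made up for illustration; only the feature names come from the slide:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# toy data: (weight, n_teeth, n_whiskers) rows with made-up cat(1)/dog(0) labels
X = np.array([[13.5, 21.0, 16.0],
              [30.0, 42.0, 20.0],
              [ 4.0, 26.0, 24.0],
              [25.0, 40.0, 18.0]])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize the features
y = np.array([1.0, 0.0, 1.0, 0.0])

W1, b1 = rng.normal(0, 0.5, (4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(0, 0.5, 4), 0.0               # 4 hidden units -> 1 output

def loss():
    h = np.tanh(X @ W1.T + b1)
    p = sigmoid(h @ W2 + b2)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

loss_before = loss()
lr = 0.1
for _ in range(500):
    # forward propagation
    h = np.tanh(X @ W1.T + b1)
    p = sigmoid(h @ W2 + b2)
    # back propagation: total error at the top, contributions going backwards
    dtop = (p - y) / len(y)               # cross-entropy + sigmoid gradient
    dW2, db2 = h.T @ dtop, dtop.sum()
    dh = np.outer(dtop, W2) * (1 - h**2)  # push the error through tanh
    dW1, db1 = dh.T @ X, dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
loss_after = loss()  # should be lower than loss_before
```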
27. After Training
[Diagram: the trained network; each layer’s weights can be written out as a matrix, e.g. rows [.5, -.2, 4, .15, -1, …] and [-.5, -.3, .4, 0, …]]
We can view a weight matrix as an image.
… plus performance evaluation & logging
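Viewing a weight matrix as an image amounts to reshaping one unit's incoming weight vector onto the input grid. A small sketch with a hypothetical layer shape (the 28×28 input size and 100 hidden units are assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical first-layer weights: 100 hidden units, 28x28 = 784 pixel inputs
W = rng.normal(0.0, 0.1, (100, 784))

# each row holds one hidden unit's incoming weights; reshape onto the input grid
unit = W[0].reshape(28, 28)

# rescale to [0, 1] so the matrix can be displayed as a grayscale image
img = (unit - unit.min()) / (unit.max() - unit.min())
```

After training, these images often resemble the stimuli each unit responds to, e.g. pen strokes for MNIST.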
28. Building Blocks
So many choices!
• Network Topology
  – Number of layers
  – Nodes per layer
• Layer Type
  – Feedforward
  – Restricted Boltzmann
  – Autoencoder
  – Recurrent
  – Convolutional
• Neuron Type
  – Rectified Linear Unit
• Regularization
  – Dropout
• Magic Numbers
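Two of these building blocks fit in a few lines each. A minimal sketch of the Rectified Linear Unit and (inverted) dropout; the drop probability of 0.5 is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified Linear Unit: max(0, x), applied elementwise
    return np.maximum(0.0, x)

def dropout(h, p_drop=0.5, train=True):
    # randomly zero units during training; rescale so the expected
    # activation is unchanged, and do nothing at test time
    if not train:
        return h
    mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
    return h * mask / (1.0 - p_drop)

h = relu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]))  # -> [0, 0, 0, 0.5, 2.0]
h_train = dropout(h, p_drop=0.5)
```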
29. A Deep Learning Recipe, 1.0
• Lots of data, some labels
• Train each RBM layer greedily, successively
• Add an output layer and train with labels
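The greedy layer-wise step of the recipe can be sketched with a tiny Restricted Boltzmann Machine trained by 1-step contrastive divergence (CD-1). The layer sizes, learning rate, and toy data below are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with CD-1."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0, 0.1, (n_vis, n_hid))
        self.bv, self.bh = np.zeros(n_vis), np.zeros(n_hid)

    def hid(self, v):
        return sigmoid(v @ self.W + self.bh)

    def vis(self, h):
        return sigmoid(h @ self.W.T + self.bv)

    def cd1(self, v0, lr=0.1):
        h0 = self.hid(v0)                               # positive phase
        hs = (rng.random(h0.shape) < h0).astype(float)  # sample hidden states
        v1 = self.vis(hs)                               # one Gibbs step down...
        h1 = self.hid(v1)                               # ...and back up
        n = len(v0)
        self.W  += lr * (v0.T @ h0 - v1.T @ h1) / n     # CD-1 update
        self.bv += lr * (v0 - v1).mean(axis=0)
        self.bh += lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)                  # reconstruction error

# toy binary data with structure: two prototype patterns plus 5% bit flips
protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
data = protos[rng.integers(0, 2, 200)]
data = np.abs(data - (rng.random(data.shape) < 0.05))

# greedy layer-wise pretraining: train layer 1, then train layer 2 on its output
rbm1, rbm2 = RBM(6, 4), RBM(4, 2)
err_first = rbm1.cd1(data)
for _ in range(200):
    err_last = rbm1.cd1(data)
for _ in range(200):
    rbm2.cd1(rbm1.hid(data))        # layer 2 sees layer 1's hidden activations
codes = rbm2.hid(rbm1.hid(data))    # the stacked, learned representation
```

The final supervised step of the recipe would attach an output layer on top of `codes` and train with the labels.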
30. A Few Other Important Things
• Deep Learning Recipe 2.0
  – Dropout / regularization
  – Rectified Linear Units
• Convolutional networks
• Hyperparameters
• Not just neural networks
• Practical Issues (GPU)
35. Application: Speech
“He can for example present significant university wide issues to the senate.”
[Diagram: spectrogram of the utterance — frequencies in a small time window; slide 15ms; phoneme labels]
Spectrogram: window in time -> vector of frequencies; slide; repeat.
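The window-slide-repeat procedure can be sketched in a few lines. The 15 ms hop comes from the slide; the 25 ms window length and the 16 kHz test tone are illustrative assumptions:

```python
import numpy as np

def spectrogram(signal, sample_rate, win_ms=25, hop_ms=15):
    """Window in time -> vector of frequencies; slide; repeat."""
    win = int(sample_rate * win_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * np.hamming(win)  # taper the window
        frames.append(np.abs(np.fft.rfft(frame)))            # magnitude spectrum
    return np.array(frames)  # shape: (n_windows, n_frequencies)

sr = 16000
t = np.arange(sr) / sr                 # 1 second of "audio"
sig = np.sin(2 * np.pi * 440 * t)      # a pure 440 Hz tone
S = spectrogram(sig, sr)               # energy concentrates in the 440 Hz bin
```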
36. Automatic Speech
CDBNs for speech: unlabeled TIMIT data -> convolutional DBN (trained on the unlabeled TIMIT corpus).
Experimental Results
• Speaker identification
  TIMIT Speaker identification | Accuracy
  Prior art (Reynolds, 1995)   | 99.7%
  Convolutional DBN            | 100.0%
• Phone classification
  TIMIT Phone classification   | Accuracy
  Clarkson et al. (1999)       | 77.6%
  Gunawardana et al. (2005)    | 78.3%
  Sung et al. (2007)           | 78.5%
  Petrov et al. (2007)         | 78.6%
  Sha & Saul (2006)            | 78.9%
  Yu et al. (2009)             | 79.2%
  Convolutional DBN            | 80.3%
[Image: learned first-layer bases]
Lee et al., “Unsupervised feature learning for audio classification using convolutional deep belief networks,” NIPS 2009.
37. A Long List of Others
• Kaggle
  – Merck Molecular Activation (‘12)
  – Salary Prediction (‘13)
• Learning to Play Atari Games (‘13)
• NLP – chunking, NER, parsing, etc.
• Activity recognition from video
• Recommendations
38. Deep Learning In A Nutshell
• Architectures vs. features
• Deep vs. shallow
• Automatic* features
• Lots of data vs. best technique
• Compute- vs. human-intensive
• State-of-the-art
• Breaks expert, domain barrier
• Details & tricks can be complex
http://www.deeplearning.net/
39. Interested in Deep Learning?
Connect for:
• Training Workshop (interest list)
• Projects / consulting
• Collaboration
• Questions
agardner@momentics.com
http://www.momentics.com/deep-learning/
Editor's Notes
(1:00) Thank organizers & attendees. My background: thesis. Invitation to connect. Talk in 3 parts: introduction and motivation of the topic; high-level overview of deep learning details; examples.
How many have heard of deep learning?
Joke: Wired and ad placement. Companies are acquiring talent and demonstrating use cases. Zuckerberg @ NIPS.
Growing popularity. Lots of applications motivated by vision and audio. Sensible because of connections to perception, AI and neural networks. Revolutions have participants.
Products are seeing big lift. Example of real-time translation kept it in the same voice! “I’m speaking in English and hopefully you’ll hear me speaking in Chinese in my own voice.”
Apology for omission.
As a data scientist, consume machine learning.
Consider the canonical problem: classification. Cats and dogs; cats and data scientists. In this case, we want to build a magic box that discriminates cats vs. dogs. Play on the Google cat detector: 1,000 nodes, 16,000 cores, 1 week per trial @ $1/hr = ? (June 2012). Cat detector detects better than a cat. Leaving data on the table.
Many examples, from all classes, required. Consequence -> use less data. Features require lots of engineering and work. Example here, SIFT, took over a decade for David Lowe to develop. Many examples of features: tail, fur, eyes, edges, height, etc.
Features: raw numbers to a smaller, better pile of numbers. Many examples, from all classes, required. Consequence -> use less data. Features require lots of engineering and work. Example here, SIFT, took over a decade for David Lowe to develop. Many examples of features: tail, fur, eyes, edges, height, etc. Best disciplined approach: copy and tweak. Show of hands – how many of you have experienced this?
80% of the data scientist’s job. We don’t scale – how long to get a PhD? Each loop we have to do invention and ideation. “Won a Kaggle contest using RF.” Workflow, feature engineering.
This is not always true, but good for high-variance problems. What are examples of extra data? Not just a little more data, but a lot of data. Often have a lot more data today in the connected world.
No principled way to generate features. No playbook for alien data features.
Modules that learn features. Stack them and get a hierarchical decomposition.
Hinton split time. Before & after.
Describe MNIST; boring, easy. “Everything works at 96% accuracy.”
This network achieved 0.35% error using online backprop. 6 hidden layers (2500, 2000, 1500, 1000, 500, 10) with validation & test error 0.35% & 0.32%.
Data flows from bottom to top. Affine + nonlinearity. Nonlinear regression. We have to learn the weights and bias. We have to pick the activation function.
Backprop top. Backprop global.
1000 categories. 25% -> 15% error. Acquired by Google 1/13.