Learning with side information through modality hallucination, J. Hoffman et al., CVPR2016
http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Hoffman_Learning_With_Side_CVPR_2016_paper.html
"On human motion prediction using recurrent neural networks", Julieta Martinez, Michael J. Black, Javier Romero. CVPR2017
https://arxiv.org/abs/1705.02445
"Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data",
Maximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der Smagt, ICLR2017.
[Link] https://arxiv.org/abs/1605.06432
Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynami... — Terry Taewoong Um
[Title] Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models (2018)
[Authors] Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine
[Link] https://arxiv.org/abs/1805.12114
* This paper was accepted for a spotlight presentation at NIPS 2018.
This presentation also includes some content related to the paper "Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning", Nagabandi et al. (ICRA 2018).
These slides by Terry Taewoong Um present deformable convolutional networks (Dai et al.). They discuss introducing learnable offsets into convolutional filters and region-of-interest pooling layers, allowing the network to spatially transform its sampling locations based on the input data. This helps the network better adapt to objects of different scales and aspect ratios. Experimental results show that deformable ConvNets achieve state-of-the-art performance on semantic segmentation, object detection, and other tasks. Code is available online for others to experiment with these techniques.
These are the slides from a presentation Terry T. Um gave at Kookmin University on 22 June 2014. Feel free to share them, and please let me know if anything is mistaken or unclear.
(http://t-robotics.blogspot.com)
(http://terryum.io)
A brief summary of the Lie group formulation for robot mechanics. For more details, please refer to the book "A First Course in Robot Mechanics" by Frank C. Park, available at the following link.
http://robotics.snu.ac.kr/fcp/files/_pdf_files_publications/a_first_coruse_in_robot_mechanics.pdf
(http://terryum.io)
Joint Contrastive Learning with Infinite Possibilities — taeseon ryu
Contrastive learning is a machine learning technique that learns features without any labels, based on whether two images are similar or dissimilar. It differs from conventional supervised learning: supervised learning incurs labeling cost and, being task-specific, can have lower generalizability. Contrastive learning proceeds without labels, so there is no labeling cost and generalizability can be better. This paper proposes Joint Contrastive Learning to make contrastive learning more useful. https://youtu.be/0NLq-ikBP1I
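The label-free similarity objective described above can be sketched with a toy InfoNCE-style loss; the feature vectors, temperature, and numbers below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor.

    anchor, positive: 1-D feature vectors; negatives: one vector per row.
    All vectors are L2-normalized, so dot products are cosine similarities.
    """
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    logits = np.concatenate(([a @ p], n @ a)) / tau  # positive first
    logits -= logits.max()                           # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # low when the positive pair dominates

rng = np.random.default_rng(0)
x = rng.normal(size=8)
# A near-identical "augmented view" gives a small loss; an unrelated
# vector posing as the positive gives a large one.
loss_easy = info_nce(x, x + 0.01 * rng.normal(size=8), rng.normal(size=(5, 8)))
loss_hard = info_nce(x, rng.normal(size=8), rng.normal(size=(5, 8)))
```

The loss pulls the two views of the same image together and pushes the negatives away, which is the mechanism the summary describes.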
Higher Order Fused Regularization for Supervised Learning with Grouped Parame... — Koh Takeuchi
This document presents a new regularization technique for supervised learning with grouped parameters. The technique incorporates smoothness across overlapping parameter groups using a higher order fused regularizer. An efficient network flow algorithm is developed to optimize the non-smooth convex regularizer. Experimental results on synthetic and real-world datasets show improved predictive performance over existing regularization methods for linear regression tasks. Future work is proposed to extend the approach to non-linear models and applications involving matrices or tensors.
This document outlines the course details for CS 321 Algorithm Analysis and Design. It includes information on logistics like office hours and TAs, graded work like assignments and exams, starred problems, discussion policy, and an overview of design techniques, complexity issues, and key tools covered in the course. The document also provides pre-requisites and motivates the importance of algorithms through quotes about using both theory and practice to improve each other.
Deep Learning in Recommender Systems - RecSys Summer School 2017 — Balázs Hidasi
This is the presentation accompanying my tutorial about deep learning methods in the recommender systems domain. The tutorial consists of a brief general overview of deep learning and an introduction to the four most prominent research directions of DL in recsys as of 2017. Presented during RecSys Summer School 2017 in Bolzano, Italy.
Overview of TensorFlow For Natural Language Processing — ananth
TensorFlow, recently open-sourced by Google, is one of the key frameworks that support the development of deep learning architectures. In this slide set (part 1), we get started with a few basic TensorFlow primitives. We also discuss when, and when not, to use TensorFlow.
Generative adversarial networks (GANs) can be used to approximate the posterior distributions of Bayesian neural networks (BNNs). GANs are trained to generate samples from the posterior distribution learned by a BNN using stochastic gradient Langevin dynamics (SGLD). Specifically, a Wasserstein GAN with gradient penalty (WGAN-GP) is trained to match the posterior distribution by minimizing the Wasserstein distance between samples from the BNN's SGLD-approximated posterior and samples from the GAN's generator. This adversarial distillation technique allows parallel sampling from the BNN's posterior using only the GAN's parameters, providing computational and storage advantages over traditional Markov chain Monte Carlo methods for BNNs.
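As a rough illustration of the SGLD update that the distillation above relies on, here is a toy 1-D sketch: gradient ascent on the log-posterior plus step-size-scaled Gaussian noise. The standard-normal target and all constants are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def sgld_samples(grad_log_post, theta0, eps=1e-2, n=20000, seed=0):
    """Stochastic gradient Langevin dynamics: each step follows the
    log-posterior gradient and injects N(0, eps) noise, so the chain
    samples from the posterior rather than collapsing to its mode."""
    rng = np.random.default_rng(seed)
    theta, out = theta0, []
    for _ in range(n):
        theta = theta + 0.5 * eps * grad_log_post(theta) \
                + np.sqrt(eps) * rng.normal()
        out.append(theta)
    return np.array(out)

# Toy posterior: standard normal, so grad log p(theta) = -theta.
samples = sgld_samples(lambda t: -t, theta0=3.0)
```

In the paper's setting such chains provide the "teacher" posterior samples that the WGAN-GP generator is then trained to match.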
This document discusses how PageRank fails as a ranking algorithm in growing networks where nodes enter the network over time. Through numerical simulations of models of growing networks, the study finds that PageRank is biased based on how long nodes have been in the network, rather than their true importance, measured by fitness. The indegree of nodes provides a less biased ranking than PageRank in these growing networks. The study also analyzes empirical data and finds PageRank does not correlate as strongly with total relevance, a measure of importance, as indegree does.
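For reference, the PageRank scores the study critiques come from a power iteration like the following minimal sketch; the 4-node graph and damping factor are illustrative, not from the study (node 3 plays the "newcomer" with no in-links):

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power iteration for PageRank on a dense adjacency matrix
    (adj[i, j] = 1 if i links to j); dangling nodes spread uniformly."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    trans = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * trans.T @ r   # teleport + follow links
    return r

# Old node 0 has accumulated in-links; newcomer 3 has none yet.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
scores = pagerank(adj)
```

The newcomer gets only the teleportation mass regardless of its fitness, which is exactly the age bias the study measures in growing networks.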
In this presentation we discuss the hypothesis of MaxEnt models, describe the role of feature functions and their applications to Natural Language Processing (NLP). The training of the classifier is discussed in a later presentation.
Relational Transfer in Reinforcement Learning — butest
This document discusses transfer learning in humans and machines. It covers different types of transfer learning including hierarchical curriculum, multilingualism, and inductive logic programming. It also discusses approaches to transfer learning in reinforcement learning such as starting-point methods, hierarchical methods, and imitation methods. The author's research focuses on skill transfer and macro transfer in reinforcement learning domains like RoboCup soccer. The results show that skill transfer and macro transfer can improve performance on new related tasks.
This document presents an overview of Lie group formulations for robot mechanics. It discusses rigid body motion using SO(3) and SE(3) transformations. It also covers generalized velocity and force, dynamics of open chain systems, hybrid dynamics, and recursive inverse dynamics algorithms. The document is authored by Terry Taewoong Um from the University of Waterloo and is intended to summarize concepts from another source on Lie group dynamics.
This document provides an introduction to deep neural networks using TensorFlow. It begins with an example classification problem to motivate deep learning. It then discusses three types of deep learning models: unsupervised learning, convolutional neural networks, and recurrent neural networks. The remainder of the document demonstrates how to build deep learning models with TensorFlow, including logistic regression, multi-layer perceptrons, and convolutional neural networks. Code examples are provided in Jupyter notebooks on GitHub.
Large Scale Deep Learning with TensorFlow — Jen Aman
Large-scale deep learning with TensorFlow allows storing and performing computation on large datasets to develop computer systems that can understand data. Deep learning models like neural networks are loosely based on what is known about the brain and become more powerful with more data, larger models, and more computation. At Google, deep learning is being applied across many products and areas, from speech recognition to image understanding to machine translation. TensorFlow provides an open-source software library for machine learning that has been widely adopted both internally at Google and externally.
GStreamer-VAAPI: Hardware-accelerated encoding and decoding on Intel hardware... — Igalia
By Víctor M. Jáquez.
Slides at https://github.com/01org/gstreamer-vaapi/tree/master/docs/slides/gstconf2015
GStreamer-VAAPI is a set of GStreamer elements (vaapidecode, vaapipostproc, vaapisink, and several encoders) and libgstvaapi, a library that wraps libva with GObject/GStreamer semantics.
This talk is about VAAPI and its integration with GStreamer. We show a general overview of the VAAPI architecture, the role of libgstvaapi, and finally the design of the GStreamer elements. Afterwards we show what lies ahead in the development of GStreamer-VAAPI, and the current problems and challenges.
The document provides an overview of the global mobile market in 2016, including key insights about app revenues, active devices, and consumers. Some of the main points covered are:
- Global direct consumer spending on apps will reach $44.8 billion in 2016 and be led by games, though non-game revenues are growing faster.
- China has overtaken the US as the largest app market, expected to generate $11.9 billion in revenues compared to $9.4 billion for the US.
- Apple has the largest share of active mobile devices worldwide at 34.8%, followed by Samsung, Huawei, Xiaomi, and Lenovo.
- There will be 2.3 billion active
Slides explaining how AlphaGo works.
English version: http://www.slideshare.net/ShaneSeungwhanMoon/how-alphago-works
- Teaser for non-specialists: how do you actually build a Go-playing AI? Everyone talks about deep learning, but what is it? And where else could a Go AI be used?
- Teaser for specialists: interestingly, AlphaGo's main components are just a CNN (convolutional neural network) plus a reinforcement learning framework and MCTS (Monte Carlo tree search), both popular for some 30 years. None of the ingredients is new, but the way they are put to use is refreshing.
Slides from the 2016 iFunFactory Dev Day.
Talk title: Developing in a Linux environment without Linux, using Docker
Speaker: Jinwook Kim, CTO
<2016>
- Date: Wednesday, 28 September 2016, 12:00-14:20
- Venue: training room, B1, Nexon Pangyo office
This document discusses optimizing object-oriented code for performance. It begins with an overview of object-oriented programming and how CPU and memory performance have changed significantly since C++ was first created. It then analyzes a common scene tree example and finds it is slow due to excessive cache misses from scattered data. The solution is to restructure the code to have homogeneous, sequential data by allocating nodes and matrices contiguously in memory. Processing data in order and removing virtual function calls further improves performance. Prefetching is also able to reduce cache misses, resulting in a 6x speedup over the original implementation. The key lessons are to optimize for data locality and consider data-oriented design principles when performance is important.
The document discusses cloud networking, software-defined networking (SDN), and OpenStack. It covers characteristics of cloud networking like multi-tenancy and east-west traffic. SDN is described as enabling API-driven networking. Issues with OpenStack networking are mentioned. The document raises questions about SDN standards, controllers, and viable SDN solutions.
Deep learning (Machine learning) tutorial for beginners — Terry Taewoong Um
A machine learning / deep learning tutorial for non-specialists.
This is a deep learning (machine learning) tutorial for beginners.
Contents
1. Introduction to machine learning & deep learning
2. DL methods:
Convolutional neural networks (CNN)
Recurrent neural networks (RNN)
Variational autoencoder (VAE)
Generative adversarial networks (GAN)
3. Can we believe deep neural networks?
These slides were prepared for a two-hour special lecture at Dong-A University in Busan on 16 July 2018. They are aimed at non-specialists, emphasizing conceptual understanding over equations. I will also walk through them later on Terry's Deep Learning Talk~ https://www.facebook.com/deeplearningtalk/
https://www.youtube.com/playlist?list=PL0oFI08O71gKEXITQ7OG2SCCXkrtid7Fq
The document discusses learning programming through MOOCs and machine learning. It provides data on a MOOC with over 160,000 students from 209 countries. It analyzes student error messages, submissions, and interactions to improve programming instructions. However, programming languages can be ambiguous and students struggle with different concepts. The document advocates for mastery learning through one-on-one tutoring and continual course improvements using data and machine learning.
What knowledge bases know (and what they don't) — srazniewski
This document discusses knowledge bases and their completeness and recall. It begins by introducing knowledge bases and some examples of factual knowledge bases that have been created. It then discusses how knowledge bases are useful for question answering and language generation. However, it notes that knowledge bases know only a small portion of what is actually true. It discusses several approaches that have been used to assess knowledge base completeness, including rule mining to predict completeness based on patterns in the data, information extraction to add new facts, and analyzing data presence to estimate completeness of single-value properties. The document outlines challenges with each of these approaches and aims to better understand what knowledge bases are approximating in order to improve assessments of their recall.
Abstract : For many years, Machine Learning has focused on a key issue: the design of input features to solve prediction tasks. In this presentation, we show that many learning tasks from structured output prediction to zero-shot learning can benefit from an appropriate design of output features, broadening the scope of regression. As an illustration, I will briefly review different examples and recent results obtained in my team.
This document provides an overview of a machine learning course. It discusses the schedule, prerequisites, evaluation criteria, and preliminary program topics. The course will cover machine learning techniques including decision trees, instance-based learning, Bayesian learning, sequential data models, and combining learners. Students will complete lab assignments using a machine learning package and the exam will evaluate both lab assignments and theoretical questions. The goal of the course is to introduce students to machine learning approaches and their applications through examples, assignments, and lectures.
Deep Learning & NLP: Graphs to the Rescue! — Roelof Pieters
This document provides an overview of deep learning and natural language processing techniques. It begins with a history of machine learning and how deep learning advanced beyond early neural networks using methods like backpropagation. Deep learning methods like convolutional neural networks and word embeddings are discussed in the context of natural language processing tasks. Finally, the document proposes some graph-based approaches to combining deep learning with NLP, such as encoding language structures in graphs or using finite state graphs trained with genetic algorithms.
Deep learning is a type of machine learning that uses neural networks with multiple layers between the input and output layers. It allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Deep learning has achieved great success in computer vision, speech recognition, and natural language processing due to recent advances in algorithms, computing power, and the availability of large datasets. Deep learning models can learn complex patterns directly from large amounts of unlabeled data without relying on human-engineered features.
This document discusses AVL trees, which are self-balancing binary search trees. It covers:
- AVL trees maintain balance by ensuring the heights of left and right subtrees differ by at most one.
- Insertion and deletion may cause imbalance, requiring single or double rotations to rebalance the tree.
- Insertion can cause two types of imbalance, fixed by a single or a double rotation respectively. Deletion is handled similarly, but rotations may need to propagate up the tree.
- Pseudocode and Java code examples are provided for AVL tree insertion, deletion, and rebalancing rotations.
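The insertion-and-rotation logic summarized above can be sketched compactly; this is a minimal Python sketch of AVL insertion with the four rebalancing cases, not the document's own code:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.h = key, None, None, 1

def height(n): return n.h if n else 0
def balance(n): return height(n.left) - height(n.right) if n else 0
def update(n): n.h = 1 + max(height(n.left), height(n.right))

def rot_right(y):                 # fixes a left-heavy subtree
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rot_left(x):                  # fixes a right-heavy subtree
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    b = balance(node)
    if b > 1 and key < node.left.key:      # LL: single right rotation
        return rot_right(node)
    if b < -1 and key > node.right.key:    # RR: single left rotation
        return rot_left(node)
    if b > 1:                              # LR: double rotation
        node.left = rot_left(node.left)
        return rot_right(node)
    if b < -1:                             # RL: double rotation
        node.right = rot_right(node.right)
        return rot_left(node)
    return node

root = None
for k in range(1, 8):   # ascending inserts force repeated rebalancing
    root = insert(root, k)
```

Inserting 1..7 in ascending order would degenerate an ordinary BST into a list; the rotations keep the tree at height 3 with 4 at the root.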
This document discusses generic programming in Java. It begins with an introduction to generic methods, including how to define generic methods that can accept parameters of any data type. Examples are provided of generic methods using different data types like integers, doubles, and strings. The document also discusses passing parameters to generic methods, and defining generic methods that can accept a variable number of arguments using arrays, ellipses, and object classes. Finally, the basics of generic classes are covered, including why they are useful and examples of defining generic classes to handle arrays of different data types.
This document discusses generic programming in Java. It begins with an introduction to generic methods, including how to define generic methods that can accept arguments of different data types. Examples are provided of generic methods using type parameters, varargs methods using arrays and ellipsis, and variable methods using the Object class. The document then covers generic classes, explaining why they are useful and how to define a generic class. It provides examples of generic classes defined to handle arrays and abstract data types. The key benefits of generics in Java are that they allow code to be written that works with multiple data types rather than a specific one.
The document provides an overview of machine learning and discusses various concepts related to applying machine learning to real-world problems. It covers topics such as feature extraction, encoding input data, classification vs regression, evaluating model performance, and challenges like overfitting and underfitting models to data. Examples are given for different types of learning problems, including text classification, sentiment analysis, and predicting stock prices.
Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task. The document discusses several transfer learning techniques including fine-tuning, multitask learning, domain adaptation, and zero-shot learning. Fine-tuning involves using a pre-trained model and updating its parameters on a new task with limited labeled data to prevent overfitting. Multitask learning trains a model simultaneously on multiple related tasks by using a shared representation. Domain adaptation aligns the distributions of the source and target domains using techniques like domain-adversarial training. Zero-shot learning recognizes new classes not seen during training by learning a semantic embedding space relating classes and attributes.
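The fine-tuning idea in the summary (reuse frozen pre-trained features, train only a small task-specific head) can be sketched in numpy; the random `W_frozen` backbone and the synthetic task are illustrative assumptions standing in for a real pre-trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a frozen non-linear feature map.
W_frozen = rng.normal(size=(2, 16))
def features(x):
    return np.tanh(x @ W_frozen)

# New task with limited labeled data, defined directly in input space.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tune only the new head: logistic regression on frozen features.
F = features(X)
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(1000):
    p = 1 / (1 + np.exp(-(F @ w + b)))   # predicted probabilities
    g = p - y                            # gradient of the log loss
    w -= lr * F.T @ g / len(X)
    b -= lr * g.mean()

acc = ((F @ w + b > 0) == (y == 1)).mean()
```

Freezing the backbone keeps the number of trainable parameters small, which is the overfitting safeguard the summary mentions for limited labeled data.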
This document provides an overview of the CS760 Machine Learning course taught by David Page at the University of Wisconsin. The course will cover a broad survey of machine learning algorithms and applications over 30 class meetings. Topics will include both theoretical and practical aspects of supervised learning algorithms like naive Bayes, decision trees, neural networks, and support vector machines. Students will complete programming homework assignments applying various machine learning algorithms and a midterm exam. The primary goals of the course are to understand what learning systems should do and how existing systems work.
This document outlines the syllabus for a machine learning course. It introduces the instructor, teaching assistant, required textbook, and meeting schedule. It describes the course style as primarily algorithmic and experimental, covering many ML subfields. The goals are to understand what a learning system should do and how existing systems work. Background knowledge in languages, AI topics, and math is assumed, but no prior ML experience is needed. Requirements include biweekly programming homework, a midterm exam, and a final project. Grading will be based on homework, exam, project, and discussion participation. Policies on late homework and academic misconduct are also provided.
This document provides information about a machine learning course including logistics, content, and expectations. The course consists of 2 lectures and 1 lab session per week over 10 weeks. Assessments include a coursework, exam, and problem sets completed during lab sessions. Students are encouraged to attend all sessions, complete assignments, ask questions, and provide feedback. The course will cover key topics in deep learning including supervised, unsupervised, and reinforcement learning using neural networks applied to domains like computer vision, natural language processing, and more. Landmarks in the field and the 2018 Turing Award winners are also mentioned.
This document provides an overview of machine learning, including the different types of machine learning problems and algorithms. It discusses supervised learning problems like classification and regression where labels are provided, unsupervised learning problems where the goal is to find structure in unlabeled data like clustering and dimensionality reduction, and reinforcement learning problems where the learning signal comes from rewards. It also covers topics like generalization, learning as compression, nearest neighbor classification, Bayes' rule, Naive Bayes classifiers, and Bayesian networks as graphical models for representing complex relationships between variables.
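The Bayes' rule and naive Bayes material mentioned above can be made concrete with a tiny Bernoulli naive Bayes classifier; the four-example dataset is purely illustrative:

```python
import numpy as np

def fit_nb(X, y, alpha=1.0):
    """Bernoulli naive Bayes: class priors plus per-class feature
    probabilities with Laplace smoothing (alpha)."""
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    probs = np.array([(X[y == c].sum(0) + alpha) /
                      ((y == c).sum() + 2 * alpha) for c in classes])
    return classes, priors, probs

def predict_nb(model, x):
    classes, priors, probs = model
    # Bayes' rule with the naive independence assumption:
    # log P(c | x) ∝ log P(c) + sum_j log P(x_j | c)
    ll = np.log(priors) + (x * np.log(probs) +
                           (1 - x) * np.log(1 - probs)).sum(1)
    return classes[np.argmax(ll)]

X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])
y = np.array([0, 0, 1, 1])
model = fit_nb(X, y)
pred = predict_nb(model, np.array([1, 1, 0]))
```

Working in log space avoids underflow, and the smoothing keeps unseen feature values from zeroing out a whole class.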
Probing the Efficacy of the Algebra Project: A Summary of Findings — EDD SFSU
The document summarizes findings from a study that compared outcomes of students in the Algebra Project program to outcomes of students not in the program. The study used a mixed methods approach including quantitative analysis of test scores and survey responses, as well as qualitative analysis of teacher interviews and student focus groups.
Key findings include: 1) Quantitative analysis found no significant differences between program and non-program students on algebra test scores or affective survey measures, with a few exceptions. 2) Teacher interviews suggested the Algebra Project curriculum required extensive reworking to align with standards. 3) Student focus groups revealed differences in classroom dynamics and student preferences for the different approaches.
This document provides an outline for a presentation on machine learning and deep learning. It begins with an introduction to machine learning basics and types of learning. It then discusses what deep learning is and why it is useful. The main components and hyperparameters of deep learning models are explained, including activation functions, optimizers, cost functions, regularization methods, and tuning. Basic deep neural network architectures like convolutional and recurrent networks are described. An example application of relation extraction is provided. The document concludes by listing additional deep learning topics.
This document provides an overview of sorting algorithms including bubble sort, insertion sort, shellsort, and others. It discusses why sorting is important, provides pseudocode for common sorting algorithms, and gives examples of how each algorithm works on sample data. The runtime of sorting algorithms like insertion sort and shellsort are analyzed, with insertion sort having quadratic runtime in the worst case and shellsort having unknown but likely better than quadratic runtime.
Learning with side information through modality hallucination (2016)
1. Terry Taewoong Um (terry.t.um@gmail.com)
University of Waterloo
Department of Electrical & Computer Engineering
LEARNING WITH SIDE INFORMATION THROUGH MODALITY HALLUCINATION (2016)
2. BEYOND SUPERVISED / UNSUPERVISED
supervised learning / semi-supervised learning / weakly-supervised learning
"Is object localization for free? Weakly-supervised learning with convolutional neural networks (2015)", M. Oquab et al.
"Bayesian Semisupervised Learning with Deep Generative Models (2017)", J. Gordon et al.
• Various learning scenarios
• Learning with side information (modality)
[Figure: (training) vs (test) input modalities]
3. MISSING INPUT DURING TEST
[Figure: (training) RGB + depth → "Couch"; (test) the depth input is missing → ??? — zero-padding…?]
4. MISSING INPUT DURING TEST
[Figure: (training) "Couch"; (test) ??? — generate the missing depth input]
5. MISSING INPUT DURING TEST
[Figure: (training) and (test) pipelines with the missing modality generated — "Couch" / ???]
6. HALLUCINATION
[Figure: (training) and (test) network architectures]
The red and blue branches should produce similar features.
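The matching constraint above can be sketched as a simple "hallucination" loss: an L2 penalty between the mid-level activations of the depth branch and the RGB-fed hallucination branch. Below is a minimal NumPy sketch, assuming the feature maps have already been extracted; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def hallucination_loss(feat_h, feat_d):
    """Squared L2 distance between the hallucination branch's features
    (computed from RGB only) and the depth branch's features.
    Minimizing it pushes the RGB-only branch to mimic the depth
    representation."""
    diff = feat_h - feat_d
    return 0.5 * float(np.sum(diff ** 2))

# Toy pool5-like feature maps, shape (channels, height, width)
rng = np.random.default_rng(0)
f_d = rng.standard_normal((256, 6, 6))                     # depth features
f_h = f_d + 0.01 * rng.standard_normal((256, 6, 6))        # noisy imitation

print(hallucination_loss(f_d, f_d))  # identical features -> 0.0
```

In the paper this loss is applied at a mid-level layer (pool5, per the later slide) alongside the usual task losses, so gradients from the depth features shape the RGB-only branch during training.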
7. RELATED WORKS
• RGB-D detection: exploit depth images
• Transfer learning and domain adaptation: transfer the knowledge from a depth image to an RGB image
• Learning using privileged information: training with a teacher
(x: X-ray, x*: clinician's interpretation, y: cancer Y/N)
• Distillation: the output from one network is used as the target for a new network.
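The distillation idea in the last bullet — one network's softened outputs serving as targets for another — can be sketched as a soft-target cross-entropy. This is a minimal sketch; the temperature value and function names are illustrative assumptions, not from the cited works.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a higher T yields softer targets."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's soft targets and the
    student's temperature-scaled predictions."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -float(np.sum(p_teacher * log_p_student))

teacher = np.array([5.0, 1.0, -2.0])  # teacher is confident in class 0
student = np.array([4.0, 0.5, -1.0])
loss = distillation_loss(student, teacher)
```

The loss is minimized when the student's softened distribution matches the teacher's, which is what lets the student absorb the teacher's "dark knowledge" about class similarities.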
11. SEVERAL ISSUES
• Training & initialization: first train the RGB-Net and D-Net, then copy the D-Net to the H-Net
• Which layer to hallucinate? Pool5
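The initialization step above — copying the trained depth network into the hallucination network — amounts to a deep copy of parameters, so that subsequent hallucination-loss updates to H-Net leave D-Net untouched. A minimal sketch with a plain dict as the parameter store; the parameter names are hypothetical.

```python
import numpy as np

def init_hallucination_net(depth_params):
    """Initialize H-Net from the trained D-Net: copy every parameter
    tensor so later fine-tuning of H-Net does not mutate D-Net."""
    return {name: w.copy() for name, w in depth_params.items()}

# Hypothetical trained D-Net parameters
d_net = {"conv1.weight": np.ones((8, 3, 3, 3)),
         "fc.weight": np.zeros((10, 128))}

h_net = init_hallucination_net(d_net)
h_net["conv1.weight"] += 0.1          # fine-tune H-Net only
```

Starting H-Net from D-Net's weights gives the RGB-fed branch a sensible target representation from the outset, rather than learning to match depth features from a random initialization.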
12. RESULTS
• With a new dataset (PASCAL VOC 2007)
• With the trained dataset (NYUD2)
14. SUMMARY
• If you have a missing modality at test time (or an additional modality at training time), hallucinate!
• A good idea, but not an in-depth understanding…
• How can an RGB image "imagine" its missing depth image? (Can we visualize it?)
• Is the learned H-Net generalizable to new images?
• Is this method effective for other modalities as well?
• Can we propose a domain-specific hallucination architecture?
• We may exploit more information (modalities) at training time than at run-time
• Beyond supervised / unsupervised settings…