Face Feature Recognition System with Deep Belief Networks, for Korean/KIISE T... - Mad Scientists
I submitted a thesis, <face>, to KIISE in 2014.
In this presentation, I explain why I used deep learning to find facial features and what the limitations of previous methods were.
DSH: Data Sensitive Hashing for High-Dimensional k-NN Search - WooSung Choi
Gao, Jinyang, et al. "DSH: Data Sensitive Hashing for High-Dimensional k-NN Search." Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data. ACM, 2014.
[20140830, PyCon 2014] Network Analysis with NetworkX - Kyunghoon Kim
UNIST
School of Natural Science, Department of Mathematical Sciences, Kyunghoon Kim
Basic network analysis
Python Library NetworkX Tutorial, Korean Version
http://www.pycon.kr/2014/program/7
In the SlideShare viewer, the links in the slides do not all seem to work.
If you save the slides and view them locally, all of the links will work.
When layers are added to a neural network (that is, when the network becomes deep), training becomes difficult. This talk introduces the DBN and the AE from the perspective of pretraining, which was proposed to relieve that difficulty. It also introduces the CNN, which resolved the problem with a different approach and only later came to be regarded as a deep neural network.
The RBM used in DBNs and the AE, together with the GAN, are the three leading methods of unsupervised learning. The CNN is the undisputed champion of image learning.
A summary of the "Deep Reinforcement Learning for Everyone" lectures
http://hunkim.github.io/ml/
Code used in the exercises:
https://github.com/freepsw/tensorflow_examples/tree/master/20.RL_by_SungKim
Common Design for Distributed Machine Learning - Junyoung Park
This document discusses common designs for distributed machine learning and deep learning. It covers techniques like data parallelism, model parallelism, and asynchronous vs synchronous algorithms. It provides examples of how techniques like distributed decision trees, random forests, gradient boosted trees, and hyperparameter tuning are implemented in distributed frameworks. It also discusses challenges in distributed deep learning like model parallelism and optimization algorithms for training very large neural networks across thousands of processors.
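Synchronous data parallelism, one of the techniques the summary mentions, can be sketched in a few lines: each worker computes a gradient on its own shard of the data, and the averaged gradient updates a shared model. This is a minimal NumPy illustration on a least-squares problem, not code from the presentation; the worker/shard names are made up for the example.

```python
import numpy as np

def worker_gradient(w, X, y):
    """Least-squares gradient computed on one worker's shard of the data."""
    return 2 * X.T @ (X @ w - y) / len(y)

def synchronous_step(w, shards, lr=0.1):
    """One synchronous data-parallel step: every worker computes a gradient
    on its shard, then the averaged gradient updates the shared model."""
    grads = [worker_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

# Toy data split evenly across 4 simulated "workers"
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])          # noiseless targets
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

w = np.zeros(3)
for _ in range(200):
    w = synchronous_step(w, shards)
```

Because the shards are equal-sized, averaging the per-shard gradients equals the full-batch gradient; asynchronous variants drop the barrier and apply each worker's gradient as it arrives, trading consistency for throughput.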
Continuous integration (CI) is about automating testing of code changes. This document discusses different types of tests like unit, functional, and integration testing. It provides examples of writing tests using the unittest framework in Python. Plugins like pytest-pep8 and pytest-cov can be used to check code style and test coverage. Fabric and Gitlab CI can help automate deployment and setup of code. The CI process pulls Docker images from a registry to run tests on code commits, then deploys passed code to production servers.
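As a minimal illustration of the unit-testing style the summary describes, here is a small stdlib `unittest` example; the `slugify` function is a hypothetical function under test, not code from the document. The suite is run programmatically here so the result can be inspected, instead of via `python -m unittest`.

```python
import unittest

def slugify(title):
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    """Unit tests written with the stdlib unittest framework."""
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  a   b "), "a-b")

# Run the suite programmatically and collect the result
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a CI pipeline this is the kind of suite the runner executes on every commit; a non-zero exit (any failed test) blocks the deploy step.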
Neural networks are machine learning models inspired by the human brain. The Perceptron model in 1957 laid the foundation for neural networks, followed by the multi-layer Perceptron in 1969. A key development was the backpropagation algorithm in 1980 that allowed neural networks to learn from examples through backward propagation of errors.
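The backward propagation of errors mentioned above can be shown concretely on a tiny multi-layer network. This NumPy sketch trains a 2-8-1 sigmoid network on XOR (the classic problem a single-layer perceptron cannot solve); it is a from-scratch illustration of the algorithm, not the document's code, and the layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error layer by layer
    d_out = (out - y) * out * (1 - out)           # error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)            # error pushed back to hidden
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)
```

Each backward step is just the chain rule: the output-layer error is multiplied by the weights to obtain the hidden-layer error, which is exactly the "backward propagation of errors" the 1980s work formalized.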
This document discusses support vector machines (SVM), a supervised machine learning algorithm used for classification and regression analysis. It can perform both linear and nonlinear classification by using kernels to transform data into a higher dimension. Common kernels include linear, polynomial, radial basis function (RBF), and sigmoid. The document also mentions using SVM for tasks on Kaggle and provides a study plan and resources for learning more about SVM from Stanford University's CS231n course.
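A nonlinear SVM classification of the kind described above takes only a few lines with scikit-learn. This sketch uses the RBF kernel on a toy two-moons dataset; the dataset and hyperparameter values are illustrative choices, not taken from the document.

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Two interleaved half-moons: not linearly separable in the input space
X, y = datasets.make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# RBF kernel implicitly maps the data into a higher-dimensional space
# where a linear separating hyperplane exists
clf = svm.SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

Swapping `kernel="rbf"` for `"linear"`, `"poly"`, or `"sigmoid"` selects the other kernels the summary lists; `C` trades margin width against training errors.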
8. Initial Centroid
Depending on where the initial centroids are placed, the desired result may not come out.
How the initial values are chosen is important.
Random initialization risks falling into a local optimum (so run it several times).
Most implementations apply the k-means++ algorithm by default.
9. k-means++: The Advantages of Careful Seeding
Pick a random initial centroid first
Calculate the distance D(x) to the nearest chosen centroid
Choose the next centroid with probability proportional to D(x)^2
This determines the k initial centroids so that they are not clustered together, while avoiding outliers.
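The three seeding steps above can be sketched directly in NumPy. This is a minimal illustration of the k-means++ initialization, assuming Euclidean distance; the function name and the toy three-cluster dataset are made up for the example.

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: first centroid uniformly at random, then each
    next centroid sampled with probability proportional to D(x)^2, the
    squared distance to the nearest centroid chosen so far."""
    centroids = [X[rng.integers(len(X))]]                 # random first centroid
    for _ in range(k - 1):
        # D(x)^2 for every point: squared distance to its nearest centroid
        d2 = ((X[:, None, :] - np.array(centroids)) ** 2).sum(-1).min(axis=1)
        # Sample the next centroid with probability proportional to D(x)^2
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)

# Three well-separated clusters: careful seeding should place one
# centroid near each cluster instead of two inside the same one
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([rng.normal(c, 0.1, size=(50, 2)) for c in centers])
C = kmeans_pp_init(X, k=3, rng=rng)
```

Because sampling is weighted by D(x)^2 rather than taking the farthest point outright, far-away clusters are strongly favored while a single extreme outlier is unlikely to be picked, which is exactly the trade-off the slide describes.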