PR-232: AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
Paper link: https://arxiv.org/abs/2003.03384
Video presentation link: https://youtu.be/J__uJ79m01Q
PR-272: Accelerating Large-Scale Inference with Anisotropic Vector Quantization (Sunghoon Joo)
PR-272: Accelerating Large-Scale Inference with Anisotropic Vector Quantization
[Guo et al., ICML 2020]
Paper link: https://arxiv.org/abs/1908.10396
Video presentation link: https://youtu.be/cU46yR-A0cs
reviewed by Sunghoon Joo
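The paper's key idea is a score-aware (anisotropic) quantization loss: the component of the quantization residual parallel to the datapoint distorts inner-product scores more than the orthogonal component, so it is weighted more heavily. A minimal NumPy sketch of that decomposition (the weights here are illustrative assumptions; the paper derives them from the score distribution):

```python
# Minimal sketch of an anisotropic quantization loss, using NumPy.
# The weights h_par and h_orth are assumptions for illustration; the
# paper derives them from the inner-product score distribution.
import numpy as np

def anisotropic_loss(x, x_quantized, h_par=4.0, h_orth=1.0):
    """Penalize the residual component parallel to x more than the
    orthogonal component, since it distorts inner-product scores most."""
    r = x - x_quantized                 # quantization residual
    x_unit = x / np.linalg.norm(x)      # direction of the datapoint
    r_par = np.dot(r, x_unit) * x_unit  # residual parallel to x
    r_orth = r - r_par                  # residual orthogonal to x
    return h_par * np.sum(r_par ** 2) + h_orth * np.sum(r_orth ** 2)

x = np.array([1.0, 2.0, 3.0])
q = np.array([1.1, 1.8, 3.2])
print(anisotropic_loss(x, q))
```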
PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi... (Jinwon Lee)
Hello, this is the 330th paper review from PR-12, the TensorFlow Korea paper-reading group.
Today I reviewed a very Google-style paper that releases no fewer than 50,000 trained ViT models. ViT is gradually replacing CNNs, but unlike CNNs it has little inductive bias,
so good performance requires either a huge amount of data or heavy use of augmentation and regularization.
Until now, however, there had been no comparative study of ViT's accuracy and speed across all these variations: different datasets, data sizes, model sizes, augmentation methods, regularization settings, and so on.
This paper pulls off that difficult feat(?), and the experiments over a huge number of ViTs yield several important findings.
In summary:
1. With well-chosen augmentation and regularization, 1/10 of the data is usually enough to roughly match the performance of training on the full data, though not always.
Put the other way around: with 10x the data, you can reach good performance without augmentation or regularization.
2. For downstream tasks, transfer learning from a model pre-trained on a large dataset beats training from scratch.
3. When doing transfer learning, pre-trained models trained on more data are better.
4. Augmentation/regularization help little when data is plentiful, and of the two, augmentation helps more.
5. When many pre-trained models are available, simply picking the one that performed best upstream works reasonably well.
6. If you want speed, don't switch to a smaller model; increase the patch size instead, as that loses much less accuracy (see the sketch right after this list).
That's the summary.
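To make finding 6 concrete, here is a tiny illustration (my own numbers, not from the paper) of why the patch size is the cheaper knob: ViT's token count, and hence its quadratic self-attention cost, falls with the square of the patch size.

```python
# Why a larger patch size speeds up ViT: the token count (and the
# quadratic self-attention cost) shrinks with patch size squared.
# Illustrative only.
def vit_tokens(image_size: int, patch_size: int) -> int:
    return (image_size // patch_size) ** 2

for p in (8, 16, 32):
    n = vit_tokens(224, p)
    print(f"patch {p:2d}: {n:4d} tokens, attention cost ~ {n**2:>9,d}")
# patch  8:  784 tokens, attention cost ~   614,656
# patch 16:  196 tokens, attention cost ~    38,416
# patch 32:   49 tokens, attention cost ~     2,401
```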
There are many interesting results, so please see the video below for details!
Thank you!
Video link: https://youtu.be/A3RrAIx-KCc
Paper link: https://arxiv.org/abs/2106.10270
PR-207: YOLOv3: An Incremental Improvement (Jinwon Lee)
This is the 207th paper review from the TensorFlow Korea PR12 paper-reading group.
This paper is YOLO v3.
It is so well known that it hardly needs an introduction. Among object detection algorithms, YOLO is a very distinctive one-stage algorithm. The paper explains, one by one, what was applied after YOLO v2 (YOLO9000) to improve performance. It also criticizes MS COCO's averaged mAP metric and discusses how mAP should be evaluated; please see the video for details!
Paper link: https://arxiv.org/abs/1804.02767
Video link: https://youtu.be/HMgcvgRrDcA
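Since the review touches on the mAP debate, here is a minimal, self-contained sketch (my own, not from the paper) of the IoU computation those mAP thresholds are applied to:

```python
# Minimal IoU (intersection over union) sketch: COCO's "average mAP"
# averages AP over IoU thresholds 0.50:0.05:0.95, while the YOLOv3 paper
# argues a single IoU=0.5 threshold (PASCAL-style) is closer to what
# humans perceive. Boxes are (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```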
MLP-Mixer image processing 210613 deep learning paper review! (taeseon ryu)
Hello, this is the Deep Learning Paper Reading Group!
Today's paper is titled MLP-Mixer.
It is, as of this writing, only on arXiv and comes from the Google Brain team.
CNNs are the layers most widely used in computer vision, but recently networks like the Transformer have started entering the vision domain, achieving SOTA in several areas. This paper succeeds in reaching results competitive with recent work using only multi-layer perceptrons.
허다운 from the image processing team kindly provided a detailed review of the paper! Thanks in advance for your interest!
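As a rough picture of what the paper builds, here is a minimal PyTorch sketch of one Mixer block as described in the paper: a token-mixing MLP applied across patches, then a channel-mixing MLP applied per patch, each with LayerNorm and a skip connection (the hidden sizes here are arbitrary assumptions):

```python
# A minimal MLP-Mixer block sketch in PyTorch: token-mixing MLP over the
# patch axis, then channel-mixing MLP over the feature axis, each with a
# skip connection. Hidden sizes are illustrative.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, num_tokens, dim, token_hidden=256, channel_hidden=512):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                  # x: (batch, tokens, dim)
        y = self.norm1(x).transpose(1, 2)  # mix across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(2, 196, 512)
print(MixerBlock(196, 512)(x).shape)       # torch.Size([2, 196, 512])
```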
To briefly introduce Auto DeepLab first: it is a model for the semantic segmentation
task. The authors wanted to generate the segmentation network itself through machine learning. Architecture search is a representative AutoML method,
and that is exactly why the paper is titled Auto DeepLab: it uses AutoML. On the AutoML side the authors drew on the DARTS paper, and on the segmentation side they drew heavily on DeepLab V3. 김선옥 from the image processing team kindly provided a detailed review of the paper!
https://youtu.be/2886fuyKo9g
The title alone looks interesting. Today's paper from the Deep Learning Paper Reading Group is DEAR: Deep Reinforcement Learning for Online Advertising Impression in Recommender Systems, an online recommendation system based on reinforcement learning. Some details are not publicly disclosed, but the ideas alone make it well worth a listen. Starting from the basics of reinforcement learning,
김창연 from the fundamentals team provided a detailed
and in-depth review of the paper!
Thanks in advance for your interest!
One more thing: the Deep Learning Paper Reading Group runs an open chat room for listeners. Due to a recent rise in malicious promotional bot accounts, the room is now password-protected.
Please take an interest in the listeners' room as well!
Room link: https://open.kakao.com/o/gp6GHMMc
Room password: 0501
Vision Transformer (ViT) / An Image is Worth 16x16 Words: Transformers for Ima... (changedaeoh)
Without using any of the convolutional layers that dominate computer vision, this work takes the pure Transformer architecture proposed in NLP as-is and builds an image classification model at SOTA level using only attention and plain feed-forward NNs.
TAVE research seminar presentation, 2021-03-30.
Presenter: 오창대
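For intuition about the "16x16 words", a toy sketch (not the authors' code) of the patch-embedding step that turns an image into a token sequence, using a strided convolution, a standard equivalent of flattening and linearly projecting each patch:

```python
# Sketch of ViT's "an image is worth 16x16 words" step: split the image
# into 16x16 patches and linearly project each to an embedding vector.
import torch
import torch.nn as nn

patch, dim = 16, 768
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

img = torch.randn(1, 3, 224, 224)
tokens = to_tokens(img).flatten(2).transpose(1, 2)  # (1, 196, 768)
print(tokens.shape)  # 14 x 14 = 196 patch tokens, ready for a Transformer
```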
Deep Learning Fast MRI Using Channel Attention in Magnitude Domain (Joonhyung Lee)
My presentation on how we participated in the fastMRI Challenge in 2019.
Aside from theoretical considerations, it also explains key implementation issues that arise in deep learning for MRI, such as disk I/O and CPU/GPU load balancing.
Used for presentation at ISBI 2020 Oral session.
Accidentally wrote the title as "Deep Learning Sum-of-Squares Images in Accelerated Parallel MRI". Sorry for the mistake!
201907 AutoML and Neural Architecture Search (DaeJin Kim)
Brief introduction of NAS
Review of EfficientNet (Google Brain), RandWire (FAIR) papers
NAS flow slide from KihoSuh's slideshare (https://www.slideshare.net/KihoSuh/neural-architecture-search-with-reinforcement-learning-76883153)
[References]
[1] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (https://arxiv.org/abs/1905.11946)
[2] Exploring Randomly Wired Neural Networks for Image Recognition (https://arxiv.org/abs/1904.01569)
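As a quick illustration of the EfficientNet idea referenced above, a sketch of its compound scaling rule: depth, width, and input resolution are scaled jointly by a single coefficient phi, using the constants reported in the paper:

```python
# EfficientNet's compound scaling rule: scale depth, width, and input
# resolution together with one coefficient phi. The constants below are
# the ones reported in the paper (alpha * beta^2 * gamma^2 ≈ 2).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scaling(phi: int):
    return {"depth": ALPHA ** phi,       # more layers
            "width": BETA ** phi,        # more channels
            "resolution": GAMMA ** phi}  # larger input images

for phi in range(4):  # B0 .. B3 roughly correspond to phi = 0 .. 3
    print(phi, compound_scaling(phi))
```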
Classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. We'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded.
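A minimal sketch of those two preprocessing steps in NumPy (illustrative; any equivalent framework utility works):

```python
# Scale pixel values to [0, 1] and one-hot encode the 10 CIFAR-10 labels.
import numpy as np

def normalize(images: np.ndarray) -> np.ndarray:
    """Map uint8 pixels in [0, 255] to floats in [0, 1]."""
    return images.astype(np.float32) / 255.0

def one_hot_encode(labels, num_classes=10) -> np.ndarray:
    labels = np.asarray(labels)
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0
    return out

batch = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)
print(normalize(batch).max() <= 1.0, one_hot_encode([0, 3, 9]).shape)
```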
[PR-325] Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Tran... (Sunghoon Joo)
PR-325: Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Transformers
paper link: https://arxiv.org/abs/2004.00849
youtube link: https://youtu.be/Kgh88DLHHTo
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures (MLAI2)
Regularization and transfer learning are two popular techniques to enhance generalization on unseen data, which is a fundamental problem of machine learning. Regularization techniques are versatile, as they are task- and architecture-agnostic, but they do not exploit a large amount of data available. Transfer learning methods learn to transfer knowledge from one domain to another, but may not generalize across tasks and architectures, and may introduce new training cost for adapting to the target task. To bridge the gap between the two, we propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data. MetaPerturb is implemented as a set-based lightweight network that is agnostic to the size and the order of the input, which is shared across the layers. Then, we propose a meta-learning framework, to jointly train the perturbation function over heterogeneous tasks in parallel. As MetaPerturb is a set-function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures. We validate the efficacy and generality of MetaPerturb trained on a specific source domain and architecture, by applying it to the training of diverse neural architectures on heterogeneous target datasets against various regularizers and fine-tuning. The results show that the networks trained with MetaPerturb significantly outperform the baselines on most of the tasks and architectures, with a negligible increase in the parameter size and no hyperparameters to tune.
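To make the abstract's "set-based lightweight network agnostic to the size and order of the input" more concrete, here is a hedged toy sketch of such a module. Everything here, the module name, the per-channel summary statistics, and the multiplicative perturbation, is an illustrative assumption, not the paper's implementation:

```python
# Hedged sketch of a set-based perturbation module in the spirit of the
# abstract: a tiny network, shared across layers, that stays agnostic to
# input size/order by operating on pooled per-channel statistics.
import torch
import torch.nn as nn

class SetPerturb(nn.Module):
    def __init__(self, hidden=8):
        super().__init__()
        # operates on per-channel summary stats, so one module can be
        # shared across layers with different channel counts
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, x):                      # x: (batch, C, H, W)
        stats = torch.stack([x.mean(dim=(0, 2, 3)),
                             x.std(dim=(0, 2, 3))], dim=-1)  # (C, 2)
        scale = self.net(stats).view(1, -1, 1, 1)            # (1, C, 1, 1)
        return x * scale                       # multiplicative perturbation

perturb = SetPerturb()
print(perturb(torch.randn(4, 16, 8, 8)).shape)  # works for any channel count
```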
Automated machine learning lectures given at the Advanced Course on Data Science & Machine Learning. AutoML, hyperparameter optimization, Bayesian optimization, Neural Architecture Search, Meta-learning, MAML
Ernest: Efficient Performance Prediction for Advanced Analytics on Apache Spa... (Spark Summit)
Recent workload trends indicate rapid growth in the deployment of machine learning, genomics, and scientific workloads using Apache Spark. However, efficiently running these applications on cloud computing infrastructure like Amazon EC2 is challenging, and we find that choosing the right hardware configuration can significantly improve performance and cost. The key to addressing this challenge is the ability to predict the performance of applications under various resource configurations so that we can automatically choose the optimal configuration. We present Ernest, a performance prediction framework for large-scale analytics. Ernest builds performance models based on the behavior of the job on small samples of data and then predicts its performance on larger datasets and cluster sizes. Our evaluation on Amazon EC2 using several workloads shows that our prediction error is low while having a training overhead of less than 5% for long-running jobs.
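A hedged sketch of what such a performance model can look like. The feature terms below (serial, parallel, log, and per-machine overhead) follow the Ernest paper, fit with non-negative least squares; the training measurements are made up for illustration:

```python
# Ernest-style performance modeling: fit a small parametric model of job
# time vs. data scale and machine count on cheap small-sample runs, then
# extrapolate to larger configurations.
import numpy as np
from scipy.optimize import nnls

def features(scale, machines):
    return np.array([1.0, scale / machines, np.log(machines), machines])

# (data scale, machines) -> measured runtime from small training runs
runs = [(0.1, 2, 12.0), (0.1, 4, 7.5), (0.2, 4, 12.8), (0.2, 8, 8.9)]
A = np.array([features(s, m) for s, m, _ in runs])
b = np.array([t for *_, t in runs])

theta, _ = nnls(A, b)          # non-negative least squares, as in Ernest
pred = features(1.0, 16) @ theta
print(f"predicted runtime at full data on 16 machines: {pred:.1f}s")
```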
Apache Spark Based Hyper-Parameter Selection and Adaptive Model Tuning for De... (Databricks)
Deep neural network training is time-consuming, often taking days or weeks, and is a hard topic to master. Selecting the right hyper-parameters is difficult but important, since it directly affects the behavior of the training algorithm and has a significant impact on performance and accuracy.
In this talk, we will discuss a novel approach using distributed Spark to explore the vast hyper-parameter search space and find a near-optimal configuration according to a targeted quality of service (QoS). Several hyper-parameter and network-architecture search approaches will be discussed and compared (e.g., random search, tree-structured Parzen estimators, Bayesian optimization, reinforcement learning, …). Furthermore, we will propose a framework and method to share information across different trials to make the search process highly efficient.
We’ll also introduce a real-time monitoring, tuning and optimization mechanism for model training to detect early stop conditions and recommend better hyper-parameters. Finally, we will use real-world models and use cases to demonstrate how hyper-parameter selection and adaptive tuning accelerates model development and training when running Caffe and Tensorflow in our distributed Spark environment.
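A minimal PySpark sketch of the core pattern, parallel evaluation of sampled hyper-parameter configurations; the evaluate function is a stand-in for actually training a network and measuring its QoS:

```python
# Evaluate many hyper-parameter configurations in parallel with Spark.
import random
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hp-search").getOrCreate()
sc = spark.sparkContext

def sample_config(_):
    return {"lr": 10 ** random.uniform(-5, -1),
            "batch_size": random.choice([32, 64, 128])}

def evaluate(cfg):           # placeholder for "train model, return metric"
    score = -abs(cfg["lr"] - 0.01)   # pretend lr=0.01 is optimal
    return (score, cfg)

configs = sc.parallelize(range(100)).map(sample_config)
best = configs.map(evaluate).max(key=lambda t: t[0])
print("best config:", best[1])
```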
Scott Clark, Co-Founder and CEO, SigOpt at MLconf SF 2016 (MLconf)
Using Bayesian Optimization to Tune Machine Learning Models: In this talk we briefly introduce Bayesian Global Optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive. We will motivate the problem and give example applications.
We will also talk about our development of a robust benchmark suite for our algorithms including test selection, metric design, infrastructure architecture, visualization, and comparison to other standard and open source methods. We will discuss how this evaluation framework empowers our research engineers to confidently and quickly make changes to our core optimization engine.
We will end with an in-depth example of using these methods to tune the features and hyperparameters of a real world problem and give several real world applications.
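For readers new to the method, a compact sketch of the Bayesian optimization loop the talk motivates (SigOpt's production engine is proprietary; this uses a scikit-learn Gaussian process and expected improvement on a toy 1-D objective):

```python
# Bayesian optimization loop: fit a GP to observed (parameter, score)
# pairs, then pick the next point by expected improvement over random
# candidates. 1-D toy problem for readability.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                        # expensive black box (stand-in)
    return -(x - 0.3) ** 2

X = [[0.0], [1.0]]                       # initial design
y = [objective(x[0]) for x in X]

for _ in range(10):
    gp = GaussianProcessRegressor().fit(X, y)
    cand = np.random.rand(256, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[int(np.argmax(ei))]
    X.append(list(x_next)); y.append(objective(x_next[0]))

print("best x found:", X[int(np.argmax(y))])
```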
Driving Moore's Law with Python-Powered Machine Learning: An Insider's Perspe... (PyData)
People talk about a Moore's Law for gene sequencing, a Moore's Law for software, etc. This talk is about *the* Moore's Law, the bull that the other "Laws" ride, and how Python-powered ML helps drive it. How do we keep making ever-smaller devices? How do we harness atomic-scale physics? Large-scale machine learning is key. The computation drives new chip designs, and those new chip designs are used for new computations, ad infinitum. High-dimensional regression, classification, active learning, optimization, ranking, clustering, density estimation, scientific visualization, massively parallel processing -- it all comes into play, and Python is powering it all.
Prediction as a service with ensemble model in SparkML and Python ScikitLearn (Josef A. Habdank)
Watch the recording of the talk given at Spark Summit Brussels 2016 here:
https://www.youtube.com/watch?v=wyfTjd9z1sY
Data Science with SparkML on Databricks is a perfect platform for applying ensemble learning at massive scale. This presentation describes a Prediction-as-a-Service platform that can predict trends on 1 billion observed prices daily. In order to train an ensemble model on a multivariate time series in a thousands- to millions-dimensional space, the whole space has to be fragmented into subspaces that exhibit significant similarity. To achieve this, the vastly sparse space undergoes dimensionality reduction into a parameter space, which is then used to cluster the observations. The data in the resulting clusters is modeled in parallel using machine learning tools capable of coefficient estimation at massive scale (SparkML and scikit-learn). The estimated model coefficients are stored in a database and used when executing predictions on demand via a web service. This approach enables training models fast enough to complete the task within a couple of hours, allowing daily or even real-time updates of the coefficients. The above machine learning framework is used to predict airfares as a support tool for airline Revenue Management systems.
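A toy sketch of the pipeline the abstract describes, with scikit-learn standing in for SparkML at scale: reduce dimensionality, cluster into similar subspaces, then fit one model per cluster:

```python
# Reduce the sparse space with PCA, cluster observations, then fit one
# model per cluster. Toy data; scikit-learn stands in for SparkML.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))            # high-dimensional observations
y = X[:, 0] * 2.0 + rng.normal(size=1000)  # target, e.g. a price trend

Z = PCA(n_components=5).fit_transform(X)   # dimensionality reduction
labels = KMeans(n_clusters=4, n_init=10).fit_predict(Z)

models = {c: LinearRegression().fit(X[labels == c], y[labels == c])
          for c in range(4)}               # one model per similar subspace
print({c: round(m.score(X[labels == c], y[labels == c]), 3)
       for c, m in models.items()})
```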
Using SigOpt to Tune Deep Learning Models with Nervana Cloud (SigOpt)
In this talk I'll show how the Bayesian optimization methods used by SigOpt, coupled with the incredibly scalable deep learning architecture provided with ncloud and neon, allow anyone to easily tune their models and quickly achieve higher accuracy. I'll walk through the techniques and show an explicit example with results.
PR-445: Token Merging: Your ViT But Faster (Sunghoon Joo)
#PR12 season 5 [PR-445] Token Merging: Your ViT But Faster
This slide is a review of the paper "Token Merging: Your ViT But Faster"
Reviewed by Sunghoon Joo
Paper link: https://arxiv.org/abs/2210.09461
Youtube link: https://youtu.be/6nBYpM_ch0s
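To convey the core idea, a simplified sketch of token merging: average the most similar token pairs between ViT blocks so the token count shrinks with little information loss. This approximates, but is not, the paper's bipartite soft matching implementation:

```python
# Simplified token merging: split tokens into two alternating sets,
# find the most similar cross-set pairs by cosine similarity, and
# average the top r pairs.
import torch

def merge_tokens(x, r):
    """x: (tokens, dim). Merge up to r pairs between alternating sets."""
    a, b = x[0::2], x[1::2]
    sim = torch.nn.functional.normalize(a, dim=-1) @ \
          torch.nn.functional.normalize(b, dim=-1).T
    best_sim, best_b = sim.max(dim=-1)       # best partner in b for each a
    order = best_sim.argsort(descending=True).tolist()
    merged, used_b, kept_a = [], set(), []
    for rank, i in enumerate(order):
        j = best_b[i].item()
        if rank < r and j not in used_b:     # merge the most similar pairs
            merged.append((a[i] + b[j]) / 2)
            used_b.add(j)
        else:
            kept_a.append(a[i])
    keep_b = [b[j] for j in range(len(b)) if j not in used_b]
    return torch.stack(merged + kept_a + keep_b)

x = torch.randn(196, 384)
print(merge_tokens(x, r=16).shape)           # ~180 tokens remain
```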
PR-433: Test-time Training with Masked Autoencoders (Sunghoon Joo)
#PR12 season 5 [PR-433] Test-time training with masked autoencoders
This slide is a review of the paper "Test-time training with masked autoencoders."
Reviewed by Sunghoon Joo
Paper link: https://arxiv.org/abs/2209.07522
Youtube link: https://youtu.be/zOJ68s0F6JY
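A hedged toy sketch of the test-time training loop the paper proposes: for each test input, briefly adapt the encoder on the self-supervised masked-reconstruction loss, then predict. The tiny linear modules and the masking helper below are stand-ins for the real MAE components:

```python
# Test-time training (TTT) with a masked-autoencoder objective, on toy
# stand-in modules. Per test sample: adapt a copy of the encoder on the
# reconstruction loss, predict, then discard the adapted copy.
import copy
import torch
import torch.nn as nn

def random_mask(x, p=0.75):                 # zero out a random 75% of inputs
    mask = (torch.rand_like(x) > p).float()
    return x * mask, x

encoder = nn.Linear(32, 16)                 # toy stand-ins for MAE pieces
mae_decoder = nn.Linear(16, 32)
classifier = nn.Linear(16, 10)

def ttt_predict(x, steps=10, lr=1e-2):
    enc = copy.deepcopy(encoder)            # adapt a copy; reset per sample
    opt = torch.optim.SGD(enc.parameters(), lr=lr)
    for _ in range(steps):
        masked, target = random_mask(x)
        loss = ((mae_decoder(enc(masked)) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return classifier(enc(x))           # predict with the adapted encoder

print(ttt_predict(torch.randn(1, 32)).shape)  # torch.Size([1, 10])
```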
PR-383: Solving ImageNet: a Unified Scheme for Training any Backbone to Top R... (Sunghoon Joo)
Tensorflow KR PR-12 season4 slide
PR-383: Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results Reviewer: Sunghoon Joo (VUNO Inc.)
Paper link: https://arxiv.org/abs/2204.03475
YouTube link: https://youtu.be/WeYuLO1nTmE
PR-339: Maintaining discrimination and fairness in class incremental learning (Sunghoon Joo)
PR-339: Maintaining discrimination and fairness in class incremental learning
Paper link: http://arxiv.org/abs/1911.07053
Video presentation link: https://youtu.be/hptinxZIXT4
#class imbalance, #knowledge distillation, #class incremental learning
PR-313: Training BatchNorm and Only BatchNorm: On the Expressive Power of Rand... (Sunghoon Joo)
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
Jonathan Frankle, David J. Schwab, Ari S. Morcos
ICLR 2021
Paper link: https://arxiv.org/abs/2008.09093
Video presentation link: https://youtu.be/bI8ceHOoYxk
reviewed by Sunghoon Joo (주성훈)
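The experimental setup is easy to reproduce in spirit; a minimal sketch (using a torchvision ResNet as a stand-in for the paper's exact architectures):

```python
# Freeze all weights at their random initialization and train only the
# BatchNorm affine parameters (gamma and beta), as studied in the paper.
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)
for p in model.parameters():
    p.requires_grad = False                 # freeze everything...
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):       # ...except BatchNorm affine params
        m.weight.requires_grad = True
        m.bias.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable:,} of {total:,} parameters")
```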
PR-285: Leveraging Semantic and Lexical Matching to Improve the Recall of Docu... (Sunghoon Joo)
PR-285: Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach
[Saar Kuzi et al., 2020]
Paper link: https://arxiv.org/pdf/2010.01195.pdf
Video presentation link: https://youtu.be/QfkcN4SZ1Po
reviewed by Sunghoon Joo (주성훈)
PR-246: A deep learning system for differential diagnosis of skin diseases (Sunghoon Joo)
PR-246: A deep learning system for differential diagnosis of skin diseases
Paper link: https://arxiv.org/pdf/1909.05382.pdf
Video presentation link: https://youtu.be/8ZAtvPKqXeA
reviewed by Sunghoon Joo
PR-173: Automatic Chemical Design Using a Data-Driven Continuous Representati... (Sunghoon Joo)
Paper review slide.
Title: Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
Paper link: https://pubs.acs.org/doi/full/10.1021/acscentsci.7b00572
Video link: https://youtu.be/hk4e8ZCkNWg
PR-159: Synergistic Image and Feature Adaptation: Towards Cross-Modality Dom... (Sunghoon Joo)
Paper review slide.
Title: Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation
Paper link: https://arxiv.org/pdf/1901.08211
Video link: https://youtu.be/sR7hBJGpwQo
3. 1. Research Background
Introduction
• AutoML automatically searches for model architectures and hyperparameters. Search targets include:
• Architectures (NAS, Neural Architecture Search)
• Hyperparameters
• Learning rule (activation function, full forward pass, data augmentation, weight optimization, layer and weight pruning)
AutoML survey: https://arxiv.org/pdf/1810.13306.pdf
4. 1. Research Background
Architecture search: a constrained search space
• Existing NAS methods search over architectures assembled from hand-designed building blocks.
• Such a constrained search space limits how novel the discovered solutions can be.
Search space examples:
Saining Xie et al. (2019) https://arxiv.org/pdf/1904.01569.pdf (PR-155)
Golnaz Ghiasi et al. (2019) https://arxiv.org/pdf/1904.07392.pdf (PR-166)
Yanan Sun et al. (2019) https://arxiv.org/pdf/1710.10741.pdf
5. 1. Research Background
AutoML-Zero
• "We propose to automatically search for whole ML algorithms using little restriction on form and only simple mathematical operations as building blocks."
• High-level operations such as matrix decomposition and derivatives are excluded from the building blocks.
6. 1. Research Background
AutoML-Zero
• "We propose to automatically search for whole ML algorithms using little restriction on form and only simple mathematical operations as building blocks."
From a blank slate all the way to a final algorithm.
A truly enormous search space…
4 days
8. 2. Methods
Evolutionary method (illustrated with P=5, T=3): sample T algorithms at random from the population of P, copy the best of them, and apply a random mutation to the copy (a code sketch follows below).
Type (i): insert or delete a random instruction; deletion is twice as likely as insertion.
Type (ii): replace all instructions within one component function.
Type (iii): modify a single argument. When modifying a real-valued constant, multiply it by a number drawn uniformly from [0.5, 2.0], then flip its sign with 10% probability.
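A toy sketch of this loop, regularized evolution with tournament selection plus the three mutation types above; the "algorithms" are stand-in instruction lists and the fitness function is a placeholder for train-then-evaluate:

```python
# Regularized evolution with tournament selection, plus the three
# mutation types from the slide. Toy instruction set and fitness.
import random

OPS = ["s0 = s1 + s2", "s0 = s1 * s2", "v0 = v1 - v2"]  # toy instruction set

def mutate(algo):
    algo = list(algo)
    kind = random.choice(["insert_or_delete", "randomize_all", "modify_one"])
    if kind == "insert_or_delete":          # deletion twice as likely
        if algo and random.random() < 2 / 3:
            algo.pop(random.randrange(len(algo)))
        else:
            algo.insert(random.randrange(len(algo) + 1), random.choice(OPS))
    elif kind == "randomize_all":           # replace every instruction
        algo = [random.choice(OPS) for _ in algo]
    elif algo:                              # modify a single instruction
        algo[random.randrange(len(algo))] = random.choice(OPS)
    return algo

def fitness(algo):                          # stand-in for train-then-evaluate
    return -abs(len(algo) - 4)

P, T = 100, 10
population = [[random.choice(OPS)] for _ in range(P)]
for _ in range(1000):
    tournament = random.sample(population, T)
    parent = max(tournament, key=fitness)
    population.pop(0)                       # remove the oldest individual
    population.append(mutate(parent))
print(max(map(fitness, population)))
```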
12. 3. Experimental Results
AutoML-Zero vs. a hand-designed reference (2-layer FC NN)
• Tasks are built from CIFAR-10 and MNIST.
• The 10 classes are paired into binary classification tasks: 10C2 = 45 pairs.
• Each pair provides 8000 train / 2000 valid examples.
• 36 of the 45 pairs form Tsearch (the search tasks; 1-10 evolution cycles per experiment).
• The remaining 9 pairs form Tselect (used to select the best-accuracy algorithm).
• Final evaluation is on the CIFAR-10 test set.
• Number of possible operations: 7/58/58 for Setup/Predict/Learn.
• Training epochs: 1 or 10; evolution parameters: P=100, T=10.
• Maximum number of instructions for Setup/Predict/Learn: 21/21/45.
(Figure 6 of the paper is a single-run illustration, drawn with (P, T) = (5, 20).)
13. 3. Experimental Results
AutoML-Zero vs. a hand-designed reference (2-layer FC NN)
• The best algorithm's parameters (learning rate, mean of the uniform initialization distribution, etc.) were tuned by random search on the Tselect dataset; the linear/nonlinear baselines' hyperparameters were tuned by random search as well.
• [CIFAR-10] Best-algorithm accuracy over 5 trials: 84.06 ± 0.10%
Linear baseline (logistic regression): 77.65 ± 0.22%
Nonlinear baseline (2-layer fully connected neural network): 82.22 ± 0.17%
• Other binary classification tasks:
1) SVHN (32 x 32 x 3): 88.12% AutoML-Zero vs. 59.58% linear baseline vs. 85.14% nonlinear baseline
2) Down-sampled ImageNet (128 x 128 x 3): 80.78% vs. 76.44% vs. 78.44%
3) Fashion MNIST (28 x 28 x 1): 98.60% vs. 97.90% vs. 98.21%
• Note that by design the search space contains no convolution or batch normalization.
• AutoML-Zero outperforms the hand-designed 2-layer FC NN.
14. 3. Experimental Results
Adapting AutoML-Zero to challenging tasks
1) Few training examples
• When trained on only 80 examples for 100 epochs, AutoML-Zero discovered a noisy ReLU (an adaptation resembling dropout).
• Coincidence? Comparing 30 runs with 80 examples against 30 runs with 800 examples, the noisy ReLU emerged significantly more often in the low-data setting (p < 0.0005).
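One plausible reading of the discovered noisy ReLU, as a sketch (my interpretation, not the paper's exact discovered program):

```python
# A "noisy ReLU": inject multiplicative noise around the ReLU during
# training, which acts like dropout-style regularization when data is
# scarce. Illustrative interpretation only.
import torch

def noisy_relu(x, noise=0.5, training=True):
    if training:
        x = x * (1 + noise * (torch.rand_like(x) - 0.5))  # perturb activations
    return torch.relu(x)

print(noisy_relu(torch.randn(3)))
```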
15. 3. Experimental Results
Adapting AutoML-Zero to challenging tasks
2) Fast training
• When trained on 800 examples for only 10 epochs, AutoML-Zero discovered learning-rate decay.
• Coincidence? Over 30 runs each of the 10-epoch and 100-epoch settings, learning-rate decay appeared in 30/30 of the 10-epoch runs but only 3/30 of the 100-epoch runs.