"Anime Generation with AI".
- Video: Generated Anime: https://youtu.be/X9j1fwexK2c
- Video: Other AI Solutions for Anime Production Issues: https://youtu.be/Gz90H1M7_u4
Artificial Intelligence Workshop, Collegio universitario Bertoni, Milano, 20 May 2017.
Audience of the workshop: undergraduate students without a background in neural networks.
Summary:
- Deep Learning Showcase
- What is deep learning and how it works
- How to start with deep learning
- Live demo: image recognition with Nvidia DIGITS
- Playground
Duration: 2 hours.
"Anime Generation with AI".
- Video: Generated Anime: https://youtu.be/X9j1fwexK2c
- Video: Other AI Solutions for Anime Production Issues: https://youtu.be/Gz90H1M7_u4
Artificial Intelligence Workshop, Collegio universitario Bertoni, Milano, 20 May 2017.
Audience of the workshop: undergraduate students without neural networks background.
Summary:
- Deep Learning Showcase
- What is deep learning and how it works
- How to start with deep learning
- Live demo: image recognition with Nvidia DIGITS
- Playground
Duration: 2 hours.
Interaction Networks for Learning about Objects, Relations and Physics - Ken Kuroki
Slides for my presentation at a reading group. I did not contribute to this study in any way; it was carried out by the researchers named on the first slide.
https://papers.nips.cc/paper/6418-interaction-networks-for-learning-about-objects-relations-and-physics
Introduction of the "TrailBlazer" algorithm - Katsuki Ohto
Slides introducing the paper "Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning". Presented at the NIPS 2016 reading meetup @PFN (2017/1/19), https://connpass.com/event/47580/.
Introduction of "Fairness in Learning: Classic and Contextual Bandits" - Kazuto Fukuchi
This material is an introduction to the paper "Fairness in Learning: Classic and Contextual Bandits" from NIPS 2016. It was presented at https://connpass.com/event/47580/.
Improving Variational Inference with Inverse Autoregressive Flow - Tatsuya Shirakawa
These slides were created for a NIPS 2016 study meetup. IAF and related research are briefly explained.
paper:
Diederik P. Kingma et al., "Improving Variational Inference with Inverse Autoregressive Flow", 2016
https://papers.nips.cc/paper/6581-improving-variational-autoencoders-with-inverse-autoregressive-flow
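As a rough sketch of the core idea (a simplified form of the IAF step, in which mu_t and sigma_t come from an autoregressive network so the Jacobian of each step is triangular):
$$ z_t = \mu_t + \sigma_t \odot z_{t-1}, \qquad \log q(z_T \mid x) = \log q(z_0 \mid x) - \sum_{t=1}^{T} \sum_{i} \log \sigma_{t,i} $$
Because the Jacobian is triangular, its log-determinant is just the sum of the log sigma terms, which keeps the flow cheap to evaluate.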
Deep Learning - The Past, Present and Future of Artificial Intelligence - Lukas Masuch
In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They've mastered the ancient game of Go and thrashed the best human players. "The pace of progress in artificial general intelligence is incredibly fast" (Elon Musk, CEO of Tesla & SpaceX), leading to an AI that "would be either the best or the worst thing ever to happen to humanity" (Stephen Hawking, physicist).
What sparked this new hype? How is deep learning different from previous approaches? Let's look behind the curtain and unravel the reality. This talk will introduce the core concepts of deep learning, explore why Sundar Pichai (CEO of Google) recently announced that "machine learning is a core transformative way by which Google is rethinking everything they are doing", and explain why "deep learning is probably one of the most exciting things that is happening in the computer industry" (Jen-Hsun Huang, CEO of NVIDIA).
Slides from my talk at the FiCloud conference, about PROTEUS: scalable online machine learning strategies for predictive analytics and real-time interactive visualization.
What do you need to know to start an AI company? - Mo Patel
An overview of why AI and deep learning are hot now, and of the machine intelligence startup landscape. What are the key ingredients for an AI startup? How can AI startups compete with big tech companies, and which areas should they focus on for differentiation?
DataStax | Meaningful User Experience with Graph Data (Chris Lacava, Expero) ... - DataStax
Congratulations, your data is up and running in a graph database! This is the first step of many to unlocking the potential in your data. It’s easy to get mired in the complexities of graph technology and forget that real users, mere mortals, will need to use this information to inform mission critical tasks. To get the value out of your graph investment, you’ll need to provide an experience that enables users to explore and visualize your graph data in meaningful ways. In this talk we’ll take a hands on approach to applying user-centered strategies and leveraging the latest UI tools to rapidly create great experiences with graph data. Topics will include:
Tailoring experiences to the intended audience and data
Interacting with complex data shouldn’t be complicated for users. The key is to understand your users and build a solution that targets them.
Zeroing in on user goals and creating a solution that targets them in a fast, lightweight and iterative way
Using live data and rapid prototyping to inform your navigation, visualization selections and overall design
Determining the right visualization for the job
Just because you can display almost anything doesn’t mean you should. Choosing the right visualizations to achieve specific goals is a key factor in unlocking the usefulness of graph. We’ll demonstrate how to match the right visualization against user needs.
When is it appropriate to break out of the standard node view and visualize graph data in another context like a geospatial map?
How do I determine the dominant dimensions to filter a node chart?
What is the simplest, most efficient way to traverse time with large data sets?
Do I need to visually expose the graph at all?
Cutting through the clutter on choosing the right visualization tools
Once you’ve got your goals set, how do you make it happen? Will it perform at scale? We’ll demonstrate example use cases with sample graph data and some of these tools to highlight practical uses.
Presentation given on 9th June 2017, updating a customer (who is in the retail space) about IBM Power Systems. I cover who the boss is at the worldwide level, the OpenPOWER Foundation, AI/Machine Learning/Deep Learning and PowerAI, NVLink, Nutanix, the Institute of Business Value, Cloud, Hybrid Cloud, PowerVC and OpenStack, IBM Power Systems and POWER9.
An overview of the workshop as presented at the 1st International Workshop on Benchmarking Linked Data (BLINK).
(HOBBIT project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688227.)
Demystifying Deep Learning - Roberto Paredes Palacios @ PAPIs Connect - PAPIs.io
Deep Learning (DL) is becoming a big tsunami in the Machine Learning community. This talk aims at introducing DL, its motivation and main techniques. However, part of this talk is also devoted to demystifying DL. What are the main advantages, but also the main drawbacks, of DL? And what are the key issues that practitioners have to consider?
Roberto Paredes is an Associate Professor at the Departamento de Sistemas Informáticos y Computación (DSIC) of the Universidad Politécnica de Valencia (UPV). He belongs to the Pattern Recognition and Human Language Technologies Research Centre (PRHLT). Roberto Paredes is the Director of the PRHLT and the President of the Spanish AERFAI Association. His main research interests are statistical learning, machine learning and, more recently, neural networks and deep learning.
Congratulations, your data is up and running in a graph database! This is the first step of many to unlocking the potential in your data. It’s easy to get mired in the complexities of graph technology and forget that real users, mere mortals, will need to use this information to inform mission critical tasks. To get the value out of your graph investment, you’ll need to provide an experience that enables users to explore and visualize your graph data in meaningful ways.
In this talk, we'll take a hands-on approach to applying user-centered strategies and leveraging the latest UI tools to rapidly create great experiences with graph data. Topics will include network analysis queries with Cypher and APOC, tailoring experiences to the intended audience and data, determining the right visualization for the job, and cutting through the clutter on choosing the right visualization tools.
SAAS IS THE ENEMY OF OPEN SOURCE. GOOD THING THAT WE ARE IN THE POST-SAAS ERA - Ori Pekelman
My talk from Open Source Summit Paris 2016, on how our multi-cloud, second-generation PaaS, Platform.sh, allows any open source vendor to create a sustainable, non-evil SaaS model, and what this means for enterprise customers: how control and productivity can be aligned.
Image Segmentation: Approaches and Challenges - Apache MXNet
These slides go over the problem of deep semantic segmentation. They cover the different approaches taken, from hourglass autoencoders to pyramid networks.
Slides by Thomas Delteil
NVIDIA founder and CEO Jensen Huang took the stage in Munich — one of the hubs of the global auto industry — to introduce a powerful new AI computer for fully autonomous vehicles and a new VR application for those who design them.
"Tree-based Models and Random Forest: Classification and Prediction with Machine Learning". Introductory material on tree-based models and Random Forests. Machine Learning / Data Mining Seminar, 2010/10/07. hamadakoichi (Koichi Hamada)
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing by which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi (non-coding RNA):
- miRNA: length 23-25 nt; trans-acting; binds the target mRNA with mismatches; inhibits translation
- siRNA: length 21 nt; cis-acting; binds the target mRNA with a perfectly complementary sequence
- piRNA (Piwi-interacting RNA): length 25-36 nt; expressed in germ cells; regulates transposon activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is a large (>500 kDa) multi-protein RNA-binding complex that triggers degradation of the target mRNA.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: an endonuclease (RNase III family)
Argonaute: the central component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute.
ARGONAUTE PROTEIN:
1. PAZ (PIWI/Argonaute/Zwille): recognition of the target mRNA
2. PIWI (P-element Induced Wimpy Testis): breaks the phosphodiester bond of the mRNA (RNase H activity)
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
Multi-source connectivity as the driver of solar wind variability in the heli... - Sérgio Sacani
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
Richard's entangled adventures in wonderland - Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... - Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1 arcsec circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R_1/2 ~ 50-200 pc, stellar masses of M⋆ ~ 10^7-10^8 M⊙, and star-formation rates of SFR ~ 0.1-1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ~2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
This PDF is about schizophrenia.
For more details, see the YouTube channel SELF-EXPLANATORY:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Thanks!
1.
NIPS 2016 Reading Meetup
@Preferred Networks
2017/1/19
NIPS 2016
Overview and Deep Learning Topics
@hamadakoichi
Koichi Hamada
Copyright (C) 2016 DeNA Co.,Ltd. All Rights Reserved.
2.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
4.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
5.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
6.
NIPS 2016
・The 30th edition of the conference
・Dates: December 5-10, 2016
・A long tradition, second only to ICML (33rd edition)
・Tutorials: Dec 5 (1 day)
・Main conference: Dec 5-8 (4 days)
・Workshops: Dec 9-10 (2 days)
・Venue: Barcelona, Spain
(Photo: venue atmosphere)
7.
NIPS 2016
Attendance grew to 6,000 participants (1.5x the 2015 figure)
Note: Terrence Sejnowski is the President of the NIPS Foundation
8.
NIPS Features
・92% of accepted papers are posters
・Acceptance rate: 23%
・Submissions: 2,500+; accepted: 568
・Oral (45): 20-minute talk + poster
・Poster (523): poster only
・A small number of parallel tracks (up to 3)
(single-track until last year, now parallel)
9.
NIPS Features
・Lively discussions at the poster sessions
(shorter than last year's five-hour 19:00-24:00 poster sessions, but discussions stayed lively to the end)
・210 min (3.5 hours) / day
・~130 posters x 4 days
10.
NIPS2016 Hot Topics
Source: "The review process for NIPS 2016"
http://www.tml.cs.uni-tuebingen.de/team/luxburg/misc/nips2016/index.php
(Chart: Deep Learning, Computer Vision, Large Scale Learning, Learning Theory, Optimization, Sparsity)
11.
NIPS2016 Hot Topics
3 of the 9 tutorials and 2 of the 3 symposia were on Deep Learning:
Reinforcement Learning, Generative Adversarial Nets, Recurrent Nets
12.
NIPS2016 Hot Topics
3 of the 9 tutorials and 2 of the 3 symposia were on Deep Learning:
Reinforcement Learning, Generative Adversarial Nets, Recurrent Nets
For two of these topics, I pick main-conference papers and give an overview
(Reinforcement Learning is omitted here because many individual papers on it are presented separately at this NIPS reading meetup)
13.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
14.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
15.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
16.
Generative Adversarial Network (GAN)
Generative Adversarial Nets (GAN)
Goodfellow+, NIPS 2014
17.
Generative Adversarial Network (GAN)
A Generator and a Discriminator are trained adversarially to improve generation quality.
Discriminator: distinguishes "real images" from "fake images produced by the Generator"
Generator: tries to make the Discriminator misclassify its generated images as "real"
(Goodfellow+, NIPS 2014, Deep Learning Workshop, Presentation)
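A minimal, PyTorch-style sketch of this adversarial game (network sizes, data shapes and hyperparameters are illustrative placeholders, not from the slides):

```python
# A minimal sketch of alternating GAN training.
import torch
import torch.nn as nn

# Generator: noise vector z -> fake image (flattened, e.g. 28x28 = 784 values)
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
# Discriminator: image -> probability that the image is real
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real):
    """One alternating update. `real` is a (batch, 784) tensor of real images."""
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator step: classify real images as 1 and generated images as 0.
    fake = G(torch.randn(batch, 64)).detach()      # detach: do not backprop into G here
    loss_D = bce(D(real), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Generator step: try to make D classify freshly generated images as real
    #    (non-saturating loss, the variant commonly used instead of the raw minimax form).
    loss_G = bce(D(G(torch.randn(batch, 64))), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```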
18.
Generative Adversarial Network (GAN)
Minimax objective function
The Discriminator classifies "real images" as real and "generated images" as fake;
it tries to classify correctly (maximization).
The Generator tries to make the Discriminator misclassify (minimization).
A Generator and a Discriminator are trained adversarially to improve generation quality.
(Goodfellow+, NIPS 2014, Deep Learning Workshop, Presentation)
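Written out, the minimax objective the slide refers to (standard form from Goodfellow+ 2014) is:
$$ \min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big] $$
D is trained to maximize V (classify correctly), while G is trained to minimize it (fool D).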
19.
Learning a representation vector space for natural images: arithmetic and image generation
ICLR16: Deep Convolutional GAN: DCGAN (Radford+)
Sharp generation of natural images; image arithmetic
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.
Alec Radford, Luke Metz, Soumith Chintala.
arXiv:1511.06434. In ICLR 2016.
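The image arithmetic mentioned here is vector arithmetic in the learned latent space; schematically (an illustrative example; the DCGAN paper averages several z vectors per concept before the arithmetic):
$$ z_{\mathrm{new}} = z_{\text{smiling woman}} - z_{\text{neutral woman}} + z_{\text{neutral man}}, \qquad x_{\mathrm{new}} = G(z_{\mathrm{new}}) $$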
20.
ICML16: Autoencoding beyond pixels (Larsen+)
Autoencoding beyond pixels using a learned similarity metric.
Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, Ole Winther.
arXiv:1512.09300. In ICML 2016.
Learning a representation vector space for natural images: arithmetic and image generation
21.
ICML16: Generative Adversarial Text to Image Synthesis (Reed+)
Generative Adversarial Text to Image Synthesis.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee.
arXiv:1605.05396. In ICML 2016.
Image generation from text: a GAN conditioned on text
22.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
23.
Generative Adversarial Text to Image Synthesis (Reed+)
Learning What and Where to Draw.
Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, Honglak Lee.
arXiv:1610.02454. In NIPS 2016.
Image generation from text: a GAN additionally conditioned on where to draw (location information)
24.
InfoGAN (Chen+)
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel.
arXiv:1606.03657. In NIPS 2016.
Adds the mutual information between a latent code c and the Generator output,
so the GAN learns a representation vector space in a targeted way
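Concretely, InfoGAN adds a mutual-information term to the GAN objective (λ is a weighting hyperparameter, and I(c; G(z,c)) is approximated with a variational lower bound via an auxiliary network):
$$ \min_G \max_D \; V_I(D,G) = V(D,G) - \lambda\, I\big(c;\, G(z,c)\big) $$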
25.
Learning a representation vector space for 3D models: arithmetic and generation
3D GAN (Wu+)
Generation of 3D models; 3D model arithmetic; 3D model generation from photographs (3D VAE-GAN, 3D GAN)
Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, Joshua B. Tenenbaum.
arXiv:1610.07584. In NIPS 2016.
26.
Generating Videos with Scene Dynamics (Vondrick+)
Learning a representation vector space for video; video generation
Generating Videos with Scene Dynamics.
Carl Vondrick, Hamed Pirsiavash, Antonio Torralba. In NIPS 2016.
http://web.mit.edu/vondrick/tinyvideo/
Video generation; generating the subsequent video from a single image
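The generator in this work uses a two-stream architecture; roughly, as described in the paper, with m a spatio-temporal mask, f a moving foreground stream and b a static background stream:
$$ G(z) = m(z) \odot f(z) + \big(1 - m(z)\big) \odot b(z) $$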
27.
f-GAN (Nowozin+)
Generalizes the GAN objective from the symmetric JS divergence to any f-divergence;
trains and evaluates with various divergences
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization.
Sebastian Nowozin, Botond Cseke, Ryota Tomioka.
arXiv:1606.00709. In NIPS 2016.
(Figures: table of f-divergences; kernel density estimation on MNIST; LSUN samples)
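For reference, the f-divergence that f-GAN generalizes to is defined, for a convex f with f(1) = 0, as:
$$ D_f(P \,\|\, Q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx $$
The Jensen-Shannon divergence used implicitly by the original GAN is one particular choice of f.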
28.
Improved Techniques for Training GANs (Salimans+)
Improved Techniques for Training GANs.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen.
arXiv:1606.03498. In NIPS 2016.
Training methodology for GANs, whose convergence is difficult; semi-supervised learning with GANs
Techniques:
1. Feature matching
2. Minibatch discrimination
3. Historical averaging
4. One-sided label smoothing
5. Virtual batch normalization
(Figures: semi-supervised training on MNIST with feature matching, and with feature matching plus minibatch discrimination; generated CIFAR-10 samples)
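As an example of technique 1, the feature-matching loss trains the generator to match discriminator feature statistics instead of directly maximizing the discriminator output (f denotes activations of an intermediate discriminator layer):
$$ L_{\mathrm{FM}} = \Big\| \mathbb{E}_{x \sim p_{\mathrm{data}}} f(x) \;-\; \mathbb{E}_{z \sim p_z} f\big(G(z)\big) \Big\|_2^2 $$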
29.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
30.
Extended Architectures for Generative Adversarial Nets, 2016
Extended Architectures for GANs
Figure by Chris Olah (2016): https://twitter.com/ch402/status/793535193835417601
Example:
Conditional Image Synthesis With Auxiliary Classifier GANs.
Augustus Odena, Christopher Olah, Jonathon Shlens.
arXiv:1610.09585.
Various extensions of the Generative Adversarial Net
31.
StackGAN: Text to Photo-realistic Image Synthesis (Zhang+ 2016)
Stage 1 generates a low-resolution image from the text;
Stage 2 generates a high-resolution image from the low-resolution image
StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks.
Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, Dimitris Metaxas.
arXiv:1612.03242.
32.
Plug & Play Generative Networks (Nguyen+ 2016)
High-resolution image generation: 227 x 227 ImageNet
Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space.
Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, Jeff Clune.
arXiv:1612.00005.
33.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016
34.
Phased LSTM (Neil+)
An LSTM with an added time gate that opens and closes over time;
learns long, event-driven sequence features, e.g. from sensor data
Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences.
Daniel Neil, Michael Pfeiffer, Shih-Chii Liu.
arXiv:1610.09513. In NIPS 2016.
(Figures: LSTM vs. Phased LSTM; Phased LSTM behavior; frequency discrimination task)
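The time gate works roughly as follows (my paraphrase of the paper's formulation): each unit has a period τ, phase shift s and open ratio r_on, and the gate k_t opens periodically,
$$ \phi_t = \frac{(t - s) \bmod \tau}{\tau}, \qquad k_t = \begin{cases} \dfrac{2\phi_t}{r_{\mathrm{on}}}, & \phi_t < \tfrac{1}{2} r_{\mathrm{on}} \\ 2 - \dfrac{2\phi_t}{r_{\mathrm{on}}}, & \tfrac{1}{2} r_{\mathrm{on}} < \phi_t < r_{\mathrm{on}} \\ \alpha \phi_t, & \text{otherwise} \end{cases} $$
Cell and hidden states are only updated in proportion to k_t, so between openings the state is held almost unchanged (α is a small leak).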
35.
Using Fast Weights to Attend to the Recent Past (Ba+)
Adds fast weights that learn and decay quickly in order to handle sequence-specific information;
combined with the long-term features in the slow weights, both kinds of sequence features are learned
Using Fast Weights to Attend to the Recent Past.
Jimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Z. Leibo, Catalin Ionescu.
arXiv:1610.06258. In NIPS 2016.
(Figures: associative retrieval task; classification error; test log likelihood)
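The fast weight matrix is updated with a simple outer-product rule (decay λ, learning rate η), so it stores a fading memory of recent hidden states:
$$ A(t) = \lambda\, A(t-1) + \eta\, h(t)\, h(t)^{\top} $$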
36.
Learning to learn by GD by GD (Andrychowicz+)
An optimizer implemented with an LSTM;
for each parameter, it computes an appropriate next update from the sequence of gradients
Learning to learn by gradient descent by gradient descent.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas.
arXiv:1606.04474. In NIPS 2016.
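The learned optimizer replaces a hand-designed update rule: at each step, the LSTM m (with parameters φ) maps the gradient to an update, applied coordinate-wise so that every parameter has its own hidden state:
$$ \theta_{t+1} = \theta_t + g_t, \qquad \begin{bmatrix} g_t \\ h_{t+1} \end{bmatrix} = m\big(\nabla_\theta f(\theta_t),\, h_t,\, \phi\big) $$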
37.
Matching Networks for One Shot Learning (Vinyals+)
One-shot learning using an attention mechanism;
a matching (reference) structure is learned in advance, so the model works with high accuracy even on new, small datasets
Matching Networks for One Shot Learning.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra.
arXiv:1606.04080. In NIPS 2016.
(Results: Omniglot, miniImageNet)
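The matching mechanism is attention over a small support set {(x_i, y_i)}: the prediction for a new example x̂ is a weighted sum of support labels, with weights given by a softmax over similarities of learned embeddings f and g (c is cosine similarity):
$$ \hat{y} = \sum_{i=1}^{k} a(\hat{x}, x_i)\, y_i, \qquad a(\hat{x}, x_i) = \frac{\exp\big(c(f(\hat{x}), g(x_i))\big)}{\sum_{j=1}^{k} \exp\big(c(f(\hat{x}), g(x_j))\big)} $$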
38.
AGENDA
◆Deep Learning Topics
◆NIPS 2016 Overview
◆Generative Adversarial Networks(GANs)
◆Recurrent Neural Networks(RNNs)
◆GANs
◆GANs in NIPS2016
◆Recent GANs
◆RNNs in NIPS2016