Frontiers of Vision and Language:
Bridging Images and Texts by Deep Learning
The University of Tokyo
Yoshitaka Ushiku
losnuevetoros
Documents = Vision + Language
Vision & Language:
an emerging topic
• Integration of CV, NLP
and ML techniques
• Several driving factors
– Impact of Deep Learning
• Image recognition (CV)
• Machine translation (NLP)
– Growth of user-generated
content
– Exploratory research on
Vision and Language
2012: Impact of Deep Learning
Academic AI startup A famous company
Many slides refer to the first use of CNN (AlexNet) on ImageNet
Large gap of error rates
on ImageNet
1st team: 15.3%
2nd team: 26.2%
2012: Impact of Deep Learning
According to the official site…
1st team w/ DL
Error rate: 15%
2nd team w/o DL
Error rate: 26%
[http://image-net.org/challenges/LSVRC/2012/results.html]
It’s me!!
2014: Another impact of Deep Learning
• Deep learning appears in machine translation
[Sutskever+, NIPS 2014]
– LSTM [Hochreiter+Schmidhuber, 1997] solves the vanishing gradient
problem in RNNs
→ Can model relations between distant words in a sentence
– A four-layer LSTM is trained in an end-to-end manner
→ Comparable to the state of the art (English to French)
• Emergence of common techniques such as CNNs/RNNs
Lowering the barrier to entry for CV+NLP
Growth of user-generated content
Especially in content posting/sharing services
• Facebook: 300 million photos per day
• YouTube: 400 hours of video per minute
Pōhutukawa blooms this
time of the year in New
Zealand. As the flowers
fall, the ground
underneath the trees look
spectacular.
Pairs of a sentence
+ a video / photo
→Collectable in
large quantities
Exploratory research on Vision and Language
Captioning an image associated with its article
[Feng+Lapata, ACL 2010]
• Input: article + image Output: caption for image
• Dataset: Sets of article + image + caption
× 3361
King Toupu IV died at the
age of 88 last week.
As a result of these backgrounds:
Various research topics such as …
Image Captioning
Group of people sitting
at a table with a dinner.
Tourists are standing on
the middle of a flat desert.
[Ushiku+, ICCV 2015]
Video Captioning
A man is holding a box of doughnuts.
Then he and a woman are standing next each other.
Then she is holding a plate of food.
[Shin+, ICIP 2016]
Multilingual + Image Caption Translation
Ein Masten mit zwei Ampeln
fur Autofahrer. (German)
A pole with two lights
for drivers. (English)
[Hitschler+, ACL 2016]
Visual Question Answering [Fukui+, EMNLP 2016]
Image Generation from Captions
This bird is blue with white
and has a very short beak.
This flower is white and
yellow in color, with petals
that are wavy and smooth.
[Zhang+, 2016]
Goal of this keynote
Looking over research on vision & language
• Historical flow of each area
• Changes brought by Deep Learning
× Deep Learning enabled this research
✓ Deep Learning boosted this research
1. Image Captioning
2. Video Captioning
3. Multilingual + Image Caption Translation
4. Visual Question Answering
5. Image Generation from Captions
Frontiers of Vision and Language 1
Image Captioning
Every picture tells a story
Dataset:
Images + <object, action, scene> + Captions
1. Predict <object, action, scene> for an input
image using MRF
2. Search for the existing caption associated with
similar <object, action, scene>
<Horse, Ride, Field>
[Farhadi+, ECCV 2010]
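The retrieval step above can be sketched as a nearest-neighbor search over <object, action, scene> triples. The similarity below (count of matching elements) is a simplification of the paper's actual MRF-based scoring, and the dataset is made up:

```python
def triple_similarity(a, b):
    """Count matching elements of two <object, action, scene> triples."""
    return sum(x == y for x, y in zip(a, b))

def retrieve_caption(predicted, dataset):
    """Return the existing caption whose triple best matches the prediction."""
    best = max(dataset, key=lambda pair: triple_similarity(predicted, pair[0]))
    return best[1]

dataset = [
    (("horse", "ride", "field"), "A person rides a horse across the field."),
    (("dog", "sleep", "ground"), "A dog sleeps on the ground."),
    (("train", "move", "track"), "A train moves along the track."),
]

caption = retrieve_caption(("horse", "stand", "field"), dataset)
# → "A person rides a horse across the field."
```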
Every picture tells a story
<pet, sleep, ground>
See something unexpected.
<transportation, move, track>
A man stands next to a train
on a cloudy day.
[Farhadi+, ECCV 2010]
Retrieve? Generate?
• Retrieve
– A small gray dog on a leash.
• Generate
– Template-based
dog+stand ⇒ A dog stands.
– Template-free
A small white dog standing on a leash.
A small gray dog
on a leash.
A black dog
standing in
grassy area.
A small white dog
wearing a flannel
warmer.
Input Dataset
Captioning with multi-keyphrases
[Ushiku+, ACM MM 2012]
Benefits of Deep Learning
• Refinement of image recognition [Krizhevsky+, NIPS 2012]
• Deep learning appears in machine translation
[Sutskever+, NIPS 2014]
– LSTM [Hochreiter+Schmidhuber, 1997] solves the vanishing gradient
problem in RNNs
→ Can model relations between distant words in a sentence
– A four-layer LSTM is trained in an end-to-end manner
→ Comparable to the state of the art (English to French)
Emergence of common techniques such as CNNs/RNNs
Lowering the barrier to entry for CV+NLP
Google NIC
Concatenation of Google’s methods
• GoogLeNet [Szegedy+, CVPR 2015]
• MT with LSTM
[Sutskever+, NIPS 2014]
Caption (word sequence) S_0 … S_N for image I
S_0: beginning of the sentence
S_1 = LSTM(CNN(I))
S_t = LSTM(S_{t-1}), t = 2 … N - 1
S_N: end of the sentence
[Vinyals+, CVPR 2015]
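The generation rule above (S_1 = LSTM(CNN(I)); S_t = LSTM(S_{t-1})) can be sketched as a greedy decoding loop. The lookup table below is a made-up stand-in for the trained CNN+LSTM, and the real system searches with beam search rather than greedily:

```python
def greedy_decode(image_feature, step, max_len=20):
    """Unroll a captioning model greedily: feed the image once to get S_1,
    then feed back the previously emitted word until the end token."""
    words = []
    word = step(image_feature)       # S_1 = LSTM(CNN(I))
    while word != "</s>" and len(words) < max_len:
        words.append(word)
        word = step(word)            # S_t = LSTM(S_{t-1})
    return " ".join(words)

# Made-up stand-in for CNN+LSTM: a table of most likely next words.
next_word = {"img042": "a", "a": "small", "small": "dog", "dog": "</s>"}
caption = greedy_decode("img042", next_word.get)  # "a small dog"
```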
Examples of generated captions
[https://github.com/tensorflow/models/tree/master/im2txt]
[Vinyals+, CVPR 2015]
Comparison to [Ushiku+, ACM MM 2012]
Input image
[Ushiku+, ACM MM 2012]:
Image recognition: Fisher Vector + linear classifier
Machine translation: log-linear model + beam search
Neural image captioning:
Image recognition: Convolutional Neural Network
Machine translation: Recurrent Neural Network + beam search
Both estimate important words, then connect the words with a grammar model
• Trained using only images and captions
• The two approaches are similar to each other
Current development: Accuracy
• Attention-based captioning [Xu+, ICML 2015]
– Attends to relevant image regions when predicting each word
– Both the attention and caption models are trained
using pairs of an image & caption
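Soft attention can be sketched as a softmax over per-region scores followed by a weighted sum of region features. In the real model the scores come from a learned network conditioned on the LSTM state; here they are made up:

```python
import math

def softmax(scores):
    """Normalize scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(region_features, scores):
    """Weighted sum of region features under softmax attention."""
    weights = softmax(scores)
    dim = len(region_features[0])
    return [sum(w * f[d] for w, f in zip(weights, region_features))
            for d in range(dim)]

regions = [[1.0, 0.0], [0.0, 1.0]]     # two image regions, 2-D features
context = attend(regions, [2.0, 0.0])  # first region dominates, ≈ [0.88, 0.12]
```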
Current development: Problem setting
Dense captioning
[Lin+, BMVC 2015] [Johnson+, CVPR 2016]
Current development: Problem setting
Generating captions for a photo sequence
[Park+Kim, NIPS 2015][Huang+, NAACL 2016]
The family
got
together for
a cookout.
They had a
lot of
delicious
food.
The dog
was happy
to be there.
They had a
great time
on the
beach.
They even
had a swim
in the water.
Current development: Problem setting
Captioning using sentiment terms
[Mathews+, AAAI 2016][Shin+, BMVC 2016]
Neutral caption
Positive caption
Frontiers of Vision and Language 2
Video Captioning
Before Deep Learning
• Grounding of language and objects in videos
[Yu+Siskind, ACL 2013]
– Learning from only videos and their captions
– Experiments in a small setting with few objects
– Controlled and small dataset
• Deep Learning should suit this problem
– Image Captioning: single image → word sequence
– Video Captioning: image sequence → word
sequence
End-to-end learning by Deep Learning
• LRCN
[Donahue+, CVPR 2015]
– CNN+RNN for
• Action recognition
• Image / Video
Captioning
• Video to Text
[Venugopalan+, ICCV 2015]
– CNNs to recognize
• Objects from RGB frames
• Actions from flow images
– RNN for captioning
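Before full sequence-to-sequence models, a common video-captioning baseline was to mean-pool per-frame CNN features into a single video feature and then caption it like a still image. A sketch with made-up 3-D features:

```python
def mean_pool(frame_features):
    """Average per-frame CNN features into a single video feature."""
    n = len(frame_features)
    dim = len(frame_features[0])
    return [sum(f[d] for f in frame_features) / n for d in range(dim)]

frames = [[1.0, 0.0, 2.0], [3.0, 0.0, 0.0]]  # two frames' CNN features
video_feature = mean_pool(frames)            # [2.0, 0.0, 1.0]
```

The pooled vector then plays the role of CNN(I) in the image-captioning decoder.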
Video Captioning
A man is holding a box of doughnuts.
Then he and a woman are standing next each other.
Then she is holding a plate of food.
[Shin+, ICIP 2016]
Video Captioning
A boat is floating on the water near a mountain.
And a man riding a wave on top of a surfboard.
Then he on the surfboard in the water.
[Shin+, ICIP 2016]
Video Retrieval from Caption
• Input: Captions
• Output: A video related to the caption
Retrieves a 10-sec video clip from a 40-min database!
• Video captioning is also addressed
A woman in blue is
playing ping pong in a
room.
A guy is skiing with no
shirt on and yellow
snow pants.
A man is water skiing
while attached to a
long rope.
[Yamaguchi+, ICCV 2017]
Frontiers of Vision and Language 3
Multilingual +
Image Caption Translation
Towards multiple languages
Datasets with multilingual captions
• IAPR TC12 [Grubinger+, 2006]: English + German
• Multi30K [Elliott+, 2016]: English + German
• STAIR Captions [Yoshikawa+, 2017]:
English + Japanese
Development of cross-lingual tasks
• Non-English-caption generation
• Image Caption Translation
Input: Pair of a caption in Language A + an image
or A caption in Language A
Output: Caption in Language B
Non-English-caption generation
Most research generates English captions. Non-English examples:
• Japanese [Miyazaki+Shimizu, ACL 2016]
• Chinese [Li+, ICMR 2016]
• Turkish [Unal+, SIU 2016]
Çimlerde koşan bir köpek (a dog running on the grass)
金色头发的小女孩 (a little girl with golden hair)
柵の中にキリンが一頭立っています (a giraffe is standing inside a fence)
Just collecting non-English captions?
Transfer learning among languages
[Miyazaki+Shimizu, ACL 2016]
• Vision–language grounding W_im is transferred
• Efficient learning using small amount of captions
an elephant is … ↔ 一匹の象が…
Image Caption Translation
Machine translation via visual data
Images can boost MT [Calixto+, 2012]
• Example below (English to Portuguese):
Does the word “seal” in English
– mean “seal” similar to “stamp”?
– mean “seal” which is a sea animal?
• [Calixto+, 2012] argue that the mistranslation can be
avoided using a related image (without experiments)
Mistranslation!
Input: Caption in Language A + image
• Caption translation via an associated image
[Elliott+, 2015] [Hitschler+, ACL 2016]
– Generate translation candidates
– Re-rank the candidates using similar images’
captions in Language B
Eine Person in
einem Anzug
und Krawatte
und einem Rock.
(In German)
Translation w/o the related image
A person in a suit and tie
and a rock.
Translation with the related image
A person in a suit and tie
and a skirt.
Input: Caption in Language A
• Cross-lingual document retrieval via images
[Funaki+Nakayama, EMNLP 2015]
• Zero-shot machine translation
[Nakayama+Nishida, 2017]
Frontiers of Vision and Language 4
Visual Question Answering
Visual Question Answering (VQA)
First proposed in human–computer interaction research
• VizWiz [Bigham+, UIST 2010]
Questions manually answered on AMT
• First automated approach (w/o Deep Learning)
[Malinowski+Fritz, NIPS 2014]
• Similar term: Visual Turing Test [Malinowski+Fritz, 2014]
VQA: Visual Question Answering
• Established VQA as an AI problem
– Provided a benchmark dataset
– Experimental results with reasonable baselines
• A portal web site is also maintained
– http://www.visualqa.org/
– Annual competition for VQA accuracy
[Antol+, ICCV 2015]
What color are her eyes?
What is the mustache made of?
VQA Dataset
Questions and answers collected on AMT
• Over 100K real images and 30K abstract images
• About 700K questions + 10 answers for each
VQA=Multiclass Classification
Feature z_{I+Q} is fed to a standard classifier
Question 𝑄
What objects are
found on the bed?
Answer 𝐴
bed sheets, pillow
Image 𝐼
Image feature x_I
Question feature x_Q
Integrated feature z_{I+Q}
Development of VQA
How to compute the integrated feature z_{I+Q}?
• VQA [Antol+, ICCV 2015]: just concatenate them
z_{I+Q} = [x_I ; x_Q]
• Summation
e.g. summation of an attention-weighted image feature
and a question feature [Xu+Saenko, ECCV 2016]
z_{I+Q} = x_I + x_Q
• Multiplication
e.g. bilinear multiplication using DFT
[Fukui+, EMNLP 2016]
z_{I+Q} = x_I ∘ x_Q
• Hybrid of summation and multiplication
e.g. concatenation of sum and product
[Saito+, ICME 2017]
z_{I+Q} = [x_I + x_Q ; x_I ∘ x_Q]
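The four fusion schemes can be sketched directly on toy 2-D features. Real systems use high-dimensional features, and [Fukui+] use compact bilinear pooling rather than the plain elementwise product shown here:

```python
def concat(x_i, x_q):
    """z = [x_I ; x_Q] — concatenation."""
    return x_i + x_q

def summation(x_i, x_q):
    """z = x_I + x_Q — elementwise sum."""
    return [a + b for a, b in zip(x_i, x_q)]

def product(x_i, x_q):
    """z = x_I ∘ x_Q — elementwise product."""
    return [a * b for a, b in zip(x_i, x_q)]

def hybrid(x_i, x_q):
    """z = [x_I + x_Q ; x_I ∘ x_Q] — sum and product concatenated."""
    return summation(x_i, x_q) + product(x_i, x_q)

x_img, x_qst = [1.0, 2.0], [3.0, 4.0]   # toy image / question features
z = hybrid(x_img, x_qst)                # [4.0, 6.0, 3.0, 8.0]
```

Whichever fusion is chosen, z_{I+Q} then goes into the answer classifier.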
VQA Challenge
Examples from competition results
Q: What is the woman holding?
GT A: laptop
Machine A: laptop
Q: Is it going to rain soon?
GT A: yes
Machine A: yes
VQA Challenge
Examples from competition results
Q: Why is there snow on one
side of the stream and clear
grass on the other?
GT A: shade
Machine A: yes
Q: Is the hydrant painted a new
color?
GT A: yes
Machine A: no
Frontiers of Vision and Language 5
Image Generation from Captions
Image generation from input caption
Photo-realistic image generation itself is difficult
• [Mansimov+, ICLR 2016]: Incrementally draw using LSTM
• N.B. Photo synthesis is well studied [Hays+Efros, 2007]
Generative Adversarial Networks (GAN)
[Goodfellow+, NIPS 2014]
• Unconditional generative model
• Adversarial learning of Generator and Discriminator
• GAN using convolution … DCGAN [Radford+, ICLR 2016]
Before Conditional Generative Models
Generator
Random vector → Image
Discriminator
Discriminates real or fake
is a fake
image from Generator!
Before Conditional Generative Models
Discriminator: is a … hmm
Add a Caption to Generator and Discriminator
Conditional Generative Models
Generator tries to generate an image that is
・photo-realistic
・related to the caption
Discriminator tries to detect an image that is
・fake
・unrelated to the caption
[Reed+, ICML 2016]
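The underlying adversarial objective can be sketched numerically. This is the standard unconditional GAN loss of [Goodfellow+, NIPS 2014]; the conditional variant of [Reed+] additionally feeds a caption embedding to both networks:

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator loss: -[log D(x) + log(1 - D(G(z)))].
    d_real / d_fake are D's outputs on a real and a generated image."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    """Generator's non-saturating loss: -log D(G(z))."""
    return -math.log(d_fake)

# A discriminator that spots fakes has low loss; a fooled one does not.
confident = d_loss(d_real=0.99, d_fake=0.01)
fooled = d_loss(d_real=0.5, d_fake=0.5)
```

Training alternates between lowering `d_loss` (Discriminator step) and lowering `g_loss` (Generator step).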
Examples of generated images
• Birds (CUB) / Flowers (Oxford-102)
– About 10K images & 5 captions for each image
– 200 kinds of birds / 102 kinds of flowers
A tiny bird, with a tiny beak,
tarsus and feet, a blue crown,
blue coverts, and black
cheek patch
Bright droopy yellow petals
with burgundy streaks, and a
yellow stigma
[Reed+, ICML 2016]
Towards more realistic image generation
StackGAN [Zhang+, 2016]
Two-step GANs
• The first GAN generates a small, fuzzy image
• The second GAN enlarges and refines it
Examples of generated images
This bird is blue with white
and has a very short beak.
This flower is white and
yellow in color, with petals
that are wavy and smooth.
[Zhang+, 2016]
N.B. Results use datasets specialized in birds / flowers
→ Further breakthroughs are needed to generate general images
Take-home Messages
• Looked over research on vision and language
1. Image Captioning
2. Video Captioning
3. Multilingual + Image Caption Translation
4. Visual Question Answering
5. Image Generation from Captions
• Contributions of Deep Learning
– Most research themes existed before Deep Learning
– Commoditized techniques for processing images, videos and natural
language
– Evolution of recognition and generation
Toward a new stage of vision and language!
 
Generative Adversarial Network and its Applications to Speech Processing an...
Generative Adversarial Network and its Applications to Speech Processing an...Generative Adversarial Network and its Applications to Speech Processing an...
Generative Adversarial Network and its Applications to Speech Processing an...
 
Teaching Chinese with Mac Tools
Teaching Chinese with Mac ToolsTeaching Chinese with Mac Tools
Teaching Chinese with Mac Tools
 
ICDM 2019 Tutorial: Speech and Language Processing: New Tools and Applications
ICDM 2019 Tutorial: Speech and Language Processing: New Tools and ApplicationsICDM 2019 Tutorial: Speech and Language Processing: New Tools and Applications
ICDM 2019 Tutorial: Speech and Language Processing: New Tools and Applications
 
[246]QANet: Towards Efficient and Human-Level Reading Comprehension on SQuAD
[246]QANet: Towards Efficient and Human-Level Reading Comprehension on SQuAD[246]QANet: Towards Efficient and Human-Level Reading Comprehension on SQuAD
[246]QANet: Towards Efficient and Human-Level Reading Comprehension on SQuAD
 
From Semantics to Self-supervised Learning for Speech and Beyond
From Semantics to Self-supervised Learning for Speech and BeyondFrom Semantics to Self-supervised Learning for Speech and Beyond
From Semantics to Self-supervised Learning for Speech and Beyond
 
Deep Representation: Building a Semantic Image Search Engine
Deep Representation: Building a Semantic Image Search EngineDeep Representation: Building a Semantic Image Search Engine
Deep Representation: Building a Semantic Image Search Engine
 
Mobile videos 15.6.22
Mobile videos 15.6.22Mobile videos 15.6.22
Mobile videos 15.6.22
 
Integrating Differentiated Instruction in Inclusive Classroom.pptx
Integrating Differentiated Instruction in Inclusive Classroom.pptxIntegrating Differentiated Instruction in Inclusive Classroom.pptx
Integrating Differentiated Instruction in Inclusive Classroom.pptx
 
画像キャプションと動作認識の最前線 〜データセットに注目して〜(第17回ステアラボ人工知能セミナー)
画像キャプションと動作認識の最前線 〜データセットに注目して〜(第17回ステアラボ人工知能セミナー)画像キャプションと動作認識の最前線 〜データセットに注目して〜(第17回ステアラボ人工知能セミナー)
画像キャプションと動作認識の最前線 〜データセットに注目して〜(第17回ステアラボ人工知能セミナー)
 
Blending in the Open
Blending in the OpenBlending in the Open
Blending in the Open
 
The NLP Muppets revolution!
The NLP Muppets revolution!The NLP Muppets revolution!
The NLP Muppets revolution!
 
Digital Storytelling ITSC
Digital Storytelling ITSCDigital Storytelling ITSC
Digital Storytelling ITSC
 
vim
vimvim
vim
 
Understanding Deep Learning
Understanding Deep LearningUnderstanding Deep Learning
Understanding Deep Learning
 
Powerpoint A R T
Powerpoint A R TPowerpoint A R T
Powerpoint A R T
 
MediaEval 2012 Opening
MediaEval 2012 OpeningMediaEval 2012 Opening
MediaEval 2012 Opening
 
170704admnet beamer-public
170704admnet beamer-public170704admnet beamer-public
170704admnet beamer-public
 

More from Yoshitaka Ushiku

機械学習を民主化する取り組み
機械学習を民主化する取り組み機械学習を民主化する取り組み
機械学習を民主化する取り組みYoshitaka Ushiku
 
ドメイン適応の原理と応用
ドメイン適応の原理と応用ドメイン適応の原理と応用
ドメイン適応の原理と応用Yoshitaka Ushiku
 
Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...
Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...
Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...Yoshitaka Ushiku
 
これからの Vision & Language ~ Acadexit した4つの理由
これからの Vision & Language ~ Acadexit した4つの理由これからの Vision & Language ~ Acadexit した4つの理由
これからの Vision & Language ~ Acadexit した4つの理由Yoshitaka Ushiku
 
視覚と対話の融合研究
視覚と対話の融合研究視覚と対話の融合研究
視覚と対話の融合研究Yoshitaka Ushiku
 
Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...
Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...
Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...Yoshitaka Ushiku
 
Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...
Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...
Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...Yoshitaka Ushiku
 
Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)
Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)
Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)Yoshitaka Ushiku
 
Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...
Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...
Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...Yoshitaka Ushiku
 
今後のPRMU研究会を考える
今後のPRMU研究会を考える今後のPRMU研究会を考える
今後のPRMU研究会を考えるYoshitaka Ushiku
 
Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)
Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)
Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)Yoshitaka Ushiku
 
Asymmetric Tri-training for Unsupervised Domain Adaptation
Asymmetric Tri-training for Unsupervised Domain AdaptationAsymmetric Tri-training for Unsupervised Domain Adaptation
Asymmetric Tri-training for Unsupervised Domain AdaptationYoshitaka Ushiku
 
Deep Learning による視覚×言語融合の最前線
Deep Learning による視覚×言語融合の最前線Deep Learning による視覚×言語融合の最前線
Deep Learning による視覚×言語融合の最前線Yoshitaka Ushiku
 
Leveraging Visual Question Answering for Image-Caption Ranking (関東CV勉強会 ECCV ...
Leveraging Visual Question Answeringfor Image-Caption Ranking (関東CV勉強会 ECCV ...Leveraging Visual Question Answeringfor Image-Caption Ranking (関東CV勉強会 ECCV ...
Leveraging Visual Question Answering for Image-Caption Ranking (関東CV勉強会 ECCV ...Yoshitaka Ushiku
 
We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...
We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...
We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...Yoshitaka Ushiku
 
ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)
ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)
ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)Yoshitaka Ushiku
 
Generating Notifications for Missing Actions: Don’t forget to turn the lights...
Generating Notifications for Missing Actions:Don’t forget to turn the lights...Generating Notifications for Missing Actions:Don’t forget to turn the lights...
Generating Notifications for Missing Actions: Don’t forget to turn the lights...Yoshitaka Ushiku
 
画像キャプションの自動生成
画像キャプションの自動生成画像キャプションの自動生成
画像キャプションの自動生成Yoshitaka Ushiku
 
Unsupervised Object Discovery and Localization in the Wild: Part-Based Match...
Unsupervised Object Discovery and Localization in the Wild:Part-Based Match...Unsupervised Object Discovery and Localization in the Wild:Part-Based Match...
Unsupervised Object Discovery and Localization in the Wild: Part-Based Match...Yoshitaka Ushiku
 
CVPR 2015 論文紹介(NTT研究所内勉強会用資料)
CVPR 2015 論文紹介(NTT研究所内勉強会用資料)CVPR 2015 論文紹介(NTT研究所内勉強会用資料)
CVPR 2015 論文紹介(NTT研究所内勉強会用資料)Yoshitaka Ushiku
 

More from Yoshitaka Ushiku (20)

機械学習を民主化する取り組み
機械学習を民主化する取り組み機械学習を民主化する取り組み
機械学習を民主化する取り組み
 
ドメイン適応の原理と応用
ドメイン適応の原理と応用ドメイン適応の原理と応用
ドメイン適応の原理と応用
 
Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...
Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...
Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vi...
 
これからの Vision & Language ~ Acadexit した4つの理由
これからの Vision & Language ~ Acadexit した4つの理由これからの Vision & Language ~ Acadexit した4つの理由
これからの Vision & Language ~ Acadexit した4つの理由
 
視覚と対話の融合研究
視覚と対話の融合研究視覚と対話の融合研究
視覚と対話の融合研究
 
Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...
Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...
Women Also Snowboard: Overcoming Bias in Captioning Models(関東CV勉強会 ECCV 2018 ...
 
Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...
Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...
Vision-and-Language Navigation: Interpreting visually-grounded navigation ins...
 
Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)
Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)
Sequence Level Training with Recurrent Neural Networks (関東CV勉強会 強化学習論文読み会)
 
Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...
Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...
Learning Cooperative Visual Dialog with Deep Reinforcement Learning(関東CV勉強会 I...
 
今後のPRMU研究会を考える
今後のPRMU研究会を考える今後のPRMU研究会を考える
今後のPRMU研究会を考える
 
Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)
Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)
Self-Critical Sequence Training for Image Captioning (関東CV勉強会 CVPR 2017 読み会)
 
Asymmetric Tri-training for Unsupervised Domain Adaptation
Asymmetric Tri-training for Unsupervised Domain AdaptationAsymmetric Tri-training for Unsupervised Domain Adaptation
Asymmetric Tri-training for Unsupervised Domain Adaptation
 
Deep Learning による視覚×言語融合の最前線
Deep Learning による視覚×言語融合の最前線Deep Learning による視覚×言語融合の最前線
Deep Learning による視覚×言語融合の最前線
 
Leveraging Visual Question Answering for Image-Caption Ranking (関東CV勉強会 ECCV ...
Leveraging Visual Question Answeringfor Image-Caption Ranking (関東CV勉強会 ECCV ...Leveraging Visual Question Answeringfor Image-Caption Ranking (関東CV勉強会 ECCV ...
Leveraging Visual Question Answering for Image-Caption Ranking (関東CV勉強会 ECCV ...
 
We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...
We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...
We Are Humor Beings: Understanding and Predicting Visual Humor (関東CV勉強会 CVPR ...
 
ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)
ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)
ごあいさつ 或いはMATLAB教徒がPythonistaに改宗した話 (関東CV勉強会)
 
Generating Notifications for Missing Actions: Don’t forget to turn the lights...
Generating Notifications for Missing Actions:Don’t forget to turn the lights...Generating Notifications for Missing Actions:Don’t forget to turn the lights...
Generating Notifications for Missing Actions: Don’t forget to turn the lights...
 
画像キャプションの自動生成
画像キャプションの自動生成画像キャプションの自動生成
画像キャプションの自動生成
 
Unsupervised Object Discovery and Localization in the Wild: Part-Based Match...
Unsupervised Object Discovery and Localization in the Wild:Part-Based Match...Unsupervised Object Discovery and Localization in the Wild:Part-Based Match...
Unsupervised Object Discovery and Localization in the Wild: Part-Based Match...
 
CVPR 2015 論文紹介(NTT研究所内勉強会用資料)
CVPR 2015 論文紹介(NTT研究所内勉強会用資料)CVPR 2015 論文紹介(NTT研究所内勉強会用資料)
CVPR 2015 論文紹介(NTT研究所内勉強会用資料)
 

Recently uploaded

Stronger Together: Developing an Organizational Strategy for Accessible Desig...
Stronger Together: Developing an Organizational Strategy for Accessible Desig...Stronger Together: Developing an Organizational Strategy for Accessible Desig...
Stronger Together: Developing an Organizational Strategy for Accessible Desig...caitlingebhard1
 
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider  Progress from Awareness to Implementation.pptxTales from a Passkey Provider  Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider Progress from Awareness to Implementation.pptxFIDO Alliance
 
Continuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
Continuing Bonds Through AI: A Hermeneutic Reflection on ThanabotsContinuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
Continuing Bonds Through AI: A Hermeneutic Reflection on ThanabotsLeah Henrickson
 
The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...
The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...
The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...SOFTTECHHUB
 
How to Check CNIC Information Online with Pakdata cf
How to Check CNIC Information Online with Pakdata cfHow to Check CNIC Information Online with Pakdata cf
How to Check CNIC Information Online with Pakdata cfdanishmna97
 
Modernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using BallerinaModernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using BallerinaWSO2
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2
 
Design and Development of a Provenance Capture Platform for Data Science
Design and Development of a Provenance Capture Platform for Data ScienceDesign and Development of a Provenance Capture Platform for Data Science
Design and Development of a Provenance Capture Platform for Data SciencePaolo Missier
 
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)Samir Dash
 
ERP Contender Series: Acumatica vs. Sage Intacct
ERP Contender Series: Acumatica vs. Sage IntacctERP Contender Series: Acumatica vs. Sage Intacct
ERP Contender Series: Acumatica vs. Sage IntacctBrainSell Technologies
 
The Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightThe Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightSafe Software
 
Introduction to FIDO Authentication and Passkeys.pptx
Introduction to FIDO Authentication and Passkeys.pptxIntroduction to FIDO Authentication and Passkeys.pptx
Introduction to FIDO Authentication and Passkeys.pptxFIDO Alliance
 
Harnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptx
Harnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptxHarnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptx
Harnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptxFIDO Alliance
 
Event-Driven Architecture Masterclass: Challenges in Stream Processing
Event-Driven Architecture Masterclass: Challenges in Stream ProcessingEvent-Driven Architecture Masterclass: Challenges in Stream Processing
Event-Driven Architecture Masterclass: Challenges in Stream ProcessingScyllaDB
 
Choreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software EngineeringChoreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software EngineeringWSO2
 
Portal Kombat : extension du réseau de propagande russe
Portal Kombat : extension du réseau de propagande russePortal Kombat : extension du réseau de propagande russe
Portal Kombat : extension du réseau de propagande russe中 央社
 
Intro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptxIntro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptxFIDO Alliance
 
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)Paige Cruz
 
WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...
WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...
WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...WSO2
 

Recently uploaded (20)

Stronger Together: Developing an Organizational Strategy for Accessible Desig...
Stronger Together: Developing an Organizational Strategy for Accessible Desig...Stronger Together: Developing an Organizational Strategy for Accessible Desig...
Stronger Together: Developing an Organizational Strategy for Accessible Desig...
 
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider  Progress from Awareness to Implementation.pptxTales from a Passkey Provider  Progress from Awareness to Implementation.pptx
Tales from a Passkey Provider Progress from Awareness to Implementation.pptx
 
Continuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
Continuing Bonds Through AI: A Hermeneutic Reflection on ThanabotsContinuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
Continuing Bonds Through AI: A Hermeneutic Reflection on Thanabots
 
The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...
The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...
The Ultimate Prompt Engineering Guide for Generative AI: Get the Most Out of ...
 
How to Check CNIC Information Online with Pakdata cf
How to Check CNIC Information Online with Pakdata cfHow to Check CNIC Information Online with Pakdata cf
How to Check CNIC Information Online with Pakdata cf
 
Modernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using BallerinaModernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using Ballerina
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering Developers
 
Design and Development of a Provenance Capture Platform for Data Science
Design and Development of a Provenance Capture Platform for Data ScienceDesign and Development of a Provenance Capture Platform for Data Science
Design and Development of a Provenance Capture Platform for Data Science
 
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
 
ERP Contender Series: Acumatica vs. Sage Intacct
ERP Contender Series: Acumatica vs. Sage IntacctERP Contender Series: Acumatica vs. Sage Intacct
ERP Contender Series: Acumatica vs. Sage Intacct
 
The Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightThe Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and Insight
 
Introduction to FIDO Authentication and Passkeys.pptx
Introduction to FIDO Authentication and Passkeys.pptxIntroduction to FIDO Authentication and Passkeys.pptx
Introduction to FIDO Authentication and Passkeys.pptx
 
Harnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptx
Harnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptxHarnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptx
Harnessing Passkeys in the Battle Against AI-Powered Cyber Threats.pptx
 
Event-Driven Architecture Masterclass: Challenges in Stream Processing
Event-Driven Architecture Masterclass: Challenges in Stream ProcessingEvent-Driven Architecture Masterclass: Challenges in Stream Processing
Event-Driven Architecture Masterclass: Challenges in Stream Processing
 
Choreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software EngineeringChoreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software Engineering
 
Portal Kombat : extension du réseau de propagande russe
Portal Kombat : extension du réseau de propagande russePortal Kombat : extension du réseau de propagande russe
Portal Kombat : extension du réseau de propagande russe
 
Intro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptxIntro to Passkeys and the State of Passwordless.pptx
Intro to Passkeys and the State of Passwordless.pptx
 
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
Observability Concepts EVERY Developer Should Know (DevOpsDays Seattle)
 
WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...
WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...
WSO2 Micro Integrator for Enterprise Integration in a Decentralized, Microser...
 
Overview of Hyperledger Foundation
Overview of Hyperledger FoundationOverview of Hyperledger Foundation
Overview of Hyperledger Foundation
 

Frontiers of Vision and Language: Bridging Images and Texts by Deep Learning

  • 1. Frontiers of Vision and Language: Bridging Images and Texts by Deep Learning The University of Tokyo Yoshitaka Ushiku losnuevetoros
  • 2. Documents = Vision + Language Vision & Language: an emerging topic • Integration of CV, NLP and ML techniques • Several backgrounds – Impact of Deep Learning • Image recognition (CV) • Machine translation (NLP) – Growth of user-generated contents – Exploratory research on Vision and Language
  • 3. 2012: Impact of Deep Learning Academic AI startup A famous company Many slides refer to the first use of CNN (AlexNet) on ImageNet
  • 4. 2012: Impact of Deep Learning Academic AI startup A famous company Large gap of error rates on ImageNet 1st team: 15.3% 2nd team: 26.2% Many slides refer to the first use of CNN (AlexNet) on ImageNet
  • 5. 2012: Impact of Deep Learning According to the official site… 1st team w/ DL Error rate: 15% 2nd team w/o DL Error rate: 26% [http://image-net.org/challenges/LSVRC/2012/results.html] It’s me!!
  • 6. 2014: Another impact of Deep Learning • Deep learning appears in machine translation [Sutskever+, NIPS 2014] – LSTM [Hochreiter+Schmidhuber, 1997] solves the vanishing gradient problem in RNNs →Dealing with relations between distant words in a sentence – A four-layer LSTM is trained in an end-to-end manner →Comparable to the state of the art (English to French) • Emergence of common techniques such as CNN/RNN →Reduction of barriers to get into CV+NLP
  • 7. Growth of user-generated contents Especially in content posting/sharing services • Facebook: 300 million photos per day • YouTube: 400 hours of video per minute “Pōhutukawa blooms this time of the year in New Zealand. As the flowers fall, the ground underneath the trees look spectacular.” Pairs of a sentence + a video/photo →Collectable in large quantities
  • 8. Exploratory research on Vision and Language Captioning an image associated with its article [Feng+Lapata, ACL 2010] • Input: article + image Output: caption for image • Dataset: sets of article + image + caption × 3361 “King Tupou IV died at the age of 88 last week.”
  • 9. Exploratory research on Vision and Language Captioning an image associated with its article [Feng+Lapata, ACL 2010] • Input: article + image Output: caption for image • Dataset: sets of article + image + caption × 3361 “King Tupou IV died at the age of 88 last week.” As a result of these backgrounds: various research topics such as …
  • 10. Image Captioning Group of people sitting at a table with a dinner. Tourists are standing on the middle of a flat desert. [Ushiku+, ICCV 2015]
  • 11. Video Captioning A man is holding a box of doughnuts. Then he and a woman are standing next each other. Then she is holding a plate of food. [Shin+, ICIP 2016]
  • 12. Multilingual + Image Caption Translation Ein Masten mit zwei Ampeln für Autofahrer. (German) A pole with two lights for drivers. (English) [Hitschler+, ACL 2016]
  • 14. Image Generation from Captions This bird is blue with white and has a very short beak. This flower is white and yellow in color, with petals that are wavy and smooth. [Zhang+, 2016]
  • 15. Goal of this keynote Looking over research on vision & language • Historical flow of each area • Changes by Deep Learning × Deep Learning enabled this research ✓ Deep Learning boosted this research 1. Image Captioning 2. Video Captioning 3. Multilingual + Image Caption Translation 4. Visual Question Answering 5. Image Generation from Captions
  • 16. Frontiers of Vision and Language 1 Image Captioning
  • 17. Every picture tells a story Dataset: Images + <object, action, scene> + Captions 1. Predict <object, action, scene> for an input image using MRF 2. Search for the existing caption associated with similar <object, action, scene> <Horse, Ride, Field> [Farhadi+, ECCV 2010]
  • 18. Every picture tells a story <pet, sleep, ground> See something unexpected. <transportation, move, track> A man stands next to a train on a cloudy day. [Farhadi+, ECCV 2010]
  • 19. Retrieve? Generate? • Retrieve • Generate – Template-based e.g. generating a Subject+Verb sentence – Template-free A small gray dog on a leash. A black dog standing in grassy area. A small white dog wearing a flannel warmer. Input Dataset
  • 20. Retrieve? Generate? • Retrieve – A small gray dog on a leash. • Generate – Template-based e.g. generating a Subject+Verb sentence – Template-free A small gray dog on a leash. A black dog standing in grassy area. A small white dog wearing a flannel warmer. Input Dataset
  • 21. Retrieve? Generate? • Retrieve – A small gray dog on a leash. • Generate – Template-based dog+stand ⇒ A dog stands. – Template-free A small gray dog on a leash. A black dog standing in grassy area. A small white dog wearing a flannel warmer. Input Dataset
  • 22. Retrieve? Generate? • Retrieve – A small gray dog on a leash. • Generate – Template-based dog+stand ⇒ A dog stands. – Template-free A small white dog standing on a leash. A small gray dog on a leash. A black dog standing in grassy area. A small white dog wearing a flannel warmer. Input Dataset
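The "retrieve" branch above can be sketched in a few lines: given a feature vector for the input image, return the caption of the nearest dataset image. The feature values and the similarity choice (cosine) below are illustrative assumptions, not the deck's exact pipeline.

```python
# Toy caption-retrieval sketch: nearest dataset image by cosine similarity.
# Feature vectors here are invented 3-d stand-ins for real image descriptors.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_caption(query_feat, dataset):
    # dataset: list of (feature_vector, caption) pairs
    best = max(dataset, key=lambda item: cosine(query_feat, item[0]))
    return best[1]

dataset = [
    ([0.9, 0.1, 0.0], "A small gray dog on a leash."),
    ([0.1, 0.8, 0.2], "A black dog standing in grassy area."),
    ([0.2, 0.1, 0.9], "A small white dog wearing a flannel warmer."),
]
print(retrieve_caption([0.85, 0.2, 0.1], dataset))
```

A query close to the first dataset feature retrieves that image's caption; retrieval can only ever return a caption that already exists in the dataset, which is exactly the limitation that motivates the "generate" branch.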
  • 25. Benefits of Deep Learning • Refinement of image recognition [Krizhevsky+, NIPS 2012] • Deep learning appears in machine translation [Sutskever+, NIPS 2014] – LSTM [Hochreiter+Schmidhuber, 1997] solves the vanishing gradient problem in RNNs →Dealing with relations between distant words in a sentence – A four-layer LSTM is trained in an end-to-end manner →Comparable to the state of the art (English to French) Emergence of common techniques such as CNN/RNN →Reduction of barriers to get into CV+NLP
  • 26. Google NIC Concatenation of Google’s methods • GoogLeNet [Szegedy+, CVPR 2015] • MT with LSTM [Sutskever+, NIPS 2014] Caption (word sequence) S_0 … S_N for image I: S_0 = beginning-of-sentence token; S_1 = LSTM(CNN(I)); S_t = LSTM(S_{t−1}), t = 2 … N − 1; S_N = end-of-sentence token [Vinyals+, CVPR 2015]
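The recurrence on this slide amounts to a simple inference loop: the CNN feature seeds the LSTM, then each generated word is fed back in until the end-of-sentence token. The sketch below uses invented stubs (`cnn`, `lstm_step` with a hard-coded transition table) in place of the trained GoogLeNet/LSTM, purely to show the control flow.

```python
# Minimal sketch of the NIC inference loop with toy stand-in functions.
def cnn(image):
    return "FEAT:" + image  # stand-in for a GoogLeNet feature vector

def lstm_step(state, token):
    # Hard-coded toy transitions standing in for a trained LSTM;
    # returns (new_state, predicted_word).
    table = {
        ("s0", None): ("s1", "a"),
        ("s1", "a"): ("s2", "dog"),
        ("s2", "dog"): ("s3", "<eos>"),
    }
    return table[(state, token)]

def generate_caption(image, max_len=10):
    _ = cnn(image)  # S_1 = LSTM(CNN(I)): the image feature seeds the decoder
    state, word = lstm_step("s0", None)
    words = []
    while word != "<eos>" and len(words) < max_len:
        words.append(word)
        state, word = lstm_step(state, word)  # S_t = LSTM(S_{t-1})
    return " ".join(words)

print(generate_caption("img.jpg"))  # -> "a dog"
```

The real system replaces this greedy loop with beam search over the LSTM's word distribution at each step.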
  • 27. Examples of generated captions [https://github.com/tensorflow/models/tree/master/im2txt] [Vinyals+, CVPR 2015]
  • 28. Comparison to [Ushiku+, ACM MM 2012] Input image — [Ushiku+, ACM MM 2012]: conventional object recognition (Fisher Vector + linear classifier) and conventional machine translation (log-linear model + beam search: estimate important words, then connect the words with a grammar model). Neural image captioning: conventional object recognition (Convolutional Neural Network) and conventional machine translation (Recurrent Neural Network + beam search). • Trained using only images and captions • Approaches are similar to each other
  • 29. Current development: Accuracy • Attention-based captioning [Xu+, ICML 2015] – Focus on some areas for predicting each word! – Both attention and caption models are trained using pairs of an image & caption
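The "focus on some areas" idea in [Xu+, ICML 2015] is, at each decoding step, a softmax over per-region relevance scores followed by a weighted sum of region features. The shapes and scalar scores below are simplifying assumptions for illustration, not the paper's exact parameterization.

```python
# Toy soft-attention sketch: softmax weights over image regions,
# then a weighted sum producing a context vector for the next word.
from math import exp

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(region_feats, scores):
    # region_feats: one feature vector per image region;
    # scores: learned relevance of each region to the word being predicted.
    weights = softmax(scores)
    dim = len(region_feats[0])
    context = [sum(w * f[d] for w, f in zip(weights, region_feats))
               for d in range(dim)]
    return weights, context

regions = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, context = attend(regions, [2.0, 0.1, 0.1])
print(weights)  # first region dominates
```

Because the weights are differentiable, attention and caption models can indeed be trained jointly from image–caption pairs, as the slide notes.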
  • 30. Current development: Problem setting Dense captioning [Lin+, BMVC 2015] [Johnson+, CVPR 2016]
  • 31. Current development: Problem setting Generating captions for a photo sequence [Park+Kim, NIPS 2015][Huang+, NAACL 2016] The family got together for a cookout. They had a lot of delicious food. The dog was happy to be there. They had a great time on the beach. They even had a swim in the water.
  • 32. Current development: Problem setting Captioning using sentiment terms [Mathews+, AAAI 2016][Shin+, BMVC 2016] Neutral caption Positive caption
  • 33. Frontiers of Vision and Language 2 Video Captioning
  • 34. Before Deep Learning • Grounding of language and objects in videos [Yu+Siskind, ACL 2013] – Learning from only videos and their captions – Experiments on a controlled and small dataset with few objects • Deep Learning should suit this problem – Image Captioning: single image → word sequence – Video Captioning: image sequence → word sequence
  • 35. End-to-end learning by Deep Learning • LRCN [Donahue+, CVPR 2015] – CNN+RNN for • Action recognition • Image / Video Captioning • Video to Text [Venugopalan+, ICCV 2015] – CNNs to recognize • Objects from RGB frames • Actions from flow images – RNN for captioning
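One simple way to see how video captioning reduces to image captioning, used by mean-pooling baselines of this era (the sequence-to-sequence models above are more sophisticated): extract one CNN feature per frame, average them, and decode with the same RNN captioner as for a single image. The feature values below are placeholders.

```python
def mean_pool(frame_features):
    # frame_features: list of per-frame CNN feature vectors (all same length).
    # Returns a single video-level feature usable by an image-caption decoder.
    n = len(frame_features)
    dim = len(frame_features[0])
    return [sum(f[d] for f in frame_features) / n for d in range(dim)]

pooled = mean_pool([[1.0, 2.0], [3.0, 4.0]])
print(pooled)  # [2.0, 3.0]
```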
  • 36. Video Captioning A man is holding a box of doughnuts. Then he and a woman are standing next each other. Then she is holding a plate of food. [Shin+, ICIP 2016]
  • 37. Video Captioning A boat is floating on the water near a mountain. And a man riding a wave on top of a surfboard. Then he on the surfboard in the water. [Shin+, ICIP 2016]
  • 38. Video Retrieval from Caption • Input: Captions • Output: A video related to the caption 10 sec video clip from 40 min database! • Video captioning is also addressed A woman in blue is playing ping pong in a room. A guy is skiing with no shirt on and yellow snow pants. A man is water skiing while attached to a long rope. [Yamaguchi+, ICCV 2017]
  • 39. Frontiers of Vision and Language 3 Multilingual + Image Caption Translation
  • 40. Towards multiple languages Datasets with multilingual captions • IAPR TC12 [Grubinger+, 2006] English + German • Multi30K [Elliott+, 2016] English + German • STAIR Captions [Yoshikawa+, 2017] English + Japanese Development of cross-lingual tasks • Non-English caption generation • Image Caption Translation Input: pair of a caption in Language A + an image, or a caption in Language A alone Output: caption in Language B
  • 42. Non-English caption generation Most research generates English captions; other languages: • Japanese [Miyazaki+Shimizu, ACL 2016] • Chinese [Li+, ICMR 2016] • Turkish [Unal+, SIU 2016] Çimlerde koşan bir köpek (a dog running on the grass) 金色头发的小女孩 (a little girl with golden hair) 柵の中にキリンが一頭立っています (a giraffe is standing inside a fence)
  • 43. Just collecting non-English captions? Transfer learning among languages [Miyazaki+Shimizu, ACL 2016] • Vision-language grounding is transferred • Efficient learning from a small amount of captions (example captions: “an elephant is …” ↔ 「一匹の象が…」 (an elephant …))
  • 45. Machine translation via visual data Images can boost MT [Calixto+, 2012] • Example below (English to Portuguese): Does the word “seal” in English – mean “seal” as in “stamp”? – or “seal” the sea animal? • [Calixto+, 2012] argue that the mistranslation can be avoided using a related image (though without experiments) Mistranslation!
  • 46. Input: Caption in Language A + image • Caption translation via an associated image [Elliott+, 2015] [Hitschler+, ACL 2016] – Generate translation candidates – Re-rank the candidates using similar images’ captions in Language B Eine Person in einem Anzug und Krawatte und einem Rock. (In German) Translation w/o the related image A person in a suit and tie and a rock. Translation with the related image A person in a suit and tie and a skirt.
  • 47. Input: Caption in Language A • Cross-lingual document retrieval via images [Funaki+Nakayama, EMNLP 2015] • Zero-shot machine translation [Nakayama+Nishida, 2017]
  • 48. Frontiers of Vision and Language 4 Visual Question Answering
  • 49. Visual Question Answering (VQA) Proposed in Human-Computer Interface research • VizWiz [Bigham+, UIST 2010] Questions answered manually on AMT • Automated for the first time (w/o Deep Learning) [Malinowski+Fritz, NIPS 2014] • Similar term: Visual Turing Test [Malinowski+Fritz, 2014]
  • 50. VQA: Visual Question Answering • Established VQA as an AI problem – Provided a benchmark dataset – Experimental results with reasonable baselines • A portal website is also maintained – http://www.visualqa.org/ – Annual competition on VQA accuracy [Antol+, ICCV 2015] What color are her eyes? What is the mustache made of?
  • 51. VQA Dataset Questions and answers collected on AMT • Over 100K real images and 30K abstract images • About 700K questions, with 10 answers each
  • 52. VQA = Multiclass Classification The integrated feature z_(I+Q) is fed to a standard classifier Question Q: What objects are found on the bed? Answer A: bed sheets, pillow Image I → image feature x_I; question feature x_Q; integrated feature z_(I+Q)
  • 53. Development of VQA How to compute the integrated feature z_(I+Q)? • VQA [Antol+, ICCV 2015]: just concatenate them: z_(I+Q) = [x_I; x_Q] • Summation, e.g. adding an attention-weighted image feature to the question feature [Xu+Saenko, ECCV 2016]: z_(I+Q) = x_I + x_Q • Multiplication, e.g. bilinear pooling computed via DFT [Fukui+, EMNLP 2016]: z_(I+Q) = x_I ∘ x_Q • Hybrid of summation and multiplication, e.g. concatenation of sum and product [Saito+, ICME 2017]: z_(I+Q) = [x_I + x_Q; x_I ∘ x_Q]
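The fusion variants on this slide can be illustrated on toy vectors. Real systems operate on learned high-dimensional features, and the compact bilinear pooling of [Fukui+, EMNLP 2016] involves an FFT-based compression that is not reproduced here; this only shows the element-wise skeletons.

```python
def fuse(x_img, x_q, mode):
    # Integrate an image feature and a question feature into z_(I+Q).
    if mode == "concat":   # [Antol+, ICCV 2015]: [x_I; x_Q]
        return x_img + x_q                      # list concatenation
    if mode == "sum":      # x_I + x_Q
        return [a + b for a, b in zip(x_img, x_q)]
    if mode == "mul":      # x_I ∘ x_Q (element-wise product)
        return [a * b for a, b in zip(x_img, x_q)]
    if mode == "hybrid":   # [x_I + x_Q; x_I ∘ x_Q]
        return fuse(x_img, x_q, "sum") + fuse(x_img, x_q, "mul")
    raise ValueError(mode)

x_i, x_q = [1.0, 2.0], [3.0, 4.0]
print(fuse(x_i, x_q, "concat"))  # [1.0, 2.0, 3.0, 4.0]
print(fuse(x_i, x_q, "hybrid"))  # [4.0, 6.0, 3.0, 8.0]
```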
  • 54. VQA Challenge Examples from competition results Q: What is the woman holding? GT A: laptop Machine A: laptop Q: Is it going to rain soon? GT A: yes Machine A: yes
  • 55. VQA Challenge Examples from competition results Q: Why is there snow on one side of the stream and clear grass on the other? GT A: shade Machine A: yes Q: Is the hydrant painted a new color? GT A: yes Machine A: no
  • 56. Frontiers of Vision and Language 5 Image Generation from Captions
  • 57. Image generation from input caption Photo-realistic image generation itself is difficult • [Mansimov+, ICLR 2016]: Incrementally draw using LSTM • N.B. Photo synthesis is well studied [Hays+Efros, 2007]
  • 58. Generative Adversarial Networks (GAN) [Goodfellow+, NIPS 2014] • Unconditional generative model • Adversarial learning of a Generator and a Discriminator • GAN using convolutions … DCGAN [Radford+, ICLR 2016] Before Conditional Generative Models Generator: random vector → image Discriminator: discriminates real from fake (“This is a fake image from the Generator!”)
  • 62. Generative Adversarial Networks (GAN) [Goodfellow+, NIPS 2014] • Unconditional generative model • Adversarial learning of a Generator and a Discriminator • GAN using convolutions … DCGAN [Radford+, ICLR 2016] Before Conditional Generative Models Generator: random vector → image Discriminator: real or fake? “This is a … hmm” (eventually fooled by the Generator)
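The adversarial game in these slides comes down to two losses: the discriminator D is pushed to output 1 on real images and 0 on generated ones, while the generator is pushed to make D output 1 on its samples. The logit values below are placeholders standing in for network outputs; the non-saturating generator loss follows [Goodfellow+, NIPS 2014].

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def discriminator_loss(logit_real, logit_fake):
    # -log D(x) - log(1 - D(G(z))): low when D is right on both inputs.
    return -math.log(sigmoid(logit_real)) - math.log(1.0 - sigmoid(logit_fake))

def generator_loss(logit_fake):
    # Non-saturating form: -log D(G(z)), low when the generator fools D.
    return -math.log(sigmoid(logit_fake))

# A confident, correct discriminator has low loss; a fooled one has high loss.
print(discriminator_loss(3.0, -3.0))  # small: D is right
print(discriminator_loss(-3.0, 3.0))  # large: D is fooled
print(generator_loss(3.0))            # small: generator is winning
```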
  • 63. Add a Caption to the Generator and Discriminator Conditional Generative Models Generator tries to generate an image that is ・photo-realistic ・related to the caption Discriminator tries to detect an image that is ・fake ・unrelated to the caption [Reed+, ICML 2016]
  • 64. Examples of generated images • Birds (CUB) / Flowers (Oxford-102) – About 10K images & 5 captions for each image – 200 kinds of birds / 102 kinds of flowers A tiny bird, with a tiny beak, tarsus and feet, a blue crown, blue coverts, and black cheek patch Bright droopy yellow petals with burgundy streaks, and a yellow stigma [Reed+, ICML 2016]
  • 65. Towards more realistic image generation StackGAN [Zhang+, 2016] Two-step GANs • The first GAN generates a small, fuzzy image • The second GAN enlarges and refines it
  • 66. Examples of generated images This bird is blue with white and has a very short beak. This flower is white and yellow in color, with petals that are wavy and smooth. [Zhang+, 2016]
  • 67. Examples of generated images This bird is blue with white and has a very short beak. This flower is white and yellow in color, with petals that are wavy and smooth. [Zhang+, 2016] N.B. These results use datasets specialized in birds / flowers → further breakthroughs are necessary to generate general images
  • 68. Take-home Messages • An overview of research on vision and language 1. Image Captioning 2. Video Captioning 3. Multilingual + Image Caption Translation 4. Visual Question Answering 5. Image Generation from Captions • Contributions of Deep Learning – Most research themes existed before Deep Learning – Common techniques for processing images, videos and natural language – Evolution of both recognition and generation Towards a new stage bridging vision and language!

Editor's Notes

  1. In ILSVRC 2012, the only team that used a CNN, the first in the history of ILSVRC, won first place with overwhelming accuracy. This event triggered the widespread adoption of deep learning, and the result has been reported on many slides since. As you can see, slides from academics, from AI startups participating in this GTC, and from the famous company holding this GTC all report the same thing.
  2. The slide says that there was a large gap of error rates on ImageNet: whereas the 2nd team achieved 26.2%, the 1st team achieved 15.3%. Again, there was a large gap of error rates. The 1st team is very famous, but some of you may be curious about the 2nd team: who are they?
  3. You can easily find the answer because the official site still has the information about ILSVRC 2012. Yes, the 1st team, with deep learning, achieved a 15% error rate; the 2nd team, without deep learning, achieved 26% … and if you scroll down this web page, the members of the second team are shown in a table. There seem to be several people on the second team, and now please remember this name. It is hard to pronounce: Yoshitaka Ushiku.
  4. Therefore, we propose a new approach by solving a novel problem, the “multi-keyphrase problem”. We assume that the contents of images can be … For example, if the image of the locomotive is the input, two keyphrases “” and “” are important. With only these keyphrases, we can generate a sentence by connecting them using grammatical knowledge. And even a rare image like the last one can be explained by estimating “man bites”, which describes the relation between “man” and “bite”. (Knock, and then read) = “comes down to”