This document describes a neural network model for generating image captions to help visually impaired people understand images. A convolutional neural network extracts image features, which are fed into a recurrent neural network or long short-term memory network to generate natural language captions. The model achieves performance comparable to the state of the art on image captioning tasks and could greatly improve the lives of visually impaired individuals by allowing them to understand images through automatically generated captions.
Automated Neural Image Caption Generator for Visually Impaired People
Automated Neural Image Caption Generator
for Visually Impaired People
Christopher Elamri, Teun de Planque
Department of Computer Science
Stanford University
{mcelamri, teun}@stanford.edu
Abstract
Being able to automatically describe the content of an image using properly formed English sentences is a challenging task, but it could have great impact by helping visually impaired people better understand their surroundings. Most modern mobile phones are able to capture photographs, making it possible for the visually impaired to take images of their environments. These images can then be used to generate captions that can be read out loud to the visually impaired, so that they can get a better sense of what is happening around them. In this paper, we present a deep recurrent architecture that automatically generates brief explanations of images. Our models use a convolutional neural network (CNN) to extract features from an image. These features are then fed into a vanilla recurrent neural network (RNN) or a Long Short-Term Memory (LSTM) network to generate a description of the image in valid English. Our models achieve performance comparable to the state of the art, and generate highly descriptive captions that can potentially greatly improve the lives of visually impaired people.
1 Introduction
Visual impairment, also known as vision impairment or vision loss, is a decreased ability to see to a degree that causes problems not fixable by usual means, such as glasses. According to the World Health Organization, 285 million people are visually impaired worldwide, including over 39 million blind people [1]. Living with visual impairment can be challenging, since many daily-life situations are difficult to understand without good visual acuity.

Technology has the potential to significantly improve the lives of visually impaired people (Figure 1). Access technology such as screen readers, screen magnifiers, and refreshable Braille displays enables the blind to use mainstream computer applications and mobile phones, giving them access to previously inaccessible information. Another such technology that could improve the lives of the visually impaired is image caption generation. Most modern mobile phones are able to capture photographs, making it possible for the visually impaired to take images of their surroundings. These images can be used to generate captions that can be read out loud to give visually impaired people a better understanding of their surroundings. Image caption generation can also make the web more accessible to visually impaired people. The last decade has seen the triumph of the rich graphical desktop, replete with colourful icons, controls, buttons, and images. Automated caption generation of online images can make the web a more inviting place for visually impaired surfers.

Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task. This task is significantly harder, for example, than the well-studied image classification or object recognition tasks, which have been a main focus in the computer vision community. Indeed, a description must capture not only the objects contained in an image, but it also must express how these objects relate to each other, as well as their attributes and the activities they are involved in. Moreover, the above semantic knowledge has to be expressed in a natural language like English, which means that a language model is needed in addition to visual understanding.
Figure 1: Visually impaired people can greatly benefit from technological solutions that can help
them better understand their surroundings.
In this paper, we apply deep learning techniques to the image caption generation task. We first extract image features using a CNN. Specifically, we extract a 4096-dimensional image feature vector from the fc7 layer of the VGG-16 network pretrained on ImageNet. We then reduce the dimensionality of this image feature vector using Principal Component Analysis (PCA). The resulting feature vector is fed into a vanilla RNN or an LSTM, which generates a description of the image in valid English. Both the RNN-based and the LSTM-based model achieve results comparable to those achieved by state-of-the-art models.
2 Related Work
Most work in visual recognition has originally focused on image classification, i.e. assigning labels corresponding to a fixed number of categories to images. Great progress in image classification has been made over the last couple of years, especially with the use of deep learning techniques [2, 3]. Nevertheless, a category label still provides limited information about an image, and especially visually impaired people can benefit from more detailed descriptions. Some initial attempts at generating more detailed image descriptions have been made, for instance by Farhadi et al. and Kulkarni et al. [4, 5], but these models are generally dependent on hard-coded sentences and visual concepts. In addition, the goal of most of these works is to accurately describe the content of an image in a single sentence. However, this one-sentence requirement unnecessarily limits the quality of the descriptions generated by the model. Several works, for example by Li et al., Gould et al., and Fidler et al., focused on obtaining a holistic understanding of scenes and objects depicted in images [6, 7, 8, 9]. Nonetheless, the goal of these works was to correctly assign labels corresponding to a fixed number of categories to the scene type of an image, rather than generating higher-level explanations of the scenes and objects depicted in an image.
Generating sentences that describe the content of images has already been explored. Several works attempt to solve this task by finding the image in the training set that is most similar to the test image and then returning the caption associated with the retrieved image [4, 10, 11, 12, 13]. Jia et al., Kuznetsova et al., and Li et al. find multiple similar images and combine their captions to generate the resulting caption [14, 15, 16]. Kuznetsova et al. and Gupta et al. tried using a fixed sentence template in combination with object detection and feature learning [5, 17, 18]. They tried to identify objects and features contained in the image, and based on the identified objects they used their sentence template to create sentences describing the image. Nevertheless, this approach greatly limits the output variety of the model.
Recently there has been a resurgence of interest in image caption generation, as a result of the latest developments in deep learning [2, 19, 20, 21, 22]. Several deep learning approaches have been developed for generating higher-level word descriptions of images [21, 22].
Convolutional Neural Networks have been shown to be powerful models for image classification and object detection tasks. In addition, new models for obtaining low-dimensional vector representations of words, such as word2vec and GloVe (Global Vectors for Word Representation), can be combined with Recurrent Neural Networks to create models that join image features with language modeling to generate image descriptions [21, 22]. Karpathy et al. developed a Multimodal Recurrent Neural Network architecture that uses inferred alignments to learn to generate novel descriptions of image regions [21]. Similarly, Kiros et al. used a log-bilinear model that generates full sentence descriptions for images [22]. However, their model uses a fixed window context [22].
3 Technical Approach
Overview. We implemented a deep recurrent architecture that automatically produces short descriptions of images. Our models use a CNN, which was pretrained on ImageNet, to obtain image features. We then feed these features into either a vanilla RNN or an LSTM network (Figure 2) to generate a description of the image in valid English.
3.1 CNN-based Image Feature Extractor
For feature extraction, we use a CNN. CNNs have been widely used and studied for image tasks, and are currently the state-of-the-art method for object recognition and detection [20]. Concretely, for all input images, we extract features from the fc7 layer of the VGG-16 network pretrained on ImageNet [23], which is very well tuned for object detection. We obtain a 4096-dimensional image feature vector, which we reduce to a 512-dimensional feature vector using Principal Component Analysis (PCA) due to computational constraints. We feed these features into the first layer of our RNN or LSTM at the first iteration [24].
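A sketch of this extraction step, using torchvision's pretrained VGG-16 as a stand-in for the paper's network: in torchvision's layout, `classifier[:4]` ends at the second 4096-unit linear layer, which corresponds to fc7. The file paths and batch handling are illustrative assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from PIL import Image

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
# Everything up to (and including) the fc7 linear layer.
fc7 = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten(),
                          vgg.classifier[:4])

preprocess = T.Compose([
    T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_fc7(paths):
    """Return an (n_images, 4096) array of fc7 features."""
    batch = torch.stack([preprocess(Image.open(p).convert('RGB'))
                         for p in paths])
    with torch.no_grad():
        return fc7(batch).numpy()

# Reduce the 4096-d features to 512-d with PCA fitted on training features.
pca = PCA(n_components=512)
# feats_512 = pca.fit_transform(extract_fc7(train_paths))  # needs >= 512 images
```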
3.2 RNN-based Sentence Generator
We first experiment with vanilla RNNs, as they have been shown to be powerful models for processing sequential data [25, 26]. Vanilla RNNs can learn complex temporal dynamics by mapping input sequences to a sequence of hidden states, and hidden states to outputs, via the following recurrent equations:

$$h_t = f(W_{hh}h_{t-1} + W_{xh}x_t) \qquad (1)$$
$$y_t = W_{hy}h_t \qquad (2)$$

where $f$ is an element-wise non-linearity, $h_t \in \mathbb{R}^N$ is the hidden state with $N$ hidden units, and $y_t$ is the output at time $t$. In our implementation, we use the hyperbolic tangent as our element-wise non-linearity. For a length-$T$ input sequence $x_1, x_2, \ldots, x_T$, the updates above are computed sequentially as $h_1$ (letting $h_0 = 0$), $y_1$, $h_2$, $y_2$, ..., $h_T$, $y_T$.
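Below is a minimal NumPy sketch of this recurrence. The dimensions, the weight initialization, and the random stand-in for a word vector are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sizes: N hidden units, D-dimensional inputs, V-word vocabulary.
N, D, V = 512, 512, 10000
rng = np.random.default_rng(0)
Whh = rng.normal(scale=0.01, size=(N, N))   # hidden-to-hidden weights
Wxh = rng.normal(scale=0.01, size=(N, D))   # input-to-hidden weights
Why = rng.normal(scale=0.01, size=(V, N))   # hidden-to-output weights

def rnn_step(x_t, h_prev):
    """One step of Eqs. (1)-(2) with f = tanh."""
    h_t = np.tanh(Whh @ h_prev + Wxh @ x_t)   # Eq. (1)
    y_t = Why @ h_t                           # Eq. (2)
    return h_t, y_t

# Unroll over a length-T sequence, letting h_0 = 0.
h = np.zeros(N)
for t in range(5):
    x_t = rng.normal(size=D)   # stand-in for an input word vector
    h, y = rnn_step(x_t, h)
```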
Figure 2: Image Retrieval System and Language Generating Pipeline.
3.3 LSTM-based Sentence Generator
Although RNNs have proven successful on tasks such as text generation and speech recognition [25, 26], it is difficult to train them to learn long-term dynamics. This is likely due to the vanishing and exploding gradients problem that can result from propagating gradients down through the many layers of a recurrent network. LSTM networks (Figure 3) provide a solution by incorporating memory units that allow the network to learn when to forget previous hidden states and when to update hidden states given new information [24].
At each time step, we receive an input $x_t \in \mathbb{R}^D$ and the previous hidden state $h_{t-1} \in \mathbb{R}^H$; the LSTM also maintains an $H$-dimensional cell state, so we also receive the previous cell state $c_{t-1} \in \mathbb{R}^H$. The learnable parameters of the LSTM are an input-to-hidden matrix $W_x \in \mathbb{R}^{4H \times D}$, a hidden-to-hidden matrix $W_h \in \mathbb{R}^{4H \times H}$, and a bias vector $b \in \mathbb{R}^{4H}$.

At each time step, we compute an activation vector $a \in \mathbb{R}^{4H}$ as

$$a = W_x x_t + W_h h_{t-1} + b \qquad (3)$$

We then divide $a$ into four vectors $a_i, a_f, a_o, a_g \in \mathbb{R}^H$, where $a_i$ consists of the first $H$ elements of $a$, $a_f$ of the next $H$ elements, and so on. From these we compute four gates: the input gate $i \in \mathbb{R}^H$, which controls whether to read the input; the forget gate $f \in \mathbb{R}^H$, which controls whether to forget the current cell value; the output gate $o \in \mathbb{R}^H$, which controls whether to output the new cell value; and the block input $g \in \mathbb{R}^H$:

$$i = \sigma(a_i) \qquad (4)$$
$$f = \sigma(a_f) \qquad (5)$$
$$o = \sigma(a_o) \qquad (6)$$
$$g = \tanh(a_g) \qquad (7)$$
where σ is the sigmoid function and tanh is the hyperbolic tangent; both are applied element-wise.
Finally, we compute the next cell state $c_t$, which encodes knowledge at every time step of what inputs have been observed up to this step, and the next hidden state $h_t$ as

$$c_t = f \circ c_{t-1} + i \circ g \qquad (8)$$
$$h_t = o \circ \tanh(c_t) \qquad (9)$$

where $\circ$ denotes the Hadamard (element-wise) product. The inclusion of these multiplicative gates permits the regulation of information flow through the computational unit, allowing for more stable gradients and long-term sequence dependencies [24]. These gates make it possible to train the LSTM robustly, as they deal well with exploding and vanishing gradients. The non-linearities are the sigmoid $\sigma(\cdot)$ and the hyperbolic tangent $\tanh(\cdot)$.
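The step below is a direct NumPy transcription of Eqs. (3)-(9); the shapes follow the definitions above, and the gate slices assume $a$ is laid out as $(a_i, a_f, a_o, a_g)$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, Wx, Wh, b):
    """One LSTM step: x_t is (D,), h_prev and c_prev are (H,),
    Wx is (4H, D), Wh is (4H, H), b is (4H,)."""
    H = h_prev.shape[0]
    a = Wx @ x_t + Wh @ h_prev + b        # activation vector, Eq. (3)
    i = sigmoid(a[0:H])                   # input gate,  Eq. (4)
    f = sigmoid(a[H:2 * H])               # forget gate, Eq. (5)
    o = sigmoid(a[2 * H:3 * H])           # output gate, Eq. (6)
    g = np.tanh(a[3 * H:4 * H])           # block input, Eq. (7)
    c_t = f * c_prev + i * g              # Hadamard products, Eq. (8)
    h_t = o * np.tanh(c_t)                # Eq. (9)
    return h_t, c_t
```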
Procedure. Our LSTM model takes the image $I$ and a sequence of input vectors $(x_1, \ldots, x_T)$. It then computes a sequence of hidden states $(h_1, \ldots, h_T)$ and a sequence of outputs $(y_1, \ldots, y_T)$ by following the recurrence relation for $t = 1$ to $T$:

$$b_v = W_{hi}[\mathrm{CNN}(I)] \qquad (10)$$
$$h_t = f(W_{hx} x_t + W_{hh} h_{t-1} + b_h + \mathbb{1}(t = 1) \circ b_v) \qquad (11)$$
$$y_t = \mathrm{Softmax}(W_{oh} h_t + b_o) \qquad (12)$$

where $W_{hi}$, $W_{hx}$, $W_{hh}$, $W_{oh}$, $x_i$, $b_h$, and $b_o$ are learnable parameters and $\mathrm{CNN}(I)$ represents the image features extracted by the CNN.
Training. We train our LSTM model to correctly predict the next word ($y_t$) based on the current word ($x_t$) and the previous context ($h_{t-1}$). We do this as follows: we set $h_0 = 0$, $x_1$ to the START vector, and the desired label $y_1$ to the first word in the sequence. We then set $x_2$ to the word vector corresponding to the first word generated by the network. Based on this first word vector and the previous context, the network then predicts the second word, and so on. The word vectors are generated using the word2vec embedding model as described by Mikolov et al. [27]. During the last step, $x_T$ represents the last word, and $y_T$ is set to an END token.
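As a concrete sketch of Eqs. (10)-(12), the function below runs the recurrence over the input word vectors and returns the per-step word distributions, from which the masked softmax loss described later is computed. For readability it uses a tanh recurrence for $f$; the LSTM variant would substitute `lstm_step` from above. The parameter dictionary `P` is a hypothetical container for the learnable weights.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def caption_distributions(cnn_feat, xs, P):
    """Forward pass of Eqs. (10)-(12). cnn_feat: PCA-reduced image feature;
    xs: input word vectors, with xs[0] the START vector; P: parameter dict
    with keys 'Whi', 'Whx', 'Whh', 'Woh', 'bh', 'bo'."""
    bv = P['Whi'] @ cnn_feat                        # Eq. (10)
    h = np.zeros(P['Whh'].shape[0])
    dists = []
    for t, x in enumerate(xs, start=1):
        img_bias = bv if t == 1 else 0.0            # the 1(t = 1) term of Eq. (11)
        h = np.tanh(P['Whx'] @ x + P['Whh'] @ h + P['bh'] + img_bias)
        dists.append(softmax(P['Woh'] @ h + P['bo']))   # Eq. (12)
    return dists
```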
Figure 3: LSTM unit and its gates
Testing. To predict a sentence, we obtain the image features $b_v$, set $h_0 = 0$, set $x_1$ to the START vector, and compute the distribution over the first word $y_1$. We then pick the argmax of the distribution, set its embedding vector as $x_2$, and repeat the procedure until the END token is generated.
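A sketch of this greedy decoding loop, under the same illustrative parameter container as above; `embed`, `start_vec`, and `end_id` are assumed stand-ins for the word-id-to-vector map, the START vector, and the END token id.

```python
import numpy as np

def generate_caption(cnn_feat, P, embed, start_vec, end_id, max_len=20):
    """Greedily decode a caption: pick the argmax word at each step,
    feed its embedding back in, and stop at the END token."""
    bv = P['Whi'] @ cnn_feat
    h = np.zeros(P['Whh'].shape[0])
    x, words = start_vec, []
    for t in range(1, max_len + 1):
        img_bias = bv if t == 1 else 0.0
        h = np.tanh(P['Whx'] @ x + P['Whh'] @ h + P['bh'] + img_bias)
        word_id = int(np.argmax(P['Woh'] @ h + P['bo']))  # argmax of y_t
        if word_id == end_id:
            break
        words.append(word_id)
        x = embed(word_id)          # the embedding becomes the next input
    return words
```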
Softmax Loss. At every time step, we generate a score for each word in the vocabulary. We then use the ground truth words in combination with the softmax function to compute the losses and gradients. We sum the losses over time and average them over the minibatch. Since we operate over minibatches and different generated sentences may have different lengths, we append NULL tokens to the end of each caption so that they all have the same length. In addition, our loss function accepts a mask array that indicates which elements of the scores count towards the loss, so that the NULL tokens do not contribute to the loss or gradient.
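A minimal sketch of this masked loss, assuming a (batch, time, vocab) score array; the mask zeroes out positions holding NULL padding.

```python
import numpy as np

def masked_softmax_loss(scores, targets, mask):
    """scores: (B, T, V) unnormalized scores; targets: (B, T) word ids;
    mask: (B, T), 0 where the target is a NULL pad token, 1 elsewhere.
    Losses are summed over time and averaged over the minibatch."""
    B, T, V = scores.shape
    flat = scores.reshape(B * T, V)
    flat = flat - flat.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(flat)
    probs /= probs.sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(B * T), targets.reshape(-1)])
    nll *= mask.reshape(-1).astype(float)              # NULLs contribute 0
    return nll.sum() / B
```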
Optimization. We use Stochastic Gradient Descent (SGD) with mini-batches of 25 image-
sentence pairs and momentum of 0.95. We cross-validate the learning rate and the weight decay.
We achieved our best results using Adam, which is a method for efficient stochastic optimization
that only requires first-order gradients and computes individual adaptive learning rates for different
parameters from estimates of first and second moments of the gradients [28]. Adam’s main
advantages are that the magnitudes of parameter updates are invariant to rescaling of the gradients,
its step-size is approximately bounded by the step-size hyperparameter, and it automatically
performs a form of step-size annealing [28].
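For illustration, the two optimizer configurations discussed above could be set up as follows in PyTorch; the module and the learning-rate and weight-decay values are placeholders, since the paper cross-validates these rather than fixing them.

```python
import torch

model = torch.nn.LSTM(input_size=512, hidden_size=512)  # stand-in captioner

# SGD with minibatches of 25 image-sentence pairs and momentum 0.95
sgd = torch.optim.SGD(model.parameters(), lr=1e-2,
                      momentum=0.95, weight_decay=1e-4)

# Adam, which gave the best results in our experiments [28]
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
```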
4 Experiments
4.1 Dataset
We use the 2014 release of the Microsoft COCO dataset, which has become the standard testbed for image captioning [29]. The dataset consists of 80,000 training images and 40,000 validation images, each annotated with 5 captions written by workers on Amazon Mechanical Turk. Four example images with captions can be seen in Figure 4. We convert all sentences to lowercase and discard non-alphanumeric characters.
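This preprocessing amounts to a one-liner per caption; a sketch:

```python
import re

def preprocess_caption(caption: str) -> list[str]:
    """Lower-case a caption, discard non-alphanumeric characters,
    and split it into tokens."""
    caption = re.sub(r'[^a-z0-9 ]', '', caption.lower())
    return caption.split()

preprocess_caption("A man rides a horse.")
# -> ['a', 'man', 'rides', 'a', 'horse']
```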
Figure 4: Example images and captions from the Microsoft COCO Caption dataset.
4.2 Evaluation Metric
For each image, we expect a caption that provides a correct but brief explanation of the image in valid English. The closer the generated caption is to the captions written by workers on Amazon Mechanical Turk, the better.

The effectiveness of our model is tested on the 40,000 validation images of the Microsoft COCO dataset. We evaluate the generated captions using the following metrics: BLEU (Bilingual Evaluation Understudy) [30], METEOR (Metric for Evaluation of Translation with Explicit Ordering) [31], and CIDEr (Consensus-based Image Description Evaluation) [32]. Each method evaluates a candidate sentence by measuring how well it matches a set of five reference sentences written by humans. The BLEU score is computed by counting the number of matches between the n-grams of the candidate caption and the n-grams of the reference captions. METEOR was designed to fix some of the problems found in the more popular BLEU metric and to produce good correlation with human judgement at the sentence or segment level [31]; it differs from BLEU in that BLEU seeks correlation at the corpus level [30]. The CIDEr metric was specifically developed for evaluating image captions [32]. It is a measure of consensus based on how often n-grams in candidate captions are present in reference captions. It measures this consensus by applying a Term Frequency-Inverse Document Frequency (TF-IDF) weighting to each n-gram, because frequent n-grams in the references are less informative [32]. For all three metrics (i.e., BLEU, METEOR, and CIDEr), the higher the score, the better the candidate caption [30, 31, 32].
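As a small illustration of the n-gram-overlap idea behind these metrics, the snippet below scores one candidate against reference captions with NLTK's sentence-level BLEU; real evaluation would use the five MS COCO references per image and the official BLEU, METEOR, and CIDEr toolkits. The example sentences are made up.

```python
from nltk.translate.bleu_score import sentence_bleu

references = [
    "a man riding a horse on the beach".split(),
    "a person rides a brown horse along the shore".split(),
]
candidate = "a man rides a horse on the beach".split()

# Default: uniform weights over 1- to 4-grams; higher is better (0 to 1).
score = sentence_bleu(references, candidate)
print(f"BLEU: {score:.3f}")
```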
4.3 Quantitative Results
We report the BLEU, METEOR, and CIDEr scores in Figure 5 and compare them to the results obtained in the literature. Both our RNN and LSTM models achieve close to state-of-the-art performance. Our LSTM model performs slightly better than our RNN model: it achieves higher BLEU, METEOR, and CIDEr scores than the RNN model.
4.4 Qualitative Results
Our models generate sensible descriptions of images in valid English (Figures 6 and 7). As can be seen from the example groundings in Figure 5, the model discovers interpretable visual-semantic correspondences, even for relatively small objects such as the phones in Figure 7. The generated descriptions are accurate enough to be helpful for visually impaired people. In general, we find that a relatively large portion of the generated sentences (60%) can be found in the training data.
5 Conclusion
We have presented a deep learning model that automatically generates image captions with the goal of helping visually impaired people better understand their environments. Our model is based on a CNN that encodes an image into a compact representation, followed by an RNN that generates corresponding sentences based on the learned image features. We showed that this model achieves performance comparable to the state of the art, and that the generated captions are highly descriptive of the objects and scenes depicted in the images. Because of the high quality of the generated image descriptions, visually impaired people can greatly benefit from them and, using text-to-speech technology, get a better sense of their surroundings. Future work could integrate this text-to-speech technology, so that the generated descriptions are automatically read out loud to visually impaired people. In addition, future work could focus on translating videos directly to sentences instead of generating captions for single images. Static images can only provide blind people with information about one specific instant in time, while video caption generation could potentially provide continuous real-time information. LSTMs could be used in combination with CNNs to translate videos into English descriptions.
Acknowledgments
We would like to thank the CS224D course staff for their ongoing support.
References
[1] ”Visual Impairment and Blindness.” World Health Organization. (2014). Web. 10 Apr. 2016
[2] Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. "ImageNet Large Scale Visual Recognition Challenge." International Journal of Computer Vision 115.3 (2015): 211-52. Web. 19 Apr. 2016
[3] Everingham, Mark, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. "The Pascal Visual Object Classes (VOC) Challenge." International Journal of Computer Vision 88.2 (2009): 303-38. Web. 22 May 2016
[4] Farhadi, Ali, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. "Every Picture Tells a Story: Generating Sentences from Images." Computer Vision ECCV 2010, Lecture Notes in Computer Science (2010): 15-29. Web. 5 Apr. 2016
[5] Kulkarni, Girish, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. "Baby Talk: Understanding and Generating Simple Image Descriptions." CVPR 2011 (2011). Web. 27 May 2016
[6] Li, Li-Jia, R. Socher, and Li Fei-Fei. "Towards Total Scene Understanding: Classification, Annotation and Segmentation in an Automatic Framework." 2009 IEEE Conference on Computer Vision and Pattern Recognition (2009). Web. 21 Apr. 2016
[7] Gould, Stephen, Richard Fulton, and Daphne Koller. "Decomposing a Scene into Geometric and Semantically Consistent Regions." 2009 IEEE 12th International Conference on Computer Vision (2009). Web. 6 May 2016
[8] Fidler, Sanja, Abhishek Sharma, and Raquel Urtasun. ”A Sentence Is Worth a Thousand Pixels.” 2013 IEEE
Conference on Computer Vision and Pattern Recognition (2013). Web. 18 May 2016
[9] Li, Li-Jia, and Li Fei-Fei. ”What, Where and Who? Classifying Events by Scene and Object Recognition.”
2007 IEEE 11th International Conference on Computer Vision (2007). Web. 10 Apr. 2016
[10] Lazaridou, Angeliki, Nghia The Pham, and Marco Baroni. ”Combining Language and Vision with a
Multimodal Skip-gram Model.” Proceedings of the 2015 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies (2015). Web. 23 May 2016
[11] Hodosh, Young, and Hockenmaier. ”Framing image description as a ranking task: data, models and
evaluation metrics.” Journal of Artificial Intelligence Research (2013). Web. 3 Apr. 2016
[12] Socher, Richard, Andrej Karpathy, Quoc V. Le, Christopher Manning, and Andrew Y. Ng. ”Grounded
compositional semantics for finding and describing images with sentences.” Transactions of the Association for
Computational Linguistics (TACL) (2014). Web. 24 May 2016
[13] Ordonez, Vicente, Girish Kulkarni, and Tamara L. Berg. ”Im2text: Describing images using 1 million
captioned photographs.” NIPS: 1143-1151 (2011). Web. 29 Apr. 2016
[14] Jia, Yangqing, Mathieu Salzmann, and Trevor Darrell. "Learning Cross-modality Similarity for Multinomial Data." 2011 International Conference on Computer Vision (2011). Web. 28 May 2016
[15] Kuznetsova, Polina, Vicente Ordonez, Alexander C. Berg, Tamara Berg, and Yejin Choi. "Collective Generation of Natural Image Descriptions." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics 1 (2012): 359-368. Web. 30 Apr. 2016
[16] Li, Siming, Girish Kulkarni, Tamara L. Berg, Alexander C. Berg, and Yejin Choi. "Composing Simple Image Descriptions Using Web-scale N-grams." Proceedings of the Fifteenth Conference on Computational Natural Language Learning: 220-228 (2011). Web. 27 Apr. 2016
[17] Kuznetsova, Polina, Vicente Ordonez, Tamara Berg, and Yejin Choi. "TREETALK: Composition and Compression of Trees for Image Descriptions." Transactions of the Association for Computational Linguistics 2 (2014): 351-362. Web. 1 Apr. 2016
[18] Gupta and Mannem. "From Image Annotation to Image Description." In Neural Information Processing. Springer (2012). Web. 7 Apr. 2015
[19] LeCun, Bottou, Bengio, and Haffner. "Gradient-based Learning Applied to Document Recognition." Proceedings of the IEEE 86(11) (1998): 2278-2324. Web. 27 May 2016
[20] Krizhevsky, Sutskever, and Hinton. ”Imagenet classification with deep convolutional neural networks.”
NIPS (2012). Web. 28 Apr. 2016
[21] Karpathy, Andrej, and Li Fei-Fei. ”Deep Visual-semantic Alignments for Generating Image Descriptions.”
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015). Web. 29 May 2016
[22] Kiros, Ryan, Rich Zemel, and Ruslan Salakhutdinov. "Multimodal Neural Language Models." Proceedings of the 31st International Conference on Machine Learning (ICML-14): 595-603 (2014). Web. 21 May 2016
[23] Simonyan, Karen, and Andrew Zisserman. "Very Deep Convolutional Networks for Large-Scale Image Recognition." CoRR (2014). Web. 28 May 2016
[24] Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9.8 (1997): 1735-1780. Web. 23 Apr. 2016
[25] Graves, Alex. ”Generating sequences with recurrent neural networks.” CoRR (2013). Web. 30 May 2016
[26] Graves, Alex, and Navdeep Jaitly. "Towards End-to-End Speech Recognition with Recurrent Neural Networks." Proceedings of the 31st International Conference on Machine Learning (ICML-14): 1764-1772 (2014). Web. 28 May 2016
[27] Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. ”Distributed representations
of words and phrases and their compositionality.” Advances in Neural Information Processing Systems (NIPS)
26: 3111-3119 (2013). Web. 29 Apr. 2016
[28] Kingma, Diederik and Jimmy Ba. ”Adam: A method for stochastic optimization.” CoRR (2015). Web. 19
May 2016
[29] Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. "Microsoft COCO: Common Objects in Context." Computer Vision ECCV 2014, Lecture Notes in Computer Science (2014): 740-55. Web. 27 May 2016
[30] Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. "BLEU: A Method for Automatic Evaluation of Machine Translation." Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL): 311-318 (2002). Web. 24 May 2016
[31] Denkowski, Michael, and Alon Lavie. ”Meteor Universal: Language Specific Translation Evaluation for
Any Target Language.” Proceedings of the Ninth Workshop on Statistical Machine Translation (2014). Web.
22 Apr. 2016
[32] Vedantam, Ramakrishna, C. Lawrence Zitnick, and Devi Parikh. "CIDEr: Consensus-based Image Description Evaluation." 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015). Web. 24 May 2016
[33] Vinyals, Oriol, Alexander Toshev, Samy Bengio, and Dumitru Erhan. ”Show and Tell: A Neural Image
Caption Generator.” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015). Web.
25 May 2016
[34] Chen, Xinlei, and C. Lawrence Zitnick. "Learning a Recurrent Visual Representation for Image Caption Generation." CoRR abs/1411.5654 (2014). Web. 19 May 2016
[35] Fang, Hao, Saurabh Gupta, Forrest Iandola, Rupesh K. Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. "From Captions to Visual Concepts and Back." 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015). Web. 27 Apr. 2016
[36] Donahue, Jeff, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan,
Trevor Darrell, and Kate Saenko. ”Long-term Recurrent Convolutional Networks for Visual Recognition and
Description.” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015). Web. 20
Apr. 2016