https://okokprojects.com/
IEEE PROJECTS 2023-2024 TITLE LIST
WhatsApp : +91-8144199666
From Our Title List the Cost will be,
Mail Us: okokprojects@gmail.com
Website: https://www.okokprojects.com
         http://www.ieeeproject.net
Support Including Packages
=======================
* Complete Source Code
* Complete Documentation
* Complete Presentation Slides
* Flow Diagram
* Database File
* Screenshots
* Execution Procedure
* Video Tutorials
* Supporting Software
Support Specialization
=======================
* 24/7 Support
* Ticketing System
* Voice Conference
* Video On Demand
* Remote Connectivity
* Document Customization
* Live Chat Support
A Deep Multimodal Learning Approach to Perceive Basic Needs of Humans From Instagram Profile

Abstract

Nowadays, a significant part of our time is spent sharing multimodal data on
social media sites such as Instagram, Facebook and Twitter. The particular
way in which users present themselves on social media can provide useful
insights into their behaviours, personalities, perspectives, motives and
needs. This article proposes to use multimodal data collected from Instagram
accounts to predict the five basic prototypical needs described in Glasser's
choice theory (i.e., Survival, Power, Freedom, Belonging, and Fun). We
automate the identification of the unconsciously expressed needs from
Instagram profiles by using both visual and textual content. The proposed
approach aggregates the visual and textual features extracted using deep
learning and constructs a homogeneous representation for each profile
through the proposed Bag-of-Content. Finally, we perform multi-label
classification on the fusion of both modalities. We validate our proposal on a
large database, consensually annotated by two expert psychologists, with
more than 30,000 images, captions and comments. Experiments show
promising accuracy and complementary information between visual and
textual cues.
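The pipeline the abstract describes (per-modality feature extraction, fusion of visual and textual features, then multi-label classification over the five needs) can be sketched as follows. This is a minimal illustration only: the feature dimensions, the synthetic data, and the one-vs-rest logistic-regression classifier are assumptions, not the paper's actual deep-learning extractors or its Bag-of-Content representation.

```python
# Illustrative sketch: early fusion of visual and textual features,
# then multi-label classification of Glasser's five basic needs.
# All dimensions, data, and the classifier choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

NEEDS = ["Survival", "Power", "Freedom", "Belonging", "Fun"]

rng = np.random.default_rng(0)
n_profiles = 200
visual = rng.normal(size=(n_profiles, 64))   # stand-in for deep visual features
textual = rng.normal(size=(n_profiles, 32))  # stand-in for deep textual features

# Early fusion: concatenate the two modality vectors per profile.
fused = np.concatenate([visual, textual], axis=1)

# Synthetic multi-label targets: a profile may express several needs at once.
labels = (rng.random((n_profiles, len(NEEDS))) > 0.5).astype(int)

# Multi-label classification: one binary classifier per need.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(fused, labels)

pred = clf.predict(fused)
print(pred.shape)  # one 0/1 decision per profile per need
```

The key property shown is the multi-label setup: unlike multi-class classification, each profile can be assigned any subset of the five needs, which matches the consensually annotated labels described in the abstract.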