INDIAN INSTITUTE OF TECHNOLOGY ROORKEE
A Comprehensive Analysis on Co-Saliency Detection on
Learning Approaches
in
3rd International Conference on Innovative Practices in
Technology and Management
By
Dr. Anurag Vijay Agrawal (Paper ID A22)
DAY 2 (Morning): 23rd February 2023, Track 7 (10.30 AM – 1:00 PM)
A research paper presentation
Contents
• Abstract
• Introduction
• Related Works
• Methodology
• Results and Discussion
• Conclusion
• References
Abstract
• Co-saliency detection is a fast-growing field in computer vision. As a distinct branch of visual saliency, it concerns the identification of the common and salient foreground regions shared by two or more related images, and it is broadly applicable to a variety of computer vision tasks.
– Most co-saliency detection techniques are composed of three modules: extracting image features, exploring informative cues or elements, and designing effective computational frameworks to construct co-saliency. Even though several strategies have been developed, there has not yet been a thorough analysis and assessment of co-saliency prediction methods in the literature. This work intends to provide a thorough review of the foundations, challenges, and implications of co-saliency detection.
• An overview is offered based on the related computer vision literature: we trace the history of the field, outline and classify the important algorithms, discuss certain unresolved difficulties, describe possible applications of co-saliency detection, and point out open problems and promising directions for future work.
Introduction
• Many imaging devices, including digital cameras and mobile phones, can capture and store vast amounts of image and video data, which is readily shared on photo-sharing platforms such as Flickr and Facebook.
– Co-saliency denotes the salient visual stimuli that are shared by all members of an image group. Co-saliency detection is a computational challenge that seeks to identify the common and prominent foreground regions in a set of images. Because the semantic categories of the co-salient objects are unavailable, the computer must deduce them from the content of the image group. Given an image collection, each image can be separated into four constituents: common background (CB), common foreground (CF), uncommon background (unCB) and uncommon foreground (unCF). Finding the common salient regions (foregrounds) across the available images is the problem of co-saliency detection.
The current co-saliency detection techniques primarily concentrate on resolving
three important issues:
i) collecting representative features to describe the foregrounds;
ii) exploring informative cues or elements to characterise co-saliency; and
iii) designing efficient computational methods to construct co-saliency.
Introduction
Fig. 1 Sample for Co-saliency Detection
Related Works: Saliency Detection
• Saliency detection highlights the regions of a single image that attract human attention. Detection algorithms are grouped into two types: eye fixation prediction models and salient object detection models. The former aims to predict the locations people fixate when freely viewing natural images, whereas the latter seeks to detect and segment whole salient objects that stand out from their surroundings and draw people's attention. Some early approaches to predicting salient regions relied on local and global contrast, in which the salient parts of an image are distinguishable from their local surroundings or from the overall background. Later techniques began to incorporate global information, background priors, and other high-level priors.
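As a concrete illustration of the global-contrast idea above, the following is a minimal sketch (our own toy example, not a method from the surveyed papers): each pixel's saliency is simply its colour distance from the image's mean colour.

```python
# Toy global-contrast saliency (illustrative sketch only): a pixel is salient
# if its colour differs strongly from the global mean colour of the image.
import numpy as np

def global_contrast_saliency(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float array in [0, 1]; returns an H x W saliency map in [0, 1]."""
    mean_colour = image.reshape(-1, 3).mean(axis=0)            # global colour statistic
    saliency = np.linalg.norm(image - mean_colour, axis=2)     # per-pixel contrast
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

# Usage with a random stand-in image:
# sal = global_contrast_saliency(np.random.rand(100, 128, 3))
```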
Related Works: Video Saliency
• Video saliency, also known as spatiotemporal saliency, seeks to predict prominent object regions in video frames or clips. The main challenge is how to incorporate informative motion cues, which are essential in the video setting.
• In particular, Xia et al. presented a differential description of centre-surround saliency by modelling spatio-temporal regions as dynamic textures, motivated by biological principles of perceptual (motion) grouping. Their approach provides a combined, systematic characterisation of the spatial and temporal components of saliency. The spatial and temporal saliency maps were first created independently and then fused into one using a spatially and temporally adaptive, entropy-based uncertainty weighting technique.
• Li et al.'s unified regional framework for investigating short-term temporal continuity took both intra-frame saliency and inter-frame consistency into account, based on various factors such as colour and motion.
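To make the uncertainty-weighting idea concrete, here is a minimal sketch (our own simplification, not the authors' code): each saliency map is weighted by the inverse of its entropy, so the less uncertain map contributes more to the fused result. The actual method is spatially and temporally adaptive; this sketch uses a single global weight per map.

```python
# Entropy-based uncertainty weighting (simplified sketch): fuse spatial and
# temporal saliency maps, giving more weight to the lower-entropy (less uncertain) map.
import numpy as np

def map_entropy(saliency: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy of a saliency map's value histogram (maps assumed in [0, 1])."""
    hist, _ = np.histogram(saliency, bins=bins, range=(0.0, 1.0))
    p = hist / (hist.sum() + 1e-8)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_uncertainty_weighted(spatial: np.ndarray, temporal: np.ndarray) -> np.ndarray:
    """Weight each map by the inverse of its entropy, then renormalise the weights."""
    w_s = 1.0 / (map_entropy(spatial) + 1e-8)
    w_t = 1.0 / (map_entropy(temporal) + 1e-8)
    return (w_s * spatial + w_t * temporal) / (w_s + w_t)

# Usage with random stand-ins for one frame's maps:
# fused = fuse_uncertainty_weighted(np.random.rand(120, 160), np.random.rand(120, 160))
```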
Methodology
• During pre-processing, the input images are divided into basic processing units (image pixels, superpixel segments, or pixel clusters). Features are then extracted to describe each processing unit and to generate bottom-up co-saliency cues (single-cue exploration). Finally, the bottom-up co-saliency cues are combined to create the co-saliency maps of the input images. Designing the cue is the primary issue for bottom-up approaches. The co-saliency cue's fundamental formulation is:
co-saliency = saliency × repeatedness (1)
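Read as code (a minimal sketch under the obvious interpretation, not the exact implementation of any surveyed method), Eq. (1) is an element-wise product of a single-image saliency map and a repeatedness map that scores how often similar content recurs across the image group:

```python
# Eq. (1) as an element-wise product: co-saliency = saliency * repeatedness.
import numpy as np

def co_saliency_map(saliency: np.ndarray, repeatedness: np.ndarray) -> np.ndarray:
    """Both inputs are H x W maps in [0, 1]; returns the fused co-saliency map."""
    assert saliency.shape == repeatedness.shape
    fused = saliency * repeatedness      # salient within the image AND repeated across the group
    return fused / (fused.max() + 1e-8)  # renormalise to [0, 1]
```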
Methodology
• The methods adopted for exploring bottom-up cues have evolved from early, basic techniques (ranking and matching) to more recent, powerful ones (pattern mining and matrix factorisation). The resulting cues are typically combined by weighted multiplication or averaging, though some newer approaches use more elaborate strategies to take advantage of both. Cluster-based co-saliency detection, which computes cluster-level co-saliency from bottom-up saliency cues, is representative of the typical bottom-up approach (a simplified sketch of this pipeline follows the list below). The authors first carried out the pre-processing stage using k-means clustering.
– RGB colour features were extracted to describe the discovered clusters.
– Bottom-up cues (contrast, spatial, and matching cues) were then examined to express the co-saliency.
– The final co-saliency prediction is obtained by fusing these bottom-up cues with a straightforward multiplication scheme.
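The sketch below is our own simplification (the cue definitions, such as centre contrast and cross-image evenness, are illustrative stand-ins for those in the original cluster-based method); it shows the shape of such a pipeline: k-means over RGB features pooled from the image group, a cluster-level contrast cue and a cluster-level repeatedness cue, fused by multiplication and mapped back to pixels.

```python
# Simplified cluster-level co-saliency pipeline (illustrative sketch only).
import numpy as np
from sklearn.cluster import KMeans

def cluster_co_saliency(images, k=6):
    """images: list of H x W x 3 arrays in [0, 1]; returns one co-saliency map per image."""
    sizes = [im.shape[0] * im.shape[1] for im in images]
    pixels = np.concatenate([im.reshape(-1, 3) for im in images], axis=0)

    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    labels, centres = km.labels_, km.cluster_centers_

    # Contrast cue: clusters whose centre colour deviates from the mean of all centres.
    contrast = np.linalg.norm(centres - centres.mean(axis=0), axis=1)

    # Repeatedness cue: clusters distributed evenly over the whole image group.
    offsets = np.cumsum([0] + sizes)
    counts = np.stack([np.bincount(labels[offsets[i]:offsets[i + 1]], minlength=k)
                       for i in range(len(images))])            # images x clusters
    freq = counts / counts.sum(axis=0, keepdims=True).clip(min=1)
    var = freq.var(axis=0)
    repeatedness = 1.0 - var / (var.max() + 1e-8)

    cue = contrast * repeatedness                                # Eq. (1) at cluster level
    cue = cue / (cue.max() + 1e-8)

    maps, start = [], 0
    for im, n in zip(images, sizes):                             # assign cluster cues back to pixels
        maps.append(cue[labels[start:start + n]].reshape(im.shape[:2]))
        start += n
    return maps
```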
Results and Discussion
• The Image Pair dataset comprises 106 image pairs with a resolution of roughly 128 × 100 pixels. It is a benchmark dataset created for evaluating co-saliency detection and also includes weakly annotated ground-truth data. Each image contains one or more identical objects placed against different backgrounds. This dataset has been used frequently by previous co-saliency detection methods. Since the Image Pair dataset is less cluttered than other benchmarks, prevailing co-saliency algorithms have obtained remarkable results on it (AUC of 97% and AP of 93%).
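For reference, these two scores are typically computed pixel-wise against a binary ground-truth mask; a minimal sketch (not the evaluation code used here) is:

```python
# Pixel-wise ROC-AUC and average precision for one predicted co-saliency map.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_map(pred: np.ndarray, gt_mask: np.ndarray):
    """pred: H x W saliency in [0, 1]; gt_mask: H x W binary ground truth."""
    y_true = gt_mask.reshape(-1).astype(int)
    y_score = pred.reshape(-1)
    return roc_auc_score(y_true, y_score), average_precision_score(y_true, y_score)

# Usage with stand-in data (a random map against a random mask):
# auc, ap = evaluate_map(np.random.rand(100, 128), np.random.rand(100, 128) > 0.7)
```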
Conclusion
• We surveyed the co-saliency detection literature in this study.
• We reviewed recent developments in the area, examined the key methods, and discussed the open problems and prospective applications of co-saliency detection.
• In conclusion, co-saliency detection is a recently established and quickly expanding research area in computer vision.
References
• [1] Ali Borji. 2012. Boosting bottom-up and top-down visual features for saliency estimation. In IEEE Conference on
Computer Vision and Pattern Recognition. Providence, IEEE, 438–445.
• [2] Ali Borji, Ming-Ming Cheng, Huaizu Jiang, and Jia Li. 2015. Salient object detection: A benchmark. IEEE
Transactions on Image Processing 24, 12 (2015), 5706–5722.
• [3] Ali Borji and James Tanner. 2016. Reconciling saliency and object center-bias hypotheses in explaining free-viewing
fixations. IEEE Transactions on Neural Networks and Learning Systems 27, 6 (2016), 1214–1226.
• [4] Xiaochun Cao, Yupeng Cheng, Zhiqiang Tao, and Huazhu Fu. 2014. Co-saliency detection via base reconstruction. In
ACM International Conference on Multimedia. Florida, ACM, 997–1000.
• [5] Pratik Prabhanjan Brahma, Dapeng Wu, and Yiyuan She. 2016. Why deep learning works: A manifold
disentanglement perspective. IEEE Transactions on Neural Networks and Learning Systems 27, 10 (2016), 1997–2008.
• [6] Xiaochun Cao, Zhiqiang Tao, Bao Zhang, Huazhu Fu, and Xuewei Li. 2013. Saliency map fusion based on rank-one
constraint. In IEEE International Conference on Multimedia and Expo. San Jose, IEEE, 1–8.
• [7] Xiaochun Cao, Zhiqiang Tao, Bao Zhang, Huazhu Fu, and Wei Feng. 2014. Self-adaptively weighted co-saliency
detection via rank constraint. IEEE Transactions on Image Processing 23, 9 (2014), 4175–4186.
References
• [8] Xiaochun Cao, Changqing Zhang, Huazhu Fu, Xiaojie Guo, and Qi Tian. 2016. Saliency-aware nonparametric
foreground annotation based on weakly labeled data. IEEE Transactions on Neural Networks and Learning Systems 27, 6
(2016), 1253–1265.
• [9] Xiaojun Chang, Zhigang Ma, Yi Yang, Zhiqiang Zeng, and Alexander G. Hauptmann. 2017. Bi-level semantic
representation analysis for multimedia event detection. IEEE Transactions on Cybernetics 47, 5 (2017), 1180–1197.
• [10] Kai-Yueh Chang, Tyng-Luh Liu, and Shang-Hong Lai. 2011. From co-saliency to co-segmentation: An efficient and
fully unsupervised energy minimization model. In IEEE Conference on Computer Vision and Pattern Recognition.
Colorado Springs, IEEE, 2129–2136.
• [11] Yi-Lei Chen and Chiou-Ting Hsu. 2014. Implicit rank-sparsity decomposition: Applications to saliency/co-saliency
detection. In International Conference on Pattern Recognition. Stockholm, IEEE, 2305–2310.
• [12] Ming Cheng, Niloy J. Mitra, Xumin Huang, Philip H. S. Torr, and Song Hu. 2015. Global contrast based salient
region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 3 (2015), 569–582.
• [13] Yue Deng, Qionghai Dai, Risheng Liu, Zengke Zhang, and Sanqing Hu. 2013. Low-rank structure learning via
nonconvex heuristic recovery. IEEE Transactions on Neural Networks and Learning Systems 24, 3 (2013), 383–396.
References
• [14] Yuming Fang, Zhou Wang, Weisi Lin, and Zhijun Fang. 2014. Video saliency incorporating spatiotemporal cues and
uncertainty weighting. IEEE Transactions on Image Processing 23, 9 (2014), 3910–3921.
• [15] Rajagopal Sudarmani, Kanagaraj Venusamy, Sathish Sivaraman, Poongodi Jayaraman, Kannadhasan Suriyan,
Manjunathan Alagarsamy, “Machine to machine communication enabled internet of things: a review”, International
Journal of Reconfigurable and Embedded Systems, 2022, 11(2), pp. 126–134.
• [16] Roselin Suganthi Jesudoss, Rajeswari Kaleeswaran, Manjunathan Alagarsamy, Dineshkumar Thangaraju, Dinesh
Paramathi Mani, Kannadhasan Suriyan, “Comparative study of BER with NOMA system in different fading channels”,
Bulletin of Electrical Engineering and Informatics, 2022, 11(2), pp. 854–861.
Thanks
