A Comprehensive Analysis on Co-Saliency Detection on Learning Approaches in 3rd International Conference on Innovative Practices in Technology and Management
1. INDIAN INSTITUTE OF TECHNOLOGY ROORKEE
A Comprehensive Analysis on Co-Saliency Detection on
Learning Approaches
in
3rd International Conference on Innovative Practices in
Technology and Management
By
Dr. Anurag Vijay Agrawal (Paper ID A22)
DAY 2 (Morning): 23rd February 2023, Track 7 (10.30 AM - 1:00 PM)
A research paper presentation
3. 3
Abstract
• Co-saliency detection is a fast-growing field in computer vision. Co-saliency
detection, a specialised branch of visual saliency, refers to the identification of the
shared and salient foregrounds from two or more related images, and it is broadly
applicable to a variety of computer vision tasks.
– Most co-saliency detection techniques are composed of three modules: extracting image
features, exploring informative cues or elements, and designing effective computational
frameworks to formulate co-saliency. Even though several strategies have been created, there has not
yet been a thorough analysis and assessment of co-saliency detection methods in the literature.
This work intends to provide a thorough analysis of the foundations, challenges, and implications
of co-saliency detection.
• An overview is offered based on related computer vision works: we trace the
history of co-saliency detection, outline and classify the important algorithms,
address certain unresolved difficulties, explain possible applications of co-saliency
identification, and point out open obstacles and interesting directions for future
work.
4. 4
Introduction
• Many imaging devices, including digital cameras and mobile phones, can store
vast amounts of image and video information. Such information is readily shared on
photo-sharing platforms like Flickr and Facebook.
– Co-saliency denotes the salient visual stimuli that are shared by all members of an image group.
Co-saliency detection is a computational challenge that seeks to identify the shared and
prominent foreground areas in a set of images. Because the semantic categories of co-salient
entities are unavailable, the computer must deduce them from the image group's content. Given
an image collection, each image can be separated into four constituents: common background (CB),
common foreground (CF), uncommon background (unCB) and uncommon foreground (unCF). Finding the
common salient areas (foregrounds) across the available images is the problem of co-saliency detection.
The current co-saliency detection techniques primarily concentrate on resolving
three important issues:
i) extracting representative features to describe the foregrounds;
ii) exploring informative cues or elements to characterise co-saliency; and
iii) designing efficient computational methods to formulate co-saliency.
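The four-constituent partition of an image described above (CB, CF, unCB, unCF) can be sketched directly from two binary masks. The function below is a toy illustration of my own, not part of the paper: it assumes a saliency mask and a "common across the group" mask are already available.

```python
import numpy as np

def partition(salient_mask, common_mask):
    """Split a pixel grid into the four constituents used in co-saliency:
    CF (common foreground), unCF (uncommon foreground),
    CB (common background), unCB (uncommon background).
    Both inputs are boolean arrays of the same shape."""
    return {
        "CF": salient_mask & common_mask,
        "unCF": salient_mask & ~common_mask,
        "CB": ~salient_mask & common_mask,
        "unCB": ~salient_mask & ~common_mask,
    }
```

Co-saliency detection then amounts to recovering the CF regions when neither mask is given in advance.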
6. 6
Related Works: Saliency Detection
• Saliency detection highlights the areas of a single image that attract human attention. Detection algorithms
are grouped into two types: eye-fixation prediction methods and salient object detection models. The former
aims to forecast fixation positions while individuals freely observe natural images; the latter seeks to detect
and segment the entire salient objects that stand out from their surroundings and draw people's attention.
Some early approaches to predicting salient regions relied on local and global contrast, where the salient
parts of an image are distinguishable from their surroundings or from the overall background. Later
techniques began incorporating global information, high-level prior beliefs, and background priors.
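The contrast idea behind those early approaches can be sketched in a few lines. This is a deliberately minimal global-contrast toy of my own devising, not any specific published model: a pixel is salient to the degree its colour differs from the mean colour of the whole image.

```python
import numpy as np

def global_contrast_saliency(image):
    """Toy global-contrast saliency: each pixel's saliency is the
    Euclidean distance of its colour from the mean image colour,
    normalised to [0, 1]. `image` is an H x W x 3 array."""
    img = image.astype(np.float64)
    mean_color = img.reshape(-1, img.shape[-1]).mean(axis=0)
    sal = np.linalg.norm(img - mean_color, axis=-1)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)
```

Real global-contrast methods operate on regions rather than raw pixels and add spatial weighting, but the principle is the same.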
7. 7
Related Works: Video Saliency
• Video saliency, also known as spatiotemporal saliency, seeks to predict prominent object areas in video
frames or clips. The main problem at hand is how to incorporate informative motion cues, which are
essential in video.
• In particular, Xia et al. presented a differential description of centre-surround saliency by modelling spatio-
temporal regions as dynamic textures, motivated by biological principles of perceptual (motion) grouping.
Their approach provides a combined, systematic characterisation of the spatial and temporal elements of
saliency. The spatial and temporal saliency maps were initially created independently before being fused into
one using a spatially and temporally adaptive, entropy-based uncertainty weighting technique.
• Li et al.'s unified regional framework for investigating short-term continuity took into account both intra-
frame and inter-frame saliency based on various factors, such as colour and motion.
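The spatial-plus-temporal fusion idea can be illustrated with a much simpler stand-in than the methods above. The sketch below is a hypothetical toy, not Xia et al.'s or Li et al.'s model: the spatial cue is contrast against the frame's mean intensity, the temporal cue is the absolute frame difference, and the two normalised maps are blended with a fixed weight `alpha` (a real method would adapt this weight, e.g. by entropy-based uncertainty).

```python
import numpy as np

def spatiotemporal_saliency(prev_frame, frame, alpha=0.5):
    """Toy spatiotemporal saliency for grayscale frames: blend a spatial
    contrast map with a temporal frame-difference map."""
    f = frame.astype(np.float64)
    spatial = np.abs(f - f.mean())                       # spatial cue
    temporal = np.abs(f - prev_frame.astype(np.float64)) # motion cue

    def norm(m):  # rescale a map to [0, 1]
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    return alpha * norm(spatial) + (1 - alpha) * norm(temporal)
```

A bright object that also moves scores high on both cues, which is exactly the behaviour video saliency models try to capture.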
8. 8
Methodology
• During pre-processing, the input image is divided into basic processing units
(image pixels, superpixel segments, or pixel clusters). Features are then
extracted to investigate the characteristics of every processing unit, and bottom-up
co-saliency cues are generated (single-cue exploration). Finally, the bottom-up
co-saliency cues are combined to create the co-saliency maps of the input
images. Cue exploration is the primary issue with bottom-up approaches. The
co-saliency cue's fundamental formulation is:
co-saliency = saliency × repeatedness (1)
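Eq. (1) is an element-wise product of two maps per image: a single-image saliency map and a repeatedness map measuring how often a region recurs across the group. A minimal sketch, assuming both maps are already computed and scaled to [0, 1]:

```python
import numpy as np

def co_saliency(saliency_maps, repeatedness_maps):
    """Eq. (1): co-saliency = saliency * repeatedness, applied
    element-wise to each image's pair of maps in the group.
    Both inputs are lists of same-shaped arrays in [0, 1]."""
    return [s * r for s, r in zip(saliency_maps, repeatedness_maps)]
```

The product suppresses regions that are salient in only one image (low repeatedness) as well as repeated but unremarkable background (low saliency).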
9. 9
Methodology
• The adopted methods for examining bottom-up cues have evolved from early basic
techniques (ranking and matching) to modern, more powerful ones (pattern mining
and matrix factorisation). The cues are typically combined by multiplying
or averaging, but some newer approaches have used superior fusion tactics that take
advantage of both multiplication and averaging. Cluster-based co-saliency detection,
which computes cluster-level co-saliency using bottom-up saliency cues, is
representative of the typical bottom-up approach. In this method, the pre-processing
stage was carried out using k-means clustering.
– RGB colour features were derived to describe the discovered clusters.
– Bottom-up cues (contrast, spatial, and correspondence cues) were then examined to
express the co-saliency.
– The final co-saliency prediction is derived by integrating the bottom-up cues using a
straightforward multiplication technique.
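The cluster-level pipeline above can be sketched end to end. This is a drastically simplified, hypothetical version of such a pipeline, not the published algorithm: it keeps only a contrast cue (distance of a cluster's centre from the group's mean colour) and a repeatedness cue (fraction of images in which the cluster occurs), multiplied in the spirit of Eq. (1).

```python
import numpy as np

def cluster_cosaliency(images, k=3, iters=10):
    """Toy cluster-level co-saliency: k-means over the RGB pixels of the
    whole image group, then contrast * repeatedness per cluster, mapped
    back to pixels. `images` is a list of H x W x 3 arrays."""
    pixels = np.concatenate([im.reshape(-1, 3).astype(np.float64) for im in images])
    mean = pixels.mean(axis=0)
    # deterministic farthest-point initialisation of the cluster centres
    centers = [pixels[np.argmax(np.linalg.norm(pixels - mean, axis=1))]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):  # plain Lloyd's k-means
        labels = np.linalg.norm(pixels[:, None] - centers[None], axis=-1).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    # contrast cue: distance of each cluster centre from the mean colour
    contrast = np.linalg.norm(centers - mean, axis=-1)
    contrast /= contrast.max() + 1e-12
    # repeatedness cue: fraction of images in which the cluster occurs
    bounds = np.cumsum([0] + [im.shape[0] * im.shape[1] for im in images])
    repeat = np.zeros(k)
    for i in range(len(images)):
        repeat += np.isin(np.arange(k), labels[bounds[i]:bounds[i + 1]])
    score = contrast * (repeat / len(images))  # Eq. (1) at cluster level
    return [score[labels[bounds[i]:bounds[i + 1]]].reshape(im.shape[:2])
            for i, im in enumerate(images)]
```

A common coloured object on plain backgrounds ends up in a high-contrast cluster present in every image, so its pixels receive the highest scores.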
10. 10
Results and Discussion
• The Image Pair dataset comprises 106 image pairs with a resolution of roughly
128×100 pixels. It is a benchmark dataset created for the evaluation of co-saliency
detection, and it also includes weakly annotated ground-truth data. Each image
has one or more identical objects placed against various backdrops. This dataset
has been used often by previous co-saliency detection methods. Since the
Image Pair dataset is less challenging than other benchmark datasets,
prevailing co-saliency algorithms have obtained remarkable results on it (AUC of
97% and AP of 93%).
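The AUC and AP figures quoted above can be computed from a predicted map and a binary ground-truth mask by ranking pixels by score. The sketch below uses the standard textbook definitions; it is not necessarily the exact evaluation protocol used by the benchmark.

```python
import numpy as np

def auc_ap(saliency, gt):
    """ROC AUC and average precision of a saliency map against a binary
    ground-truth mask, computed from scratch by sorting pixels by score."""
    s = saliency.ravel().astype(np.float64)
    y = gt.ravel().astype(bool)[np.argsort(-s)]  # ground truth in score order
    tp, fp = np.cumsum(y), np.cumsum(~y)
    tpr = tp / max(tp[-1], 1)  # true-positive rate at each cut-off
    fpr = fp / max(fp[-1], 1)  # false-positive rate at each cut-off
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1])) / 2  # trapezoidal area
    precision = tp / np.arange(1, y.size + 1)
    ap = (precision * y).sum() / max(y.sum(), 1)
    return auc, ap
```

A perfect map, one that ranks every foreground pixel above every background pixel, yields AUC = AP = 1.0.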
11. 11
Conclusion
• We evaluated the co-saliency detection literature in this study.
• We reviewed recent developments in this area, examined the key methods, and
discussed co-saliency detection's problems and prospective applications.
• In conclusion, co-saliency detection is a recently established and quickly
expanding research field within computer vision.
12. 12
References
• [1] Ali Borji. 2012. Boosting bottom-up and top-down visual features for saliency estimation. In IEEE Conference on
Computer Vision and Pattern Recognition. Providence, IEEE, 438–445.
• [2] Ali Borji, Ming-Ming Cheng, Huaizu Jiang, and Jia Li. 2015. Salient object detection: A benchmark. IEEE
Transactions on Image Processing 24, 12 (2015), 5706–5722.
• [3] Ali Borji and James Tanner. 2016. Reconciling saliency and object center-bias hypotheses in explaining free-viewing
fixations. IEEE Transactions on Neural Networks and Learning Systems 27, 6 (2016), 1214–1226.
• [4] Xiaochun Cao, Yupeng Cheng, Zhiqiang Tao, and Huazhu Fu. 2014. Co-saliency detection via base reconstruction. In
ACM International Conference on Multimedia. Florida, ACM, 997–1000.
• [5] Pratik Prabhanjan Brahma, Dapeng Wu, and Yiyuan She. 2016. Why deep learning works: A manifold
disentanglement perspective. IEEE Transactions on Neural Networks and Learning Systems 27, 10 (2016), 1997–2008.
• [6] Xiaochun Cao, Zhiqiang Tao, Bao Zhang, Huazhu Fu, and Xuewei Li. 2013. Saliency map fusion based on rank-one
constraint. In IEEE International Conference on Multimedia and Expo. San Jose, IEEE, 1–8.
• [7] Xiaochun Cao, Zhiqiang Tao, Bao Zhang, Huazhu Fu, and Wei Feng. 2014. Self-adaptively weighted co-saliency
detection via rank constraint. IEEE Transactions on Image Processing 23, 9 (2014), 4175–4186.
13. 13
References
• [8] Xiaochun Cao, Changqing Zhang, Huazhu Fu, Xiaojie Guo, and Qi Tian. 2016. Saliency-aware nonparametric
foreground annotation based on weakly labeled data. IEEE Transactions on Neural Networks and Learning Systems 27, 6
(2016), 1253–1265.
• [9] Xiaojun Chang, Zhigang Ma, Yi Yang, Zhiqiang Zeng, and Alexander G. Hauptmann. 2017. Bi-level semantic
representation analysis for multimedia event detection. IEEE Trans Cybernetics 47, 5 (2017), 1180–1197.
• [10] Kai-Yueh Chang, Tyng-Luh Liu, and Shang-Hong Lai. 2011. From co-saliency to co-segmentation: An efficient and
fully unsupervised energy minimization model. In IEEE Conference on Computer Vision and Pattern Recognition.
Colorado Springs, IEEE, 2129–2136.
• [11] Yi-Lei Chen and Chiou-Ting Hsu. 2014. Implicit rank-sparsity decomposition: Applications to saliency/co-saliency
detection. In International Conference on Pattern Recognition. Stockholm, IEEE, 2305–2310.
• [12] Ming Cheng, Niloy J. Mitra, Xumin Huang, Philip H. S. Torr, and Song Hu. 2015. Global contrast based salient
region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 3 (2015), 569–582.
• [13] Yue Deng, Qionghai Dai, Risheng Liu, Zengke Zhang, and Sanqing Hu. 2013. Low-rank structure learning via
nonconvex heuristic recovery. IEEE Transactions on Neural Networks and Learning Systems 24, 3 (2013), 383–396.
14. 14
References
• [14] Yuming Fang, Zhou Wang, Weisi Lin, and Zhijun Fang. 2014. Video saliency incorporating spatiotemporal cues and
uncertainty weighting. IEEE Transactions on Image Processing 23, 9 (2014), 3910–3921.
• [15] Rajagopal Sudarmani, Kanagaraj Venusamy, Sathish Sivaraman, Poongodi Jayaraman, Kannadhasan Suriyan,
Manjunathan Alagarsamy, “Machine to machine communication enabled internet of things: a review”, International
Journal of Reconfigurable and Embedded Systems, 2022, 11(2), pp. 126-134.
• [16] Roselin Suganthi Jesudoss, Rajeswari Kaleeswaran, Manjunathan Alagarsamy, Dineshkumar Thangaraju, Dinesh
Paramathi Mani, Kannadhasan Suriyan, “Comparative study of BER with NOMA system in different fading channels”,
Bulletin of Electrical Engineering and Informatics, 2022, 11(2), pp. 854–861.