In this work, we present a deep learning-based approach to image tampering localization fusion. The approach combines the outcomes of multiple image forensics algorithms into a single fused tampering localization map that requires no expert knowledge and is easier for end users to interpret. Our fusion framework combines five individual tampering localization methods for splicing localization on JPEG images. The proposed deep learning fusion model adapts an architecture originally proposed for image restoration: it performs multiple operations in parallel, weighted by an attention mechanism, so that the appropriate operations are selected depending on the input signals. This weighting is particularly beneficial when the input signals are highly diverse, as in our case where the outputs of multiple image forensics algorithms are combined. Evaluation on three publicly available forensics datasets demonstrates that the proposed approach is competitive, outperforming the individual forensics techniques as well as another recently proposed fusion framework in the majority of cases.
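The idea of attention-weighted fusion of several localization maps can be illustrated with a minimal sketch. This is not the paper's actual network: the maps, the per-method scores, and the softmax weighting below are hypothetical stand-ins for the learned attention mechanism.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_localization_maps(maps, scores):
    """Fuse per-method tampering localization maps (2D lists of pixel
    probabilities, all the same size) into one map, weighting each
    method by a softmax over its score."""
    weights = softmax(scores)
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(w * m[i][j] for w, m in zip(weights, maps))
             for j in range(cols)]
            for i in range(rows)]
```

With equal scores the fusion reduces to a plain average of the input maps; unequal scores shift weight toward the methods the attention mechanism trusts more for the given input.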
Face Recognition Based on Deep Learning by Yurii Pashchenko (Technology Stream, IT Arena)
Lviv IT Arena is a conference designed for programmers, designers, developers, top managers, investors, entrepreneurs, and startup founders. It takes place annually on 2-4 October at the Arena Lviv stadium in Lviv. In 2015 the conference gathered more than 1,400 participants and over 100 speakers from companies such as Facebook, Fitbit, Mail.ru, HP, Epson, and IBM. More details about the conference at itarene.lviv.ua.
Master thesis defence by Manuel Martos-Asensio
Advisors: Horst Eidenberger (Technische Universität Wien) and Xavier Giró-i-Nieto (Universitat Politècnica de Catalunya)
More details
The recent emergence of machine learning and deep learning methods for medical image analysis has enabled the development of intelligent medical imaging-based diagnosis systems that can assist physicians in making better decisions about a patient’s health. In particular, skin imaging is a field where these new methods can be applied with a high rate of success.
This thesis focuses on the problem of automatic skin lesion detection, particularly melanoma detection, by applying semantic segmentation and classification to dermoscopic images using a deep learning-based approach. For the first problem, a U-Net convolutional neural network architecture is applied for accurate extraction of the lesion region. For the second problem, the current model performs a binary classification (benign versus malignant) that can be used for early melanoma detection, and it is general enough to be extended to multi-class skin lesion classification. The proposed solution is built around the VGG-Net ConvNet architecture and uses the transfer learning paradigm. Finally, this work performs a comparative evaluation of classification alone (using the entire image) against a combination of the two approaches (segmentation followed by classification) in order to assess which achieves better classification results.
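The "segmentation followed by classification" pipeline can be sketched in a minimal form: use the segmentation mask to crop the image to the lesion region before passing it to the classifier. The helper names below are hypothetical; the thesis itself uses U-Net and VGG-Net, not this toy code.

```python
def mask_bounding_box(mask):
    """Bounding box (top, left, bottom, right), inclusive, of the
    nonzero region of a binary mask given as a 2D list of 0/1."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0]))
            if any(mask[i][j] for i in range(len(mask)))]
    return rows[0], cols[0], rows[-1], cols[-1]

def crop_to_lesion(image, mask):
    """Crop the image to the segmented lesion before classification."""
    top, left, bottom, right = mask_bounding_box(mask)
    return [row[left:right + 1] for row in image[top:bottom + 1]]
```

The comparative evaluation then amounts to feeding the classifier either the full image or `crop_to_lesion(image, mask)` and comparing the resulting metrics.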
https://imatge.upc.edu/web/publications/keyframe-based-video-summarization-designer
This Final Degree Work extends two previous projects: it improves the video keyframe extraction module of one of them, Designer Master, by integrating the algorithms developed in the other, Object Maps.
First, the proposed solution is explained: a shot detection method in which the input video is sampled uniformly, a cumulative pixel-to-pixel difference is computed, and a classifier decides which frames are keyframes.
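The sampling and cumulative-difference steps can be sketched as follows. This is an illustrative approximation: a fixed threshold stands in for the learned classifier, and the frame format (2D lists of grayscale values) is an assumption.

```python
def frame_difference(f1, f2):
    """Mean absolute pixel-to-pixel difference between two frames
    given as 2D lists of grayscale values."""
    n = len(f1) * len(f1[0])
    return sum(abs(a - b) for r1, r2 in zip(f1, f2)
               for a, b in zip(r1, r2)) / n

def select_keyframes(frames, step=2, threshold=10.0):
    """Sample frames uniformly every `step` frames; flag a sampled frame
    as a keyframe when the cumulative difference since the last keyframe
    exceeds the threshold, then reset the accumulator."""
    keyframes = [0]
    cumulative = 0.0
    for i in range(step, len(frames), step):
        cumulative += frame_difference(frames[i - step], frames[i])
        if cumulative > threshold:
            keyframes.append(i)
            cumulative = 0.0
    return keyframes
```

A real system would replace the threshold test with the trained classifier's decision on the accumulated difference features.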
Finally, to validate our approach, we conducted a user study in which both applications were compared. Users completed a survey about summaries created with the original application and with the one developed in this project. Analysis of the results showed that the improved keyframe extraction module slightly improves the application's performance and the quality of the generated summaries.
An Automated Approach for the Recognition of Bengali License Plates by MD Abdullah Al Nasim
Automatic License Plate Recognition (ALPR) is a system for automatically identifying the license plate of any vehicle. This process is important for tracking, ticketing, and billing systems, among other things. With the use of information and communication technology (ICT), many systems are being automated, including vehicle tracking. This study proposes a hybrid method for detecting license plates and recognizing their characters, applied to captured images of Bangladeshi vehicles. For license plate detection, the YOLO model was used, correctly predicting 81% of plates. Otsu's thresholding was then used for license plate segmentation, and finally a CNN model was applied for character recognition. This model allows a vehicle's automated license plate detection system to help prevent misuse.
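Otsu's thresholding, used here to binarize the plate before character segmentation, picks the gray level that maximizes the between-class variance of the resulting foreground and background. A minimal pure-Python version is sketched below; a real pipeline would typically use OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag instead.

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level that maximizes between-class variance
    for a flat list of integer pixel values in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = 0      # background pixel count
    sum_b = 0    # background intensity sum
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_all - sum_b) / w_f     # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels above the returned threshold are treated as foreground (characters), the rest as background.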
Auto-Assistance System for Visually Impaired Persons by shahsamkit73
The World Health Organization (WHO) reports that there are 285 million visually impaired people worldwide, of whom 39 million are totally blind. Several systems have been designed to support visually impaired people and improve their quality of life. One of the most difficult activities they must perform is indoor navigation: in an indoor environment, a visually impaired person must be aware of obstacles ahead and be able to avoid them. The use of powered wheelchairs with high transportability and obstacle-avoidance intelligence is one of the great steps towards the integration of physically disabled and mentally handicapped people. Since a blind person cannot see objects in their path, an auto-assistance system can meet this need. An auto-assistance system operating in a dynamic environment must sense its surroundings and adapt its control signals in real time to avoid collisions and protect the user. Such systems, which assist or replace user control, can draw on systems and algorithms from assistive robots. The system could assist disabled people in their mobility by warning of obstacles, and could be used in indoor environments such as hospitals or public gardens. We are therefore designing an auto-assistance system that helps visually impaired persons move independently: it detects obstructions in the person's path using a USB camera and helps them avoid collisions.
GitHub Link: https://github.com/shahsamkit73/Auto-Assistance-System-for-visually-impaired
Color-Based Image Processing, Tracking and Automation Using MATLAB by Kamal Pradhan
Image processing is a form of signal processing in which the input is an image, such as a photograph or video frame. The output may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques treat the image as a two-dimensional signal and apply standard signal-processing techniques to it. This project processes real-time images captured by a webcam for motion detection, color recognition, and system automation using MATLAB programming.
In color-based image processing we work with colors instead of objects. Color provides powerful information for object recognition. A simple and effective recognition scheme is to represent and match images on the basis of color histograms.
Tracking refers to detecting the path of the color: once the color-based processing is done, the color becomes the object to be tracked, which can be very helpful for security purposes.
An automated system is any system that does not require human intervention. In this project I have automated the mouse so that it works with our gestures and performs the desired tasks.
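The histogram-based recognition scheme mentioned above can be sketched in a few lines (a toy illustration, not the project's MATLAB code): quantize the RGB pixels of each image into a joint histogram and compare two images by histogram intersection.

```python
def color_histogram(pixels, bins=4):
    """Normalized joint RGB histogram; pixels is a list of (r, g, b)
    tuples with channel values in 0..255, each channel quantized
    into `bins` levels."""
    step = 256 // bins
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    n = float(len(pixels))
    return {k: v / n for k, v in hist.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0))
               for k in set(h1) | set(h2))
```

Two images of the same colored object score close to 1.0 regardless of its position in the frame, which is exactly why color histograms work well for this kind of tracking.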
An Image-Based PCB Fault Detection and Its Classification by rahulmonikasharma
The field of electronics is skyrocketing like never before, and the habitat for electronic components is the printed circuit board (PCB). With the advent of newer and finer technologies it has become almost impossible to detect faults in a printed circuit board manually, which consumes a lot of manpower and time. This paper proposes a simple and cost-effective method of fault diagnosis in a PCB using image processing techniques. In addition to fault detection and classification, it addresses various problems faced during the pre-processing phase, overcoming drawbacks of previous works such as improper image orientation and size variations. An image subtraction algorithm is used for fault detection. This work concentrates on the most commonly occurring faults, implemented using the MATLAB tool.
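The image-subtraction idea can be sketched as follows, assuming a known-good reference image and a test image that are already aligned and equally sized (the pre-processing steps the paper addresses); the threshold value is an arbitrary illustration.

```python
def fault_mask(reference, test, threshold=30):
    """Image subtraction for PCB fault detection: mark pixels where the
    absolute grayscale difference between a known-good reference board
    and the board under test exceeds a threshold."""
    return [[1 if abs(r - t) > threshold else 0
             for r, t in zip(ref_row, test_row)]
            for ref_row, test_row in zip(reference, test)]

def has_fault(mask, min_pixels=1):
    """Flag the board as faulty when enough pixels differ; a small
    minimum suppresses isolated noise pixels."""
    return sum(map(sum, mask)) >= min_pixels
```

Connected regions of the mask would then be classified into fault types (missing track, short, etc.) in a full pipeline.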
Strategy for Foreground Movement Identification Adaptive to Background Variat... by IJECEIAES
Video processing has gained significance because of its applications in various areas of research, including monitoring movements in public places for surveillance. Video sequences from standard datasets such as I2R, CAVIAR, and UCSD are often referred to in video processing applications and research. The significance of this research lies in identifying the foreground movement of actors and objects in video sequences, which can be done against a static or a dynamic background; identification becomes complex when movements must be detected against a dynamic background. For identifying foreground movement in video sequences with a dynamic background, two algorithms are proposed in this article: Frame Difference between Neighboring Frames using Hue, Saturation and Value (FDNF-HSV) and Frame Difference between Neighboring Frames using Greyscale (FDNF-G). The proposed algorithms are evaluated against state-of-the-art techniques with regard to F-measure, recall, and precision, and the results show enhanced performance.
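The evaluation metrics mentioned (precision, recall, F-measure) compare a predicted foreground mask against a ground-truth mask pixel by pixel. A minimal sketch, with masks flattened to 0/1 lists for simplicity:

```python
def precision_recall_f1(pred, gt):
    """Pixel-wise precision, recall and F-measure for binary
    foreground masks given as flat 0/1 lists of equal length."""
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

F-measure is the harmonic mean of precision and recall, so it rewards algorithms that neither over-segment (low precision) nor miss moving pixels (low recall).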
Oleksandr Zarichkovyi, "Faster than Real-Time Face Detection" by Fwdays
I will talk about the object and face detection problems, the evolution of different approaches to solving them, and the ideas behind each approach. I will also describe a meta-architecture that achieves state-of-the-art results on the face detection problem and runs faster than real time.
Keynote WFIoT2019 - Data Graph, Knowledge Graphs Ontologies, Internet of Thin... by Amélie Gyrard
Keynote “Trends on Data Graphs & Security for the Internet of Things”
(Extended Version) #WF-IoT World Forum Internet of Things
Workshop on #Security and #Privacy for #InternetofThings and Cyber-Physical Systems #CPS
#Security #Toolbox #Attacks and #Countermeasures #STAC
#Security #KnowledgeGraphs #Ontologies
Speaker: Dr. Ghislain Atemezing (Research & Development Director, MONDECA, Paris, France) @gatemezing
Credits: Dr. Amelie Gyrard (Kno.e.sis, Wright State University, Ohio, USA)
Transfer Learning Model for Image Segmentation by Integrating U-NetPlusPlus a... by YutaSuzuki27
In the image classification task we only need to learn local features, but in the image segmentation task we also need to learn positional information; the two tasks therefore differ in the features to be learned. In this study, we propose SE-U-Net++, which efficiently learns both local features and positional information by incorporating SE blocks, and a transfer learning algorithm that bridges the difference between the tasks by comparing parameters in the convolutional layers.
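A Squeeze-and-Excitation (SE) block recalibrates channels in three steps: squeeze each channel to a scalar by global average pooling, pass that vector through a small two-layer bottleneck (ReLU then sigmoid), and rescale each channel by its resulting gate. The toy sketch below uses nested lists and hypothetical weight matrices `w1` and `w2` in place of learned parameters.

```python
import math

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation on a C x H x W tensor given as a list of
    2D channels. w1 (bottleneck) and w2 (expansion) are weight matrices
    of the two fully connected layers."""
    # squeeze: global average pool per channel -> C-dim vector
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_maps]
    # excitation: FC -> ReLU -> FC -> sigmoid gates, one per channel
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed)))
              for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # scale: multiply every pixel of each channel by its gate
    return [[[g * v for v in row] for row in ch]
            for g, ch in zip(gates, feature_maps)]
```

With zero weights in the second layer every gate is sigmoid(0) = 0.5, so each channel is simply halved; trained weights instead learn to emphasize informative channels and suppress the rest.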
Cloud Computing Needs for Earth Observation Data Analysis: EGI and EOSC-hub by Björn Backeberg
This presentation was given during the Japan Geosciences Union 2019. Session details can be found at http://www.jpgu.org/meeting_e2019/SessionList_en/detail/M-GI31.htm
Secure IoT Systems Monitor Framework using Probabilistic Image Encryption by IJAEMSJORNAL
In recent years, the modeling of human behaviors and activity patterns for the recognition or detection of special events has attracted considerable research interest. Various methods abound for building intelligent vision systems aimed at understanding the scene and making correct semantic inferences from the observed dynamics of moving targets. Many systems include detection, storage of video information, and human-computer interfaces. Here we present not only an update that expands previous similar surveys but also an emphasis on the contextual detection of abnormal human activity, especially in video surveillance applications. The main purpose of this survey is to identify existing methods extensively and to characterize the literature in a manner that brings key challenges to attention.
Testing Challenges and Approaches in Edge Computing by Axel Rennoch
As known from Internet of Things (IoT) testing, multiple challenges also exist for the Edge Computing (EC) quality assurance and automated testing process. Developers and QA experts need to understand the specific requirements and possible approaches to be applied in Edge Computing test design, definition, and execution. Special attention will be given to existing approaches, testing techniques, and tools that follow standardized methods, are freely available, and have been successfully applied to various mobile and fixed network solutions.
End-to-end deep auto-encoder for segmenting a moving object with limited tra... by IJECEIAES
Deep learning-based approaches have been widely used in various applications, including segmentation and classification. However, a large amount of data is required to train such techniques, and in the surveillance video domain little data is accessible due to acquisition and experimental complexity. In this paper, we propose an end-to-end deep auto-encoder system for segmenting objects in surveillance videos. Our main purpose is to enhance the process of distinguishing the foreground object when only limited data are available. To this end, we propose two approaches based on transfer learning and multi-depth auto-encoders that avoid over-fitting by combining classical data augmentation and principal component analysis (PCA) techniques to improve the quality of the training data. Our approach achieves good results, outperforming other popular models trained under the same limited-data conditions. In addition, a detailed explanation of these techniques and some recommendations are provided. Our methodology constitutes a useful strategy for increasing samples in the deep learning domain and can be applied to improve segmentation accuracy. We believe our strategy is of considerable interest in various applications, such as the medical and biological fields, especially in the early stages of experiments when there are few samples.
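The classical data-augmentation half of that strategy can be sketched simply: generate flipped and rotated copies of each training frame to multiply a limited sample set. The PCA-based component is omitted here, and the function names are illustrative rather than taken from the paper.

```python
def augment_frame(frame):
    """Classical augmentation for a 2D frame: the original, its
    horizontal flip, its vertical flip, and a 180-degree rotation."""
    h_flip = [row[::-1] for row in frame]
    v_flip = frame[::-1]
    rot180 = [row[::-1] for row in frame[::-1]]
    return [frame, h_flip, v_flip, rot180]

def augment_dataset(frames):
    """Expand a small training set fourfold."""
    return [aug for f in frames for aug in augment_frame(f)]
```

Each original sample yields four training samples, which is exactly the kind of cheap multiplication that helps when only a handful of annotated surveillance frames are available.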
This talk was given at a workshop entitled "Cybersecurity Engagement in a Research Environment" at Rady School of Management at UCSD. The workshop was organized by Michael Corn, the UCSD CISO. It tries to provoke discussion around the cybersecurity features and requirements of international science collaborations, as well as more generally, federated cyberinfrastructure systems.
Overview and introductory remarks for the OGF sessions held May 21-22, 2015 co-located with the European Grid Initiative 2015 conference that took place the week of May 18-22, 2015 in Lisbon, Portugal. For details, see https://www.ogf.org/ogf/doku.php/events/ogf-44
TTO2021: Cross-Lingual Rumour Stance Classification: a First Study with BERT... by Weverify
By Carolina Scarton. Presentation at the Truth and Trust Online Conference (TTO 2021). Link: https://truthandtrustonline.com/wp-content/uploads/2021/10/TTO2021_paper_31.pdf
Demo presentation of the MeVer tools for disinformation detection, which consist of context aggregation and analysis, image forensics, a DeepFake detector, near-duplicate detection, visual location estimation, and network analysis and visualization.
DETECTING AND VERIFYING ONLINE DISINFORMATION:
HOW NLP AND DATA ANALYSIS CAN HELP.
By Carolina Scarton
Youtube link: https://www.youtube.com/watch?v=JPq3WFhbgsY
LIMITS AND RISKS OF USING AI FOR FACT-CHECKING:
QUESTIONS OF EFFECTIVENESS AND LEGALITY OF AI-DRIVEN DISINFORMATION DETECTION AND MODERATION.
EDMO workshop.
By Kalina Bontcheva
State of ICS and IoT Cyber Threat Landscape Report 2024 preview by Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Epistemic Interaction - tuning interfaces to provide information for AI support by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Smart TV Buyer Insights Survey 2024 by 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... by BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Generative AI Deep Dive: Advancing from Proof of Concept to Production by Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... by UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Operation-wise Attention Network for Tampering Localization Fusion.
1. Operation-wise Attention Network for
Tampering Localization Fusion
Polychronis Charitidis, Giorgos Kordopatis-Zilos, Symeon Papadopoulos, Ioannis
Kompatsiaris
MeVer Team @ Information Technologies Institute (ITI) /
Centre for Research & Technology Hellas (CERTH)
Content-Based Multimedia Indexing Conference, June 28-30, 2021
2. WeVerify Project
● Goals
○ Address the advanced content verification challenges
○ Social media and web content analysis for detection of disinformation
○ Exposure of misleading and fabricated content
○ Platform for collaborative, decentralised content verification, tracking, and debunking.
● Developed tools
○ DeepFake detection service
○ Image Verification Assistant
3. Image Verification Assistant
● Goal: forgery localization in images.
● Report from various image forensics algorithms.
○ JPEG based methods, Noise-based methods, Deep-learning based methods
○ Focuses on splicing and copy-move manipulations
● Inspect the multiple reports in tandem.
Tampered Image Mask Localizations from Forensic Algorithms
Source: DEFACTO dataset
4. Motivation
● Observations:
○ Many forensics output visualizations increase the complexity of the results, especially for
non-experts.
■ Each algorithm requires specific knowledge for proper interpretation.
○ Some of these forensics results are complementary to each other, so their combination could
potentially lead to better results
● Solution and contributions:
○ Develop a fully automatic fusion approach that is able to combine diverse forensics signals.
○ The combined result:
■ is more robust and accurate
■ is easier to interpret and requires no specialized knowledge
■ empowers non-experts in image verification
5. Methodology
● For this work, we select five forensics algorithms for fusion.
● These algorithms were selected from a larger pool based on their performance on
forgery localization datasets:
○ ADQ1 and DCT, which both base their detection on analysis of the JPEG compression in the
transform domain
○ BLK and CAGI, which base their detection on analysis of the JPEG compression in the spatial
domain
○ Splicebuster, which is a noise-based detector
● Train a deep learning architecture to fuse the diverse tampering localization
algorithms
○ Fully automatic
○ Complex and diverse features
○ Availability of large-scale datasets
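As an illustrative sketch of the fusion input (an assumption; the slides do not prescribe the exact tensor layout), the five per-algorithm localization maps can be stacked channel-wise before being fed to the fusion network:

```python
import numpy as np

def build_fusion_input(localization_maps):
    """Stack per-algorithm tampering heatmaps into one multi-channel input.

    localization_maps: list of five (H, W) float arrays, one per forensics
    algorithm (ADQ1, DCT, BLK, CAGI, Splicebuster), assumed normalized
    to [0, 1]. The channel-stacked layout is an illustrative assumption,
    not the paper's specification.
    """
    maps = [np.clip(np.asarray(m, dtype=np.float32), 0.0, 1.0)
            for m in localization_maps]
    return np.stack(maps, axis=0)  # shape (5, H, W)
```

With this layout, the fusion model consumes one 5-channel image per input, matching the 15,000 images / 75,000 localizations ratio mentioned later in the training setup.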
6. Models
● We considered two different models:
○ Eff-B4-Unet: A U-Net based architecture that uses EfficientNet-B4 as an encoder
○ Operation-wise Attention Fusion network (OwAF), which is an adapted image
restoration architecture
■ Operation-wise Attention layer:
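A minimal NumPy sketch of one operation-wise attention layer, as described above: several operations are applied in parallel, weighted by attention, concatenated, mixed by a 1x1 convolution, and added back residually. Identity/average/max pooling stand in for the actual operation set, and the attention logits and mixing weights are placeholders; in the real model these are learned parameters and the operations include convolutions.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def shift_window_reduce(x, reduce_fn, pad_value):
    # 3x3 sliding-window reduction with stride 1 and padding (keeps H, W)
    c, h, w = x.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)), constant_values=pad_value)
    out = np.full_like(x, pad_value)
    for dy in range(3):
        for dx in range(3):
            out = reduce_fn(out, p[:, dy:dy + h, dx:dx + w])
    return out

def op_attention_layer(x, attn_logits, mix):
    """x: (C, H, W) features; attn_logits: (3,); mix: (C, 3*C) 1x1-conv weights."""
    ops = [
        x,                                            # identity
        shift_window_reduce(x, np.add, 0.0) / 9.0,    # 3x3 average pooling
        shift_window_reduce(x, np.maximum, -np.inf),  # 3x3 max pooling
    ]
    w = softmax(attn_logits)  # attention weights over the parallel operations
    cat = np.concatenate([w[i] * o for i, o in enumerate(ops)], axis=0)
    fused = np.tensordot(mix, cat, axes=([1], [0]))  # 1x1 conv = channel mixing
    return x + fused                                 # residual connection
```

The attention weights let the layer emphasize whichever operation suits the current input signal, which is the property that motivates using this architecture for fusing diverse forensics outputs.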
7. Training and Evaluation process
● Training dataset:
○ DEFACTO dataset (Mahfoudi et al., 2019)
○ Contains various synthetic manipulations like splicing and copy-move
○ 15,000 tampered images / 75,000 forensics algorithm localizations
● Evaluation datasets:
○ DEFACTO test dataset
■ Contains 1,000 tampered images
○ CASIA V2.0 dataset (Dong et al., 2013)
■ Contains 5,123 tampered images
○ The IFS-TC Image Forensics Challenge set
■ Contains 450 tampered images
● Compared our approach with another fusion approach (Iakovidou et al., 2020)
● Metrics: F1, IoU
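Both metrics can be computed per image from a binarized prediction against the ground-truth mask; a straightforward NumPy sketch (the 0.5 binarization threshold is an assumption):

```python
import numpy as np

def f1_iou(pred, gt, threshold=0.5):
    """Pixel-level F1 and IoU between a predicted localization map and a
    ground-truth mask. `pred` is a float map binarized at `threshold`
    (the exact threshold is an assumption); `gt` is a boolean mask."""
    p = np.asarray(pred) >= threshold
    g = np.asarray(gt).astype(bool)
    tp = np.logical_and(p, g).sum()   # true positives
    fp = np.logical_and(p, ~g).sum()  # false positives
    fn = np.logical_and(~p, g).sum()  # false negatives
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return float(f1), float(iou)
```

For example, a prediction covering half the tampered region with no false positives gives F1 = 2/3 and IoU = 1/2, showing that IoU penalizes partial overlap more heavily than F1.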
11. Discussion and Limitations
● The reported experimental results are promising and in many cases
outperform the individual forensics techniques.
● Our automatic approach outperforms a competing fusion approach in many
cases.
● The results of our approach are easier to interpret by non-experts.
● An important limitation of this work is the generalization ability of the
fusion model.
● The approach's performance depends on the performance of the individual
forensics algorithms.
12. Future work
● To deal with the generalization, we will try to increase the size of the training
dataset and include different manipulations from other datasets.
● We will experiment with task-specific regularization techniques, like
localization map dropout.
● We plan to experiment with multi-stream fusion architectures that besides the
forensics localization maps, will consider the input image itself.
13. Thank you!
Polychronis Charitidis / charitidis@iti.gr
Media Verification Team / https://mever.gr / @meverteam
WeVerify project / http://www.weverify.eu / @WeVerify
Editor's Notes
Hello, my name is Polychronis Charitidis and I am going to present the study that I have conducted alongside with my colleagues Giorgos Kordopatis-Zilos, Symeon Papadopoulos and Ioannis Kompatsiaris with the title “Operation-wise Attention Network for Tampering Localization Fusion”. I am a member of the media verification team of Information Technologies Institute which is part of the Centre for Research & Technology Hellas which is located in Thessaloniki, Greece . My main research interests are in the field of media forensics and content verification.
The work I am going to present was conducted in the context of the WeVerify project, an ongoing EU Horizon 2020 project. The main goals of WeVerify are to address advanced content verification challenges, to analyse social media and web content in order to detect disinformation campaigns, and to expose and debunk misleading and manipulated content. The outcome of the project aims to be a platform for collaborative, decentralised content verification, tracking, and debunking. Many tools were developed or enhanced during WeVerify. One example is a deepfake detection service, which detects facial manipulations in images and videos. Another example is a set of improvements made to an already existing tool, the Image Verification Assistant, which uses image forensics algorithms to provide reports on potential forgeries in images. The work presented here showcases a particular enhancement of this tool.
As I mentioned, the main goal of the Image Verification Assistant is to localize potential forgeries in images. Due to the large number of possible forgery types and transformations that can be applied to an image, it is beneficial for a forensics report to include results from multiple forensics algorithms that cover a wide range of them. So the Image Verification Assistant provides a report that consists of localizations from JPEG-based methods, noise-based methods, and deep-learning based methods, and focuses on manipulation types like splicing and copy-move. The process of verification is straightforward: a user submits an image for inspection to the tool. This image might be tampered, like the image in the example below on the left. The forgery is shown in yellow in the mask next to it, and the user gets a report from various forensics algorithms on the right.
Now, for an expert user it might be easy to draw a conclusion from the Image Verification Assistant, but there are some important observations. First, although discovering manipulation traces is desirable, adding a lot of forensics visualizations increases the complexity of a media verification tool, especially for non-expert users. The reason is that each algorithm has a different output that requires specific knowledge for proper interpretation. Consequently, this quickly becomes overwhelming for non-experts. Another observation is that in many cases some of these forensics results are complementary to each other, so their combination could potentially lead to better results. In this work, we aim to address these observations. The main objective is to develop a fully automatic fusion approach using deep learning that is able to leverage diverse forensics signals, so as to improve the robustness and reliability of the overall localization system. The final visualization retains the most important features of the individual algorithms, leading to more accurate results. This result is easier to interpret and requires no additional specialized knowledge. This outcome can empower non-experts, like fact-checkers and journalists, to actively contribute to image verification tasks.
For this work, in order to simplify the process, we select a subset of the forensics algorithms that appear in the Image Verification Assistant to be considered for fusion. Based on the evaluation results of another work, we select a set of five methods as the building blocks of the fusion model. These are ADQ1 and DCT, which both base their detection on analysis of the JPEG compression in the transform domain; BLK and CAGI, which base their detection on analysis of the JPEG compression in the spatial domain; and Splicebuster, which is a noise-based detector. In this work, we adopt a deep learning-based fusion approach for the following reasons. First, we aspire to develop a fully automatic approach without the need for heuristic tuning or manual intervention. Second, the complex and diverse nature of the input signal calls for an effective approach that automatically extracts the most important features, which is something that deep learning excels at. Finally, the availability of large-scale datasets, which are required by deep learning approaches, makes the training of a deep learning-based model feasible.
For the fusion model, we consider two different deep learning architectures. The first model is a U-Net based architecture. U-Net is a convolutional neural network that was initially applied to semantic segmentation in a medical context, but nowadays it has a much broader application field. The network only uses convolutions, without any fully connected layers. The U-Net architecture has many variants; for the fusion task, we use a variant that employs EfficientNet-B4 as the encoder part. The second model that we employ is an architecture that was proposed for the problem of image restoration. This architecture is suitable for the fusion problem because it uses attention to capture important features by examining which operations are the most beneficial, depending on the input signal. Another important aspect of this architecture is that it focuses on low-level features, which is important for the fusion task, as semantic or high-level representations are often not useful for the problem. After experimenting, we adapted this architecture by reducing the number of layers, replacing the dilated convolutions of the original approach, and adding more operations to be weighted with attention. The operation-wise attention layer can be seen in this slide. In each layer, a number of convolutional and pooling operations are applied to the input features. These are weighted by an attention layer and concatenated. The resulting features are processed by a 1x1 convolution. Finally, the layer input is added to the resulting feature map, just like in residual architectures.
For training these architectures, we use the DEFACTO dataset, which contains various synthetic manipulations like splicing and copy-move. We use 15,000 tampered images for training. For each image, we use the forensics algorithms to produce 5 tampering localization results. This means that the total input for the fusion model is 75,000 localizations. For the evaluation of our method, we use three datasets. The first is a separate set of 1,000 images from the DEFACTO dataset. The second is the CASIA v2.0 dataset, which contains 5,123 images, and the last one is the IFS-TC dataset, which contains 450 images. In our reported results, we compare our approach with another statistical and heuristic based fusion approach that considers the same forensics algorithms. In our experiments, we report the F1 and Intersection over Union (IoU) metrics.
In the first experiment, we investigate the performance of the two proposed fusion models on the DEFACTO test dataset. We can see that the OwAF network outperforms the Eff-B4-Unet in all evaluation metrics. Evaluation results for the individual algorithms are very low when compared to the fusion approaches. The best performing individual model is ADQ1. The figure in this slide shows random examples from the DEFACTO test dataset. The first column shows the input images. The next five columns show the outputs of the individual tampering localization algorithms. The final two columns show the ground truth mask, which reveals the actual location of the forgery, and the fusion result of the best performing OwAF. It is evident from these examples that the fusion architecture learned to combine the diverse signals in order to localize the tampered region. One interesting observation is that for each input example, there are usually different algorithms that better localize the forgery. This means that the fusion model learned to detect the proper signals that contribute to a correct localization. For example, in the first row, Splicebuster and CAGI spot the tampering, but in row three, ADQ1 and DCT do so. In both cases, the fusion model has identified these signals and provides a correct result.
To further investigate the fusion performance, we compare our best performing approach with another fusion framework. For evaluation, we use the CASIA v2 dataset in order to examine the generalization capabilities of the fusion model that was trained on the DEFACTO dataset. We can observe slightly better performance in every metric from the individual models compared to those in the previous experiment. This means that this dataset contains images with manipulations that can be localized better by the individual algorithms. ADQ1 and DCT are the best performing individual approaches. Regarding the fusion methods evaluation, our approach outperforms the competing fusion framework. One notable observation is that the performance of OwAF is significantly worse than the evaluation results reported on the previous slide. This is a clear indication that our trained models have overfitted to the training set manipulations. The fusion model possibly learned to localize specific forgeries, like shapes and patterns from the outputs of individual algorithms that frequently appear in the DEFACTO dataset. Yet, the proposed approach is still better than the individual algorithms and also outperforms the competing fusion framework in terms of both F1 score and IoU. The figure shows some successful examples of tampering localization outputs produced by the fusion model and the individual methods. In most examples, the ADQ1 and DCT visualizations better localize the tampering.
For the evaluation results on the IFS-TC dataset, a significant decrease in the performance of the individual algorithms can be observed. One exception is the Splicebuster performance, which increased compared to the previous evaluations. Splicebuster even outperforms both fusion approaches. Iakovidou et al. also achieve marginally better performance than our fusion model on this dataset. One possible explanation is that our fusion model learned to focus more on the individual localization maps that achieved better performance in the training set, namely ADQ1 and DCT. On the contrary, in the IFS-TC case, the best performing individual algorithm is Splicebuster, and this possibly justifies the poor performance of the OwAF approach. To verify this, we show the best localized results in this dataset. We can see in the figure that in each successful case the forgery has been localized by DCT and ADQ1 as well.
To sum up, from our experiments it is evident that our approach is promising and in many cases outperforms the individual forensics techniques and other competing frameworks. Additionally, the results of our approach are easier to interpret by non-experts. On the other hand, the main challenge of the proposed approach stems from overfitting to the training data. This leads to a lack of generalization to unseen manipulations; namely, we get relatively poor predictions for datasets that have different types of manipulations compared to those that appeared in the training dataset. Additionally, the low evaluation performance of the individual algorithms is a major indication that the forgery localization problem is very difficult, and that it is even more challenging to design a general fusion solution that receives noisy signals from these algorithms.
For future steps, we plan to focus on countering the issue of overfitting. We will experiment with larger datasets and combine datasets with diverse manipulations. We will also experiment with task-specific regularization approaches like localization map dropout. Finally, since so far we have used only signals from forensics algorithms, we plan to experiment with multi-stream fusion architectures that, besides these signals, will also consider the input image itself.
Thank you very much for your attention. Also, if you are interested in experimenting with our Image Verification Assistant service, please don't hesitate to send me an email. I will be happy to answer any questions you may have.