Digital image forensics is becoming increasingly popular owing to the growing use of images as a medium of information propagation. However, the wide availability of image editing tools and software also poses an increasing threat to image content security. A review of existing approaches for identifying tampering traces or artifacts shows that there is still considerable scope for optimization. Therefore, this paper presents a novel framework that performs cost-effective optimization of a digital forensic technique, aiming to accurately localize the tampered area while also mitigating attacks of various forms. The study outcome shows that the proposed system performs significantly better than existing systems, demonstrating that small design novelties can yield notable improvements in both accuracy and resilience toward potential image threats.
Framework for Contextual Outlier Identification using Multivariate Analysis a... (IJECEIAES)
The majority of existing commercial video surveillance applications only capture event frames, and the accuracy of these captures is poor. Our review of existing systems found that, at present, no research technique offers contextual, scene-based identification of outliers. Therefore, we present a framework that uses an unsupervised learning approach to precisely identify outliers in given video frames with respect to the contextual information of the scene. The proposed system uses a matrix decomposition method with multivariate analysis to balance faster response time against higher accuracy in detecting an abnormal event/object as an outlier. Using an analytical methodology, the proposed system applies a blocking operation followed by sparsity analysis to perform detection. The study outcome shows that the proposed system offers higher accuracy than existing systems with faster response time.
Towards A More Secure Web Based Tele Radiology System: A Steganographic Approach (CSCJournals)
While it is possible to make a patient's medical images available to a practicing radiologist online, e.g. through open network systems interconnectivity and email attachments, these methods do not guarantee the security, confidentiality and tamper-free reliability required in a medical information system infrastructure. The focus of this study was the possibility of securely and covertly transmitting such medical images remotely for clinical interpretation and diagnosis through a secure steganographic technique.
We propose a method that uses an Enhanced Least Significant Bit (ELSB) steganographic insertion method to embed a patient's Medical Image (MI) in the spatial domain of a cover digital image, and his/her health records in the frequency domain of the same cover image as a watermark, to ensure tamper detection and non-repudiation. The ELSB method uses the Mersenne Twister (MT) Pseudo Random Number Generator (PRNG) to randomly embed and conceal the patient's data in the cover image. This significantly increases the imperceptibility of the hidden information to steganalysis, thereby enhancing the security of the embedded patient's data.
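The PRNG-driven LSB idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' ELSB implementation (function names and the flat byte-per-pixel model are assumptions); Python's `random` module is itself a Mersenne Twister, so seeding it with a shared key reproduces the same embedding positions at the receiver:

```python
import random

def embed_bits(pixels, payload_bits, key):
    """Embed payload bits into the LSBs of pseudo-randomly chosen pixels."""
    rng = random.Random(key)                      # Mersenne Twister seeded with the shared key
    positions = rng.sample(range(len(pixels)), len(payload_bits))
    stego = list(pixels)
    for pos, bit in zip(positions, payload_bits):
        stego[pos] = (stego[pos] & ~1) | bit      # overwrite only the least significant bit
    return stego

def extract_bits(stego, n_bits, key):
    """Recover the payload by regenerating the same pseudo-random positions."""
    rng = random.Random(key)
    positions = rng.sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]
```

Because only LSBs change, each stego pixel differs from its cover pixel by at most 1, which is what keeps the distortion imperceptible.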
To measure the effectiveness of the proposed method, the study adopted the Design Science Research (DSR) methodology: a problem-solving paradigm in computing and Information Systems (IS) that involves designing and implementing novel artefacts and methods, and analytically testing their performance in pursuit of understanding and enhancing an existing method, artefact or practice.
The fidelity measures of the stego images from the proposed method were compared with those from the traditional Least Significant Bit (LSB) method in order to establish the imperceptibility of the embedded information. The results demonstrated improvements of between 1 and 2.6 decibels (dB) in Peak Signal to Noise Ratio (PSNR), and MSE ratio improvements of up to 0.4, for the proposed method.
Effective Parameters of Image Steganography Techniques (Editor, IJCATR)
Steganography is a branch of information hiding that conceals secret data in media such as audio, images and videos. Images are very common in electronic communication, and in this paper the parameters that matter for image steganography have been studied and analyzed. The goals of steganography, security, robustness and capacity, sit at the three vertices of a triangle: favouring one entails sacrificing the others. The main parameters of steganography methods are security, capacity, PSNR, MSE, BER and SSIM, and the implementation results show that steganography methods which account for these parameters achieve the stated goals better than other methods.
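PSNR and MSE, two of the fidelity parameters named above, are straightforward to compute. This is a generic sketch for 8-bit gray-scale pixel sequences (function names are ours, not from the paper):

```python
import math

def mse(original, stego):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, stego)) / len(original)

def psnr(original, stego, max_val=255):
    """Peak signal-to-noise ratio in decibels; higher means less distortion."""
    err = mse(original, stego)
    if err == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / err)
```

An MSE of 1 on 8-bit data corresponds to a PSNR of roughly 48 dB; stego methods are commonly judged acceptable above about 40 dB.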
Deep Learning Applications in Medical Image Analysis: Brain Tumor (Venkat Projects)
The tremendous success of machine learning algorithms at image recognition tasks in recent years intersects with a time of dramatically increased use of electronic medical records and diagnostic imaging. This review introduces machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks and emphasizing clinical aspects of the field. The advantage of machine learning in an era of medical big data is that significant hierarchical relationships within the data can be discovered algorithmically without laborious hand-crafting of features. We cover key research areas and applications of medical image classification, localization, detection, segmentation, and registration. We conclude by discussing research obstacles, emerging trends, and possible future directions.
Efficiency of LSB Steganography on Medical Information (IJECEIAES)
The development of the medical field has transformed communication from paper-based information into digital form. Medical information security has become a great concern as the field moves toward the digital world: patient information, disease diagnoses and so on are all stored in digital images. Therefore, securing patient information, and the growing volume of communication transferred between patients, clients, medical practitioners and sponsors, is essential to improving medical information security. The core aim of this research is to provide a complete picture of research trends in the LSB steganography technique as applied to securing medical information such as text, images, audio, video and graphics, and to discuss the efficiency of the LSB technique. The survey findings show that the LSB steganography technique is efficient in securing medical information from intruders.
A Survey on Medical Image Interpretation for Predicting Pneumonia (IRJET Journal)
This document summarizes research on using machine learning and deep learning techniques to interpret medical images and predict pneumonia. It first discusses how medical image analysis is an active field for machine learning. It then reviews several related studies on using convolutional neural networks (CNNs) and transfer learning to classify chest x-rays and detect pneumonia. Specifically, it examines research on developing CNN models for pneumonia classification and using pre-trained CNN architectures like VGG16, VGG19, and ResNet with transfer learning. The document concludes that computer-aided diagnosis systems using deep learning can provide accurate predictions to assist radiologists in pneumonia diagnosis from chest x-rays.
COVID-19 detection from scarce chest X-Ray image data using few-shot deep lea... (Shruti Jadon)
In the current COVID-19 pandemic situation, there is an urgent need to screen infected patients quickly and accurately. Using deep learning models trained on chest X-ray images can become an efficient method for screening COVID-19 patients in these situations. Deep learning approaches are already widely used in the medical community. However, they require a large amount of data to be accurate.
The document compares the performance of different machine learning models for detecting COVID-19 from CT scans, including single models like SVM, NB, MLP, CNN and ensemble models like AdaBoost and GBDT. Based on accuracy, precision, recall, F1-score and MCC metrics, the SVM model achieved the best performance with an accuracy of 99.2%, followed by CNN and AdaBoost. While MLP, NB and GBDT showed lower performance, CNN had the advantage of automatically detecting important image features.
Preprocessing Techniques for Image Mining on Biopsy Images (IJERA Editor)
This document discusses preprocessing techniques for image mining on biopsy images. It begins with an introduction to biomedical imaging and image mining. The key steps in image mining are described as image retrieval, preprocessing, feature extraction, data mining, and interpretation. Various preprocessing techniques are then evaluated on biopsy images, including interpolation, thresholding, and segmentation. Bicubic interpolation and Otsu thresholding produced good results for enhancing renal biopsy images. Overall, the document evaluates different preprocessing methods and their effects on biopsy images to help extract meaningful features for disease detection through image mining.
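Otsu thresholding, one of the preprocessing steps evaluated above, picks the gray level that maximizes the between-class variance of the resulting foreground/background split. A minimal pure-Python sketch (not the paper's implementation) is:

```python
def otsu_threshold(pixels, levels=256):
    """Pick the threshold that maximizes between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(levels):
        w_bg += hist[t]                    # pixels at or below threshold t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a strongly bimodal histogram, as in a well-stained biopsy slide, the chosen threshold falls cleanly between the two intensity modes.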
Survey on wireless intelligent video surveillance system using moving objec... (Alexander Decker)
This document summarizes a survey on wireless intelligent video surveillance systems using moving object recognition technology. It describes the typical components and processing steps of such systems, including motion detection using background subtraction, object tracking, and transmitting detected objects wirelessly. The document also reviews the evolution of video surveillance technologies from analog CCTV to modern wireless intelligent systems and discusses design considerations like video quality and compression.
Survey on wireless intelligent video surv... (www.iiste.org, Alexander Decker)
This document summarizes a survey on wireless intelligent video surveillance systems using moving object recognition technology. It describes the typical components and processes involved, including capturing video, creating a background template, detecting moving objects via background subtraction, and sending the separated moving object to a destination. It also discusses techniques for motion detection, such as frame differencing, background subtraction, and optical flow. The system architecture involves a camera capturing video, subtracting frames to identify moving objects, and transmitting just the moving objects.
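The frame-differencing and running-average background-subtraction steps described above reduce to a few per-pixel operations. This is a generic sketch on 2-D intensity grids (function names and the threshold value are assumptions, not from the surveyed systems):

```python
def frame_difference(prev_frame, curr_frame, threshold=25):
    """Mark pixels whose intensity changed by more than `threshold` as moving."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]

def update_background(background, frame, alpha=0.25):
    """Running-average background model: bg <- (1 - alpha)*bg + alpha*frame."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]
```

Only the nonzero mask regions (the detected moving objects) would then be compressed and transmitted wirelessly, which is the bandwidth saving these systems rely on.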
DATA AUGMENTATION TECHNIQUES AND TRANSFER LEARNING APPROACHES APPLIED TO FACI... (ijaia)
The facial expression is the first thing we pay attention to when we want to understand a person's state of mind. Thus, the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometrical transformations and build from scratch GAN models able to generate new synthetic images for each emotion type. We then fine-tune pretrained convolutional neural networks with different architectures on the augmented datasets. To measure the generalization ability of the models, we apply an extra-database protocol approach: we train models on the augmented versions of the training dataset and test them on two different databases. The combination of these techniques allows us to reach average accuracy values of the order of 85% for the InceptionResNetV2 model.
DEEP-LEARNING-BASED HUMAN INTENTION PREDICTION WITH DATA AUGMENTATION (ijaia)
Data augmentation has been broadly applied in training deep-learning models to increase the diversity of data. This study investigates the effectiveness of different data augmentation methods for deep-learning-based human intention prediction when only limited training data is available. In our experiment, a human participant pitches a ball to nine potential targets, and we aim to predict which target the participant pitches the ball to. Firstly, the effectiveness of 10 data augmentation groups is evaluated on a single-participant data set using RGB images. Secondly, the best data augmentation method (i.e., random cropping) on the single-participant data set is further evaluated on a multi-participant data set to assess its generalization ability. Finally, the effectiveness of random cropping on fused data of RGB images and optical flow is evaluated on both single- and multi-participant data sets. Experiment results show that: 1) data augmentation methods that crop or deform images can improve prediction performance; 2) random cropping generalizes to the multi-participant data set (prediction accuracy improves from 50% to 57.4%); and 3) random cropping with fused RGB-image and optical-flow data further improves prediction accuracy from 57.4% to 63.9% on the multi-participant data set.
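Random cropping, the augmentation the study found most effective, can be sketched in a few lines. This is a generic illustration on a 2-D intensity grid, not the authors' training pipeline (function names and the seeding scheme are assumptions):

```python
import random

def random_crop(image, crop_h, crop_w, rng=None):
    """Cut a random crop_h x crop_w window out of a 2-D image (list of rows)."""
    rng = rng or random.Random()
    h, w = len(image), len(image[0])
    top = rng.randrange(h - crop_h + 1)       # random top-left corner
    left = rng.randrange(w - crop_w + 1)
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

def augment(image, crop_h, crop_w, n_copies, seed=0):
    """Produce n_copies randomly cropped variants of one training image."""
    rng = random.Random(seed)
    return [random_crop(image, crop_h, crop_w, rng) for _ in range(n_copies)]
```

Each crop is a contiguous sub-window of the original, so the semantic content (the pitcher and target) survives while the framing varies, which is why cropping helps when data is scarce.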
MITIGATION TECHNIQUES TO OVERCOME DATA HARM IN MODEL BUILDING FOR ML (ijaia)
Given the impact of Machine Learning (ML) on individuals and society, understanding how harm might occur throughout the ML life cycle has become more critical than ever. By offering a framework to identify distinct potential sources of downstream harm in the ML pipeline, the paper demonstrates the importance of choices made throughout the distinct phases of data collection, development, and deployment, which extend far beyond model training alone. Relevant mitigation techniques are also suggested, rather than merely relying on generic notions of what counts as fairness.
The document lists Shamik Tiwari's research publications and academic activities. It includes 5 published journal articles, 1 book chapter, and 2 accepted journal articles. It also lists that he coordinates academic monitoring, curriculum development, guides Ph.D. students, delivered lectures, and serves as a reviewer for several journals. He has also completed many online courses and achieved high student feedback for his online teaching.
MitoGame: Gamification Method for Detecting Mitosis from Histopathological I... (IRJET Journal)
This document proposes a method called MitoGame that uses gamification and crowdsourcing to detect mitosis in histopathological breast cancer images. Convolutional neural networks (CNNs) are trained on expert-annotated images to generate ground truth labels. Non-expert crowds then annotate images through an online game for mitosis detection. The crowd annotations are aggregated and used to retrain the CNNs, improving their ability to detect mitosis. This allows large datasets to be annotated without relying solely on medical experts. Analysis shows crowds can perform as well as experts at this task when guided by a game interface and CNN predictions. The goal is to leverage crowdsourcing to help train accurate CNN models for automated mitosis detection and breast cancer diagnosis.
The increasing use of distributed authentication architectures has made interoperability of systems an important issue. Interoperability of systems affects the maturity of the technology and also improves users' confidence in it. Biometric systems are not immune to interoperability concerns. The interoperability of fingerprint sensors, and its effect on the overall performance of the recognition system, is an area of interest with a considerable amount of work directed towards it. This research analyzed the effects of interoperability on error rates for fingerprint datasets captured from two optical sensors and a capacitive sensor when using a single commercially available fingerprint matching algorithm. The main aim of this research was to emulate a centralized storage and matching architecture with multiple acquisition stations. Fingerprints were collected from 44 individuals on all three sensors, and interoperable False Reject Rates of less than 0.31% were achieved using two different enrolment strategies.
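The False Reject Rate reported above is simply the fraction of genuine comparisons whose match score falls below the accept threshold; its complement, the False Accept Rate, counts impostor scores at or above it. A generic sketch (score scale and function names are assumptions):

```python
def false_reject_rate(genuine_scores, threshold):
    """Fraction of genuine comparisons scoring below the accept threshold."""
    rejects = sum(1 for s in genuine_scores if s < threshold)
    return rejects / len(genuine_scores)

def false_accept_rate(impostor_scores, threshold):
    """Fraction of impostor comparisons scoring at or above the threshold."""
    accepts = sum(1 for s in impostor_scores if s >= threshold)
    return accepts / len(impostor_scores)
```

Sweeping the threshold trades one rate against the other; interoperability studies like this one compare the resulting curves across sensor pairings.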
Design and Development of an Algorithm for Image Clustering In Textile Image ... (IJCSEA Journal)
All textile industries aim to produce competitive materials, and competitiveness depends mainly on the designs and quality of the garments each industry produces. Every day, a vast number of textile images are generated, such as images of shirts, jeans, t-shirts and sarees. A principal driver of innovation is the World Wide Web, unleashing publication at the scale of tens of millions of content creators. Images play an important role, as a picture is worth a thousand words in the field of textile design and marketing. Retrieving images requires special concepts such as image annotation, context, image content and image values. This research work studies the image mining process in detail and analyzes methods for retrieval. Various methods for clustering textile images are analyzed, and an algorithm is developed for the same. The retrieval method considered is based on relevance feedback, a scalable method, edge histograms and color layout. The image clustering algorithm is designed based on color descriptors and the k-means clustering algorithm. A software prototype to prove the proposed algorithm has been developed using the NetBeans integrated development environment and found successful.
As we know, the fingerprint of every living being is unique, but the prints themselves are quite difficult to recover. Forensic examiners usually use fine powder and duct tape to lift the prints of a subject. Since the powder is exceptionally messy, the particles can cause loss of information before the print is matched against the system. The proposed system consists of an embedded device containing an ultraviolet light to reveal the fingerprint details. After the fingerprint is detected and analyzed, it is checked against the database, and the system returns the output after matching. For matching and analysis of the fingerprint, a matching algorithm is used.
Performance Comparison of Face Recognition Using DCT Against Face Recognition... (CSCJournals)
In this paper, a face recognition system using a simple Vector Quantization (VQ) technique is proposed. Four different VQ algorithms, namely LBG, KPE, KMCG and KFCG, are used to generate codebooks of the desired size. Euclidean distance is used as the similarity measure to compare the feature vector of a test image with those of the training images. The proposed algorithms are tested on two different databases. One is the Georgia Tech Face Database, which contains color JPEG images of varying sizes. The other database used for experimental purposes is the Indian Face Database, which contains color bitmap images. Using the above VQ techniques, codebooks of different sizes are generated and the recognition rate is calculated for each codebook size. This recognition rate is compared with the one obtained by applying DCT to the image and with the LBG-VQ algorithm, which is used as the benchmark in vector quantization. Results show that KFCG outperforms the other three VQ techniques, giving recognition rates of up to 85.4% for the Georgia Tech Face Database and 90.66% for the Indian Face Database. As no Euclidean distance computations are involved in KMCG and KFCG, they require less time to generate the codebook than LBG and KPE.
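The Euclidean-distance matching step described above amounts to nearest-neighbour search over labeled feature vectors. A minimal sketch (labels and vectors are illustrative, not from the paper's databases; squared distance suffices for ranking):

```python
def euclidean(a, b):
    """Squared Euclidean distance; monotone in true distance, so fine for ranking."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(test_vec, gallery):
    """gallery: list of (label, feature_vector); return the label of the closest entry."""
    return min(gallery, key=lambda item: euclidean(test_vec, item[1]))[0]
```

In the VQ setting, each gallery entry would be a codebook vector tagged with the identity whose training images produced it.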
REVIEWING PROCESS MINING APPLICATIONS AND TECHNIQUES IN EDUCATION (ijaia)
Process Mining (PM) emerged from business process management but has recently been applied to
educational data and has been found to facilitate the understanding of the educational process.
Educational Process Mining (EPM) bridges the gap between process analysis and data analysis, based on
the techniques of model discovery, conformance checking and extension of existing process models. We
present a systematic review of the recent and current status of research in the EPM domain, focusing on
application domains, techniques, tools and models, to highlight the use of EPM in comprehending and
improving educational processes.
A REVIEW ON BLIND STILL IMAGE STEGANALYSIS TECHNIQUES USING FEATURES EXTRACTI... (IJCSEIT Journal)
Steganography is the technique of hiding secret information in other data, such as still or multimedia images, text and audio, whereas steganalysis is the reverse technique, in which the secret information hidden in a stego image is detected. Steganalysis can be classified, on the basis of the techniques used, into statistical techniques, pattern classification techniques and visual detection techniques. All existing techniques can also be broadly classified on the basis of the information required when designing the steganalysis: targeted and blind. A targeted steganalysis technique is designed with a particular steganographic embedding algorithm in mind, whereas blind steganalysis is a general class of techniques that can be applied to any steganographic embedding algorithm, even an unknown one. In this paper, an extensive chronological review is presented of blind image steganalysis for still stego images using classification techniques.
Comparative Analysis of Video Processing Object Detection (IRJET Journal)
This document summarizes research on comparative analysis of video processing object detection techniques. It begins with an abstract describing the goal of object detection in images and videos and challenges involved. It then discusses benefits of object detection and provides a literature review summarizing the approaches of 15 other research papers on object detection, including approaches using background subtraction, segmentation, feature extraction and deep learning algorithms. The document concludes by stating that object detection has wide applications and research is ongoing to improve accuracy and robustness of detection.
Medical images are an important input for diagnosing many diseases, and telemedicine today relies heavily on them. The World Health Organization (WHO) established the Global Observatory for eHealth (GOe) to review the benefits that information and communication technologies (ICTs) can bring to health care and patients' wellbeing. Securing medical images is important to protect the privacy of patients and assure data integrity. In this paper, a new self-adaptive medical image encryption algorithm is proposed to improve robustness. A matrix of corresponding size is created for the top-right corner block from the pixel gray-scale values of the top-left corner block under a Chebyshev mapping, and the gray-scale values of the top-right corner block are then replaced by this matrix. The remaining blocks are encrypted in the same manner, proceeding clockwise, until the top-left corner block is finally encrypted. The algorithm is not restricted by image size and is suitable for both gray-scale and color images, which leads to better robustness. Meanwhile, the introduction of a gray-scale value diffusion system equips the algorithm with a powerful diffusion and disturbance capability.
Classifying Chest Pathology Images using Deep Learning Techniques (IRJET Journal)
This document discusses classifying chest pathology images using deep learning techniques. It explores using pre-trained convolutional neural networks (CNNs) to classify chest radiograph images as either healthy or pathological, and to identify specific pathologies. The document reviews previous work on applying deep learning to medical image analysis. It then proposes using features extracted from pre-trained CNN models to classify chest radiographs, focusing on classifying images as healthy vs. pathological as an important screening task. The strengths of deep learning approaches for analyzing various chest diseases are explored.
Embedding Patient Information In Medical Images Using LBP and LTP (csijjournal)
In this paper, a new efficient interleaving methodology is presented in which a patient's textual information is embedded in medical images, considering Magnetic Resonance (MR), ultrasonic and CT images of the patient's body. The main disadvantages of traditional techniques for embedding patient information in medical images, namely the inability to withstand attacks and the Bit Error Rate (BER) incurred when the maximum number of characters is embedded, are absent in the proposed algorithm. Two robust embedding techniques, Local Binary Pattern (LBP) and Local Ternary Pattern (LTP), are used to embed the patient information in medical images. The LBP technique is mainly used to provide security for the patient information, while a high volume of patient information can be embedded inside the medical image using the LTP technique. Statistical parameters such as Normalized Root Mean Square Error and PSNR (Peak Signal to Noise Ratio) are used to measure the reliability of the technique. Experimental results strongly indicate that the present technique achieves zero BER, higher PSNR and a high data embedding capacity using LTP compared with other data embedding techniques. The technique is found to be robust, and the watermark information is recoverable without distortion.
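The LBP code referred to above is computed per pixel by thresholding the 3x3 neighbourhood against the centre value. This is a minimal sketch of the standard 8-bit LBP, not the paper's embedding procedure (the bit ordering is one common convention):

```python
def lbp_code(patch):
    """8-bit local binary pattern for a 3x3 patch: neighbours >= centre set a bit."""
    c = patch[1][1]
    # neighbours listed clockwise from the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```

Because the code depends only on intensity ordering, not absolute values, it is stable under monotonic brightness changes, which is one reason LBP-based embedding tolerates mild attacks.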
This document summarizes techniques for detecting tampered digital images. It discusses passive ("blind") methods that detect forgeries by analyzing the statistical properties and digital fingerprints of images without prior knowledge. These techniques examine inconsistencies introduced during tampering that alter the image's noise, compression, color, and other attributes. The document also outlines different types of forgeries like copy-move, splicing, retouching, and techniques using JPEG compression and lighting analysis. It reviews papers on demosaicing regularity detection and noise variation analysis for passive forgery identification.
This document summarizes a research paper that proposes a robust image alignment method for tampering detection using SVM clustering of bag of features. The method first extracts bag of features from an image and clusters them using SVM clustering for effective image alignment. An image hash based on the bag of features is attached to images before transmission. The hash is then analyzed at the destination to detect any geometric transformations applied to the received image and align it to the original for tampering detection. The proposed approach provides good performance compared to state-of-the-art methods in localizing tampered regions through robust image alignment.
General Purpose Image Tampering Detection using Convolutional Neural Network ...sipij
Digital image tampering detection has been an active area of research in recent times due to the ease with which digital images can be modified to convey false or misleading information. To address this problem, several studies have proposed forensic algorithms for digital image tampering detection. While these approaches have shown remarkable improvement, most of them focus only on detecting a specific type of image tampering. The limitation of these approaches is that a new forensic method must be designed for each new manipulation approach that is developed. Consequently, there is a need to develop methods capable of detecting multiple tampering operations. In this paper, we propose a novel general-purpose image tampering detection scheme based on CNNs and the Local Optimal Oriented Pattern (LOOP), which is capable of detecting five types of image tampering in both binary and multiclass scenarios. Unlike existing deep learning techniques, which use constrained pre-processing layers to suppress the effect of image content in order to capture image tampering traces, our method uses LOOP features, which can effectively subdue the effect of image content, thus allowing the proposed CNNs to capture the features needed to distinguish among different types of image tampering. Through a number of detailed experiments, our results demonstrate that the proposed general-purpose image tampering method can achieve high detection accuracies in both individual and multiclass image tampering detection, and a comparative analysis of our results with the existing state of the art reveals that the proposed model is more robust than most of the existing methods.
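The LOOP descriptor named in this abstract can be illustrated with a small sketch. This is a simplified rendering of the idea (LBP-style bits weighted by the rank of Kirsch edge responses); the ring ordering and mask construction below are assumptions for illustration, not the paper's exact formulation:

```python
# 8-neighbour ring, clockwise from the top-left corner of a 3x3 patch.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def kirsch_response(patch, d):
    """Kirsch-style edge response oriented toward ring position d:
    the three ring cells nearest direction d weigh 5, the rest -3."""
    resp = 0
    for p, (r, c) in enumerate(RING):
        near = min((p - d) % 8, (d - p) % 8) <= 1
        resp += (5 if near else -3) * patch[r][c]
    return resp

def loop_code(patch):
    """Simplified Local Optimal Oriented Pattern (LOOP) code for the
    centre pixel: each neighbour's LBP-style bit is weighted by the
    rank of its Kirsch response, making the code orientation-aware."""
    centre = patch[1][1]
    responses = [kirsch_response(patch, d) for d in range(8)]
    # rank[i] = position of response i when all responses are sorted
    order = sorted(range(8), key=lambda i: responses[i])
    rank = [0] * 8
    for w, i in enumerate(order):
        rank[i] = w
    code = 0
    for i, (r, c) in enumerate(RING):
        if patch[r][c] >= centre:
            code += 1 << rank[i]
    return code
```

Histograms of such codes over image patches are what would be fed to the CNNs in place of raw pixels.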
Preprocessing Techniques for Image Mining on Biopsy ImagesIJERA Editor
This document discusses preprocessing techniques for image mining on biopsy images. It begins with an introduction to biomedical imaging and image mining. The key steps in image mining are described as image retrieval, preprocessing, feature extraction, data mining, and interpretation. Various preprocessing techniques are then evaluated on biopsy images, including interpolation, thresholding, and segmentation. Bicubic interpolation and Otsu thresholding produced good results for enhancing renal biopsy images. Overall, the document evaluates different preprocessing methods and their effects on biopsy images to help extract meaningful features for disease detection through image mining.
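The Otsu thresholding step this summary singles out can be sketched in plain Python (a list-of-lists grayscale image stands in for whatever image representation the paper uses):

```python
def otsu_threshold(img):
    """Otsu's method: choose the gray threshold that maximises the
    between-class variance of the foreground/background split."""
    flat = [v for row in img for v in row]
    hist = [0] * 256
    for v in flat:
        hist[v] += 1
    total = len(flat)
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = sum(hist[:t + 1])        # pixels at or below the threshold
        w1 = total - w0               # pixels above it
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(i * hist[i] for i in range(t + 1)) / w0
        m1 = sum(i * hist[i] for i in range(t + 1, 256)) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a clearly bimodal biopsy image the returned threshold separates the two intensity populations, which is why it pairs well with the interpolation step the document evaluates.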
3.survey on wireless intelligent video surveillance system using moving objec...Alexander Decker
This document summarizes a survey on wireless intelligent video surveillance systems using moving object recognition technology. It describes the typical components and processing steps of such systems, including motion detection using background subtraction, object tracking, and transmitting detected objects wirelessly. The document also reviews the evolution of video surveillance technologies from analog CCTV to modern wireless intelligent systems and discusses design considerations like video quality and compression.
11.0003www.iiste.org call for paper.survey on wireless intelligent video surv...Alexander Decker
This document summarizes a survey on wireless intelligent video surveillance systems using moving object recognition technology. It describes the typical components and processes involved, including capturing video, creating a background template, detecting moving objects via background subtraction, and sending the separated moving object to a destination. It also discusses techniques for motion detection, such as frame differencing, background subtraction, and optical flow. The system architecture involves a camera capturing video, subtracting frames to identify moving objects, and transmitting just the moving objects.
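The motion-detection step both surveillance summaries describe can be illustrated with the simplest of the techniques they list, frame differencing (the function name and threshold value are illustrative assumptions):

```python
def moving_mask(prev, curr, thresh=25):
    """Frame differencing: flag pixels whose absolute gray-level change
    between two consecutive frames exceeds a threshold.  The flagged
    region approximates the moving object to be transmitted."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

Background subtraction works the same way but differences each frame against a maintained background template instead of the previous frame.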
DATA AUGMENTATION TECHNIQUES AND TRANSFER LEARNING APPROACHES APPLIED TO FACI...ijaia
The facial expression is the first thing we pay attention to when we want to understand a person's state of mind. Thus, the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometrical transformations and build GAN models from scratch that are able to generate new synthetic images for each emotion type. We then fine-tune pretrained convolutional neural networks with different architectures on the augmented datasets. To measure the generalization ability of the models, we apply an extra-database protocol: we train models on the augmented versions of the training dataset and test them on two different databases. The combination of these techniques allows average accuracy values on the order of 85% for the InceptionResNetV2 model.
DEEP-LEARNING-BASED HUMAN INTENTION PREDICTION WITH DATA AUGMENTATIONijaia
Data augmentation has been broadly applied in training deep-learning models to increase the diversity of
data. This study investigates the effectiveness of different data augmentation methods for deep-learning-based human intention prediction when only limited training data is available. A human participant pitches
a ball to nine potential targets in our experiment. We expect to predict which target the participant pitches
the ball to. Firstly, the effectiveness of 10 data augmentation groups is evaluated on a single-participant
data set using RGB images. Secondly, the best data augmentation method (i.e., random cropping) on the
single-participant data set is further evaluated on a multi-participant data set to assess its generalization
ability. Finally, the effectiveness of random cropping on fusion data of RGB images and optical flow is
evaluated on both single- and multi-participant data sets. Experiment results show that: 1) Data
augmentation methods that crop or deform images can improve the prediction performance; 2) Random
cropping can be generalized to the multi-participant data set (prediction accuracy is improved from 50%
to 57.4%); and 3) Random cropping with fusion data of RGB images and optical flow can further improve
the prediction accuracy from 57.4% to 63.9% on the multi-participant data set.
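Random cropping, the augmentation the study found most effective, is easy to sketch (the function name and list-of-lists image format are illustrative assumptions; real pipelines crop tensors):

```python
import random

def random_crop(img, ch, cw, rng=None):
    """Return a random ch x cw crop of a list-of-lists image.
    Each call yields a different sub-window, increasing the apparent
    diversity of a small training set."""
    rng = rng or random.Random()
    h, w = len(img), len(img[0])
    top = rng.randrange(h - ch + 1)
    left = rng.randrange(w - cw + 1)
    return [row[left:left + cw] for row in img[top:top + ch]]
```

Passing a seeded `random.Random` makes the crop reproducible, which matters when comparing augmentation groups as this study does.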
MITIGATION TECHNIQUES TO OVERCOME DATA HARM IN MODEL BUILDING FOR MLijaia
Given the impact of Machine Learning (ML) on individuals and society, understanding how harm might occur throughout the ML life cycle has become more critical than ever. By offering a framework to determine distinct potential sources of downstream harm in the ML pipeline, the paper demonstrates the importance of choices made throughout the distinct phases of data collection, development, and deployment, which extend far beyond model training. Relevant mitigation techniques are also suggested for use in place of generic notions of what counts as fairness.
The document lists Shamik Tiwari's research publications and academic activities. It includes 5 published journal articles, 1 book chapter, and 2 accepted journal articles. It also lists that he coordinates academic monitoring, curriculum development, guides Ph.D. students, delivered lectures, and serves as a reviewer for several journals. He has also completed many online courses and achieved high student feedback for his online teaching.
MitoGame: Gamification Method for Detecting Mitosis from Histopathological I...IRJET Journal
This document proposes a method called MitoGame that uses gamification and crowdsourcing to detect mitosis in histopathological breast cancer images. Convolutional neural networks (CNNs) are trained on expert-annotated images to generate ground truth labels. Non-expert crowds then annotate images through an online game for mitosis detection. The crowd annotations are aggregated and used to retrain the CNNs, improving their ability to detect mitosis. This allows large datasets to be annotated without relying solely on medical experts. Analysis shows crowds can perform as well as experts at this task when guided by a game interface and CNN predictions. The goal is to leverage crowdsourcing to help train accurate CNN models for automated mitosis detection and breast cancer
The increasing use of distributed authentication architectures has made interoperability of systems an important issue. Interoperability of systems affects the maturity of the technology and also improves users' confidence in it. Biometric systems are not immune to these interoperability concerns. Interoperability of fingerprint sensors, and its effect on the overall performance of the recognition system, is an area of interest with a considerable amount of work directed towards it. This research analyzed the effects of interoperability on error rates for fingerprint datasets captured from two optical sensors and a capacitive sensor when using a single commercially available fingerprint matching algorithm. The main aim of this research was to emulate a centralized storage and matching architecture with multiple acquisition stations. Fingerprints were collected from 44 individuals on all three sensors, and interoperable False Reject Rates of less than 0.31% were achieved using two different enrolment strategies.
Design and Development of an Algorithm for Image Clustering In Textile Image ...IJCSEA Journal
All textile industries aim to produce competitive materials and the competition enhancement depends mainly on designs and quality of the dresses produced by each industry. Every day, a vast amount of textile images are being generated such as images of shirts, jeans, t-shirts and sarees. A principal driver of innovation is World Wide Web, unleashing publication at the scale of tens and millions of content creators. Images play an important role as a picture is worth thousand words in the field of textile design and marketing. A retrieving of images needs special concepts such as image annotation, context, and image content and image values. This research work aimed at studying the image mining process in detail and analyzes the methods for retrieval. The textile images analyze various methods for clustering the images and developing an algorithm for the same. The retrieval method considered is based on relevance feedback, scalable method, edge histogram and color layout. The image clustering algorithm is designed based on color descriptors and k-means clustering algorithm. A software prototype to prove the proposed algorithm has been developed using net beans integrated development environment and found successful.
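The clustering core described here, k-means over color descriptors, can be sketched in a few lines (the seeding strategy and the mean-RGB descriptors are simplifying assumptions; the paper's algorithm also uses color layout and edge histograms):

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means over color-descriptor vectors (e.g. mean RGB).
    Centroids are seeded with the first k points for reproducibility;
    a real system would use random or k-means++ seeding."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each descriptor to its nearest centroid.
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            groups[j].append(p)
        for i, g in enumerate(groups):
            if g:  # Recompute each centroid as the mean of its group.
                centroids[i] = [sum(c) / len(g) for c in zip(*g)]
    return centroids

reds = [(250, 10, 10), (240, 20, 5)]
blues = [(10, 10, 250), (5, 25, 240)]
centroids = kmeans(reds + blues, 2)  # one centroid per dominant colour
```

Retrieval then reduces to comparing a query image's descriptor against the cluster centroids rather than against every stored image.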
As we know, fingerprints are unique to every living being, yet they are quite difficult to recover. Forensic examiners usually use fine powder and duct tape to lift the prints of a subject. As the powder is exceptionally messy, such particles can cause loss of information before the print is matched against the system. The proposed system consists of an embedded device with an ultraviolet light to make the fingerprint details glow. The fingerprint is then detected, analysed, and checked against the database, and the matching result is returned. A dedicated matching algorithm is used for the analysis and matching of the fingerprint.
Performance Comparison of Face Recognition Using DCT Against Face Recognition...CSCJournals
In this paper, a face recognition system using simple Vector quantization (VQ) technique is proposed. Four different VQ algorithms namely LBG, KPE, KMCG and KFCG are used to generate codebooks of desired size. Euclidean distance is used as similarity measure to compare the feature vector of test image with that of trainee images. Proposed algorithms are tested on two different databases. One is Georgia Tech Face Database which contains color JPEG images, all are of different size. Another database used for experimental purpose is Indian Face Database. It contains color bitmap images. Using above VQ techniques, codebooks of different size are generated and recognition rate is calculated for each codebook size. This recognition rate is compared with the one obtained by applying DCT on image and LBG-VQ algorithm which is used as benchmark in vector quantization. Results show that KFCG outperforms other three VQ techniques and gives better recognition rate up to 85.4% for Georgia Tech Face Database and 90.66% for Indian Face Database. As no Euclidean distance computations are involved in KMCG and KFCG, they require less time to generate the codebook as compared to LBG and KPE
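The matching rule shared by all four VQ variants, Euclidean nearest-codeword search, can be sketched as follows (the function name is an illustrative assumption; codebook generation differs per algorithm and is the paper's actual contribution):

```python
import math

def nearest_codeword(vec, codebook):
    """Index of the codebook vector closest (Euclidean distance) to
    vec -- how a test image's feature vector is compared against each
    trainee's codebook in a VQ-based recogniser."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(vec, codebook[i]))

book = [(0.0, 0.0), (10.0, 10.0)]
idx = nearest_codeword((1.0, 2.0), book)  # matches the first codeword
```

The paper's observation that KMCG and KFCG avoid Euclidean computations applies to codebook *generation*; matching still uses a distance comparison like this one.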
REVIEWING PROCESS MINING APPLICATIONS AND TECHNIQUES IN EDUCATIONijaia
Process Mining (PM) emerged from business process management but has recently been applied to
educational data and has been found to facilitate the understanding of the educational process.
Educational Process Mining (EPM) bridges the gap between process analysis and data analysis, based on
the techniques of model discovery, conformance checking and extension of existing process models. We
present a systematic review of the recent and current status of research in the EPM domain, focusing on
application domains, techniques, tools and models, to highlight the use of EPM in comprehending and
improving educational processes.
A REVIEW ON BLIND STILL IMAGE STEGANALYSIS TECHNIQUES USING FEATURES EXTRACTI...IJCSEIT Journal
Steganography is the technique of hiding secret information in other data such as still images, multimedia, text, or audio, whereas steganalysis is the reverse technique, in which the secret information is detected in the stego image. Steganalysis can be classified on the basis of the techniques used into statistical techniques, pattern classification techniques, and visual detection techniques. All existing techniques can also be broadly classified on the basis of the information required for designing the steganalysis: targeted and blind steganalysis. A targeted steganalysis technique is designed with a particular steganographic embedding algorithm in mind, whereas blind steganalysis is a general class of steganalysis techniques that can be applied to any steganographic embedding algorithm, even an unknown one. In this paper, an extensive review is presented chronologically on blind image steganalysis for still stego images using classification techniques.
IRJET- Comparative Analysis of Video Processing Object DetectionIRJET Journal
This document summarizes research on comparative analysis of video processing object detection techniques. It begins with an abstract describing the goal of object detection in images and videos and challenges involved. It then discusses benefits of object detection and provides a literature review summarizing the approaches of 15 other research papers on object detection, including approaches using background subtraction, segmentation, feature extraction and deep learning algorithms. The document concludes by stating that object detection has wide applications and research is ongoing to improve accuracy and robustness of detection.
Medical images are an important parameter in the diagnosis of many diseases, and nowadays telemedicine treatment relies heavily on them. The World Health Organization (WHO) established the Global Observatory for eHealth (GOe) to review the benefits that information and communication technologies (ICTs) can bring to health care and patients' wellbeing. Securing medical images is important to protect the privacy of patients and assure data integrity. In this paper, a new self-adaptive medical image encryption algorithm is proposed to improve robustness. A matrix of corresponding size for the top-right corner block is created from the pixel gray-scale values of the top-left corner block under a Chebyshev mapping, and the gray-scale values of the top-right corner block are then replaced by that matrix. The remaining blocks are encrypted in the same manner, proceeding clockwise until the top-left corner block is finally encrypted. The algorithm is not restricted by image size and is suitable for both gray and color images, which leads to better robustness. Meanwhile, the introduction of a gray-scale value diffusion system equips the algorithm with a powerful diffusion and disturbance function.
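The Chebyshev mapping at the heart of this scheme is simple to iterate. A minimal sketch of generating a chaotic keystream from it (the quantisation to gray levels is an illustrative assumption; the paper's block substitution is more involved):

```python
import math

def chebyshev_sequence(x0, degree, n):
    """Iterate the Chebyshev map x_{k+1} = cos(degree * acos(x_k)).
    For degree >= 2 the orbit is chaotic on [-1, 1], so x0 and degree
    can act as a secret key for pixel substitution."""
    seq, x = [], x0
    for _ in range(n):
        x = math.cos(degree * math.acos(x))
        seq.append(x)
    return seq

def to_bytes(seq):
    """Quantise each chaotic value into a 0-255 gray level."""
    return [int((x + 1) / 2 * 255) % 256 for x in seq]
```

Sensitivity to the initial value is what makes the keystream useful: two keys differing in the seventh decimal place quickly produce unrelated sequences.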
Broadcasting Forensics Using Machine Learning Approachesijtsrd
Broadcasting forensics is the practice of using scientific methods and techniques to analyse and authenticate multimedia content. Over the past decade, consumer-grade imaging sensors have become increasingly prevalent, generating vast quantities of images and videos that are used for various public and private communication purposes, including publicity, advocacy, disinformation, and deception. This paper aims to develop tools that can extract knowledge from these visuals and comprehend their provenance. However, many images and videos undergo modification and manipulation before public release, which can misrepresent the facts and deceive viewers. To address this issue, we propose a set of forensic and counter-forensic techniques that can help establish the authenticity and integrity of multimedia content. Additionally, we suggest ways to modify the content intentionally to mislead potential adversaries. Our proposed tools are evaluated using publicly available datasets and independently organized challenges. Our results show that the forensic and counter-forensic techniques can accurately identify manipulated content and can help restore the original image or video. Furthermore, this paper demonstrates that the modified content can successfully deceive potential adversaries while remaining undetected by state-of-the-art forensic methods. Amit Kapoor | Prof. Vinod Mahor "Broadcasting Forensics Using Machine Learning Approaches" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7 | Issue-3 , June 2023, URL: https://www.ijtsrd.com.com/papers/ijtsrd57545.pdf Paper URL: https://www.ijtsrd.com.com/engineering/computer-engineering/57545/broadcasting-forensics-using-machine-learning-approaches/amit-kapoor
A simplified and novel technique to retrieve color images from hand-drawn sk...IJECEIAES
This document presents a novel technique for retrieving color images from hand-drawn sketches. The proposed method uses K-means clustering and bag-of-attributes approaches to extract key information from sketches. It introduces a unique indexing scheme to make the retrieval process faster and more accurate. The study implemented the technique in MATLAB and found it offered better accuracy and faster processing times than existing feature extraction methods.
A New Deep Learning Based Technique To Detect Copy Move Forgery In Digital Im...IRJET Journal
This document proposes a new deep learning technique to detect copy move forgery in digital images. It uses a VGG16 CNN model to extract feature vectors from image blocks. Euclidean distance is used to measure similarity between feature vectors and detect matching blocks, indicating potential forgery. The proposed method is evaluated on the CoMoFoD dataset and achieves higher F1-scores than ResNet50 and EfficientNet models, detecting forged regions more accurately.
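The block-matching step this summary describes can be sketched with a toy feature (mean intensity stands in for the VGG16 feature vectors, and the block layout and tolerance are illustrative assumptions):

```python
def block_features(img, b):
    """Split a grayscale image (list of lists) into non-overlapping
    b x b blocks and describe each by its mean intensity -- a toy
    stand-in for per-block CNN feature vectors."""
    feats = {}
    for r in range(0, len(img) - b + 1, b):
        for c in range(0, len(img[0]) - b + 1, b):
            block = [img[r + i][c + j] for i in range(b) for j in range(b)]
            feats[(r, c)] = sum(block) / len(block)
    return feats

def matched_blocks(feats, tol=1.0):
    """Pairs of distinct block positions whose features lie closer than
    tol: candidate copy-move regions."""
    keys = sorted(feats)
    return [(p, q) for i, p in enumerate(keys) for q in keys[i + 1:]
            if abs(feats[p] - feats[q]) < tol]
```

With real CNN features the comparison becomes a Euclidean distance between vectors, but the pairing logic is the same.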
Real Time Crime Detection using Deep LearningIRJET Journal
The document discusses the application of deep learning techniques for real-time crime detection. It provides an overview of various deep learning architectures and methods used, including CNNs, LSTMs, YOLO, ResNet50, and R-CNN models. These techniques are applied to analyze data sources like surveillance footage and social media to identify criminal activities. The paper also examines challenges in real-time crime detection using deep learning and discusses ethical considerations. Key deep learning models discussed are Spot Crime (which uses CNNs for behavior classification), YOLO (for object detection), and combining ResNet50 and LSTM for crime classification in videos.
Image Forgery Detection Using Deep Neural NetworkIRJET Journal
This document presents a study on detecting image forgeries using deep learning techniques. Specifically, it compares the performance of an Error Level Analysis (ELA) model combined with a Convolutional Neural Network (CNN) to a pre-trained VGG-16 model. The ELA-CNN model achieves remarkably high accuracy of 99.87% and correctly identifies 99% of unknown images. In contrast, the VGG-16 model achieves slightly lower accuracy of 97.93% and validation rate of 75.87%. The study thus demonstrates that deep learning methods, especially when combined with techniques like ELA, can significantly improve image forgery detection over traditional approaches.
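Error Level Analysis, the ELA step in the study's best model, can be shown in miniature. Real ELA recompresses the image as JPEG at a known quality and inspects the residual; here coarse requantisation stands in for the lossy step (an assumption made purely so the sketch stays self-contained):

```python
def requantize(img, step=16):
    """Toy stand-in for lossy JPEG recompression: snap each gray value
    to the nearest lower multiple of `step`."""
    return [[(v // step) * step for v in row] for row in img]

def ela_map(img, step=16):
    """Error Level Analysis in miniature: per-pixel difference between
    an image and its recompressed copy.  Regions pasted in from a
    differently-compressed source show a distinct error level."""
    q = requantize(img, step)
    return [[abs(v - w) for v, w in zip(row, qrow)]
            for row, qrow in zip(img, q)]
```

The resulting error map, rather than the raw image, is what the CNN in the study classifies.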
Concept drift and machine learning model for detecting fraudulent transaction...IJECEIAES
The document presents a machine learning model for detecting fraudulent transactions in a streaming environment that addresses concept drift. The proposed approach uses the extreme gradient boosting (XGBoost) algorithm and employs four algorithms to continuously detect concept drift in data streams. The approach is evaluated on credit card and Twitter fraud datasets and is shown to outperform traditional machine learning models in terms of accuracy, precision, and recall, and to be more robust to concept drift. The proposed approach can be utilized as a real-time fraud detection system across different industries.
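The concept-drift monitoring this summary refers to can be sketched in miniature. This is a generic error-rate window comparison, not any of the paper's four detection algorithms, and the window size and threshold are illustrative assumptions:

```python
def drift_detector(errors, window=30, threshold=0.15):
    """Flag concept drift when the error rate over the most recent
    window of predictions exceeds the historical error rate by more
    than a threshold.

    errors: sequence of 0/1 flags, 1 meaning the model mispredicted.
    """
    if len(errors) <= window:
        return False  # not enough history to compare against
    recent = errors[-window:]
    past = errors[:-window]
    return (sum(recent) / len(recent)) - (sum(past) / len(past)) > threshold
```

When the detector fires, a streaming pipeline would typically retrain or recalibrate the fraud model on recent data.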
Automatic video censoring system using deep learningIJECEIAES
Due to the extensive use of video-sharing platforms and services, the amount of all kinds of video content on the web has become massive. This abundance of information makes it a problem to control the kind of content that may be present in a given video. Beyond telling whether the content is suitable for children and sensitive viewers, it is also important to figure out which parts of it contain such content, so as to preserve the parts that would be discarded in a simple broad analysis. To tackle this problem, popular deep learning image models were compared: MobileNetV2, Xception, InceptionV3, VGG16, VGG19, ResNet101 and ResNet50, to find the one most suitable for the required application. A system was also developed that automatically censors inappropriate content, such as violent scenes, with the help of deep learning. The system uses a transfer learning mechanism based on the VGG16 model. The experiments suggest that the model shows excellent performance for the automatic censoring application and could also be used in other similar applications.
IRJET-Gaussian Filter based Biometric System Security EnhancementIRJET Journal
M. Selvi, T. Manickam, C. N. Marimuthu, "Gaussian Filter based Biometric System Security Enhancement", International Research Journal of Engineering and Technology (IRJET), Volume 2, Issue 01, April 2015. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net
Abstract
This paper presents a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. Ensuring the actual presence of a real legitimate trait, in contrast to a fake self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. The goal is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment.
The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. It is a multi-biometric and multi-attack protection method that aims to overcome part of these limitations through the use of image quality assessment (IQA).
Moreover, being software-based, it presents the usual advantages of this type of approach: it is fast, as it only needs one image (i.e., the same sample acquired for biometric recognition) to detect whether it is real or fake; non-intrusive; user-friendly (transparent to the user); cheap; and easy to embed in already functional systems, since no extra hardware is required.
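The 25 features are classic image quality measures; the intuition is that fake samples (printed photos, screens) degrade differently from live captures when compared against a smoothed version of themselves. Two representative full-reference measures, MSE and PSNR, plus a simple smoothing reference, sketched in plain Python; the actual feature set and classifier are in the cited paper, and these helper names are illustrative.

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized 2D grayscale images."""
    n = len(a) * len(a[0])
    return sum(
        (pa - pb) ** 2
        for ra, rb in zip(a, b)
        for pa, pb in zip(ra, rb)
    ) / n

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * math.log10(peak ** 2 / err)

def box_blur3(img):
    """3x3 mean filter (edges clamped), used as the smoothed reference."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            out[y][x] = sum(vals) / 9
    return out
```

A liveness classifier along these lines would compute `mse(img, box_blur3(img))`, the corresponding PSNR, and similar measures, then feed the feature vector to a standard classifier.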
1) The document discusses copy-move forgery detection using the Discrete Wavelet Transform (DWT) method. Copy-move forgery involves copying and pasting a part of an image within the same image to conceal information.
2) Previous work has used the PCA algorithm to detect incompatible pixels, but this study proposes using the DWT and GLCM algorithms instead. The proposed algorithm is tested in MATLAB and evaluated based on PSNR and MSE values.
3) The study finds that the proposed DWT and GLCM algorithm performs better than the previous PCA-only approach, providing more accurate forgery detection while maintaining good performance metrics.
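Underneath the DWT and GLCM machinery, copy-move detectors share one core step: split the (transformed) image into overlapping blocks and look for pairs with identical or near-identical features. A stripped-down sketch of that step on raw pixels; the DWT decomposition (which shrinks the search space) and GLCM feature matching with tolerance are omitted, and exact matching is used only to keep the sketch short.

```python
def find_duplicate_blocks(img, block=2):
    """Return pairs of top-left coordinates of exactly matching
    `block` x `block` regions in a 2D grayscale image (list of lists).
    Real detectors match DWT/GLCM features with a tolerance and filter
    pairs by a consistent shift vector; note that on flat images exact
    matching would report many trivial duplicates."""
    h, w = len(img), len(img[0])
    seen = {}    # block contents -> first coordinate where seen
    pairs = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(
                img[y + dy][x + dx]
                for dy in range(block) for dx in range(block)
            )
            if key in seen and seen[key] != (y, x):
                pairs.append((seen[key], (y, x)))
            else:
                seen.setdefault(key, (y, x))
    return pairs
```

On a 4x4 image in which the top-left 2x2 block is pasted at the bottom-right, the function reports exactly that pair of coordinates.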
Facial recognition based on enhanced neural networkIAESIJAI
Accurate automatic face recognition (FR) has only become a practical goal of biometrics research in recent years. Detection and recognition are the primary steps for identifying faces in this research, and the Viola-Jones algorithm is implemented to discover faces in images. This paper presents a neural network solution called modified bidirectional associative memory (MBAM). The basic idea is to take the image of a human's face, extract the face region, enter it into the MBAM, and identify it. The output ID for the face image from the network should be similar to the ID for the image entered previously in the training phase. The tests were conducted on the suggested model using 100 images. Results show that FR accuracy is 100% for all images used, while the accuracy after adding noise varies between the images used according to the noise ratio. Recognition results for the mobile camera images were more satisfactory than those for the Face94 dataset.
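A bidirectional associative memory in its classic (Kosko) form stores bipolar pattern pairs in a single weight matrix and can recall in either direction; the paper's MBAM modifies this baseline, so the sketch below is only the textbook version, with illustrative function names.

```python
def train_bam(pairs):
    """Kosko BAM: W = sum over pairs of outer(x, y), where x and y are
    bipolar (+1/-1) patterns. `pairs` is a list of (x, y) tuples; here
    x would be a face-image code and y its identity code."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def recall_forward(W, x):
    """Recall y from x: y_j = sign(sum_i x_i * W[i][j])."""
    m = len(W[0])
    return [
        1 if sum(x[i] * W[i][j] for i in range(len(x))) >= 0 else -1
        for j in range(m)
    ]
```

With roughly orthogonal stored patterns, a (possibly noisy) face code recalls the identity code it was trained with, which is the property the FR pipeline relies on.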
ERROR LEVEL ANALYSIS IN IMAGE FORGERY DETECTIONIRJET Journal
This document summarizes a research paper that proposes using error level analysis (ELA) as part of a deep learning approach to detect digital image forgeries. ELA works by comparing error levels in an image to identify areas that have been manipulated. The researchers incorporated ELA feature extraction into a convolutional neural network (CNN) model for image forgery detection. Their results found that including ELA improved test and validation accuracy by around 2.7% while only increasing processing time by 5.6%. The paper concludes that combining ELA with deep learning can enhance the detection of image forgeries.
Person identification based on facial biometrics in different lighting condit...IJECEIAES
Technological development is an inherent feature of this time, with reliance on electronic applications in all daily transactions (business management, banking, financial transfers, health, and other important aspects of life). Identifying and confirming identity is one of the complex challenges, and relying on biological properties gives reliable results. People can be identified in pictures, films, or in real time using facial recognition technology. An individual's face is a unique identifying biological characteristic that authenticates them and prevents another person from assuming that individual's identity without their knowledge or consent. This article proposes a model for identification by individual facial characteristics, based on a deep neural network (DNN). The proposed method extracts the spatial information available in an image, analyzes this information to extract the salient features, and makes the identification decision based on these features. This model presents successful and promising results: the accuracy achieved by the proposed system reaches 99.5% (+/- 0.16%), and the value of the loss function reaches 0.0308 over the Pins Face Recognition dataset, identifying 105 subjects.
This document summarizes techniques for detecting tampered or forged digital images. It discusses both active techniques that require prior information like watermarks, and passive/"blind" techniques that can detect forgeries without prior info by analyzing inconsistencies in statistical image properties introduced during tampering. Specific techniques mentioned include detecting inconsistencies in noise levels, color filter array properties, JPEG compression quality, and identifying copy-move or image splicing operations. The document also reviews several papers on techniques like analyzing demosaicing patterns, noise analysis, SIFT feature clustering, and alpha matting.
A new image steganography algorithm basedIJNSA Journal
In recent years, the rapid growth of information technology and digital communication has made it very important to secure information transmission between the sender and receiver. Steganography is therefore widely used to hide information and to communicate secret data in an appropriate multimedia carrier, e.g., image, audio, and video files. In this paper, a new algorithm for image steganography is proposed to hide a large amount of secret data represented by a secret color image. The algorithm is based on different size image segmentation (DSIS) and modified least significant bits (MLSB), where the DSIS algorithm is applied to embed the secret image randomly instead of sequentially; this approach is applied before the embedding process. The number of bits to be replaced at each byte is non-uniform; it is based on byte characteristics obtained by constructing an effective hypothesis. The simulation results justify that the proposed approach is efficient and achieves high imperceptibility with a high payload capacity, reaching four bits per byte.
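The MLSB idea varies how many least-significant bits each byte carries based on the byte's characteristics, and DSIS scatters the payload pseudo-randomly. The fixed-width base case both schemes build on looks like this; the sketch below is uniform 2-bit LSB embedding only, not the DSIS/MLSB scheme itself, and the function names are illustrative.

```python
def embed(cover, secret_bits, k=2):
    """Embed `secret_bits` (a string of '0'/'1') into the k least
    significant bits of each cover byte, sequentially. The paper's DSIS
    step would instead pick pseudo-random positions, and MLSB would
    vary k per byte (up to 4 bits) based on byte characteristics."""
    assert len(secret_bits) <= len(cover) * k, "payload too large"
    stego, pos = [], 0
    for byte in cover:
        chunk = secret_bits[pos:pos + k].ljust(k, "0")  # pad last chunk
        stego.append((byte & ~((1 << k) - 1)) | int(chunk, 2))
        pos += k
    return stego

def extract(stego, nbits, k=2):
    """Read back the first `nbits` embedded bits from the stego bytes."""
    bits = "".join(format(b & ((1 << k) - 1), "0{}b".format(k)) for b in stego)
    return bits[:nbits]
```

Because only the low bits change, each cover byte moves by at most 2^k - 1, which is what keeps the stego image visually imperceptible at small k.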
Fisher exact Boschloo and polynomial vector learning for malware detection IJECEIAES
Computer technology shows swift progress and has infiltrated people's lives; the openness and pliability with which computers ease work also exposes security breaches. Thus, malware detection methods perform modifications in running the malware based on behavioral and content factors. Taking these factors into consideration involves a compromise between convergence rate and speed. This research paper proposes a method called fisher exact Boschloo and polynomial vector learning (FEB-PVL) to perform both content- and behavioral-based malware detection with early convergence to speed up the process. First, the input dataset is provided as input; then the fisher exact Boschloo's test Bernoulli feature extraction model is applied to obtain independent observations of two binary variables. Next, the extracted network features form the input to polynomial regression support vector learning to differentiate malware classes from benign classes. The proposed method validates the results with respect to the malware and the benign files. The research aims to detect malware behaviors accurately from features in minimum time, speeding up the overall process. The proposed FEB-PVL increases the true positive rate and reduces the false positive rate, increasing the precision rate by 7% compared to existing approaches.
Image Forgery Detection Methods- A ReviewIRJET Journal
This document reviews various methods for detecting image forgery. It begins with an introduction to the topic, explaining the need for image forgery detection techniques due to the widespread manipulation of images online. It then categorizes common types of image manipulation and provides a literature review comparing the accuracy and citations of different detection techniques, such as CNN-based methods, transform-domain methods using DCT and DWT, and methods analyzing JPEG compression artifacts. The review finds that CNN-based methods generally achieve the highest accuracy, around 90-100%, but notes that transform-domain and JPEG-based methods can also achieve reasonably high accuracy, ranging from 70-100% depending on the technique and testing parameters.
Similar to "Novel framework for optimized digital forensic for mitigating complex image attacks":
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of methods for adjusting its parameters. The paper deals with the optimization of PID-regulator parameters using neural network technology. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Algorithms for neural network training based on minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of the neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, improves the regulation accuracy, reducing the error from 0.23 to 0.09 and thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features like sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms for classifying potential fishing zones. Furthermore, the prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models like support vector machines, naïve Bayes, and artificial neural networks (ANN). In previous results, the support vector machine (SVM) classified test data for fisheries classification with 97.6% accuracy, higher than naïve Bayes's 94.2%. By considering recent works in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make little devices that boost productivity, with the goal of optimizing device density. Scientists are reducing connection delays to improve circuit performance. This led them to three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical coupling is a big worry with 3D integrated circuits. Researchers have developed and tested through-silicon via (TSV) and substrate designs to decrease electrical wave coupling. This study illustrates a novel noise coupling reduction method using several electrical coupling models. A 22% drop in electrical coupling from wave-carrying to victim TSVs demonstrates this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to advance their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The result of a case study representing a dairy milk farmer supports the theoretical work and highlights its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding and droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results considered contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using the switching strategy, the low-order harmonics in the output current and voltage are not produced, and consequently, the size of the output filter would be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level about smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions are highlighted. To that effect, we searched the Web of Science (WoS) and the Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences on the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). The multi-criteria decision-making analysis shows the overall weight of passive methods (24.7%), active methods (7.8%), hybrid methods (5.6%), remote methods (14.5%), signal processing-based methods (26.6%), and computational intelligence-based methods (20.8%) based on the comparison of all criteria together. Thus, it can be seen from the total weight that hybrid approaches are the least suitable to be chosen, while signal processing-based methods are the most appropriate islanding detection method to be selected and implemented in power systems with respect to the aforementioned factors. Using Expert Choice software, the proposed hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m2, the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38% compared to conventional methods, respectively. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results. Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Int J Elec & Comp Eng ISSN: 2088-8708
Novel framework for optimized digital forensic for mitigating complex image attacks (Shashidhar T M)
5199
associated with the statistical information of natural images rather than attempting to disclose any
artifacts resulting from image attacks. In image forensics, the mechanisms connected to
acquisition as well as tampering can leave traces [7]. The prime target of digital
image forensics is to expose such artifacts by extracting the information associated with
digital image processing. The cumulative outcomes of multimedia security research also support this
mechanism. In order to offer a better image forensic process, it is necessary to investigate
the possible connectivity between the various approaches to image security practice.
Digital image processing associated with forensic practice encounters multiple
challenges related to steganography and digital watermarking [8]. Digital image forensics also
offers better support for security policies. Authentication of digital images is quite a challenging
practice to ascertain, and offering data integrity is a potentially challenging task too.
Therefore, such an approach is called a passive, or sometimes blind, image forensic problem.
At present, various approaches toward digital image forensics are being carried out
with various forms of schemes and strategies [9]. However, existing approaches do not facilitate better
optimization in the process of identifying the area that has been subjected to illegitimate
corruption by an attacker. The idea of the proposed system is to develop a novel
optimized framework that can leverage the performance of image forensics, thereby facilitating resistivity
against potential threats over the image. The organization of the manuscript is as
follows: Section 1 discusses the existing literature on different image tampering detection
schemes, followed by a discussion of the research problems and the proposed
solution. Section 2 discusses the algorithm implementation toward identifying the location of image tampering,
followed by a discussion of result analysis in section 3. Finally, conclusive remarks are
provided in section 4.
There have been various works carried out toward studying the effectiveness of
image forensics [10]. This section further reviews recent research work on this topic.
Most recently, the adoption of a slicing-based approach towards the bit planes has been carried out by
Rhee [11]. The authors have also used the method for the identification of median filtering. Xiao et al. [12]
have developed a machine learning approach based on histogram equalization towards improving the captures of
surveillance systems aimed at forensic applications. Ma et al. [13] have used a convolutional neural network for
assisting in image retrieval. Han et al. [14] have shown that representation learning with a convolutional
neural network can be used to search for a specific form of an image in image forensics.
Reillo et al. [15] have used attribute-based identification of varied
signatures of humans by extracting the geometric attributes. The work of Neubert et al. [16] has presented
a solution towards resisting forging face shape based attack using a morphing based approach for better
biometric security. Guo et al. [17] have carried out an identification of the image based on forged color
information using histogram-based as well as an attribute-based encoding mechanism. Bayar and Stamm [18]
have facilitated image forensic by developing a model using deep learning and enhanced convolution neural
networks. Hao et al. [19] have presented a forensic analysis of the palmprint using a supervised learning
algorithm using a dual feature in order to assess the quality of the image. Shan et al. [20] have used
a deblocking filtering mechanism for the identification of the traces caused by the usage of median filtering.
Singh and Singh [21] have carried out the experiment with respect to the statistically-based approach of
the second order for resisting JPEG-based attacks over an image using supervised classifiers. The work of
Zeinstra et al. [22] has carried out an anti-forensic approach for safeguarding facial images.
Chen et al. [23] have discussed a blind method for image forensics. Kim et al. [24]
have presented anti-forensic methods using deep neural networks integrated with convolution neural
networks to eliminate the artifacts of filters being used. Dam et al. [25] have developed a three-dimensional
mechanism for reconstructing the facial structure based on the standard reflectance model for better
comparison scores with a forensic sample. Pun et al. [26] have presented an image hashing approach for
aligning images in order to identify any form of object detection. Korus and Huang [27] have carried out
localization of the tampering event over the image using a multi-scale analysis where the authors have used
Markov fields. Conotter et al. [28] have used a linear filtering approach for identifying the series of various
functions that lead to attacks on the image over JPEG compression. Sarreshtedari and Akhaee [29] have used
a channel coding approach for protecting any possibility of image tampering. Fan et al. [30] have presented
work towards the removal of traces against digital filters, also focusing on enhancing the quality of
the image. Therefore, there are various approaches to promoting the performance of image forensics. The next
section outlines the problems associated with them.
ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 10, No. 5, October 2020 : 5198 - 5207
5200
The identified problems associated with the existing studies are as follows:
- Adoption of machine learning (especially neural network) offers good accuracy but at the cost of
immense computational complexity.
- The majority of the existing approaches are carried out on a pixel basis, which is not only
time consuming but also memory dependent.
- At present, no significant optimization work has been carried out towards the performance of image
forensics.
- Existing solutions are quite specific to a particular form of image attack and hence fail to offer
a potential solution to complex image attacks.
Hence, the problem statement can be represented as "Developing a cost-effective computational
framework that leverages more performance optimization, targeting to resist maximum potential threats over
image forensics". The next part of the study highlights a solution to address this problem.
The current work is an extension of our prior framework towards artifact removal [31], while
this part of the study introduces a novel analytical framework that is capable of optimizing the performance
of the framework towards resisting potential threats over images. The plan for the adoption of the proposed
methodology is discussed in Figure 1. The implementation of the proposed study is carried out so
that those regions within an image that have been subjected to corruption are accurately identified.
The study considers a case study of image-based attack in such a way that, irrespective of the form of attack,
the performance towards capturing the location of tampering is retained. The underlying concept of the proposed
system is that an attacker always leaves traces of the attack in the form of artifacts, which are required to be
identified as a part of the mitigation process.
According to Figure 1, the proposed system takes the input of an image that has already been
tampered with in some location, called a forged image. The next part of the algorithm is associated with the
process of block partitioning, which is about representing the complete image with blocks of different
dimensions followed by partitioning it. The third block is meant for extracting significant attributes so that
various forms of artifacts owing to the process of image capturing or tampering are considered.
The extracted attributes are finally subjected to a binary classification process to speed up the process of
distinguishing the attacked region from the typical area. Interestingly, the proposed system is capable of
identifying any form of artifacts that are illegitimately incorporated by an adversary while launching an attack.
The proposed system performs the extraction of the attributes based on statistically descriptive parameters
for higher accuracy and speeding up the process. This offers a further better representation of
the attributes for the classification process. Finally, the artifacts associated with a complex attack
over the image are identified in the proposed system, which further leads to the process of performance
analysis. The next part of the discussion is about algorithm implementation.
Tampered Image → Image Block Partition Operation → Extraction of Primary Attribute on the Basis of Colored Mosaic → Binary Classification → Mitigating Complex Image Attack → Performance Analysis

Figure 1. Proposed analytical framework
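The stages of Figure 1 chain sequentially from forged image to tamper map. A minimal skeleton of such a pipeline, assuming block-wise variance as a stand-in statistical attribute (all function names and parameters here are illustrative placeholders, not the paper's code), might look like:

```python
import numpy as np

def partition(img, bh=8, bw=8):
    """Stage 1: crop and split the forged image into bh x bw blocks."""
    h, w = img.shape
    img = img[: h - h % bh, : w - w % bw]          # drop partial edge blocks
    return img.reshape(h // bh, bh, -1, bw).swapaxes(1, 2).reshape(-1, bh, bw)

def extract_attributes(blocks):
    """Stage 2: one descriptive statistic per block (variance used here)."""
    return blocks.var(axis=(1, 2))

def classify(attributes, cutoff=0.0):
    """Stage 3: binary map -- 1 where the attribute exceeds the cut-off."""
    return (attributes > cutoff).astype(np.uint8)

def mitigate(binary_map):
    """Stage 4: placeholder for hole filling / morphological rectification."""
    return binary_map

# A flat image with a noisy "tampered" patch: only that block is flagged.
forged = np.zeros((32, 32))
forged[8:16, 8:16] = np.random.default_rng(1).normal(size=(8, 8))
tamper_map = mitigate(classify(extract_attributes(partition(forged))))
print(tamper_map)   # 16 entries, one per 8x8 block
```

Each stage is refined by the algorithms in section 2; the skeleton only fixes the data flow between them.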
2. ALGORITHM IMPLEMENTATION
The core purpose of the proposed algorithm design is to ensure that the processing is carried out in
such a way that there is a better solution toward resisting the attack over the image, especially
when the attack is quite complex in nature. The complete assessment and investigation of the image are
carried out considering 2000 models characterized by colored high-definition quality and captured
from a high-definition capturing device. The complete implementation is carried out
in 4 stages in order to mitigate complex image attacks. The algorithms used in this part of
the proposed system are briefed below:
2.1. Algorithm for image block partition operation
This is the first algorithm of the implementation, where the logic is to extract all essential information
that will assist in formulating essential attributes for the given image on the basis of the constructed blocks.
These attributes are used for identifying the core region of image intrusion over the tampered image.
The algorithm takes the input of the original image, converts it into a matrix, and considers all the row
fields (hf) and column fields (vf), which after processing yields an outcome of Bp (block partitioned image).
The steps of the algorithm are as follows:
Algorithm for Image Block Partition
Input: hf/vf (horizontal and vertical field)
Output: Bp (block partitioned image)
Start
1. For i=1:xmax, where xmax=[hf vf]
2.    Read elem(xmax)
3.    Bp ← obtain block_elem
4. End
End
The discussion of the algorithmic steps is as follows: The algorithm targets extracting significant
attributes so that essential source identification traits of the given image are analyzed more closely. This step
will assist the later part of the operation by acting as baseline attributes to distinguish the original region
and the tampered region of the given image. For effective analysis, the proposed study considers multiple
dimensions of the block size in order to create a test case. The algorithm reads all the elements in each field
(Line-1) and stores the cell-based information rather than pixel-based information (Line-2). All the cell-based
information is then grouped in the form of blocks that are partitioned and stored in another matrix
called Bp, the partitioned block elements. The advantage of this algorithm is that the identification time of
the corrupted region is faster, as the identification process is carried out not on the basis of pixels but
on the basis of blocks, which is the first step towards optimization.
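The block partition step above could be sketched with NumPy as follows; the function name and the cropping of partial edge blocks are illustrative choices, not details given by the paper:

```python
import numpy as np

def partition_blocks(img, bh, bw):
    """Split a 2-D image matrix into non-overlapping bh x bw blocks.

    Returns an array of shape (n_blocks, bh, bw); edge pixels that do
    not fill a whole block are cropped for simplicity.
    """
    h, w = img.shape
    img = img[: h - h % bh, : w - w % bw]          # crop to a multiple of the block size
    blocks = (img.reshape(h // bh, bh, w // bw, bw)
                 .swapaxes(1, 2)                   # gather each block's rows/cols together
                 .reshape(-1, bh, bw))
    return blocks

# Example: a 6x6 matrix partitioned into four 3x3 blocks
img = np.arange(36).reshape(6, 6)
blocks = partition_blocks(img, 3, 3)
print(blocks.shape)   # (4, 3, 3)
```

Because the later stages operate on these blocks rather than on individual pixels, the per-region work drops by roughly the block area, which is the optimization the paper claims for this step.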
2.2. Algorithm for extraction of primary attribute on the basis of colored mosaic
This is the second part of implementation that focuses on extracting the final attributes based on
which the final differentiation for original and tampered region will be carried out. The algorithm takes
the input of actual scope of the blocks obtained form the prior algorithm and yields an outcome of final
attributes after performing processing on the top of it. The target is to finalize the attributes. The significant
steps of the algorithm implementation are as follows:
Algorithm for Extraction of Primary Attribute on the Basis of Colored Mosaic
Input: b1/b2 (lower and higher number of blocks)
Output: Fatt (Final Attributes)
Start
1. For i: b1:b2
2.    extract gc
3.    Apply bilinear kernel
4.    Patt ← [Eprob, β, d], Fatt ← (Patt, β, card(b)) //final feature
5.    α ← f(Fatt, c)
6.    Compute α, Fatt, c
7.    Compute γ and E
8.    Extract Fatt(Bm, c)
End
The discussion of the algorithmic steps is as follows: The complete algorithm is designed on
the basis of possibilities of artifacts resulting from any form of extrinsic problem while capturing the image
with an image capturing device, e.g. a camera. In order to execute this algorithm, there are various parameters
that play a significant role in processing, e.g. the cardinality of the attributes extracted in the form of blocks
card(b), the filters used in mosaic form β, the probability of error Eprob, and the primary attribute Patt in the
form of the variance value obtained along with the interpolated pixels. The proposed study considers the complete
scope of the blocks obtained from the prior algorithm, where b1 and b2 represent the minimum and maximum block
size (Line-1). The next step is to apply a bilinear kernel in order to construct a predictor (Line-2). The next
step of the algorithm is to construct a matrix for the primary attribute Patt and the final attribute Fatt (Line-3).
The formulation of the primary attribute is carried out using the probability of error Eprob, the mosaic filters
β, and the dimensions d, while the final attribute Fatt is computed using the primary attribute obtained in the
prior step, the mosaic filters β, and the cardinality of the block dimensions (Line-4). The next step is about
obtaining the maximum a posteriori α by applying a function f(x) with input arguments of the obtained final
attributes Fatt and a statistical constant c (Line-5). Basically, the statistical constant c is calculated with
respect to the mean and standard deviation. The study finally stores the obtained value α in a discrete matrix along with
other associated parameters (Line-6). Finally, the algorithm computes the transition state γ and the cumulative
error E (Line-7), followed by updating of the final attributes Fatt (Line-8). Basically, this step of the algorithm
processing is carried out on the basis of a maximum likelihood function that generates two states of
transition. The algorithm is capable of assessing various forms of errors and artifacts that are explored at
different levels and obtains their statistical attributes in order to compute any possibility of
tampering of the image, and it can do so without any predefined information of forgery. The implementation
of this algorithm also exhibits that a better form of optimizing characteristics is obtained even for
the slightest possibility of tampering with the image. The positional information of the forged region for
a given image is not so challenging to find owing to the adoption of the descriptive statistical approach,
which offered increasing accuracy in its performance.
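A common way to expose the demosaicing-style artifacts this stage relies on is to predict each pixel with a bilinear kernel over its neighbours and take per-block statistics of the residual: tampering disturbs the interpolation correlation, so the residual variance differs between original and forged blocks. The sketch below follows that general idea under simplifying assumptions (grayscale input, wrap-around edges, variance as the sole attribute); it is not the paper's exact Patt/Fatt formulation:

```python
import numpy as np

def mosaic_residual_features(img, bh, bw):
    """Per-block variance of the bilinear-interpolation residual."""
    f = img.astype(float)
    # bilinear predictor from the 4-neighbourhood (edges wrap; sketch only)
    pred = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))
    residual = f - pred
    h, w = residual.shape
    residual = residual[: h - h % bh, : w - w % bw]    # crop partial blocks
    blocks = (residual.reshape(h // bh, bh, -1, bw)
                      .swapaxes(1, 2)
                      .reshape(-1, bh, bw))
    return blocks.var(axis=(1, 2))                     # one attribute per block

# A smooth gradient interpolates perfectly: interior residuals are zero.
img = np.outer(np.ones(12), np.arange(12.0))
feats = mosaic_residual_features(img, 4, 4)            # 9 blocks -> 9 attributes
```

On such a smooth image the centre-block attribute is exactly zero, while any tampered (here, noise-injected) block stands out with a strictly larger value, which is what the subsequent binary classification exploits.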
2.3. Algorithm for binary classification
The prime purpose of this algorithm is to assist in discretizing the regions using a binary
classification approach for speeding up the identification of the tampered region. The algorithm
takes the input of the scope of blocks and the final attributes, which after processing result in the
formation of a binary classified image. The steps of the algorithm are as follows:
Algorithm for Binary Classification
Input: b1/b2 (scope of blocks), Fatt (final attributes)
Output: a (binarized classified image)
Start
1. For i=b1:b2
2.    h ← f1(Fatt) and co=0
3.    For i=1:length(h)
4.       If h(i)>co
5.          a(i)=1
6.       End
7.    End
End
The discussion of the steps of the algorithm is as follows: the implementation of the algorithm is
carried out in such a way that it can perform classification on a binary basis for identifying all the tampered
regions as well as non-tampered regions of the given image. For the entire minimum and maximum scope of
blocks (b1 b2), the algorithm applies a function f1(x) in which the statistical information about the final
attributes Fatt is computed with respect to all the blocks corresponding to an image (Line-2). This step also
considers a cut-off value co initialized to zero. The matrix h holds all the statistical attributes of
the blocks corresponding to the transition state. For the full size of h (Line-3), if an element is found
to be more than the cut-off value co, then it is assigned 1, where 0 and 1 represent black and white
respectively. By doing this step, the elements of the matrix h can take various values, but shortlisting is
done as per the logic specified in Line-4 and Line-5. This process is quite fast and significantly assists in
optimizing the processing time for classification of the regions which are subjected to assessment for
tampering. Using a lesser number of resources, the adoption of binary classification assists the further steps
of operation, which are about correct identification and mitigation of sophisticated image attacks.
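The thresholding described above is a one-liner in practice. The sketch below assumes the per-block attributes arrive as a flat vector and uses the paper's cut-off co=0; the function name and the reshape into a block grid are illustrative:

```python
import numpy as np

def binarize_blocks(features, n_rows, n_cols, cutoff=0.0):
    """Map per-block statistical attributes to a binary tamper map.

    Blocks whose attribute exceeds the cut-off are marked 1 (white,
    suspected tampering); the rest stay 0 (black).
    """
    a = (features > cutoff).astype(np.uint8)
    return a.reshape(n_rows, n_cols)

# Nine block attributes -> a 3x3 black/white map of suspected regions
feats = np.array([0.0, 0.2, 0.0, 0.0, 1.5, 0.0, 0.0, 0.1, 0.0])
mask = binarize_blocks(feats, 3, 3)
print(mask)
```

Operating on the small block grid rather than the full pixel raster is what makes this classification step cheap.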
2.4. Algorithm for mitigating complex image attack
The core purpose of this algorithm is to offer a solution towards the regions inflicted with
the image attacks, while the main contribution of the proposed algorithm is to provide the rectification
mechanism that facilitates yielding the tampered regions. The algorithm takes the input of the scope of
the image blocks as well as the regions obtained from the prior algorithm, and after processing it yields an
outcome of the corrected image I. The steps involved in the proposed algorithm are as follows:
Algorithm for Mitigating Complex Image Attack
Input: b1/b2 (scope of image blocks), reg (identified regions of attack)
Output: I (corrected image)
Start
1. For i=b1:b2
2.    apply f2(reg1)
3.    Extract reg2 ← f2(reg1) ← Img(bin)
4.    If region(reg2)>region(~reg2)
5.       reg2=~reg2
6.    End
7.    I=f3(reg2)
8. End
End
The discussions of the steps involved in the algorithm are as follows: the algorithm considers
the average of transition state s1 with a condition that its value should be more than the cut-off value of co=0.
The input towards this function is basically a corrupted image by adversary that yields a rectified image
after processing as well as identification of the tampered region I. The precision of the image is increased
initially and is classified with respect to highest value of 255. Because of this operation, the proposed system
permits identification of multiple positions within an input image where the determination of the blocks
of the boundary. For all the scope of image blocks (Line-1), the proposed system applies an explicit
function f2(x) which is responsible for filling the insignificant white portion of an image after the binary
classification is over (Line-2). This step is carried out in order to prevent a form of outliers and its
possibilities. The algorithm extracts filled regions and further subtracts the filled regions with a reference
matrix while a logic of binary approach is filled using concatenation operation. The implementation also
constructs a concatenated matrix that stores all the false-positive values found in the tampered
regions; these are subjected to rectification carried out using a morphological operation. The resulting
matrix is re-checked for the presence of any holes (Line-3), further optimizing the processes of identification
and correction together. If any holes are found, they are instantly filled, and the information is
stored as an explicit region reg2 (Line-4). The proposed study then formulates a conditional
statement to assess whether the area of region reg2 exceeds that of ~reg2 (Line-4/5); in that
case, a simple substitution operation is carried out. The resulting matrix represents a rectified image
and can therefore be treated as an error-free binary image without any false positives (Line-5).
The next operation mechanizes the extraction of the original tampered regions as the ultimate yield. To this
end, the proposed system constructs a binary mask by concatenating the binary objects present
in the stored matrix file. The system obtains the image in double precision and converts it to
an unsigned integer in order to generate the mask and determine the original tampered region.
An interesting aspect of this algorithm is that it can decide how to rectify
the regions intruded upon by an adversary; this operation is completely automated and independent of
any interactive human input. To validate this part of the proposed system,
the algorithm makes use of ground-truth information. This operation permits users to choose
the tampered regions through manual selection, after which the average values
of both matrices are re-computed. This assessment is carried out in order to
evaluate the uniformity of the numerical values. Finally, a concatenation operation carried out by an
explicit function f3(x) generates the final rectified image I (Line-7). The proposed system
identifies these non-uniformities with respect to both static and dynamic objects. An interesting
aspect of the implementation is that the algorithm offers similar performance irrespective of
the shape of the tampered region or where the intrusion is carried out. Moreover, this operation is carried
out spontaneously and is largely free of computational dependencies when exploring the artifacts
caused by multiple forms of malicious activity over any type of image.
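The rectification steps above (morphological cleanup of false positives, hole filling into reg2, and the area test against ~reg2) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the `min_area` threshold, and the reading of the Line-4/5 condition as a polarity check are all assumptions.

```python
import numpy as np
from scipy import ndimage

def rectify_false_positives(binary_map, min_area=64):
    """Clean a binary tamper map: suppress small false-positive blobs,
    fill holes (reg2), apply the area test, and emit a uint8 mask."""
    # Morphological opening suppresses isolated false-positive pixels
    opened = ndimage.binary_opening(binary_map, structure=np.ones((3, 3)))
    # Drop connected components smaller than min_area (residual false positives)
    labels, n = ndimage.label(opened)
    sizes = ndimage.sum(opened, labels, range(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_area) + 1)
    # Fill any holes inside the surviving regions (the reg2 step)
    reg2 = ndimage.binary_fill_holes(keep)
    # One reading of the Line-4/5 condition: if the foreground dominates,
    # the map polarity is assumed flipped and the complement is substituted
    if reg2.sum() > (~reg2).sum():
        reg2 = ~reg2
    # Double-precision mask converted to an unsigned-integer mask
    return (reg2.astype(np.float64) * 255).astype(np.uint8)
```

A single-pixel hole inside a detected region is filled, while an isolated stray pixel is removed as a false positive.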
3. RESULT ANALYSIS
The assessment of the proposed system is carried out experimentally, where
multiple forms of images and different variants of those images are used for analysis. The outcomes
obtained are discussed with respect to (a) visual representation and (b) numerical representation.
The analysis has been carried out by constructing an image dataset of more than 2000 high-definition
images, captured with a DSLR camera. The image resolution is 24 megapixels, with both smaller and
larger image dimensions represented in the dataset. In order to provide better evidential features for
the analysis, the dataset consists of two directories, viz. (a) tampered images and (b) ground-truth images.
From Figure 2, it can be seen that the original image has been tampered with using
a different image, thereby generating a forged image. During the analysis, it was observed that the forged
area can be symmetrical or non-symmetrical, with differently sized regions possible.
The analysis has been carried out with different dimensions of the forged regions in order to
investigate various possibilities of image forensics. Once the forged image is taken as input,
the proposed system performs blocking operations at multiple scales, as exhibited in Figure 2. The user is
also provided with varied options for the value used in the partitioning process. The idea is that
the detection process is improved via the blocking operation, which also increases the possibility of
exploring the significant regions. The block-partitioning value is selected on the basis of
the imperceptibility of the forged image: smaller block sizes are deployed only in cases that demand
better imperceptibility, while larger block sizes are used when the input forged image poses a higher
level of difficulty, see Figure 3.
ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 10, No. 5, October 2020: 5198-5207
Figure 2. Tampered image
Figure 3. Variants of partitioning image block: (a) 4x4, (b) 8x8, (c) 16x16, (d) 32x32, and (e) 64x64
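The multi-scale block partitioning of Figure 3 can be sketched as follows. The function name `partition_blocks` and the choice of cropping the image to a multiple of the block size are illustrative assumptions, not the paper's code.

```python
import numpy as np

def partition_blocks(img, block=8):
    """Split a grayscale image into non-overlapping block x block tiles,
    as in the multi-scale blocking step (4x4 up to 64x64)."""
    h, w = img.shape[:2]
    # Crop to a multiple of the block size so the reshape is exact
    h, w = h - h % block, w - w % block
    tiles = (img[:h, :w]
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2))          # shape: (rows, cols, block, block)
    return tiles
```

Smaller block sizes (4x4) resolve imperceptible forgeries at higher cost, while larger ones (64x64) give a coarser but faster localization.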
Once the selected block size is finalized, the next step is to extract all
the significant attributes. Adopting a statistical approach further improves the accuracy factor in
fewer iterations and also positively affects the computational efficiency; such a
statistical process results in faster processing too. Figure 4 highlights the mean value of the image and
the standard deviation as outcomes of the statistical operation. The study outcome of the proposed system
shows that once the colored mosaic is extracted, identification of the traces can be carried out in
a simpler fashion, as shown in Figure 5.
Figure 4. Process of extracting significant attributes (a) Mu image and (b) Sigma image
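The Mu and Sigma images of Figure 4 correspond to per-block mean and standard-deviation maps, which can be sketched as follows. Function and variable names are assumptions for illustration.

```python
import numpy as np

def mu_sigma_images(img, block=8):
    """Compute per-block mean (Mu) and standard-deviation (Sigma) maps,
    the statistical attributes extracted from the partitioned image."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    tiles = (img[:h, :w].astype(np.float64)
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2))
    mu = tiles.mean(axis=(2, 3))      # one mean per block
    sigma = tiles.std(axis=(2, 3))    # one deviation per block
    return mu, sigma
```

Blocks straddling a splice boundary tend to show a Sigma value inconsistent with their neighbors, which is what makes the traces easier to spot.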
Novel framework for optimized digital forensic for mitigating complex image attacks (Shashidhar T M)
Figure 5. Analysis of (a) classification and (b) rectification
From the visual outcome highlighted in Figure 6, it can be seen that the proposed system
offers a significantly better display of the regions that are actually tampered. In addition,
the extracted image blocks corresponding to the transition state are finally used to derive the statistical
information associated with the tampered image. Hence, a better imperceptibility factor is decoded in
the proposed system, where correct and reliable identification is carried out using simple techniques.
Furthermore, the outcome of the proposed study is compared with existing localization-based [32]
and feature-extraction-based [33] approaches. Using the same dataset, a comparative
analysis is carried out; its outcome is shown in Figure 7.
Figure 6. Identification of tampered region
Figure 7. Comparative analysis (localization-based [32] vs. feature-extraction based [33])
The outcome shows that the proposed system takes no more than 0.75 seconds to carry out
the extraction of the significant features, which contributes to a higher accuracy of 99% with a lower
outlier rate of 16.45%; this is comparatively better than the existing systems. The prime reason is
that the localization-based approach [32] identifies traces comparably to the proposed
system, but it does so with a larger number of associated attributes, so the proposed system offers
better optimization performance. The feature-extraction-based approach [33] makes use of random theory
together with thresholding, but it does so recursively, which significantly degrades its accuracy
in contrast to the proposed system.
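The accuracy and outlier-rate figures above can in principle be reproduced by scoring a predicted tamper mask against the corresponding mask from the ground-truth directory. The metric definitions below are common pixel-level choices and an assumption, not necessarily the paper's exact formulas.

```python
import numpy as np

def tamper_metrics(pred_mask, gt_mask):
    """Pixel-level accuracy and outlier (false-detection) rate of a
    predicted binary tamper mask against a ground-truth mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    # Fraction of pixels classified identically to the ground truth
    accuracy = (pred == gt).mean()
    # Outliers: flagged pixels that the ground truth rejects,
    # as a fraction of everything flagged
    flagged = pred.sum()
    outlier_rate = (pred & ~gt).sum() / flagged if flagged else 0.0
    return accuracy, outlier_rate
```

Averaging these two quantities over the dataset directories gives the per-method scores plotted in the comparative analysis.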
4. CONCLUSION
This paper has presented a mechanism for promoting image-forensic performance, where the idea is
two-fold, viz. (a) capturing information about the location of tampering and (b) improving the performance
using optimization. The contributions of the proposed system are as follows: (a) the proposed model is
developed without using any sophisticated (conventional) security technique, nor are any recursive
approaches used; (b) significantly faster processing is obtained along with an improvement in accuracy.
REFERENCES
[1] G. Cao, et al., “Forensic detection of median filtering in digital images,” in 2010 IEEE International Conference on
Multimedia and Expo, pp. 89-94, 2010.
[2] X. Feng, I. J. Cox, and G. Doerr, “Normalized energy density-based forensic detection of resampled images,” IEEE
Transactions on Multimedia, vol. 14, no. 3, pp. 536-545, 2012.
[3] M. F. Breeuwsma, “Forensic imaging of embedded systems using JTAG (boundary-scan),” Digital Investigation,
vol. 3, no. 1, pp. 32-42, 2006.
[4] A. Cheddad, et al., “Digital image steganography: Survey and analysis of current methods,” Signal processing,
vol. 90, no. 3, pp. 727-752, 2010.
[5] T. Gloe and R. Böhme, “The Dresden Image Database for benchmarking digital image forensics,” in Proceedings
of the 2010 ACM Symposium on Applied Computing, no. 2-4, pp. 1584-1590, 2010.
[6] T. Van Lanh, et al., “A Survey on Digital Camera Image Forensic Methods,” in 2007 IEEE International
Conference on Multimedia and Expo, Beijing, pp. 16-19, 2007.
[7] D. T. Dang-Nguyen, et al., “Raise: A raw images dataset for digital image forensics,” in Proceedings of the 6th
ACM Multimedia Systems Conference, pp. 219-224, 2015.
[8] F. Y. Shih, Digital watermarking and steganography: fundamentals and techniques. CRC press, 2017.
[9] A. Swaminathan, M. Wu, and K. J. Ray Liu, “Digital image forensics via intrinsic fingerprints,” IEEE transactions
on information forensics and security, vol. 3, no. 1, pp. 101-117, 2008.
[10] T. M. Shashidhar and K. B. Ramesh, “Reviewing the effectivity factor in existing techniques of image forensics,”
International Journal of Electrical and Computer Engineering (IJECE), vol. 7, no. 6, pp. 3558-3569, 2017.
[11] K. H. Rhee, “Forensic Detection Using Bit-Planes Slicing of Median Filtering Image,” IEEE Access, vol. 7,
pp. 92586-92597, 2019.
[12] J. Xiao, S. Li and Q. Xu, “Video-Based Evidence Analysis and Extraction in Digital Forensic Investigation,” IEEE
Access, vol. 7, pp. 55432-55442, 2019.
[13] Z. Ma, et al., “Shoe-Print Image Retrieval with Multi-Part Weighted CNN,” IEEE Access, vol. 7, pp. 59728-59736,
2019.
[14] H. Han, J. Li, A. K. Jain, S. Shan and X. Chen, “Tattoo Image Search at Scale: Joint Detection and Compact
Representation Learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 10,
pp. 2333-2348, 2019.
[15] R. Sanchez-Reillo, J. Liu-Jimenez and R. Blanco-Gonzalo, “Forensic Validation of Biometrics Using Dynamic
Handwritten Signatures,” IEEE Access, vol. 6, pp. 34149-34157, 2018.
[16] T. Neubert, A. Makrushin, M. Hildebrandt, C. Kraetzer and J. Dittmann, “Extended StirTrace benchmarking of
biometric and forensic qualities of morphed face images,” IET Biometrics, vol. 7, no. 4, pp. 325-332, 2018.
[17] Y. Guo, X. Cao, W. Zhang and R. Wang, “Fake Colorized Image Detection,” IEEE Transactions on Information
Forensics and Security, vol. 13, no. 8, pp. 1932-1944, Aug. 2018.
[18] B. Bayar and M. C. Stamm, “Constrained Convolutional Neural Networks: A New Approach towards General
Purpose Image Manipulation Detection,” IEEE Transactions on Information Forensics and Security, vol. 13,
no. 11, pp. 2691-2706, Nov. 2018.
[19] F. Hao, L. Yang, G. Yang, N. Liu and Z. Liu, “RFPIQM: Ridge-Based Forensic Palmprint Image Quality
Measurement,” IEEE Access, vol. 6, pp. 62076-62088, 2018.
[20] W. Shan, Y. Yi, J. Qiu and A. Yin, “Robust Median Filtering Forensics Using Image Deblocking and Filtered
Residual Fusion,” IEEE Access, vol. 7, pp. 17174-17183, 2019.
[21] G. Singh and K. Singh, “Counter JPEG Anti-Forensic Approach Based on the Second-Order Statistical Analysis,”
IEEE Transactions on Information Forensics and Security, vol. 14, no. 5, pp. 1194-1209, May 2019.
[22] C. G. Zeinstra, R. N. J. Veldhuis, L. J. Spreeuwers, A. C. C. Ruifrok and D. Meuwly, “ForenFace: a unique
annotated forensic facial image dataset and toolset,” IET Biometrics, vol. 6, no. 6, pp. 487-494, 2017.
[23] C. Chen, J. Ni, Z. Shen and Y. Q. Shi, “Blind Forensics of Successive Geometric Transformations in Digital
Images Using Spectral Method: Theory and Applications,” IEEE Transactions on Image Processing, vol. 26, no. 6,
pp. 2811-2824, June 2017.
[24] D. Kim, H. Jang, S. Mun, S. Choi and H. Lee, “Median Filtered Image Restoration and Anti-Forensics Using
Adversarial Networks,” IEEE Signal Processing Letters, vol. 25, no. 2, pp. 278-282, Feb. 2018.
[25] C. van Dam, R. Veldhuis and L. Spreeuwers, “Face reconstruction from image sequences for forensic face
comparison,” in IET Biometrics, vol. 5, no. 2, pp. 140-146, 2016.
[26] C. Pun, C. Yan and X. Yuan, “Image Alignment-Based Multi-Region Matching for Object-Level Tampering
Detection,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 2, pp. 377-391, Feb. 2017.
[27] P. Korus and J. Huang, “Multi-Scale Fusion for Improved Localization of Malicious Tampering in Digital Images,”
IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1312-1326, March 2016.
[28] V. Conotter, P. Comesaña and F. Pérez-González, “Forensic Detection of Processing Operator Chains: Recovering
the History of Filtered JPEG Images,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 11,
pp. 2257-2269, Nov. 2015.
[29] S. Sarreshtedari and M. A. Akhaee, “A Source-Channel Coding Approach to Digital Image Protection and
Self-Recovery,” IEEE Transactions on Image Processing, vol. 24, no. 7, pp. 2266-2277, July 2015.
[30] W. Fan, K. Wang, F. Cayre and Z. Xiong, “Median Filtered Image Quality Enhancement and Anti-Forensics
via Variational Deconvolution,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 5,
pp. 1076-1091, May 2015.
[31] T. M. Shashidhar and K. B. Ramesh, “An efficient computational approach to balance the trade-off between image
forensics and perceptual image quality,” International Journal of Electrical and Computer Engineering (IJECE),
vol. 9, no. 5, pp. 3474-3479, 2019.
[32] P. Ferrara, T. Bianchi, A. De Rosa, and A. Piva, “Image Forgery Localization via Fine-Grained Analysis of CFA
Artifacts,” IEEE transactions on information forensics and security, vol. 7, no. 5, October 2012.
[33] J. G. Han, T. H. Park, Y. H. Moon, K. Eom, “Efficient Markov feature extraction method for image splicing
detection using maximization and threshold expansion,” Journal of Electronic Imaging, vol. 25, no. 2, 2016.
BIOGRAPHIES OF AUTHORS
Shashidhar T. M. is a research scholar at Visvesvaraya Technological University, Belagavi,
Karnataka, India, currently pursuing a PhD under RVCE (VTU), Karnataka, India. He has around
11 years of teaching experience. His research area is Signal Processing. He completed his
M.Tech (Digital Electronics and Communication Systems) at PESIT, Bengaluru, Karnataka, India,
and his B.E. (Electronics and Communication) at SJMIT, Chitradurga, Karnataka, India.
Dr. K. B. Ramesh is an associate professor and Head of the Department of Electronics and
Instrumentation Engineering, R V College of Engineering, Bengaluru, Karnataka, India. He
completed his PhD in Computer Science and Engineering at Kuvempu University. He has around
twenty-three (23) years of teaching experience in E&I Engineering. His major research area is
Computer Science and Engineering, and his minor research area is Biomedical Engineering/
Bioinformatics.