The document presents a Holistic Privacy Impact Assessment Framework (H-PIA) for evaluating video privacy filtering technologies. The framework is based on UI-REF, a normative ethno-methodological framework for Privacy-by-Co-Design, and integrates holistic key performance indicators (KPIs) comprising both objective and subjective evaluation metrics. It assesses the balance of privacy protection and security assurance for privacy filtering solutions negotiated through user-centered co-design. Both objective automated tests and a subjective user study are conducted to evaluate filtering performance against criteria such as efficacy, consistency, disambiguity, and intelligibility. The results confirm that the framework enables optimally balanced privacy filtering while retaining the necessary information.
InfoSec Technology Management of User Space and Services Through Security Thr... (ecarrow)
This paper demonstrates the need to clearly define and segregate the various user space environments in an enterprise network infrastructure, with controls ranging from administrative to technical, while still providing the services needed to support the work space environment and the administrative requirements of an enterprise system. The standards assumed are industry practices and associated regulatory requirements, with implementations as they apply in their various contexts. This is a high-level approach to understanding the significance and application of an effective, secure network infrastructure. The focus is on end-user needs and the services that support them. Conceptually, a user space is a virtual area allocated to identified end-user needs, with specific services supporting those needs by creating a virtual playground. To manage risk, the concept of a "security threat gateway" (STG) isolates and secures each user space with its associated services. Emphasis is placed on the functional managerial process and application of the STG, which safeguards one user space from another while facilitating use of the services needed to perform the organization's operational tasks. When users' needs and the associated components are clearly identified, anyone can use this model as a template to guide them in creating an effective strategy for their own network security. The approach is practical in orientation and application, focuses on a high-level perspective, and assumes the reader already has the low-level technical background for a tactical implementation that mitigates risk to the enterprise network infrastructure.
WP82 Physical Security in Mission Critical Facilities (SE_NAM_Training)
Physical security systems use various methods to identify individuals and control access to secure areas in data centers. These systems balance reliability, cost, and risk. Common identification methods include cards, tokens, passwords, and biometrics that verify "what you have, know, or are". Effective security combines multiple identification layers with concentric zones of increasing protection depth for sensitive areas like computer rooms and racks. Physical security is critical to reducing data center downtime from human errors or threats.
The document discusses StoneGate's Intrusion Prevention System (IPS) and how it provides flexible and precise detection of internal and external threats to protect corporate networks and information flow. StoneGate IPS integrates with the company's firewall and VPN solutions to offer unified threat management. It can detect threats from vulnerable applications and operating systems and stop harmful traffic through both monitoring and prevention modes. Centralized management of StoneGate IPS simplifies threat handling and ensures compliance with various regulations.
The document discusses the convergence of physical and logical security infrastructure. It notes that as physical security systems like access control become network-enabled, they can be vulnerable to hacking and compromise the entire organization. The document outlines a solution of building cross-training between physical and IT security teams to increase awareness and complete small collaborative projects. Key lessons discuss managing expectations, having central change management, balancing both security teams, and integrating process models for collaboration. The document emphasizes that knowledge sharing between teams is important for effective security.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel... (IJERD Editor)
The document discusses privacy issues in cloud computing. It defines privacy and describes privacy enhancing technologies (PETs) that can help protect privacy, such as pseudonymization and federated identity management. It also discusses privacy by design, which aims to embed privacy protections directly into technologies. Ensuring privacy in cloud computing requires measures like access controls, protecting against unauthorized access/copying of data, and specifying privacy controls in agreements. Overall privacy is a major challenge for cloud computing due to issues of data governance, security, and differing international regulations.
IT leaders are increasingly adopting hybrid cloud solutions to gain benefits like flexibility, innovation and cost savings while also addressing security concerns. A survey found that nearly half of organizations use a hybrid cloud approach and security technologies can help mitigate risks when applications and infrastructure span internal and external services. Experts recommend integrating existing security solutions and establishing processes when collaborating with cloud providers for a comprehensive security strategy across hybrid cloud environments.
VSD Infotech is an IT services company specializing in information security, network management, and data center solutions. They offer a range of services including: (1) implementing Information Security Management Systems to help organizations securely manage sensitive data according to ISO/IEC 27001 standards, (2) network security assessments and testing, and (3) consulting services to help businesses design and implement secure systems and best practices. They also provide networking solutions and products from technology partners to optimize customer networks.
This whitepaper discusses some common challenges and myths about data security when outsourcing engineering and looks at some industry best practices to address these concerns.
Puppetnets and Botnets: Information Technology Vulnerability Exploits (ecarrow)
This paper identifies the dominant trends in information security threats to the Internet from 2001 to 2007. It is intended to provide an understanding of the new emphasis on attacks carried out through robotic networks, and of how some users and organizations are already preparing a response using innovative visualization techniques in conjunction with traditional methods. The research focuses on the basic enterprise-level services commonly provided by corporations, e.g., e-mail, browser applications, wireless and mobile devices, IP telephony, and online banking. It first reviews the network infrastructure common to most corporate organizations, assuming basic enterprise components and functionality in response to current security threats. The second emphasis is the impact of malicious robotic networks (botnets and puppetnets) on the corporate network infrastructure, and how to address these threats with new and innovative techniques. The approach is pragmatic in application and focuses on assimilating existing data to present a functional rationale of attacks to anticipate and prepare for in the coming year.
The document summarizes the components, purpose, and strategies of a security policy for T.Z.A.S.P. Mandal's Pragati College. It discusses the need for security policies to protect data, networks, and computing resources. The key components outlined include access policies, privacy policies, and guidelines for acceptable use, purchasing, authentication, availability, and violation reporting. Strategies discussed are host security, user authentication, password protection, firewalls, demilitarized zones, and encryption. The purpose is to inform users of security requirements and provide a baseline for compliance.
International approaches to critical information infrastructure protection ... (owaspindia)
This document discusses international cooperation on critical infrastructure protection (CIP) and trustworthy information and communications technology (ICT). It describes the BIC project, which aims to identify challenges to EU and global trust and security, and facilitate collaboration between organizations. Key issues discussed include monitoring critical infrastructure ecosystems, detection of anomalies, secure notification systems, metrics for quantifying protection, and response strategies. International cooperation is needed for technologies, threat information sharing, and data management standards regarding acquisition, dissemination, storage, and access.
Unique Security Challenges in the Datacenter Demand Innovative Solutions (Juniper Networks)
The ability to leverage attacker intelligence across the infrastructure can improve security and simplify enforcement. Find out how to secure the network at campus edge, data center edge and data center core.
The document describes how the Unisys Stealth Solution can help healthcare organizations securely share private health information, save costs by simplifying networks while increasing security, solve the leading cause of HIPAA data breaches by preventing theft of laptops containing private data, and allow medical personnel to access patient data during emergencies even when away from secure facilities. The Stealth Solution uses strong encryption, message shredding, and controlled access through user credentials to protect data in motion across networks and remote access.
This document provides a 7-step guide for building security in the cloud from the ground up. It discusses starting security planning early, identifying vulnerabilities for cloud services, protecting data during transmission and storage, securing the cloud platform, extending trust across multiple cloud providers, choosing a secure cloud service provider, and learning more from Intel resources. The document aims to help readers strengthen data and platform protection when using cloud computing.
20111010 The National Security Framework of Spain for Guide Share Europe, in ... (Miguel A. Amutio)
Presentation about the National Security Framework of Spain for Guide Share Europe, in Madrid in October 2011.
The National Security Framework (NSF) of Spain is in the service of the right of citizens to interact electronically with their government. The NSF establishes the security policy in the scope of eGovernment (Law 11/2007) and consists of basic principles and minimum requirements to allow an adequate protection of information. It is a legal text (Royal Decree 3/2010).
The NSF introduces common elements and concepts that provide guidance to public administrations and that facilitate the communication of information security requirements to Industry. Recommendations of the OECD, EU, standards and experiences from other countries were considered.
This National Security Framework, as well as the National Interoperability Framework, is the result of a collective effort of all public administrations and also of the Industry through their associations. Both of them are part of the well known effort of Spain to develop the Information Society and eGovernment.
The document provides guidelines for data security practices. It summarizes revisions made in three areas: web application security, mobile device security, and data breach incident response. The full guidelines are divided into five parts addressing administrative, technical, physical, incident response, and breach notification safeguards. Companies can use the checklist format to evaluate their own risk levels and adopt appropriate practices. The guidelines aim to help protect privacy by establishing a foundation of responsible data security.
A STUDY FOR THE EFFECT OF THE EMPHATICNESS AND LANGUAGE AND DIALECT FOR VOIC... (sipij)
This study analyzes voice onset time (VOT) values for four stop consonants in Modern Standard Arabic: /d/, /t/, /dˤ/, and /tˤ/. The student researcher builds a database of carrier words with a CV-CV-CV syllable structure and computes VOT values. The main findings are that VOT values for the emphatic sounds (/dˤ/ and /tˤ/) are consistently lower than for their non-emphatic counterparts (/d/ and /t/), and that VOT can distinguish Arabic dialects. The study aims to address the lack of research analyzing phonetic features of Arabic and to help automatic speech and language processing systems.
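VOT is simply the interval between the stop's release burst and the onset of voicing. A minimal sketch of the computation over annotated times (the sample annotations below are illustrative, not the study's data):

```python
# Sketch: computing voice onset time (VOT) from annotated burst and
# voicing-onset instants, as in a carrier-word database.
# The sample times below are made up for illustration.

def vot_ms(burst_s: float, voicing_onset_s: float) -> float:
    """VOT in milliseconds: voicing onset minus stop release burst."""
    return (voicing_onset_s - burst_s) * 1000.0

# Hypothetical annotations (seconds) for a plain vs. an emphatic stop.
tokens = {
    "/t/":  (0.120, 0.155),   # plain voiceless: longer positive VOT
    "/tˤ/": (0.310, 0.328),   # emphatic: typically shorter VOT
}

for phone, (burst, onset) in tokens.items():
    print(phone, round(vot_ms(burst, onset), 1), "ms")
```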
Analog signal processing approach for coarse and fine depth estimation (sipij)
Imaging and image sensors is a field that is continuously evolving, with new products coming to market every day. Some have very severe size, weight, and power constraints, while other devices must handle very high computational loads; some must meet both conditions simultaneously. Current imaging architectures and digital image processing solutions will not be able to meet these ever-increasing demands, so novel imaging architectures and image processing solutions are needed. In this work we propose analog signal processing as a solution. The analog processor is not suggested as a replacement for a digital processor; rather, it is used as an augmentation device that works in parallel with the digital processor, making the system faster and more efficient. To show the merits of analog processing, two stereo correspondence algorithms are implemented. We propose novel modifications to the algorithms and new imaging architectures which significantly reduce the computation time.
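The abstract does not specify which correspondence algorithms were used; as a point of reference, a generic digital baseline for the problem class is sum-of-absolute-differences (SAD) block matching along a scanline, sketched below. This is an illustrative stand-in, not the authors' implementation.

```python
# Illustrative digital baseline for stereo correspondence: per-pixel
# disparity by sum-of-absolute-differences (SAD) block matching along a
# scanline of rectified grayscale images. Not the paper's algorithm.
import numpy as np

def sad_disparity(left, right, max_disp=8, half=1):
    """Return an integer disparity map for rectified H x W images."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            # Cost of matching the left patch at each candidate shift d.
            costs = [
                np.abs(patch - right[y-half:y+half+1,
                                     x-d-half:x-d+half+1].astype(np.int32)).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

The nested loop makes the cost of this computation obvious, which is exactly the burden the paper's parallel analog augmentation targets.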
A Comparative Study of DOA Estimation Algorithms with Application to Tracking... (sipij)
Tracking the direction of arrival (DOA) of a moving source is an important and challenging task in navigation, RADAR, SONAR, wireless sensor networks (WSNs), and related fields. Tracking starts from an estimate of the DOA; taking the estimated DOA as an initial value, a Kalman filter (KF) then tracks the moving source based on the motion model that governs its movement. This comparative study analyzes the significance of non-coherent, narrowband DOA estimation algorithms from the perspective of tracking. The algorithms considered are Multiple Signal Classification (MUSIC), Root-MUSIC, and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT). They are compared in terms of optimality with respect to signal-to-noise ratio (SNR), number of snapshots, number of antenna elements, and computational complexity, yielding an optimum DOA estimate. This optimum estimate is taken as the initial value for the Kalman filter, which then tracks it.
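The tracking stage described above can be sketched as a scalar Kalman filter with a constant-angular-velocity motion model, initialized from the DOA estimate (e.g. from MUSIC or ESPRIT). The noise levels below are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the tracking stage: a Kalman filter over the state
# [angle, angular rate], initialized from a DOA estimate. Process and
# measurement noise levels (q, r) are illustrative, not from the paper.
import numpy as np

def kalman_track_doa(measurements, theta0, dt=1.0, q=1e-3, r=0.5):
    """Track a DOA (degrees) from noisy per-snapshot estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
    H = np.array([[1.0, 0.0]])                 # we observe the angle only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])        # process noise covariance
    R = np.array([[r]])                        # measurement noise
    x = np.array([theta0, 0.0])                # initialized from DOA estimate
    P = np.eye(2)
    track = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()   # update
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return track
```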
Enhancement performance of road recognition system of autonomous robots in sh... (sipij)
This document summarizes a research paper that proposed an algorithm to improve road recognition for autonomous vehicles in shadow scenarios. The researchers conducted experiments to test their algorithm's performance on key metrics like true positive rate and error rate. Their algorithm first converted images to HSV color space to detect shadows, then used normalized difference index and morphological operations to eliminate shadow effects before segmentation and classification. Test results showed their algorithm enhanced road recognition in the presence of shadows, advancing autonomous vehicle navigation capabilities.
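The first step of the described pipeline, HSV conversion followed by shadow flagging, can be sketched as below. The thresholds are hypothetical stand-ins for the paper's tuned values, and the sketch uses pure NumPy rather than an imaging library.

```python
# Sketch of the first stage described above: convert RGB to HSV-style
# channels and flag shadow pixels. Thresholds are hypothetical, not the
# paper's tuned values.
import numpy as np

def hsv_value_saturation(rgb):
    """Return (V, S) channels for an H x W x 3 float image in [0, 1]."""
    v = rgb.max(axis=2)
    c = v - rgb.min(axis=2)                    # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0.0)
    return v, s

def shadow_mask(rgb, v_max=0.3, s_min=0.3):
    """Shadows: dark (low value) but relatively saturated regions."""
    v, s = hsv_value_saturation(rgb)
    return (v < v_max) & (s > s_min)
```

The rationale for the value/saturation pair is that shadowed road retains its chromatic content while losing brightness, so low V with non-trivial S separates shadow from genuinely dark objects.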
Modified approach to transform arc from text to linear form text a preproces... (sipij)
Arc-form-text is an artistic text that is quite common in documents such as certificates, advertisements, and historical documents. OCRs fail to read such arc-form-text, so it must be transformed to linear-form-text at the preprocessing stage. In this paper, we present a modification to an existing transformation model for better readability by OCRs. The method takes segmented arc-form-text as input. Initially, two concentric ellipses are approximated to enclose the arc-form-text; the modified transformation model then transforms the text from arc form to linear form. The proposed method is applied to several upper-semicircular arc-form-text inputs, and the readability of the transformed text is analyzed with an OCR.
CONTRAST ENHANCEMENT AND BRIGHTNESS PRESERVATION USING MULTIDECOMPOSITION HIS... (sipij)
Histogram equalization (HE) has been an essential addition to the image enhancement world. Enhancement techniques such as Classical Histogram Equalization (CHE), Adaptive Histogram Equalization (AHE), Bi-Histogram Equalization (BHE), and Recursive Mean Separate Histogram Equalization (RMSHE) enhance contrast, but brightness is not well preserved, which gives an unpleasant look to the final image. We therefore introduce a novel technique, Multi-Decomposition Histogram Equalization (MDHE), to eliminate the drawbacks of the earlier methods. In MDHE, we decompose the input image using a unique logic, apply CHE to each of the sub-images, and finally interpolate them back in the correct order. The final image after MDHE gives the best results in terms of contrast enhancement and brightness preservation compared with all the other techniques mentioned above. We calculate parameters such as PSNR, SNR, RMSE, and MSE for every technique; our results are supported by bar graphs, histograms, and the parameter calculations at the end.
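The overall shape of the MDHE scheme, decompose, equalize each sub-image with CHE, then reassemble, can be sketched as follows. The quadrant split below is an illustrative assumption; the paper's "unique" decomposition logic is not specified in the abstract.

```python
# Sketch of the MDHE scheme's structure: split the image into sub-images,
# apply classical histogram equalization (CHE) to each, and reassemble.
# The quadrant split is a hypothetical stand-in for the paper's own
# decomposition logic.
import numpy as np

def che(img):
    """Classical histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def mdhe_quadrants(img):
    """Equalize four quadrants independently, then stitch them back."""
    h, w = img.shape
    out = img.copy()
    for ys in (slice(0, h // 2), slice(h // 2, h)):
        for xs in (slice(0, w // 2), slice(w // 2, w)):
            out[ys, xs] = che(img[ys, xs])
    return out
```

Equalizing each sub-image against its own histogram is what lets a decomposition-based method keep local brightness closer to the original than a single global equalization would.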
A NOVEL PROBABILISTIC BASED IMAGE SEGMENTATION MODEL FOR REALTIME HUMAN ACTIV... (sipij)
Automatic human activity detection is a difficult image segmentation task due to variations in the size, type, shape, and location of objects. In traditional probabilistic graphical segmentation models, intra- and inter-region segments may affect overall segmentation accuracy. Moreover, both directed and undirected graphical models, such as the Markov model and the conditional random field, have limitations for human activity prediction and heterogeneous relationships. In this paper, we study and propose a natural solution for automatic human activity segmentation using an enhanced probabilistic chain graphical model. The system has three main phases: activity pre-processing, iterative threshold-based image enhancement, and a chain graph segmentation algorithm. Experimental results show that the proposed system efficiently detects human activities at different levels of the action datasets.
Development and Hardware Implementation of an Efficient Algorithm for Cloud D... (sipij)
This document discusses the development and hardware implementation of an efficient algorithm for cloud detection from satellite images. The algorithm uses an adaptive thresholding approach to segment clouds from background pixels in satellite imagery. It then determines the position of the segmented clouds to calculate cloud coverage percentages. The algorithm was tested on satellite images from Spot4 and Landsat archives. It was implemented on a TMS320C6713 DSK processor using Code Composer Studio and achieved accurate cloud detection and coverage calculation on images with resolutions up to 3600x3000 pixels.
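The segmentation-plus-coverage pipeline described above can be sketched as follows, using Otsu's method as a stand-in for the paper's adaptive threshold (which the abstract does not specify).

```python
# Sketch of the described pipeline: adaptively threshold a grayscale
# satellite frame to segment bright cloud pixels, then report coverage.
# Otsu's method is an illustrative stand-in for the paper's threshold.
import numpy as np

def otsu_threshold(img):
    """Otsu's global threshold for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = p.cumsum()                    # class-0 probability
    mu = (p * np.arange(256)).cumsum()    # class-0 mean times omega
    mu_t = mu[-1]                         # global mean
    denom = omega * (1 - omega)
    denom[denom == 0] = np.nan            # ignore degenerate splits
    sigma_b = (mu_t * omega - mu) ** 2 / denom   # between-class variance
    return int(np.nanargmax(sigma_b))

def cloud_coverage(img):
    """Percent of pixels classified as cloud (above the threshold)."""
    t = otsu_threshold(img)
    return 100.0 * np.mean(img > t)
```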
Implementation of features dynamic tracking filter to tracing pupils (sipij)
The objective of this paper is to present the implementation of an artificial vision filter capable of tracking a person's pupils in a video sequence. Several algorithms can achieve this objective; in this case, features dynamic tracking was selected, a method that traces patterns between the frames that form a video scene. This type of processing offers the advantage of eliminating occlusion problems for the patterns of interest. The implementation was tested on a database of videos of people with different physical characteristics of the eyes. An additional goal is to obtain information about the captured eye movements and the pupil coordinates for each movement; these data could help studies related to eye health.
SENSORLESS VECTOR CONTROL OF BLDC USING EXTENDED KALMAN FILTER (sipij)
This paper deals with the implementation of a vector control technique for the brushless DC (BLDC) motor. Generally, tachogenerators, resolvers, or incremental encoders are used to detect speed. These sensors require careful mounting and alignment, and special attention to electrical noise. A speed sensor also needs additional space for mounting and maintenance, increasing the cost and size of the drive system. These problems are eliminated by sensorless vector control using an Extended Kalman Filter (EKF) and a back-EMF method for position sensing. Using the EKF and back-EMF methods, sensorless vector control of the BLDC motor is implemented and simulated in MATLAB/SIMULINK, and a hardware kit is implemented.
EFFECTIVE PROCESSING AND ANALYSIS OF RADIOTHERAPY IMAGES (sipij)
The a-Si Electronic Portal Imaging Device (EPID) is an important tool to verify the location of the radiation therapy beam with respect to the patient anatomy. But Electronic Portal Images (EPI) suffer from low contrast. In order to have better in-treatment images to extract relevant features of the anatomy, image processing tools need to be integrated into radiology systems. The goal of this research work is to inspect several image processing techniques for contrast enhancement of electronic portal images and gauge parameters like mean, variance, standard deviation, MSE, RMSE, entropy, PSNR, AMBE, normalised cross correlation, average difference, structural content (SC), maximum difference and normalised absolute error (NAE) to study their visual quality improvement. In addition, by adding salt-and-pepper noise, Gaussian noise and motion blur, we calculate error measurement parameters like the Universal Image Quality (UIQ) index, Enhancement Measurement Error (EME), Pearson correlation coefficient, SNR and Mean Absolute Error (MAE). The improved results point out that image processing tools need to be incorporated into radiology for accurate delivery of dose.
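Several of the metrics named above have standard definitions; a short sketch for MSE, RMSE, and PSNR on an 8-bit image pair illustrates the evaluation step (not the enhancement techniques themselves):

```python
# Standard image quality metrics used in evaluations like the one above.
# This illustrates only the measurement step, not the enhancement methods.
import numpy as np

def mse(ref, test):
    """Mean squared error between two same-shaped 8-bit images."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def rmse(ref, test):
    """Root mean squared error."""
    return mse(ref, test) ** 0.5

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```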
“FIELD PROGRAMMABLE DSP ARRAYS” - A NOVEL RECONFIGURABLE ARCHITECTURE FOR EFF... (sipij)
Digital signal processing functions are widely used in real-time, high-speed applications. These functions are generally implemented either on ASICs, with their inflexibility, or on FPGAs, with the bottlenecks of a relatively smaller utilization factor or lower speed compared with an ASIC. The proposed reconfigurable DSP processor is redolent of an FPGA, but with basic fixed Common Modules (CMs) (such as adders, subtractors, multipliers, scaling units, and shifters) instead of CLBs. This paper introduces the development of a reconfigurable DSP processor that integrates different filter and transform functions; switching between DSP functions is achieved by reconfiguring the interconnections between CMs. The proposed reconfigurable architecture has been validated on a Virtex5 FPGA and provides a sufficient amount of flexibility, parallelism, and scalability.
A study of a modified histogram based fast enhancement algorithm (mhbfe) (sipij)
Image enhancement is one of the most important issues in low-level image processing. The goal of image enhancement is to improve the quality of an image such that the enhanced image is better than the original. Conventional histogram equalization (HE) is one of the algorithms most used for contrast enhancement of medical images, due to its simplicity and effectiveness. However, it causes an unnatural look and visual artefacts, as it tends to change the brightness of an image. The Histogram Based Fast Enhancement Algorithm (HBFE) tries to enhance CT head images, improving on the water-washed effect caused by conventional histogram equalization algorithms with less complexity. It depends on using the full range of gray levels to enhance the soft tissues while ignoring other image details. We present a modification of this algorithm that makes it valid for most CT image types while keeping the same degree of simplicity. Experimental results show that the Modified Histogram Based Fast Enhancement Algorithm (MHBFE) improves the results in terms of PSNR, AMBE, and entropy. We also use statistical analysis to confirm that the improvement from the proposed modification generalizes: analysis of variance (ANOVA) is first used to test whether or not all the results have the same average, and we then establish the significance of the modification's improvement.
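The ANOVA step described above is a one-way F test of whether several groups of quality scores share a mean. A self-contained sketch (the sample scores below are made up for illustration):

```python
# Sketch of the ANOVA step: one-way F statistic testing whether groups
# of quality scores (e.g. PSNR per method) share a common mean.
# The sample data below are hypothetical, not the paper's results.

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of sample lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means versus the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: samples versus their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical PSNR scores for three enhancement methods.
scores = [[28.1, 28.4, 27.9], [30.2, 30.5, 30.1], [25.0, 25.3, 24.8]]
print(round(one_way_anova_f(scores), 2))
```

A large F relative to the F distribution's critical value rejects the hypothesis that all methods perform the same on average, which is exactly the question the study puts to its results.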
This document provides a 7-step guide for building security in the cloud from the ground up. It discusses starting security planning early, identifying vulnerabilities for cloud services, protecting data during transmission and storage, securing the cloud platform, extending trust across multiple cloud providers, choosing a secure cloud service provider, and learning more from Intel resources. The document aims to help readers strengthen data and platform protection when using cloud computing.
20111010 The National Security Framework of Spain for Guide Share Europe, in ...Miguel A. Amutio
Presentation about the National Security Framework of Spain for Guide Share Europe, in Madrid in October 2011.
The National Security Framework (NSF) of Spain is in the service of the right of citizens to interact electronically with their government. The NSF establishes the security policy in the scope of eGovernment (Law 11/2007) and consists of basic principles and minimum requirements to allow an adequate protection of information. It is a legal text (Royal Decree 3/2010).
The NSF introduces common elements and concepts that provide guidance to public administrations and that facilitate the communication of information security requirements to Industry. Recommendations of the OECD, EU, standards and experiences from other countries were considered.
This National Security Framework, as well as the National Interoperability Framework, is the result of a collective effort of all public administrations and also of the Industry through their associations. Both of them are part of the well known effort of Spain to develop the Information Society and eGovernment.
The document provides guidelines for data security practices. It summarizes revisions made in three areas: web application security, mobile device security, and data breach incident response. The full guidelines are divided into five parts addressing administrative, technical, physical, incident response, and breach notification safeguards. Companies can use the checklist format to evaluate their own risk levels and adopt appropriate practices. The guidelines aim to help protect privacy by establishing a foundation of responsible data security.
A STUDY FOR THE EFFECT OF THE EMPHATICNESS AND LANGUAGE AND DIALECT FOR VOIC...sipij
This study analyzes voice onset time (VOT) values for four stop consonants in Modern Standard Arabic: /d/, /t/, /dˤ/, and /tˤ/. The student researcher builds a database of carrier words with a CV-CV-CV syllable structure and computes VOT values. The main findings are that VOT values for the emphatic sounds (/dˤ/ and /tˤ/) are consistently lower than for their non-emphatic counterparts (/d/ and /t/), and that VOT can distinguish Arabic dialects. The study aims to address the lack of research analyzing phonetic features of Arabic and to help automatic speech and language processing systems.
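As a concrete illustration of the measurement itself, VOT is simply the interval from the stop's burst release to the onset of voicing. A minimal Python sketch, where the annotation times are hypothetical examples and not values from the study's database:

```python
# Sketch: computing voice onset time (VOT) from hand-labeled event times.
# The annotation times below are hypothetical, not from the study's database.

def vot_ms(burst_release_s, voicing_onset_s):
    """VOT = time from stop burst release to onset of voicing, in ms."""
    return (voicing_onset_s - burst_release_s) * 1000.0

# Hypothetical tokens: emphatic stops tend to show lower VOT than plain ones.
tokens = {
    "plain /t/":    vot_ms(0.120, 0.155),   # 35 ms
    "emphatic /t/": vot_ms(0.120, 0.142),   # 22 ms
}
```

In practice the two event times would come from acoustic annotations (e.g. a burst transient and the first periodic pitch pulse); only the subtraction is shown here.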
Analog signal processing approach for coarse and fine depth estimationsipij
Imaging and image sensors form a field that is continuously evolving, with new products coming to
market every day. Some of these have very severe size, weight and power constraints, whereas other
devices have to handle very high computational loads; some must meet both conditions
simultaneously. Current imaging architectures and digital image processing solutions will not be able to
meet these ever increasing demands, so novel imaging architectures and image
processing solutions are needed. In this work we propose analog signal processing as a
solution to this problem. The analog processor is not suggested as a replacement for a digital processor;
rather, it is used as an augmentation device that works in parallel with the digital processor, making the
system faster and more efficient. To show the merits of analog processing, two stereo
correspondence algorithms are implemented. We propose novel modifications to the algorithms and new
imaging architectures which significantly reduce the computation time.
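For reference, a classic stereo-correspondence kernel of the kind targeted here is sum-of-absolute-differences (SAD) block matching. The sketch below is a minimal digital baseline in Python, not the paper's proposed analog implementation; the block size, disparity range, and synthetic image pair are illustrative assumptions:

```python
import numpy as np

# Sketch: sum-of-absolute-differences (SAD) block matching, a typical
# stereo-correspondence kernel; a minimal digital reference, not the
# proposed analog implementation.

def sad_disparity(left, right, block=3, max_disp=4):
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            # Cost of matching the left patch against shifted right patches.
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right image is the left image shifted by 2 pixels,
# so the recovered disparity should be 2 in the interior.
left = np.tile(np.arange(16, dtype=np.float64), (8, 1))
right = np.roll(left, -2, axis=1)
disp = sad_disparity(left, right)
```

The quadruple loop over pixels, disparities, and patch elements is exactly the workload an analog or parallel processor would aim to accelerate.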
A Comparative Study of DOA Estimation Algorithms with Application to Tracking...sipij
Tracking the direction of arrival (DOA) of a moving source is an important and challenging
task in navigation, RADAR, SONAR, Wireless Sensor Networks (WSNs), etc. Tracking starts
from an initial DOA estimate; taking that estimate as the initial value, the Kalman
Filter (KF) algorithm tracks the moving source based on the motion model that governs the
motion of the source. This comparative study analyzes the significance of non-coherent,
narrowband DOA estimation algorithms with respect to tracking. The DOA
estimation algorithms Multiple Signal Classification (MUSIC), Root-MUSIC and Estimation of Signal
Parameters via Rotational Invariance Technique (ESPRIT) are considered for the study. A
comparison in terms of optimality with respect to Signal to Noise Ratio (SNR), number of snapshots,
number of antenna elements used, and computational complexity is drawn between the chosen algorithms,
resulting in an optimum DOA estimate. This optimum estimate is then taken as the initial value for the
Kalman filter tracking algorithm.
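Of the compared estimators, MUSIC is the easiest to sketch: it projects candidate steering vectors onto the noise subspace of the sample covariance matrix and peaks where the projection vanishes. A minimal Python illustration for a uniform linear array; the array geometry, single-source scenario, and noise level below are illustrative assumptions, not the study's setup:

```python
import numpy as np

# Sketch: narrowband MUSIC pseudospectrum for a uniform linear array (ULA).
# The geometry and single-source scenario are illustrative assumptions.

def steering(theta_deg, m, spacing=0.5):
    """ULA steering vector, half-wavelength element spacing."""
    k = np.arange(m)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

def music_doa(snapshots, n_sources, grid=np.arange(-90, 91)):
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    w, v = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = v[:, :m - n_sources]             # noise subspace
    # Pseudospectrum peaks where the steering vector is orthogonal to En.
    p = [1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2 for t in grid]
    return grid[int(np.argmax(p))]

# One source at +20 degrees, 8 elements, 200 snapshots, light noise.
rng = np.random.default_rng(0)
m, n = 8, 200
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = np.outer(steering(20.0, m), s) + 0.01 * (rng.standard_normal((m, n))
                                             + 1j * rng.standard_normal((m, n)))
est = music_doa(x, 1)
```

The grid search over angles is what makes MUSIC more expensive than Root-MUSIC or ESPRIT, which is one axis of the computational-complexity comparison above.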
Enhancement performance of road recognition system of autonomous robots in sh...sipij
This document summarizes a research paper that proposed an algorithm to improve road recognition for autonomous vehicles in shadow scenarios. The researchers conducted experiments to test their algorithm's performance on key metrics like true positive rate and error rate. Their algorithm first converted images to HSV color space to detect shadows, then used normalized difference index and morphological operations to eliminate shadow effects before segmentation and classification. Test results showed their algorithm enhanced road recognition in the presence of shadows, advancing autonomous vehicle navigation capabilities.
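The shadow-detection stage described above starts by moving to HSV space, where shadowed road pixels show a low value (V) channel. A minimal Python sketch of that first stage; the fixed threshold and toy frame are illustrative assumptions, and the paper's normalized-difference-index and morphological steps are not reproduced here:

```python
import numpy as np

# Sketch: flagging shadow pixels as low-value (V) regions in HSV space,
# in the spirit of the paper's first stage; thresholds are illustrative.

def rgb_to_hsv_value(img):
    """The V channel of HSV is the per-pixel maximum over R, G, B."""
    return img.max(axis=-1)

def shadow_mask(img, v_thresh=0.35):
    v = rgb_to_hsv_value(img.astype(np.float64) / 255.0)
    return v < v_thresh

# Toy frame: left half sunlit road, right half shadowed road.
frame = np.zeros((4, 8, 3), dtype=np.uint8)
frame[:, :4] = 180          # sunlit
frame[:, 4:] = 40           # shadow
mask = shadow_mask(frame)
```

A real pipeline would follow this mask with morphological opening/closing to remove speckle before segmentation.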
Modified approach to transform arc from text to linear form text a preproces...sipij
Arc-form-text is an artistic text which is quite common in several documents such as certificates,
advertisements and historical documents. OCRs fail to read such arc-form-text, so it is necessary to
transform it to linear-form-text at the preprocessing stage. In this paper, we present a modification to
an existing transformation model for better readability by OCRs. The method takes the segmented
arc-form-text as input. Initially, two concentric ellipses are approximated to enclose the arc-form-text, and
the modified transformation model then transforms the text from arc form to linear form. The proposed method is
implemented on several upper semi-circular arc-form-text inputs and the readability of the transformed text
is analyzed with an OCR.
CONTRAST ENHANCEMENT AND BRIGHTNESS PRESERVATION USING MULTIDECOMPOSITION HIS...sipij
Histogram Equalization (HE) has been an essential addition to the image enhancement world. While
enhancement techniques like Classical Histogram Equalization (CHE), Adaptive Histogram Equalization
(AHE), Bi-Histogram Equalization (BHE) and Recursive Mean Separate Histogram Equalization (RMSHE)
enhance contrast, brightness is not well preserved, which gives an unpleasant look to the final
image. Thus, we introduce a novel technique, Multi-Decomposition Histogram Equalization
(MDHE), to eliminate the drawbacks of the earlier methods. In MDHE, we decompose the input
image using a unique logic, apply CHE to each of the sub-images, and finally interpolate them in the
correct order. The final image after MDHE gives the best results in terms of contrast enhancement and
brightness preservation compared to all the techniques mentioned above. We have calculated
parameters like PSNR, SNR, RMSE and MSE for every technique. Our results are well supported
by bar graphs, histograms and the parameter calculations at the end.
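The decompose-equalize-reassemble pattern that MDHE describes can be sketched in a few lines of Python. The simple 2x2 tiling below is a stand-in for the paper's (unspecified) decomposition logic, so this is an illustration of the pattern, not the published method:

```python
import numpy as np

# Sketch: decompose -> equalize each sub-image -> reassemble, in the
# spirit of MDHE; the 2x2 tiling is an illustrative stand-in for the
# paper's decomposition logic.

def equalize(tile, levels=256):
    hist = np.bincount(tile.ravel(), minlength=levels)
    cdf = hist.cumsum() / tile.size
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[tile]                      # classical HE via a lookup table

def mdhe_like(img):
    h, w = img.shape
    out = np.empty_like(img)
    for ys, xs in [(slice(0, h // 2), slice(0, w // 2)),
                   (slice(0, h // 2), slice(w // 2, w)),
                   (slice(h // 2, h), slice(0, w // 2)),
                   (slice(h // 2, h), slice(w // 2, w))]:
        out[ys, xs] = equalize(img[ys, xs])   # CHE per sub-image
    return out

rng = np.random.default_rng(1)
img = rng.integers(100, 140, size=(8, 8), dtype=np.uint8)  # low contrast
out = mdhe_like(img)
```

Equalizing each sub-image against its own histogram is what lets local brightness statistics survive better than a single global equalization.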
A NOVEL PROBABILISTIC BASED IMAGE SEGMENTATION MODEL FOR REALTIME HUMAN ACTIV...sipij
Automatic human activity detection is one of the difficult tasks in image segmentation due to
variations in the size, type, shape and location of objects. In traditional probabilistic graphical
segmentation models, intra- and inter-region segments may affect the overall segmentation accuracy. Also,
both directed and undirected graphical models, such as the Markov model and the conditional random field, have
limitations in human activity prediction and in modeling heterogeneous relationships. In this paper, we
study and propose a natural solution for automatic human activity segmentation using an enhanced
probabilistic chain graphical model. The system has three main phases: activity pre-processing,
iterative threshold based image enhancement, and a chain graph segmentation algorithm. Experimental
results show that the proposed system efficiently detects human activities at different levels of the action
datasets.
Development and Hardware Implementation of an Efficient Algorithm for Cloud D...sipij
This document discusses the development and hardware implementation of an efficient algorithm for cloud detection from satellite images. The algorithm uses an adaptive thresholding approach to segment clouds from background pixels in satellite imagery. It then determines the position of the segmented clouds to calculate cloud coverage percentages. The algorithm was tested on satellite images from Spot4 and Landsat archives. It was implemented on a TMS320C6713 DSK processor using Code Composer Studio and achieved accurate cloud detection and coverage calculation on images with resolutions up to 3600x3000 pixels.
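The threshold-then-count structure of the cloud-detection stage is straightforward to sketch. In the Python illustration below, a fixed brightness threshold stands in for the paper's adaptive scheme, and the toy scene is a made-up example:

```python
import numpy as np

# Sketch: segment bright pixels as cloud, then report coverage as a
# percentage; the fixed threshold is an illustrative stand-in for the
# paper's adaptive thresholding.

def cloud_coverage(img, thresh=200):
    mask = img >= thresh                          # bright pixels taken as cloud
    pct = 100.0 * mask.sum() / mask.size          # coverage percentage
    return mask, pct

scene = np.full((10, 10), 60, dtype=np.uint8)     # dark background
scene[:5, :5] = 230                               # one bright cloud quadrant
mask, pct = cloud_coverage(scene)
```

On the DSP target, the same mask-and-count loop would run over tiles of the full-resolution image rather than the whole frame at once.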
Implementation of features dynamic tracking filter to tracing pupilssipij
The objective of this paper is to show the implementation of an artificial vision filter capable of tracking the
pupils of a person in a video sequence. Several algorithms can achieve this objective; in this
case, features dynamic tracking was selected, a method that traces patterns between the frames that
form a video scene. This type of processing offers the advantage of eliminating occlusion problems for the
patterns of interest. The implementation was tested on a set of videos of people with different physical
characteristics of the eyes. An additional goal is to obtain information on the eye movements that are
captured and the pupil coordinates for each of these movements. These data could help studies related to
eye health.
SENSORLESS VECTOR CONTROL OF BLDC USING EXTENDED KALMAN FILTER sipij
This paper mainly deals with the implementation of a vector control technique for the brushless DC motor
(BLDC). Generally, tachogenerators, resolvers or incremental encoders are used to detect the speed. These
sensors require careful mounting and alignment, and special attention is required with electrical noise. A
speed sensor needs additional space for mounting and maintenance and hence increases the cost and size of
the drive system. These problems are eliminated by sensorless vector control using the Extended
Kalman Filter (EKF) and the back-EMF method for position sensing. Using the EKF and back-EMF methods, sensorless vector control of the BLDC is implemented and simulated in MATLAB/SIMULINK, and a hardware kit is implemented.
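At the core of the EKF-based estimator is the familiar predict/update loop. The Python sketch below shows that loop on a toy angle/speed rotor model with a noiseless angle measurement; the model, tuning matrices, and measurement setup are all illustrative assumptions, not the paper's BLDC state equations:

```python
import numpy as np

# Sketch: the predict/update loop of an (extended) Kalman filter as used
# for sensorless speed estimation; the rotor model and tuning values are
# illustrative assumptions, not the paper's BLDC model.

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # Predict through the (possibly nonlinear) motion model.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update with the measurement model.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [angle, speed]
H = np.array([[1.0, 0.0]])                 # we "sense" angle only
Q, R = 1e-6 * np.eye(2), np.array([[1e-2]])

x, P = np.zeros(2), 10.0 * np.eye(2)
true_speed = 2.0
for k in range(1, 51):
    z = np.array([true_speed * dt * k])    # noiseless angle measurement
    x, P = ekf_step(x, P, z, lambda s: F @ s, F, lambda s: H @ s, H, Q, R)
```

In the sensorless drive, `f` and `h` would be the nonlinear motor equations and a back-EMF-derived measurement, with their Jacobians substituted for the constant `F` and `H` used here.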
EFFECTIVE PROCESSING AND ANALYSIS OF RADIOTHERAPY IMAGESsipij
The a-Si Electronic Portal Imaging Device (EPID) is an important tool to verify the location of the radiation therapy beam with respect to the patient anatomy. But Electronic Portal Images (EPI) suffer from low contrast. In order to have better in-treatment images from which to extract relevant features of the anatomy, image processing tools need to be integrated in radiology systems. The goal of this research work is to inspect several image processing techniques for contrast enhancement of electronic portal images and to gauge parameters like mean, variance, standard deviation, MSE, RMSE, entropy, PSNR, AMBE, normalised cross correlation, average difference, structural content (SC), maximum difference and normalised absolute error (NAE) to study their visual quality improvement. In addition, by adding salt-and-pepper noise, Gaussian noise and motion blur, we calculate error measurement parameters like the Universal Image Quality (UIQ) index, Enhancement Measurement Error (EME), Pearson Correlation Coefficient, SNR and Mean Absolute Error (MAE). The improved results point out that image processing tools need to be incorporated into radiology for accurate delivery of dose.
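Several of the figures of merit listed above have one-line definitions. A Python sketch of three of them (MSE, PSNR, AMBE) computed directly from those textbook definitions; the toy image pair is a made-up example:

```python
import numpy as np

# Sketch: MSE, PSNR and AMBE computed from their textbook definitions.

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def ambe(a, b):
    """Absolute mean brightness error between original and enhanced image."""
    return abs(float(a.mean()) - float(b.mean()))

orig = np.full((4, 4), 100, dtype=np.uint8)
enh = orig.copy()
enh[0, 0] = 110                       # one pixel changed by 10 gray levels
```

AMBE in particular is the metric that captures brightness preservation, which is why it appears alongside PSNR throughout these enhancement papers.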
“FIELD PROGRAMMABLE DSP ARRAYS” - A NOVEL RECONFIGURABLE ARCHITECTURE FOR EFF...sipij
Digital signal processing functions are widely used in real-time high-speed applications. These functions
are generally implemented either on ASICs, with inflexibility, or on FPGAs, with the bottlenecks of a relatively
smaller utilization factor or lower speed compared to an ASIC. The proposed reconfigurable DSP processor is
reminiscent of an FPGA, but with basic fixed Common Modules (CMs) (such as adders, subtractors, multipliers,
scaling units and shifters) instead of CLBs. This paper introduces the development of a reconfigurable DSP
processor that integrates different filter and transform functions. Switching between DSP functions is
achieved by reconfiguring the interconnections between CMs. Validation of the proposed reconfigurable
architecture has been achieved on a Virtex5 FPGA. The architecture provides a sufficient amount of flexibility,
parallelism and scalability.
A study of a modified histogram based fast enhancement algorithm (mhbfe)sipij
Image enhancement is one of the most important issues in low-level image processing. The goal of image
enhancement is to improve the quality of an image such that the enhanced image is better than the original.
Conventional histogram equalization (HE) is one of the most used algorithms for contrast
enhancement of medical images, due to its simplicity and effectiveness. However, it causes an
unnatural look and visual artefacts, as it tends to change the brightness of an image. The Histogram
Based Fast Enhancement Algorithm (HBFE) tries to enhance CT head images, where it improves the
water-washed effect caused by conventional histogram equalization algorithms with less complexity. It
depends on using the full range of gray levels to enhance the soft tissues while ignoring other image details. We present a
modification of this algorithm that is valid for most CT image types while keeping the same degree of simplicity.
Experimental results show that the Modified Histogram Based Fast Enhancement Algorithm (MHBFE)
improves the results in terms of PSNR, AMBE and entropy. We also use statistical analysis to confirm
that the improvement of the proposed modification can be generalized: ANalysis Of VAriance (ANOVA) is
first used to test whether or not all the results have the same average, and we then find a significant
improvement from the modification.
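The ANOVA step mentioned above reduces to a single F statistic comparing between-group to within-group variance. A Python sketch computing it from the definition; the three PSNR-like samples are made-up illustrative numbers, not results from the paper:

```python
import numpy as np

# Sketch: one-way ANOVA F statistic from its definition; the three
# PSNR-like samples are made-up illustrative numbers.

def f_oneway(*groups):
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), all_x.size
    # Variance explained by group membership vs. residual variance.
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

a = np.array([30.1, 30.4, 29.8])   # e.g. PSNR of method A (hypothetical)
b = np.array([33.0, 33.5, 32.9])   # method B
c = np.array([30.0, 30.2, 30.3])   # method C
F = f_oneway(a, b, c)
```

A large F (compared against the F distribution's critical value for the given degrees of freedom) is what lets the authors reject the hypothesis that all methods share the same average.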
COMPARISON OF MODERN DESCRIPTION METHODS FOR THE RECOGNITION OF ...sipij
Plants are one kingdom of living things. They are essential to the balance of nature and people's lives. Plants are not just important to the human environment; they form the basis for the sustainability and long-term health of environmental systems. Besides these important facts, they have many useful applications, such as medical and agricultural applications. Plants are also the origin of coal and petroleum. For plant recognition, one part of the plant has a unique characteristic for the recognition process: the leaf. The present paper introduces a bag of words (BoW) and support vector machine (SVM) procedure to recognize and identify plants through their leaves. Visual contents of images are applied and three usual phases in computer vision are done: (i) feature detection, (ii) feature description, (iii) image description. Three different methods are used on the Flavia dataset. The proposed approach uses the scale invariant feature transform (SIFT) method and two combined methods, HARRIS-SIFT and features from accelerated segment test-SIFT (FAST-SIFT). The accuracy of the SIFT method, 89.3519%, is higher than that of the other methods. Visual comparison is investigated for four different species. Some quantitative results are measured and compared.
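The image-description phase of such a BoW pipeline quantizes each local descriptor against a visual-word codebook and histograms the assignments. A Python sketch of just that step; the 2-D "descriptors" and 3-word codebook below are toy stand-ins for SIFT output and a learned vocabulary:

```python
import numpy as np

# Sketch: the image-description step of a bag-of-words pipeline; toy 2-D
# "descriptors" stand in for 128-D SIFT vectors, and the 3-word codebook
# stands in for a learned vocabulary.

def bow_histogram(descriptors, codebook):
    # Assign each descriptor to its nearest visual word (Euclidean distance).
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / hist.sum()              # normalised image description

codebook = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
desc = np.array([[0.1, 0.2], [9.8, 0.1], [0.2, 9.7], [0.0, 9.9]])
h = bow_histogram(desc, codebook)
```

The normalised histogram is the fixed-length vector that is finally handed to the SVM classifier, regardless of how many keypoints the detector found.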
This document proposes a new window function called the Exponential window. The Exponential window is derived similarly to the Kaiser window but uses an exponential function instead of a Bessel function. The spectrum design equations for the Exponential window are established by analyzing the relationships between its parameters and spectral characteristics. Comparisons show the Exponential window provides a better sidelobe roll-off ratio than the Kaiser window for the same mainlobe width, but a worse ripple ratio. It exhibits better ripple ratio than the ultraspherical window for narrower mainlobes and steeper sidelobe roll-off, but worse ripple ratio for wider mainlobes and shallower roll-off.
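The Kaiser-like construction described above can be sketched by replacing the Bessel function in Kaiser's formula with an exponential. The particular closed form below is an assumption about the window's definition, shown only to illustrate the construction:

```python
import numpy as np

# Sketch: a Kaiser-like window with the Bessel function replaced by an
# exponential; this closed form is an assumed illustration of the
# construction, not necessarily the paper's exact definition.

def exponential_window(n_points, alpha):
    n = np.arange(n_points)
    t = 2.0 * n / (n_points - 1) - 1.0          # map sample index to [-1, 1]
    return np.exp(alpha * np.sqrt(1.0 - t * t)) / np.exp(alpha)

w = exponential_window(65, 3.0)
```

As with Kaiser, the single shape parameter `alpha` trades mainlobe width against sidelobe behaviour, which is what the spectrum design equations parameterize.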
A ROBUST CHAOTIC AND FAST WALSH TRANSFORM ENCRYPTION FOR GRAY SCALE BIOMEDICA...sipij
In this work, a new scheme of image encryption based on chaos and the Fast Walsh Transform (FWT) is proposed.
We use two chaotic logistic maps and apply combined chaotic encryption methods to the two-dimensional FWT of images.
The encryption process involves two steps: first, chaotic sequences generated by the chaotic logistic maps are used to
permute and mask the intermediate results or array of the FWT; the next step consists of changing the chaotic sequences or
the initial conditions of the chaotic logistic maps between two intermediate results of the same row or column. Changing the
encryption key several times on the same row or column makes the cipher more robust against attack. We tested
our algorithms on many biomedical images. We also used images from databases to compare our algorithm to those
in the literature. Statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time encryption and transmission of biomedical images.
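The two basic operations the scheme combines, a logistic-map keystream used to permute and to mask an array, can be sketched in Python. The map parameter and initial condition below are illustrative, and the real scheme applies this to 2-D FWT coefficients with key changes per row/column rather than to raw bytes:

```python
import numpy as np

# Sketch: a logistic-map keystream used to (i) permute and (ii) XOR-mask
# a byte array; parameters are illustrative, and the real scheme works on
# 2-D FWT coefficients with per-row/column key changes.

def logistic_sequence(x0, n, r=3.99):
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1.0 - x0)     # logistic map iteration
        xs[i] = x0
    return xs

def encrypt(block, x0=0.3571):
    ks = logistic_sequence(x0, block.size)
    perm = np.argsort(ks)                          # chaotic permutation
    mask = (ks * 256).astype(np.uint8)             # chaotic mask bytes
    return block.ravel()[perm] ^ mask, perm, mask

def decrypt(cipher, perm, mask):
    plain = np.empty_like(cipher)
    plain[perm] = cipher ^ mask                    # undo mask, then permutation
    return plain

data = np.arange(16, dtype=np.uint8)
cipher, perm, mask = encrypt(data)
```

The extreme sensitivity of the logistic map to `x0` is what makes the initial conditions usable as a key, and changing them mid-stream (as the paper does per row/column) frustrates known-plaintext attacks on any single segment.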
Image processing based girth monitoring and recording system for rubber plant...sipij
Measuring the girth and continuous monitoring of the increase in girth is one of the most important processes in rubber plantations since identification of girth deficiencies would enable planters to take corrective actions to ensure a good yield from the plantation.
This research paper presents an image processing based girth measurement & recording system that can replace existing manual process in an efficient and economical manner.
The system uses a digital image of the tree, reading the number drawn on the tree to identify the tree number and its width. The image is thresholded first and then filtered using several filtering criteria to identify possible candidates for numbers. Identified blobs are then fed to the Tesseract OCR for number recognition. The thresholded image is then filtered again with different criteria to segment out the black strip drawn on the tree, which is used to calculate the width of the tree using calibration parameters. Once the tree number is identified and the width is calculated, the measured girth of the tree is stored in the database under the identified tree number.
The results obtained from the system indicated a significant improvement in efficiency and economy for main plantations. As future development, we propose a standard commercial system for girth measurement using standardized 2D barcodes as tree identifiers.
Skin cure an innovative smart phone based application to assist in melanoma e...sipij
This document proposes a smart phone application called SKINcure that aims to assist with melanoma early detection and prevention. The application has two main components: 1) a UV alert module that notifies users of sunburn risk and calculates time to burn, and 2) an image analysis module that allows users to take skin images and classifies them as normal, atypical, or melanoma with 96.3-97.5% accuracy by analyzing features like hair detection, lesion segmentation, and classification algorithms. The proposed system utilizes a dermoscopy image database containing 200 images for development and testing, achieving high accuracy in detecting different lesion types automatically.
GABOR WAVELETS AND MORPHOLOGICAL SHARED WEIGHTED NEURAL NETWORK BASED AUTOMAT...sipij
1) The document proposes an automatic face recognition system using Gabor wavelet face detection with neural networks and morphological shared weighted neural networks (MSNN) for face recognition.
2) Face detection is performed using Gabor filters for feature extraction and a neural network for classification. Detected faces are input to the MSNN for face recognition.
3) The MSNN uses hit-miss transforms for feature extraction in each layer, which are independent of grayscale shifts. Feature matching compares output thresholds to identify faces.
User-centric privacy awareness in v.docxmakdul
REGULAR PAPER
User-centric privacy awareness in video surveillance
Thomas Winkler • Bernhard Rinner
Published online: 1 July 2011
© Springer-Verlag 2011
Abstract Especially in urban environments, video cameras have become omnipresent. Supporters of video surveillance argue that it is an excellent tool for many applications including crime prevention and law enforcement. While this is certainly true, it must be questioned if sufficient efforts are made to protect the privacy of monitored people. Privacy concerns are often set aside when compared to public safety and security. One reaction to this situation is emerging: community-based efforts where citizens register and map surveillance cameras in their environment. Our study is inspired by this idea and proposes a user-specific and location-aware privacy awareness system. Using conventional smartphones, users not only can contribute to the camera maps, but also use community-collected data to be alerted of potential privacy violations. In our model, we define different levels of privacy awareness. For the highest level, we present a mechanism that allows users to directly interact with specially designed, trustworthy cameras. These cameras provide direct feedback about the tasks that are executed by the camera and how privacy-sensitive data is handled. A hardware security chip that is integrated into the camera is used to ensure authenticity, integrity and freshness of the provided camera status information.
Keywords Smart cameras · Video surveillance · Privacy · User feedback · Trusted computing
1 Introduction and motivation
Video surveillance has been recognized as a valuable tool for many applications including crime prevention and law enforcement. As a consequence, surveillance cameras are widely deployed especially in urban environments. This trend is fostered by many factors such as technological advances with the move from analog toward digital systems as well as considerable price drops of camera systems. This way, surveillance cameras have become truly ubiquitous sensors not only deployed by governments, but also by companies and even individuals. In London, for example, an average citizen is captured by surveillance cameras 300 times a day [7].
It is commonly agreed that video surveillance not only can help to increase public safety and security, but also bears some risks. One of them is an increasing loss of personal privacy. This problem is addressed from several sides with mixed success. Governmental regulations intend to protect the privacy of citizens. However, these regulations are difficult to enforce and usually lag behind the rapid developments of the surveillance industry. Several efforts from research and academia try to address the problem from a technological direction. Typically, they are based on the identification and protection of privacy-sensitive image regions such as human f ...
A high-resolution video surveillance management system (VMS) incurs huge amounts of storage and network bandwidth. The infrastructure currently required to support a high-resolution VMS is expensive and time-consuming to plan, implement and maintain. With recent advances in cloud technologies, virtualization and the distributed computing techniques of cloud storage have been investigated to find out whether the various cloud computing services available can support the current requirements of a high-resolution VMS. After investigating and comparing various Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) cloud computing providers, the research concludes that it is possible to architect a VMS using cloud technologies; however, it is more expensive, and it will require additional reviews of legal implications, as well as of emerging threats and countermeasures associated with using cloud technologies for a video surveillance management system.
Social, political and technological considerations for national identity mana...Ravinder (Ravi) Singh
Government agencies face the intricate challenge of effectively and securely controlling population flows,
identifying individuals, and managing their access to services, while aligning their strategies with citizens'
expectations of convenience, security and privacy. Identity management initiatives, especially after the
increased frequency of terrorist attacks around the world, have become a political imperative of
unprecedented urgency for an increasing number of governments. India's answer
to this challenge is expressed through the proposed UID Scheme.
This paper details all the architecture considerations and its realizations ...
A Survey on Person Detection for Social Distancing and Safety Violation Alert...IRJET Journal
This document discusses methods for monitoring social distancing using video surveillance and deep learning techniques. It describes how faster R-CNN, single shot detector (SSD) and YOLO v3 deep learning models can be used to detect people in video frames and calculate the distance between individuals to determine if social distancing guidelines are being followed. If distances between people are found to be unsafe, the system can send alerts or cautions. The methodology is intended to help prevent the spread of COVID-19 by monitoring adherence to social distancing and triggering warnings if safety violations are detected.
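After detection, the distance-checking step the survey describes is a simple pairwise comparison against a safe threshold. A Python sketch of that post-detection step; it assumes detections have already been projected to metric ground-plane coordinates, and the points below are made-up examples:

```python
import numpy as np

# Sketch: the post-detection social-distancing check — pairwise distances
# between detected people's ground points, flagged against a threshold.
# Coordinates are assumed already projected to a metric ground plane.

def unsafe_pairs(points, safe_m=2.0):
    pts = np.asarray(points, dtype=np.float64)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=2)            # full pairwise distance matrix
    i, j = np.triu_indices(len(pts), k=1)          # each unordered pair once
    return [(int(a), int(b)) for a, b in zip(i, j) if dist[a, b] < safe_m]

# Three detections: persons 0 and 1 are 1 m apart, person 2 is far away.
pairs = unsafe_pairs([(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)])
```

In a deployed system the returned pairs would drive the alerting logic, while the detector (faster R-CNN, SSD, or YOLO v3) supplies the bounding boxes from which the ground points are derived.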
This document provides an overview of steganography techniques applied to protect biometric data, specifically fingerprints. It discusses how steganography and watermarking can improve security of biometric systems by embedding hidden data within images. The document outlines different biometric techniques including fingerprints, iris scanning, facial recognition and others. It also examines the strengths and weaknesses of targeted and blind steganalysis strategies for detecting hidden data within steganography techniques.
Multi-Dimensional Privacy Protection for Digital Collaborations.CSCJournals
In order to sustain privacy in digital collaborative environments, a comprehensive multidimensional privacy protecting framework is required. Such information privacy solutions for collaborations must incorporate environmental factors and influences in order to provide a holistic information privacy solution. Our Technical, Legal, and Community Privacy Protecting (TLC-PP) framework addresses the problems associated with the multi-faceted notion of privacy. The three key components of the TLC-PP framework are merged together to provide complete solutions for collaborative environment stakeholders and users alike. The application of the TLC-PP framework provides a significant contribution to the delivery of a Privacy Augmented Collaborative Environment (PACE).
The WITDOM first project presentation has been updated to include a summary of the results corresponding to the first 18 months of the project. The presentation includes a high-level overview of the project scenarios, methodologies to elicit requirements and to formalize them into technical requirements, as well as the initial architecture.
A detailed review on Video SteganographyIRJET Journal
This document provides a summary of previous research on video steganography. It discusses several papers that proposed different techniques for video steganography, including using a binary attention mechanism to hide information in video frames, dual-channel embedding to make the hidden data robust against video transcoding, and using a convolutional neural network model called VStegNET. It also summarizes research on hiding encrypted data within video files and using techniques like genetic algorithms and AES encryption for steganography. Overall, the document reviews a variety of approaches that have been developed for video steganography and hiding secret information in video content.
Advanced Fuzzy Logic Based Image Watermarking Technique for Medical ImagesIJARIIT
The segmentation algorithms vary for the types of medical images such as MRI, CT, US, etc.The current study work
can further be extended to develop a GUI tool based approach for separating the ROI. Additionally, a new technique of
separating ROI form the original image that will be applicable for all type of medical images can be evolved. Separated ROI
can be stored with xmin, xmax, ymin and ymax value so that at the end of embedding process before transmitting watermarked
image, the segmented ROI can be attached with watermarked image. Any medical image watermarking approach will be
suitable, if we segment the ROI from medical image with the four values, then embedding of watermark can be done on whole
medical image, in this paper work on different scan like ctscan ,brain scan etc. our results significant high than other.
DEEP LEARNING APPROACH FOR SUSPICIOUS ACTIVITY DETECTION FROM SURVEILLANCE VIDEOIRJET Journal
This document proposes using deep learning approaches like convolutional neural networks to detect suspicious activities from surveillance videos. It aims to overcome limitations of existing depth sensors by using neural networks for more accurate human pose estimation. The system would monitor public areas through video surveillance and categorize human activities as usual or suspicious in real-time. It addresses gaps in previous research which focused on images rather than videos. The proposed methodology includes collecting and preprocessing surveillance video data, designing and training deep learning models to recognize and classify activities, and integrating the system with existing surveillance infrastructure for deployment.
During the research process, it was discovered that law enforcement has implemented some tracking surveillance software but they were not reliable enough for real world use. Developing better video sensors and tracking algorithms through an interdisciplinary approach with computer science and organizational leadership could help create more effective surveillance systems to track human trafficking suspects and victims. Future work should focus on overcoming obstacles like sensors not differentiating objects in varied lighting and software not handling large amounts of video data.
This document summarizes the state of the art in cloud security techniques. It begins by introducing cloud computing and its benefits. It then surveys existing literature on cloud security, summarizing 5 papers on topics like policy reconciliation and attribute-based encryption. It describes several techniques for cloud security like attribute-based encryption and fuzzy identity-based encryption. Finally, it discusses future work on developing a novel sequential rule mining algorithm for market basket data.
My Privacy My decision: Control of Photo Sharing on Online Social NetworksIRJET Journal
This document summarizes a research paper that proposes a facial recognition system to help users control photo sharing on social networks while protecting privacy. It discusses how photo sharing on social networks can unintentionally reveal private user information through tags, comments or metadata. The proposed system would recognize faces in photos and allow users to choose privacy settings during the photo posting process. It would also use private user photos to train the facial recognition model while preserving privacy through a distributed consensus method. The goal is to give users more awareness and control over how their photos are shared and help prevent unintended disclosure of personal information.
Secure Supervised Learning-Based Smart Home Authentication FrameworkIJCNCJournal
The Smart home possesses the capability of facilitating home services to their users with the systematic advance in The Internet of Things (IoT) and information and communication technologies (ICT) in recent decades. The home service offered by the smart devices helps the users in utilize maximized level of comfort for the objective of improving life quality. As the user and smart devices communicate through an insecure channel, the smart home environment is prone to security and privacy problems. A secure authentication protocol needs to be established between the smart devices and the user, such that a situation for device authentication can be made feasible in smart home environments. Most of the existing smart home authentication protocols were identified to fail in facilitating a secure mutual authentication and increases the possibility of lunching the attacks of session key disclosure, impersonation and stolen smart device. In this paper, Secure Supervised Learning-based Smart Home Authentication Framework (SSL-SHAF) is proposed as are liable mutual authentication that can be contextually imposed for better security. The formal analysis of the proposed SSL-SHAF confirmed better resistance against session key disclosure, impersonation and stolen smart device attacks. The results of SSL-SHAF confirmed minimized computational costs and security compared to the baseline protocols considered for investigation.
Secure Supervised Learning-Based Smart Home Authentication FrameworkIJCNCJournal
The Smart home possesses the capability of facilitating home services to their users with the systematic advance in The Internet of Things (IoT) and information and communication technologies (ICT) in recent decades. The home service offered by the smart devices helps the users in utilize maximized level of comfort for the objective of improving life quality. As the user and smart devices communicate through an insecure channel, the smart home environment is prone to security and privacy problems. A secure authentication protocol needs to be established between the smart devices and the user, such that a situation for device authentication can be made feasible in smart home environments. Most of the existing smart home authentication protocols were identified to fail in facilitating a secure mutual authentication and increases the possibility of lunching the attacks of session key disclosure, impersonation and stolen smart device. In this paper, Secure Supervised Learning-based Smart Home Authentication Framework (SSL-SHAF) is proposed as are liable mutual authentication that can be contextually imposed for better security. The formal analysis of the proposed SSL-SHAF confirmed better resistance against session key disclosure, impersonation and stolen smart device attacks. The results of SSL-SHAF confirmed minimized computational costs and security compared to the baseline protocols considered for investigation.
Secure Supervised Learning-Based Smart Home Authentication FrameworkIJCNCJournal
The Smart home possesses the capability of facilitating home services to their users with the systematic advance in The Internet of Things (IoT) and information and communication technologies (ICT) in recent decades. The home service offered by the smart devices helps the users in utilize maximized level of comfort for the objective of improving life quality. As the user and smart devices communicate through an insecure channel, the smart home environment is prone to security and privacy problems. A secure authentication protocol needs to be established between the smart devices and the user, such that a situation for device authentication can be made feasible in smart home environments. Most of the existing smart home authentication protocols were identified to fail in facilitating a secure mutual authentication and increases the possibility of lunching the attacks of session key disclosure, impersonation and stolen smart device. In this paper, Secure Supervised Learning-based Smart Home Authentication Framework (SSL-SHAF) is proposed as are liable mutual authentication that can be contextually imposed for better security. The formal analysis of the proposed SSL-SHAF confirmed better resistance against session key disclosure, impersonation and stolen smart device attacks. The results of SSL-SHAF confirmed minimized computational costs and security compared to the baseline protocols considered for investigation.
Information Systems Security & StrategyTony Hauxwell
This document discusses information security strategies and the importance of protecting sensitive data. It defines an information security strategy as a set of procedures and policies to protect information assets from being lost, stolen or compromised. The core concepts of confidentiality, integrity and availability underpin security strategies and regulations. The document examines techniques for implementing security strategies, including identifying risks and complying with standards to ensure protection of information.
Broadcasting Forensics Using Machine Learning Approachesijtsrd
Broadcasting forensic is the practice of using scientific methods and techniques to analyse and authenticate Multimedia content. Over the past decade, consumer grade imaging sensors have become increasingly prevalent, generating vast quantities of images and videos that are used for various public and private communication purposes. Such applications include publicity, advocacy, disinformation, and deception, among others. This paper aims to develop tools that can extract knowledge from these visuals and comprehend their provenance. However, many images and videos undergo modification and manipulation before public release, which can misrepresent the facts and deceive viewers. To address this issue, we propose a set of forensics and counter forensic techniques that can help establish the authenticity and integrity of Multimedia content. Additionally, we suggest ways to modify the content intentionally to mislead potential adversaries. Our proposed tools are evaluated using publicly available datasets and independently organized challenges. Our results show that the forensics and counter forensic techniques can accurately identify manipulated content and can help restore the original image or video. Furthermore, in this paper demonstrate that the modified content can successfully deceive potential adversaries while remaining undetected by state of the art forensic methods. Amit Kapoor | Prof. Vinod Mahor "Broadcasting Forensics Using Machine Learning Approaches" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7 | Issue-3 , June 2023, URL: https://www.ijtsrd.com.com/papers/ijtsrd57545.pdf Paper URL: https://www.ijtsrd.com.com/engineering/computer-engineering/57545/broadcasting-forensics-using-machine-learning-approaches/amit-kapoor
Holistic privacy impact assessment framework for video privacy filtering technologies
1. Signal & Image Processing : An International Journal (SIPIJ) Vol.4, No.6, December 2013
HOLISTIC PRIVACY IMPACT ASSESSMENT FRAMEWORK FOR VIDEO PRIVACY FILTERING TECHNOLOGIES
Atta Badii¹, Ahmed Al-Obaidi¹, Mathieu Einig¹, Aurélien Ducournau¹
¹ Intelligent Systems Research Laboratory, School of Systems Engineering, University of Reading, United Kingdom
ABSTRACT
In this paper, we present a novel Holistic Framework for Privacy Protection Level Performance Evaluation
and Impact Assessment (H-PIA) to support the design and deployment of privacy-preserving filtering
techniques as may be co-evolved for video surveillance through user-centred participative engagement and
collectively negotiated solution seeking for privacy protection. The proposed framework is based on the
UI-REF normative ethno-methodological framework for Privacy-by-Co-Design, which is grounded in
collective-interpretivist and socio-psycho-cognitively rooted Human Judgment and Decision Making (JDM)
theory including Pleasure-Pain-Recall (PPR)-theoretic opinion elicitation and analysis. This supports not
only the socio-ethically reflective conflict resolution, prioritisation and traceability of privacy-preserving
requirements evolving through user-centred co-design but also the integration of Key Holistic
Performance Indicators (KPIs) comprising a number of objective and subjective evaluation metrics for the
design and operational deployment of surveillance data/video-analytics from a system-of-systems-scale
context-aware accountability engineering perspective. For the objective tests, we have proposed five
crucial criteria to be evaluated to assess the optimality of the balance of privacy protection and security
assurance as may be negotiated with end-users through co-design of a privacy filtering solution. This
evaluation is supported by a process of quantitative assessment of some of the KPIs through an automated
objective measurement of the functional performance of the given filter. Additionally, a subjective
qualitative user study has been conducted to correlate with, and cross-validate, the results obtained from
the objective assessment of the KPIs. The simulation results have confirmed the sufficiency, necessity and
efficacy of the UI-REF-based methodologically-guided framework for Privacy Protection evaluation to
enable optimally balanced Privacy Filtering of the video frame whilst retaining the minimum of the
information as negotiated per agreed process logic. Insights from this study have served the co-design and
deployment optimisation of privacy-preserving video filtering solutions. This UI-REF-based framework
has been successfully applied to the evaluation of MediaEval 2012-2013 Privacy Filtering and as such has
served to motivate further innovation in co-design and multi-level, multi-modal impact assessment of
multimedia privacy-security-balancing risk mitigation technologies.
KEYWORDS
Privacy Preserving, Privacy Protection, Video Analytics, UI-REF, Privacy-by-Co-Design, Filtering,
Evaluation, Visual Surveillance, Holistic Privacy Impact Assessment (H-PIA), Human Judgement and
Decision Making Theory (JDM), Pleasure-Pain-Recall Theory (PPR), Context-aware Privacy Filtering.
1. INTRODUCTION
The installation of traditional CCTV cameras, if appropriately deployed, may prove helpful as
part of a crime reduction strategy particularly in the urban environment. The increasing awareness
DOI : 10.5121/sipij.2013.4602
of security threats and the need for efficient and effective means of security monitoring and crime
reduction management has led to an exponential growth in the development of fully automated
video surveillance systems. However much effort is needed to ensure that the potentially harmful
impacts of surveillance technologies are avoided; this requires socially negotiated, context-sensitive co-design and evaluation of the performance of such systems, including the assessment
of the societal and legal framework for their adoption and operational deployment responsibility.
Accordingly our approach is based on a system-of-systems scale (citizen-centred, socio-ethical,
societal, legal and technical) perspective for (re)-negotiated evolutionary reflective co-design.
This is to serve the transparency-accountability engineering and evaluation of video surveillance
solution systems as supported by the UI-REF [1] Privacy-by-Co-Design framework. This is a
normative ethno-methodology which supports the situated elicitation, interpretation, prioritisation
and context-specific conflict resolution of both the requirements and the criteria for the multi-perspective impact assessment of the levels of privacy protection actually delivered by a
surveillance system and the various social and normative consequences of its adoption.
Computer vision technologies have been designed and customised to automate CCTV operations
through video-analytics capabilities such as object detection, tracking, and behaviour recognition.
Irrespective of whether these implementations are to take place at the back-end such as in the
centralised server system or in the “edge-smart” mode at the camera-end, they can potentially
lead to invasion of citizens’ privacy with various adverse personal and social impacts. These can
arise not just from how securely the data protection is managed but also due to the
uncertainty/inaccuracy of the data as captured, e.g. noisy images that could cause misclassifications with no human override in the decisional framework, thus propagating false
positive/negative classifications based on simplistic behavioural stereotypes as may have been
applied to noisy data.
Public concerns about the privacy implications of video surveillance, often framed in terms of ‘Big Brother’, have been
raised since the early days of passive CCTV deployment. With the increasing maturity of the
more advanced active CCTVs, the privacy of the citizen can be potentially at higher risk unless
protected through an inclusivist negotiation approach to the design and development of such
systems. Privacy risk mitigation technologies need to be deployed, but for this to be well-motivated, acceptable, and effective, an actionable framework is needed for shared sense-making
and co-evolution of privacy-preserving surveillance solutions.
In recent years the computer vision research community has started to contribute to such solutions
but from a largely technological perspective and a variety of image processing techniques, for
example [2], have been proposed to mitigate privacy protection failure risks.
These approaches tend to apply some privacy filtering to obscure the privacy-sensitive parts of
the captured video; in much the same way as is the established practice in the film and television
sector. However, the naïve application of such privacy filters could lead to video surveillance
systems that are potentially ineffective in either adequately protecting the privacy of the citizen
(also referred to as the “data-subject”) or in retaining the essential information as justified and
established to be absolutely necessary for the intended security monitoring in the given situation.
Accordingly, a Privacy-by-Co-Design requirements elicitation and evaluation approach is
crucially needed to enable collective-participative judgements to be negotiated amongst all the
implicated stakeholders about: i) the level of surveillance information that can be agreed to be
justifiably essential for the situated security context and security protection purpose; ii) the
context-specific criteria for attribution of “suspect-ness” to a citizen whose image and behaviour
may be captured by a video surveillance camera, and, thus iii) the level and scope of situated
(context-dependent) privacy filtering to be afforded to the citizen(s). This is to take into
consideration a negotiated ethically-compliant category-theoretic judgement about what
constitutes “suspect”-ness and thus the respective privacy boundaries to be afforded to a citizen
whose image may be captured in a surveillance video-frame. It is acknowledged that the
perceived privacy boundaries of a citizen cannot be divorced from the contexts in which the
privacy is valued by that citizen and thus needs protecting. A citizen may have a variety of roles,
responsibilities and relationship sets; each associated with a particular persona of the citizen as
activated in a given context within their everyday life-style e.g. husband, father, employee, boss;
and each such persona of the same citizen is commensurate with a particular privacy boundary
linked to a certain (type of) context as may be viewed and defined by that citizen.
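Such persona- and context-dependent privacy boundaries can be pictured as an explicit preference map consulted at filtering time. The sketch below is a minimal illustration only; the persona names, context labels and filter levels are assumptions for exposition, not vocabulary defined in this paper:

```python
# All persona, context and filter-level names below are illustrative
# assumptions, not terms taken from the paper.
PREFERENCES = {
    ("employee", "workplace_lobby"): "blur_face",
    ("employee", "public_street"): "full_silhouette",
    ("parent", "school_gate"): "full_silhouette",
}

def filter_level(persona, context, default="full_silhouette"):
    """Resolve the negotiated, context-dependent filter level for one
    persona; fall back to the most protective default when no explicit
    preference has been stated."""
    return PREFERENCES.get((persona, context), default)
```

A real system would populate such a map through negotiated, user-led preference elicitation rather than hard-coding it.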
It is the explication of such privacy boundaries in each context for each persona of each citizen
and the subjective ascribing of a meaning, a value/sensitivity for the data in the respective context
as perceived by the citizen that accordingly specifies the boundaries of privacy filtering that must
be maintained whilst retaining the minimum required data-intelligence (the “privacy-security-balancing” optimisation challenge; sometimes also referred to as the privacy-intelligibility “trade-off”). Here, a pre-requisite to the notion of a “trade-off” between privacy and data intelligence
(i.e. the retained personal information post filtering) is the existence of an agreed context-specific
framework of meaning-value system mappings across two seemingly incongruent worlds. Thus
the challenge of privacy-security optimisation implies that the Privacy-by-Co-Design approach
must support a negotiated resolution of such so-called “trade-offs”. Clearly the system must also
safeguard against any discriminatory, inequitable and stereotyping classification of any citizens as
“Suspects” arising from mere video-analytics based attributions of suspicious behaviour.
These mandatory capabilities of the Privacy-by-Co-Design system call for context-aware privacy
filtering and evaluation to assess the situated optimality of privacy. Although some advances
have been recently reported in the application of video privacy filtering techniques, for example
as in [3]; the negotiation-centric co-innovation, co-evaluation and thus optimisation of situated
privacy filtering has only just begun. This requires participative definition of both video capture
context and security context for responsive, trace-able and accountable video-content category
judgments supported by a framework such as UI-REF to provide high resolution requirements
prioritisation and forensic designation of the Holistic Privacy Evaluation and Impact Assessment
Metrics [4, 5].
In practice, video analytics techniques can be deployed to detect the privacy-sensitive information
of the data-subjects as featured in video-frames (e.g. faces). The privacy-sensitive elements of a
user’s profile or any part of their picture need to be identified based on negotiation, and explicitly
stated user-led preferences-in-context. For some elements in some contexts a default rule
consistent with the applicable privacy-regimes-in-context may provide a pointer to the user’s
privacy preferences subject to explicit prior agreement of the user. Afterwards, image processing
techniques could be deployed to obscure the detected sensitive elements.
The main contributions of this paper can be summarised as follows:
• An Integrated Holistic Evaluation and Impact Assessment Framework is proposed for
privacy filtering using novel evaluation criteria.
• A comparison of several conventional privacy filters is presented; these can be used as a
reference for future work.
• Both objective and subjective evaluation are performed and combined for cross-validation
and conclusive results – an integrated quantitative and qualitative approach.
• Numerical metrics are proposed to assess the efficacy of the performance evaluation and
impact assessment criteria.
2. RELATED WORK
The literature includes many privacy filtering techniques that can be broadly classified as
reversible and irreversible methods. The former approach includes scrambling and encryption
methods, e.g. [6], which confer the benefit of a data recovery option but also, have the
disadvantage of exposure to privacy protection risks arising from possible hacking. The latter, on
the other hand, includes image processing and filtering techniques such as Gaussian blurring and
image pixilation, e.g. [7], which require more careful deployment to exclusively conceal the
privacy-sensitive elements within given video-frames. As a result, the widely-shared view of the
stakeholders, as expressed by Andrew Senior [3], for example, has been to call for advances in
effectiveness measurement of the privacy protection level afforded by video surveillance
solutions; this is the challenge to which the work reported in this paper and related work [1, 4, 5]
has responded through the pioneering methodologically-guided UI-REF-enabled approach to
negotiation-centric co-design as may be applied to context-aware socio-technical personalisation
of requirements e.g. for privacy and its evaluation and holistic impact analysis.
However, to-date, only a few attempts have been reported which aim to assess and evaluate the
performance and impact of privacy-protecting systems within a relatively limited analysis
perspective.
Dufaux [8] has proposed a privacy filtering evaluation framework in which pixelation, blurring,
and scrambling filtering methods were validated based on: i) objective image quality measures
using PSNR and SSIM; ii) face recognition performance using PCA and LDA.
Zhao and Stasko [9] have attempted to assess the relative accuracy of various filtering approaches
by conducting a comparative study of image filtering techniques that seek to mask the object
identity and activity in videos. Boyle et al. [10] have performed a subjective evaluation of
blurred and pixelated video filtering techniques. In their study, the filters had been tested at
different levels of fidelity and examined by human observers.
Accordingly the research study reported in this paper has responded to the need for a more
inclusive, holistic and high resolution assessment of privacy filtering requirements as well as the
evaluation of the effectiveness of the resulting privacy filtering solutions. We have addressed the
need for more exhaustive evaluation criteria to critically assess the privacy-preserving and
security-informative-ness balance of the situated filtering methods. Therefore, the primary
contribution of this work is the extended and integrated framework of context-aware requirements
prioritisation, and, objective and subjective evaluations to automatically validate the effectiveness
and impact of privacy filters based on user-centred preferences and metrics.
The rest of this manuscript is structured as follows: Section 3 defines the proposed evaluation
criteria which form the basis of the proposed framework as described in Section 4. Section 5
discusses the simulation results. Finally, Section 6 sets out the conclusions and briefly indicates
the scope for future work in responding to the outstanding societal challenges of optimising the
effectiveness of both the protection of citizens' privacy and security monitoring.
3. PRIVACY-PRESERVING EVALUATION CRITERIA
The UI-REF is an established normative ethno-methodology [1, 5], providing a framework for
integrative high resolution context-aware requirements ranking and usability-relationships-based
metrics for evaluation. This includes the Effects, Side-Effects and Cross-Effects, Affects criteria
set (the ESA Matrix). This confers several advantages as a means of holistic socio-psycho-cognitive
impact assessment based on the context-aware, and, relationships-aware measurement
of Effects (direct impacts of a proposed privacy protection solution), Side-Effects and
Cross-Effects (secondary effects including un-intended/unforeseen/indirect effects), and Affects.
The Affects are the multi-level multi-modal emotional, psycho-social and sentimental effects
arising from the primary/direct effects and/or secondary effects or side-effects of the adoption of
a proposed solution for privacy-preserving surveillance video-monitoring and its associated
video-analytics including privacy filtering. Therefore the privacy filtering evaluation framework
as studied in our recent work, as reported here, builds on and extends the above UI-REF based
evaluation and impact assessment criteria to derive a new set of contextualised metrics that can be
used for both objective and subjective evaluation of privacy-protecting filters compliant with
UI-REF-based Privacy-by-Co-Design.
In this context the privacy filtering algorithm design has to achieve the optimum balance of
privacy protection with minimal loss of necessary information as deemed essential to the
approved mission of the surveillance. Thus the key UI-REF based framework metrics for
Privacy-by-Co-Design that constitute the focus of this study include the following:
• Efficacy
The main objective of any visual privacy-protecting filter is the ability to effectively
obscure the privacy-sensitive elements. In other words, the system should have the
capacity to de-identify the “data-subject” whose image is captured in the video-frames.
The measure of this criterion could be obtained using a combination of tests that for
instance could separately examine the face region as well as the full body. To fulfil this
criterion, the privacy-protecting filter should prevent the human evaluator from being
able to detect any faces and/or re-identify the person whose image is privacy-filtered.
• Consistency
Object tracking is an essential functionality for the majority of video surveillance
applications. In order to successfully and continuously track the moving subject in the
field of view of a single camera or over a network of cameras, the short-term visual
appearance of the subject is required to support this task. Therefore, the privacy-protecting
filter needs to maintain a reasonable and consistent level of detail of the
person’s body shape and appearance. Successful cross-frame object tracking of a filtered
subject would fulfil this criterion.
• Disambiguity
This is the degree by which a privacy filter does not introduce additional ambiguity in
cross-frame trackability of same persons/objects. The intra-object-class variations are the
visual cues on which the majority of object classification and tracking algorithms depend.
Thus, a privacy filter should not alter the subject to the point that the subject could not be
distinguishable from other subjects within the same object class as may be encountered
inter/intra frame. This criterion could be tested using a person re-identification procedure
and applying a one-vs.-all strategy.
• Intelligibility
This criterion examines the ability of the privacy filtering system to only protect the
privacy-sensitive attributes and retain all other features/information in the video-frame(s)
in order not to detract from the purpose of the surveillance system. Dufaux [8] has
proposed an approach for this assessment using the structural Similarity Index (SSIM)
quality metrics. However, we propose to use an analytics-based approach similar to that
used for pedestrian detection and recognition applications.
• Aesthetics
To avoid viewers’ distraction and unnecessary fixation of their attention on the region of
the video-frame to be obscured by the privacy filter, it is important for the privacy filter to
maintain the perceived quality of the visual effects of the video-frame. This would
depend on how stylistically coherent and suitably blended such visual effects might
appear after applying the filtering techniques. Methods for colour histogram comparison
of before-and-after effects in addition to the Structural Similarity Index (SSIM) could
provide an indication of filtering performance to meet this requirement.
The above evaluation criteria can be similarly applied to privacy-filtered audio-segments if an
audio-track is present. These criteria constitute a sub-set of the requirements and evaluation
framework as set out within the UI-REF Privacy-by-Co-Design Methodology for user-centred,
negotiable, context-aware co-evolution and holistic assessment of the impacts of
privacy-preserving video surveillance as set out in Badii 2012 [4]; these include:
1. Negotiated ontological framework for innovation and deployment of mitigation
technologies for privacy preserving video surveillance co-design.
2. Accordingly the establishment of prototypical templates to delineate an agreed context
hierarchy within which various video-analytics operations can be co-specified for specific
agreed contexts-objectives; for example, Low-Level Video-Analytics (LLVA), Shallow
Video-Analytics (SVA), Deep Video-Analytics (DVA), and, Context-Aware Video-Analytics
(CAVA) as sub-spaces of deployment of surveillance technologies [4].
3. A decisional framework for ethically, legally and normatively coherent category
judgements as to the classification of persons whose images are captured by video
surveillance cameras and accordingly their tentative attribution, to be (dis)confirmed,
from a privacy-security viewpoint e.g. as Suspects or Non-Suspects based on types and
levels of evidence agreed to be un-mistake-ably indicative. This will follow a
least-and-latest-commitment hierarchical evidence-based decisional framework for trace-able and
reversible man-in-the-loop classifications.
4. Key Holistic Performance Indicators (KPIs) such as Efficacy, Consistency, Intelligibility
and Disambiguity as defined above.
5. Quality-of-Experience: UI-REF-based use-context-dependent Effects, Side-Effects and
Cross-Effects, Affects (ESA) as part of the Holistic Impact Assessment of a proposed
Privacy-preserving socio-technical system-of-systems [1].
6. User-centred perceived aesthetics (structural, textural, tonal, harmony and symmetry
maps).
7. Selectivity, sensitivity, of the Privacy Filtering solution system.
8. Computational efficiency of the Privacy Filtering solution system.
9. Scalability (for real-time web-scale) of the Privacy Filtering solution deployment.
10. The level of vulnerability to attack within a proposed video filtering technique and the
additional computational cost of protecting it against unauthorised attempts at privacy
protection reversal.
4. EVALUATION FRAMEWORK
The evaluation framework includes subjective and objective measurements of which the sub-set
defined in the previous section has been implemented for the present study. The aim was to test
the privacy filtering method based on user-perceived visual effects and preferences (Human
Vision) as well as automatic video analysis (Computer Vision) requirements. The evaluation
framework was designed to encourage intelligent application of image filtering algorithms over
the different parts of the object featured in the video. The following sets out the description of the
framework components.
4.1. Subjective metrics
The subjective evaluation metrics were used to cross-validate the objective metrics by measuring
the performance of the privacy filter with respect to the same defined criteria. These were
deployed in two passes in order to test the filtered video in terms of the ability of person
recognition and the perception of the privacy filter separately.
4.1.1. Person recognition from pictures
In the first pass, participants were asked to recognise a given unfiltered image of a person in a
given set of filtered pictures. The filtered images were uniformly filtered using the same filter
type. To avoid bias due to the background or illumination, the images from the test set were
extracted from a different camera view point. The selected images were extracted from PEViD
dataset [11]. A sample of this test is shown in Figure 1. The score resulting from this test
contributed to the assessment of the Disambiguity and Efficacy criteria through the level of
recognisability.
Figure 1: Original (unfiltered) and filtered (pixelated) images, matched for person recognition
4.1.2. Questionnaire on videos
In the second stage, the participants were asked to watch a set of privacy-filtered videos from the
PEViD dataset [11] representing different indoor and outdoor scenarios and featuring single and
multiple persons, and had to answer a questionnaire relating to different aspects of the filter.
From a meta-descriptive and interpretivist perspective, a set of questions were designed to
examine the aesthetics i.e. pleasantness of the visual effects resulting from the privacy filter and
the level of distraction that might be consequently experienced by the viewer. Another question
was devised to assess the stylistic congruence or suitability of the blended visual effects in the
privacy-filtered region of the video-frame. The scores of the answers for these questions were
collectively considered to reflect the aesthetic criterion.
Other questions were designed to detect whether the filter could trick the viewer into thinking
some features were still obvious, such as the gender and the ethnicity. These questions were
given in pairs, with the first one being “Is it possible to guess the person’s gender?”, and the
second: “What is the gender of the person?” The ability of the viewer to answer this pair of
questions correctly would indicate that the filter as deployed was inadequately effective.
However, answering “Yes” to the first question but failing to answer the second one correctly
would mean that the viewer's confidence was misplaced, thus indicating effective
performance by the filter as deployed in masking the gender of the person in the image.
The criterion of Consistency was examined through questions designed to assess the ability of the
viewers to track privacy-filtered persons/objects as well as measure the stability of the
privacy-filtering effects across the video frames of the dataset and from different viewpoints. On the
other hand, the Disambiguity criterion was assessed through questions seeking to examine if the
viewers were able to distinguish multiple persons and identify their distinctive accessories and
(re)appearances. A final set of questions was incorporated to evaluate whether the filtered
video-frames still retained sufficient informative-ness to support the expected surveillance
purposes, in terms of the information preserved by the privacy filter and whether the viewers
could still discern the main events shown in the video and how these were initiated.
4.2. Objective metrics
In this section we describe the techniques used to produce the objective metrics, followed by the
criteria mapping strategies. Figure 2 illustrates an overview of the proposed objective evaluation
for video privacy protection solutions.
Figure 2: UI-REF-based objective privacy filters evaluation toolkit
4.2.1. Person re-identification
Originally, person re-identification referred to the process of determining whether a given
detected or tracked individual has already appeared in a different image or camera view. This
concept was exploited to assist in the evaluation of the proposed evaluation criteria using an
appearance-based model with a supervised learning approach. In order to build a discriminative
object model, a pyramid dense feature detector was implemented to provide a sufficient number
of key points at different scales followed by a SURF descriptor [12] which is known to be
invariant to the pose, viewpoint and illumination changes. A bag-of-words representation [17]
was then generated from the extracted descriptors to form a holistic model of the object
appearance. For the machine learning of the resulting model, we used a supervised SVM
classifier with a non-linear Radial Basis Function (RBF) kernel.
For each person, we first created a training dataset using selected frames featuring the same
subject from different viewpoints as extracted from PEViD training set. Thereafter we built an
object classifier using the original unfiltered data and a second classifier based on the respective
filtered frames.
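The appearance-model pipeline described above (dense keypoints, local descriptors, a bag-of-words representation, and a supervised RBF-kernel SVM) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: random vectors substitute for the dense SURF descriptors extracted from PEViD frames, and scikit-learn's KMeans and SVC stand in for the vocabulary-learning and classification stages.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histogram(descriptors, vocab):
    # quantise local descriptors against the visual vocabulary (bag-of-words)
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
# stand-in local descriptors for two subjects (a real pipeline would use
# dense-keypoint SURF descriptors here)
subj_a = rng.normal(0.0, 1.0, size=(200, 16))
subj_b = rng.normal(3.0, 1.0, size=(200, 16))

# learn a small visual vocabulary over all descriptors
vocab = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack([subj_a, subj_b]))

# one BoW histogram per "image"; two images per subject
X = [bow_histogram(d, vocab) for d in (subj_a[:100], subj_a[100:], subj_b[:100], subj_b[100:])]
y = [0, 0, 1, 1]
clf = SVC(kernel="rbf").fit(X, y)  # supervised SVM with a non-linear RBF kernel
```

Querying a filtered image against the unfiltered classifier (or vice versa) then amounts to calling `clf.predict` on the query's BoW histogram, as in the three testing strategies described below.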
To evaluate the privacy-protected videos, three (3) main testing strategies were used: i) query the
image of a filtered person against the unfiltered classifier, ii) query the image of an unfiltered
person against the filtered classifier, and iii) query the image of a filtered person against the
filtered classifier. The Anonymity score was obtained by applying the first (i) and the second (ii)
strategies using a one-vs.-one scheme (i.e. matched against a single object classifier). The
Consistency score was produced through the third strategy also using a one-vs.-one scheme;
whereas, the F-score (Eq.1) was computed using the third strategy with a one-vs.-all scheme (i.e.
matched against all objects' classifiers); as:

F-score = 2 · (precision · recall) / (precision + recall)    (Eq.1)
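Eq.1 is the standard harmonic mean of precision and recall; a minimal sketch (the function name `f_score` is ours, not from the paper):

```python
def f_score(precision, recall):
    """Eq.1: harmonic mean of precision and recall, in [0, 1]."""
    if precision + recall == 0:
        return 0.0  # avoid division by zero when both are zero
    return 2 * precision * recall / (precision + recall)
```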
A high Anonymity score would indicate a higher Efficacy of privacy protection afforded by the
privacy filtering techniques as deployed in each case. In the case of the re-identification
Consistency score, a high score indicated that the filtered video still carried sufficient information
to enable an observer to perform tasks such as person tracking across images from the CCTV
network without finding out the person's identity. Finally, a higher F-score indicated a higher
Disambiguity capability arising from the deployment of the filtering technique whose Efficacy
was being thus evaluated.
4.2.2. Object tracking
Human detection and tracking based on the Histogram of Oriented Gradients (HOG) [13] was
deployed for object tracking. The ability to successfully detect humans after the application of a
privacy filter signifies that the resulting video could still carry sufficient visible clues to serve the
surveillance purposes of video analytics including object tracking. The baseline was defined as
the detection and tracking rate when performed on the original unfiltered videos. The final
Trackability score was defined in (Eq.2) as the ratio of the minimum to the maximum of the
overlap area between the ground truth bounding box (i.e. the one detected in the original frame)
and the one detected in the filtered frame.

Trackability = min(A_ov, A_GT) / max(A_ov, A_GT)    (Eq.2)

where A_ov = area(ground truth ∩ detected) and A_GT = area(ground truth).

The calculated Trackability score was compared against the baseline following the procedure as
described in Algorithm 1. The final value of this metric was used in the Disambiguity as well as
the Intelligibility criteria.
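Eq.2 can be implemented directly from axis-aligned bounding boxes; a minimal sketch, assuming a `(x, y, w, h)` box format (our choice, not specified in the paper):

```python
def overlap_area(a, b):
    # intersection area of two (x, y, w, h) axis-aligned boxes
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def trackability(gt_box, det_box):
    # Eq.2: min/max ratio of A_ov (overlap with ground truth) and A_GT
    a_ov = overlap_area(gt_box, det_box)   # area(ground truth ∩ detected)
    a_gt = gt_box[2] * gt_box[3]           # area(ground truth)
    denom = max(a_ov, a_gt)
    return min(a_ov, a_gt) / denom if denom else 0.0
```

A perfectly aligned detection yields 1.0; a detection shifted by half a box width yields 0.5; a disjoint detection yields 0.0.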
4.2.3. Face detection
The Viola-Jones cascade classifier [14] was trained to detect faces and was then applied to the
privacy-protected videos. Ideally, no faces should be found, since they should all be obscured.
The faces found by the face detection algorithm were matched against the ground truth following
the same procedure as in Algorithm 1 to avoid taking into account any false positives as may be
output by the detection algorithm. The FaceDetection score was calculated based on (Eq.3).
FaceDetection = min(A_ov, A_GT) / max(A_ov, A_GT)    (Eq.3)

where A_ov = area(ground truth ∩ detected) and A_GT = area(ground truth).
A higher FaceDetection rate implies higher Intelligibility, whereas a lower detection rate would
be consistent with a higher level of Efficacy in privacy protection.
4.2.4. Histogram comparison
The 2D Hue-Saturation histograms of the filtered and unfiltered face region were computed. The
Correlation metric of the two histograms was calculated to produce numerical parameters that
expressed how well the two histograms H1 and H2 matched each other following (Eq.4).
Correlation(H1, H2) = Σ_I (H1(I) − H̄1)(H2(I) − H̄2) / √( Σ_I (H1(I) − H̄1)² · Σ_I (H2(I) − H̄2)² )    (Eq.4)

where H1 = filtered face histogram and H2 = original face histogram.
The higher the metric, the more accurate the matching scores. This metric was used to evaluate
the Aesthetics after-effects arising from the deployment of a given privacy filter.
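Eq.4 is the Pearson correlation of the two histograms (the same quantity OpenCV's `HISTCMP_CORREL` mode computes); a NumPy sketch, with `hist_correlation` as our own name:

```python
import numpy as np

def hist_correlation(h1, h2):
    """Eq.4: Pearson correlation between two histograms, in [-1, 1]."""
    h1 = np.asarray(h1, dtype=float).ravel()
    h2 = np.asarray(h2, dtype=float).ravel()
    d1, d2 = h1 - h1.mean(), h2 - h2.mean()   # centre each histogram
    denom = np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())
    return float((d1 * d2).sum() / denom) if denom else 0.0
```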
4.2.5. Template matching
This technique was exploited to find the areas of an image that could be matched to a template
image. A cropped image of an unfiltered face was used as a template T(x’,y’) and matched by
sliding it over the corresponding filtered frame I(x,y). Using the squared difference method
(Eq.5), a 2D result matrix R was generated with each value standing for a match metric.
R(x, y) = Σ_{x′,y′} ( T(x′, y′) − I(x + x′, y + y′) )²    (Eq.5)
The best match could be found as the global minimum of the R matrix, which represented the
corner of the candidate detection bounding box C, with dimensions equivalent to those of the
template image T.
The Matching score was later given by the area of the intersection between the detection C and
template T bounding boxes divided by the template area. The Matching score was obtained using
(Eq.6).
Matching = area(T ∩ C) / area(T)    (Eq.6)
Ideally, after applying a privacy protection filter there should be no match between the
before-and-after-effects frames; this is reflected in the Efficacy, Intelligibility, and Aesthetics
criteria.
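The squared-difference search of Eq.5 (equivalent in spirit to OpenCV's `TM_SQDIFF` method) can be sketched in NumPy; the brute-force loop is for clarity, not speed:

```python
import numpy as np

def ssd_match(frame, template):
    """Eq.5: scan the squared-difference map R and return the (y, x)
    corner of the best candidate box, i.e. the global minimum of R."""
    fh, fw = frame.shape
    th, tw = template.shape
    best_r, best_yx = None, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            r = float(((frame[y:y + th, x:x + tw] - template) ** 2).sum())
            if best_r is None or r < best_r:
                best_r, best_yx = r, (y, x)
    return best_yx
```

The Matching score of Eq.6 then follows by intersecting the candidate box at the returned corner with the template's ground-truth box and dividing by the template area.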
4.2.6. Structural similarity (SSIM)
The SSIM index [15] is designed to measure the structural similarity and the alteration between
two images. In a full reference fashion, the image quality of the filtered video frame was
measured based on the corresponding unfiltered copy as a reference. SSIM is considered an
improvement on the traditional peak signal-to-noise ratio (PSNR) in terms of compatibility with
the human visual perception system. A successful privacy filtering system should have a minimal
impact on the global quality of the image, with modifications occurring only on the
privacy-sensitive areas which should be anonymised. This metric was used to evaluate the Aesthetics
criterion.
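For reference, over a single window the SSIM index reduces to the closed form below. This is a simplified global-statistics version of our own (the full SSIM of [15] averages this quantity over local sliding windows):

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Simplified SSIM computed from global image statistics.
    L is the dynamic range of the pixel values (255 for 8-bit images)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stabilising constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images score 1.0; a heavily altered filtered frame scores lower, indicating a larger aesthetic impact.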
4.2.7. Selective fusion
The final scores for the evaluation criteria resulting from the objective metrics were calculated by
selectively combining the most relevant subset of the metrics as mapped in Figure 3. Regarding
the Efficacy, we were interested in how effectively the privacy filters performed in obscuring the
privacy-sensitive information. Accordingly, we proposed to combine the FaceDetection rate in
addition to the anonymity score of the re-identification system as well as the template matching
output as given in the following equation.
Efficacy = Avg[ (1 − FaceDetection) + Anonymity + (1 − Matching) ]    (Eq.7)

The re-identification score of Consistency was sufficient to judge the Consistency criterion of the
privacy-protected video-frames.

Consistency = Consistency(re-id)    (Eq.8)

To obtain the overall Disambiguity score of the examined filter, an average combination of the
object tracking performance and the F-score measure for the person re-identification system was
used; as follows:

Disambiguity = Avg[ Trackability + F-score ]    (Eq.9)

The Intelligibility of the privacy filter was inferred from the scores as previously computed for
the face detector, the object tracker, and, the template matching metrics; as follows:

Intelligibility = Avg[ FaceDetection + Trackability + Matching ]    (Eq.10)

Finally, the level of the aesthetic appeal of the visual effects arising from the application of a
privacy filter was determined based on the correlation of the histograms of before-and-after
effects combined with the template matching and SSIM index.

Aesthetics = Avg[ Correlation + Matching + SSIM ]    (Eq.11)
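The selective-fusion step (Eqs. 7 to 11) is plain averaging over the mapped metric subsets; a sketch, with dictionary keys matching the metric names in Table 1 (the `fuse` helper is our own):

```python
def avg(*scores):
    # simple arithmetic mean of the supplied metric scores
    return sum(scores) / len(scores)

def fuse(m):
    """Map raw objective metrics to the five criteria scores (Eqs. 7-11)."""
    return {
        "Efficacy": avg(1 - m["FaceDetection"], m["Anonymity"], 1 - m["Matching"]),
        "Consistency": m["Consistency"],
        "Disambiguity": avg(m["Trackability"], m["F-Score"]),
        "Intelligibility": avg(m["FaceDetection"], m["Trackability"], m["Matching"]),
        "Aesthetics": avg(m["Correlation"], m["Matching"], m["SSIM"]),
    }
```

Feeding in the Disc column of Table 1, for example, gives Aesthetics = (0.65 + 0.59 + 0.74) / 3 = 0.66.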
Figure 3: UI-REF-based privacy filters performance evaluation metrics mapping
5. EXPERIMENTS AND RESULTS
In this section we discuss the procedure for conducting the survey, and, the selection of videos
from the dataset for the testing and the simulation of privacy filters to which the objective and
subjective evaluation processes were consistently applied to arrive at the results that are
reported in this paper.
5.1. Dataset
The PEViD dataset [11] was specifically created for impact assessment of the privacy protection
technologies. The dataset consists of two subsets (training and testing) of videos collected from a
range of standard and high resolution cameras and contains clips of different scenarios showing
one or several persons walking or interacting in front of the cameras. The actors may also carry
specific items which could potentially reveal their identity and may therefore need to be filtered
appropriately. The actors are featured carrying backpacks, umbrellas, wearing scarves, and, can
be seen fighting, pickpocketing or simply walking around. Actors may be at a distance from the
camera or near the camera, making their faces vary considerably in pixel size and quality. The
ambient lighting conditions of the videos vary widely as half of the clips were recorded at night.
The dataset used contained (20) video clips and associated ground truth in XML format as well as a
foreground mask. The ground truth consisted of annotations of persons, faces, skin regions, and
personal accessories. The videos included indoor, outdoor, day-time and night-time
environments, showing people interacting or performing various actions. The video-clips were
made available in MPEG format with a resolution of 1920x1080 pixels at a rate of (25) frames
per second. Figure 4 depicts a sample frame from the described dataset.
Figure 4: Sample data from the PEViD dataset: (a) original frame, (b) foreground mask,
(c) annotated frame
5.2. Filters
In order to validate the framework, several filters which covered a range of image manipulation
techniques for privacy protection were implemented and evaluated; as follows:
• Disc: variable-size discs were drawn on top of the original image on a regular grid. The
colour of each disc was the colour of the pixel underneath its centre; Figure 5, top middle.
• Median: a median blur was applied to the image; Figure 5, top right.
• Pixelate: the image was simply down-sampled and up-sampled back to its original size
with nearest-neighbour interpolation; Figure 5, bottom left.
• Resample: the image was first down-sampled and then up-sampled back to its original
size using the Lanczos re-sampling method [16]; Figure 5, bottom middle.
• Scramble-blur: in the first pass, we computed the DCT transform of each 8x8 block and
switched the sign of 4 coefficients selected randomly within the first 5 DC coefficients of
any colour channel; in the second pass, we applied a median blur; Figure 5, bottom right.
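The Pixelate filter above can be reproduced in a few lines of NumPy; a minimal sketch (the block size and function name are our choices; nearest-neighbour up-sampling is done with `repeat`):

```python
import numpy as np

def pixelate(img, block=8):
    """Down-sample by taking every `block`-th pixel, then up-sample back
    with nearest-neighbour interpolation (each sample fills a block)."""
    h, w = img.shape[:2]
    small = img[::block, ::block]                             # down-sample
    out = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
    return out[:h, :w]                                        # restore size
```

Applied only inside the person's foreground mask, this obscures facial detail while keeping the coarse colour layout that tracking relies on.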
Figure 5: The privacy filters considered: Original image, Disc, Median, Pixelate, Resample, and
Scramble-blur
5.3. Survey procedure
Ten (10) male adults with normal vision participated in this survey. Participants’ ages ranged
between (21-35) years old. None of the participants had seen the test data previously or knew the
actors. To prepare the data, each of the above described filters (in section 5.2) was applied to five
different video-clips from the PEViD dataset representing different scenarios. The privacy filter
aimed to cover the region of the video frame where a person was located. Each participant was
asked to view five videos which were filtered using two different filters. The participant was
required to answer a separate set of questions for each video. The questionnaire had been
carefully designed to probe for the content of the examined videos and was focused on the
proposed evaluation criteria. The questions had been prioritised by considering the precedence
and sequentiality effects due to any incremental observations that might arise from the ordering
of the questions and the staging of the overall subjective evaluation. The participants performed
their evaluations alone and independently.
5.4. Simulation results
This section reports and discusses the results obtained from the objective and the subjective
evaluation and the corresponding validation process.
5.4.1. Objective evaluation
The five types of filters as described in the previous section were evaluated using the proposed
objective metrics separately. The filters were applied to the region of the video frame where a
person was located, corresponding to the foreground mask. The average scores of each filter for
the metrics computed on the PEViD dataset are listed in Table 1.
                 Disc    Median  Pixelate  Resample  Scramble
FaceDetection    0.05    0.26    0.41      0.16      0.06
Anonymity        0.49    0.46    0.46      0.48      0.49
Consistency      0.57    0.58    0.56      0.54      0.54
F-Score          0.54    0.55    0.59      0.58      0.53
Trackability     0.15    0.63    0.46      0.55      0.34
Correlation      0.65    0.64    0.65      0.65      0.64
Matching         0.59    0.96    0.96      0.90      0.52
SSIM             0.74    0.86    0.82      0.79      0.68

Table 1: Average metrics score of the tested privacy filters
A comparison of the filters’ performance is shown in Figure 6. The overall scores have been
normalised and represented such that a higher score indicates better privacy-filter performance,
except for FaceDetection, for which no detection is the most desirable outcome.
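This score orientation can be reproduced with a small helper. The min-max rescaling below is our own assumption, since the exact normalisation used for the chart is not spelled out:

```python
def orient_scores(scores, lower_is_better=False):
    """Min-max rescale raw metric scores to [0, 1] so that higher is
    better; metrics where a low raw value is desirable (e.g. face
    detection, where no detection is the ideal outcome) are inverted."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # avoid division by zero for constant scores
    norm = [(s - lo) / span for s in scores]
    return [1.0 - v for v in norm] if lower_is_better else norm

# FaceDetection scores from Table 1 (Disc, Median, Pixelate, Resample, Scramble)
oriented = orient_scores([0.05, 0.26, 0.41, 0.16, 0.06], lower_is_better=True)
```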
Figure 6: Privacy filter performance based on different quality metrics
The FaceDetection metric was applied to the privacy filtered videos in order to examine if it was
capable of detecting the faces after the filtering effect had been applied. Ideally no faces should
be detected. In this test, Pixelate had the highest success rate as it preserved the spatial
arrangement of the facial features. The Median filter was ranked next due to the fact that it only
blurred the face region with some detail still visible.
Although Resample shared the same approach as Pixelate, the use of the Lanczos method instead
of linear up-sampling evidently led to a smoother filter output and thus made FaceDetection more
challenging. Finally, the Disc and Scramble filters caused the highest failure rate for the face
detector, owing to significant image deformation in the case of Disc and to spatial
re-arrangement in the case of Scramble.
The re-identification metrics, namely Anonymity, Consistency, and F-Score, produced similar
scores for the tested filters, all around the mid-point. This was because our re-identification
scheme relied heavily on colours, which none of the filters changed significantly.
Object tracking using HOG was relatively successful in tracking the Median-filtered person, as
the filtered image still carried sufficient appearance information for object tracking. The
Resample and Pixelate filters performed worse than the Median, while the Disc and Scramble
filters had the lowest trackability scores, as would be expected given their level of image
modification.
The histogram correlation metric scores were similar for all filter types, which could be expected
given the nature of the filters in use.
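A histogram comparison of this kind is essentially a Pearson correlation between the two colour histograms (as in OpenCV's HISTCMP_CORREL mode); a pure-Python sketch, where the zero-variance guard is our own assumption:

```python
def hist_correlation(h1, h2):
    """Pearson correlation between two equal-length histograms:
    1.0 for identical shapes, -1.0 for perfectly opposed ones."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = (sum((a - m1) ** 2 for a in h1)
           * sum((b - m2) ** 2 for b in h2)) ** 0.5
    return num / den if den else 0.0  # guard for flat histograms
```

Because the filters leave the colour distribution largely intact, the filtered and original histograms correlate similarly across all filter types.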
On the other hand, Median and Pixelate filters achieved an equally high score for the template
matching metric followed by Resample. The score for Disc and Scramble was significantly lower
than the rest of the filters.
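Template matching of this kind can be sketched as an exhaustive sliding-window search. The minimal sum-of-squared-differences version below is illustrative; production systems typically use normalised cross-correlation instead:

```python
def match_template(img, tpl):
    """Slide tpl over img (both 2D lists) and return the (row, col)
    offset with the smallest sum of squared differences,
    i.e. the best match location."""
    H, W = len(img), len(img[0])
    h, w = len(tpl), len(tpl[0])
    best_ssd, best_pos = None, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = sum((img[y + dy][x + dx] - tpl[dy][dx]) ** 2
                      for dy in range(h) for dx in range(w))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos
```

Filters that preserve local structure (Median, Pixelate) leave a template from the filtered region easy to re-locate, which is consistent with their high Matching scores.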
With respect to the Structural Similarity (SSIM) metric, the Disc and Scramble filters scored
slightly lower than the other techniques; this was in line with their trackability scores, which
were also the lowest. However, all the tested filters produced relatively high scores, indicating
that an acceptable filtering effect and a moderate level of modification had been achieved in
comparison with more severe obscuring methods such as a solid coloured mask or virtual object
placement.
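For reference, the SSIM index of Wang et al. [15] reduces, in its global (single-window) form, to the expression below. The sketch treats images as flat intensity sequences and omits the sliding window used in practice:

```python
def global_ssim(x, y, L=255):
    """Global SSIM between two equal-length intensity sequences, using
    the standard stabilising constants C1=(0.01*L)^2 and C2=(0.03*L)^2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n      # variance of x
    vy = sum((b - my) ** 2 for b in y) / n      # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical inputs score exactly 1.0, and the score falls as the filtered region's luminance, contrast, or structure diverges from the original.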
Based on the above, we can conclude that the stand-alone individual metric scores are not
sufficiently informative. However, the selectively fused scores, as proposed to meet the
evaluation criteria, proved to be an efficient method yielding more meaningful measures for the
critical and comparative analysis of the privacy-protective performance of alternative privacy
filtering solutions.
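The idea of selective fusion can be illustrated as follows; the metric-to-criterion mapping and the unweighted averaging shown here are hypothetical examples only, since the actual assignments and fusion rules are defined in section 4.2.7:

```python
def fuse(metric_scores, criterion_map):
    """Selective fusion sketch: each evaluation criterion averages its
    associated low-level metric scores, and the overall merit SCORE
    averages the criteria (unweighted, for illustration)."""
    criteria = {c: sum(metric_scores[m] for m in ms) / len(ms)
                for c, ms in criterion_map.items()}
    overall = sum(criteria.values()) / len(criteria)
    return criteria, overall

# Hypothetical mapping, using two of Table 1's metrics for the Disc filter
scores = {"FaceDetection": 0.05, "Anonymity": 0.49, "SSIM": 0.74}
mapping = {"Efficacy": ["FaceDetection", "Anonymity"],
           "Aesthetics": ["SSIM"]}
criteria, overall = fuse(scores, mapping)
```

Replacing the plain means with per-metric weights would give the weighted selective fusion strategy suggested as future work.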
Table 2 summarises the overall merit scores for the evaluated filters, as calculated using the
selective fusion methods described in section 4.2.7. A comparison of the performance of the
evaluated filters in terms of the evaluation criteria is shown in Figure 7.
                  Disc    Median   Pixelate   Resample   Scramble
Efficacy          0.32    0.44     0.39       0.49       0.64
Consistency       0.57    0.58     0.56       0.54       0.54
Intelligibility   0.27    0.62     0.61       0.53       0.31
Aesthetics        0.66    0.82     0.81       0.78       0.61
Disambiguity      0.34    0.59     0.53       0.56       0.44
SCORE             0.39    0.49     0.50       0.44       0.40

Table 2: Overall evaluation score of the objectively tested privacy filters
The effectiveness of the tested filters was generally low, suggesting the need to explore more
advanced filtering methods. The only exception was the Scramble method, which achieved the
highest level of image modification.
Overall, the filters were assessed to be of average value for the Consistency criterion, which
could allow reasonable tracking performance. Comparatively, the filters with the highest scores
for the Intelligibility criterion were the Median and the Pixelate, closely followed by the
Resample filter, while the Disc and Scramble scored significantly lower.
Aesthetically, all the filters scored above average, although the Median outscored the rest of the
filters. The Disc filter scored the lowest for the Disambiguity criterion, followed by the Scramble
filter, whereas the Median, Pixelate, and Resample filters performed above the mid-point.
Figure 7: Comparison of the objective performance of the privacy filters in terms of the evaluation criteria
The final scores of the tested filters in terms of the evaluation criteria demonstrated the
effectiveness of the proposed UI-REF-based framework to critically and objectively evaluate
privacy-protecting filters. As expected, the simplicity of the evaluated filters has led to a below
average overall score emphasising the need for more effective filters.
5.4.2. Subjective evaluation
To ensure that the subjective evaluation was conducted systematically, each volunteer
participating in the subjective evaluation was provided with the same set and number of
video-clips, privacy filtered using each of the same set of five privacy filters; in this way the
evaluators all received identical sets. Figure 8 depicts the averaged responses of the participants
to score against each of the evaluation criteria. In terms of Efficacy, Scramble and Disc filters
outperformed the other filters as most of the participants found them more effective in hiding the
features of the person’s appearance. The Consistency values were relatively similar amongst the
tested filters; this showed the same trend as the results for the equivalent objective evaluation.
As expected, the score for the Intelligibility criterion was significantly higher for all the filters
in the subjective evaluation than in the objective measures; this highlighted the difference
between human visual perception and computer vision performance. In terms of Aesthetic
appeal, only the subjective results for the Pixelate filter were found to be inconsistent with the
corresponding results from the objective evaluation; this was considered to be because the
Pixelate filter boxes up and blocks off the filtered areas. Although the average Disambiguity
scores from the subjective tests were higher than the ones arrived at through the objective
evaluation, the two sets of results indicated the same trend.
Figure 8: Comparison of the subjective performance of privacy filters in terms of evaluation criteria
Table 3 summarises the averaged values of all the responses obtained from the questionnaire.
The overall scores were noticeably higher than the corresponding scores from the objective
evaluation.
                  Disc    Median   Pixelate   Resample   Scramble
Efficacy          0.36    0.11     0.11       0.17       0.39
Consistency       0.79    0.74     0.74       0.81       0.84
Intelligibility   0.89    0.96     0.97       0.97       0.92
Aesthetics        0.65    0.70     0.61       0.78       0.69
Disambiguity      0.74    0.84     0.80       0.84       0.71
SCORE             0.69    0.67     0.65       0.72       0.71

Table 3: Overall evaluation score of the subjectively tested privacy filters
6. CONCLUSION
This paper reports the results of a research study which has applied the UI-REF Requirements
Prioritisation and Holistic system-of-systems-scale Impact Assessment (H-PIA) Methodology for
Privacy-by-Co-Design. This has been supported by an integrated high-resolution objective and
subjective evaluation of the performance and impacts of various privacy filtering techniques. We
have outlined the UI-REF ontological commitment to the negotiation of situated
context-content-purpose and the analysis of surveillance solutions as underpinned by the UI-REF
decisional framework. This is to support socio-ethically reflective and normatively adaptive
category judgments and attributions, e.g. as to the Suspect-ness of an individual whose image has been
captured in some surveillance video dataset. As a precursor to subsequent studies that would
incorporate additional criteria applied to a wider set of contexts (as set out in Badii 2013 [5]) we
have implemented a sub-set of the UI-REF-based Holistic Privacy Impact Assessment (H-PIA)
metrics as a set of five evaluation criteria that enable the integrative objective and subjective
evaluation of multi-modal multimedia privacy protection solutions. Accordingly a number of
filtering techniques have been evaluated and the simulation results have demonstrated the
consistency and efficiency of the framework. The scores of the implemented objective metrics
have been mapped to reflect each evaluation criterion as well as the subjective findings. Future
work could usefully apply the framework to more video analytics algorithms in order to evaluate
the videos at a finer level of granularity. Furthermore, the fusion of results with low-level
metrics could be enhanced by introducing a weighted selective fusion strategy and experimenting
with an extended set of privacy filters. This work has been motivated by the commitment to
provide rigorous benchmarking mechanisms to support Privacy Impact Assessment as an integral
process within evolutionary socio-ethical co-design of surveillance systems. The full engagement
of citizens in holistic societal impact assessment of privacy failure risks is likely to remain an
unattainable ideal unless the interlocutors of the Privacy-by-Co-Design negotiations are
symmetrically empowered with technological enablers to make transparent to all just how
citizens’ personal data are handled and used in what contexts by whom and for what purpose.
This must include mechanisms for robust forensic assessment of the performance efficacy of the
proposed privacy protection solutions in responding to citizens’ most deeply valued needs in each
of the evolving multi-level multi-modal situated contexts of their lifestyle as can be dynamically
(re)defined and (re)negotiated by them with all implicated stakeholders. The proposed
UI-REF-based methodological framework is accordingly fully equipped to support such
operationalisation of Privacy-by-Co-Design, to break away from the abundant rhetoric of the
oft-avowed privacy-caring aspirations and idealism of the past, towards enablers that support the
full practical realisation of an inclusivist Privacy-by-Co-Design framework enriched by
collectively actionable engagement and reflective practice as empowered by transparent
accountability.
ACKNOWLEDGEMENTS
This work was supported by the European Commission under contract FP7-261743 (VideoSense
project).
REFERENCES
[1] Badii, A., "User-Intimate Requirements Hierarchy Resolution Framework (UI-REF): Methodology for Capturing Ambient Assisted Living Needs", Proceedings of the Research Workshop, Int. Ambient Intelligence Systems Conference (AmI'08), Nuremberg, Germany, November 2008.
[2] Senior, Andrew, et al., "Blinkering surveillance: Enabling video privacy through computer vision", IBM Technical Paper, RC22886 (W0308-109), 2003.
[3] Senior, Andrew, "Privacy enablement in a surveillance system", 15th IEEE International Conference on Image Processing (ICIP 2008), IEEE, 2008.
[4] Badii, A., Einig, M., Tiemann, M., Thiemert, D. and Lallah, C., "Visual context identification for privacy-respecting video analytics", IEEE 14th International Workshop on Multimedia Signal Processing (MMSP 2012), 17-19 Sep 2012, Banff, Canada, pp. 366-371.
[5] Badii, A., "Framework for Requirements Prioritisation, Usability Evaluation and Holistic Impact Assessment of Privacy-preserving Video Analytics by Co-Design", Working Paper UoR-ISR-VS-2013-4, September 2013.
[6] Dufaux, Frederic, and Ebrahimi, Touradj, "Scrambling for privacy protection in video surveillance systems", IEEE Transactions on Circuits and Systems for Video Technology, 18.8 (2008): 1168-1174.
[7] Boyle, Michael, Christopher Edwards, and Saul Greenberg, "The effects of filtered video on awareness and privacy", Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, ACM, 2000.
[8] Dufaux, Frederic, "Video scrambling for privacy protection in video surveillance: recent results and validation framework", SPIE Defense, Security, and Sensing, International Society for Optics and Photonics, 2011.
[9] Zhao, Qiang Alex, and John T. Stasko, "Evaluating image filtering based techniques in media space applications", Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work, ACM, 1998.
[10] Boyle, Michael, Christopher Edwards, and Saul Greenberg, "The effects of filtered video on awareness and privacy", Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, ACM, 2000.
[11] Korshunov, Pavel, and Ebrahimi, Touradj, "PEViD: privacy evaluation video dataset", Applications of Digital Image Processing XXXVI, San Diego, California, USA, August 25-29, 2013.
[12] Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool, "SURF: Speeded Up Robust Features", Computer Vision - ECCV 2006, Springer Berlin Heidelberg, 2006, pp. 404-417.
[13] Dalal, Navneet, and Bill Triggs, "Histograms of oriented gradients for human detection", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), Vol. 1, IEEE, 2005.
[14] Viola, Paul, and Michael J. Jones, "Robust real-time face detection", International Journal of Computer Vision, 57.2 (2004): 137-154.
[15] Wang, Zhou, et al., "Image quality assessment: From error visibility to structural similarity", IEEE Transactions on Image Processing, 13.4 (2004): 600-612.
[16] Turkowski, Ken, "Filters for common resampling tasks", Graphics Gems, Academic Press Professional, Inc., 1990.
[17] Fei-Fei, Li, and Pietro Perona, "A Bayesian hierarchical model for learning natural scene categories", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), Vol. 2, IEEE, 2005.
Authors
Atta Badii
Senior Professor, Secure Pervasive Technologies, Founding Director of the Intelligent Systems
Research Laboratory (www.isr.reading.ac.uk), University of Reading, School of Systems
Engineering, and Distinguished Professor of Systems Engineering and Digital Innovation,
University of Cordoba. Atta has multi-disciplinary academic and industrial research experience in
the fields of Distributed Intelligent and Multi-modal Interactive Systems, Pattern Recognition,
Security, Trust and Privacy-Preserving Architectures, and Semantic Media Technologies. Atta's
research stands at the confluence of intelligent interactive systems and human agent modelling as
informed by the well-established principles of psycho-cognitively-based requirements elicitation,
user-centred prioritisation, dynamic usability-relationship-based evaluation, and co-design and
innovation of socially responsible systems-of-systems. He has pioneered such approaches since
1997, and the UI-REF-based application of context-aware video privacy protection and its
evaluation since 2011.
Ahmed Al-Obaidi
Ahmed Al-Obaidi received his BSc degree in computer engineering from the University of Mosul in
2002 and the MSc degree in computer systems engineering from Putra University in 2008. His R&D
career began at MIMOS Bhd., Malaysia in 2007 where he was involved in projects for intelligent
video surveillance. Ahmed joined the ISR Laboratory at the University of Reading, in April 2013
within the Joint Research Activity Programme of the European Centre for Video-Analytics Research
(VideoSense) led by Professor Badii. Ahmed’s research interests include computer vision, machine
learning, and context-aware video privacy protection.
Mathieu Einig
In 2008 Mathieu Einig completed his BSc in Computer Science at Université Paul Sabatier,
Toulouse, France. This was followed by Mathieu's BSc (Hons) in Computer Graphics
Science, Teesside University, 2010. Mathieu’s research work at ISR focused on Semantic Media
Technology projects as an accomplished software engineer with keen research interests in Computer
Vision, Computer Graphics and Augmented Reality.
Aurélien Ducournau
In 2009, Aurélien Ducournau received his MSc in Computer Science and Image Analysis with
Honours from Université de Caen Basse-Normandie, France, followed by a 6-month research
internship at Télécom ParisTech, Paris, France, and a PhD in 2012 (with Distinction) in
Computer Science, Signal, Image and Vision at the National Engineering School of Saint-Etienne,
France. His research work at ISR, University of Reading mainly focused on Computer Vision,
Image and Video Processing and Analysis, Image Segmentation and Graph/Hypergraph theory.