One of the major problems in highly populated developing countries such as India is the energy, or power, crisis. Hence, there is a pressing need to conserve power. There are many simple ways to save electricity, such as using electric and electronic gadgets only when and where they are needed and switching them off when not in use. But in places such as large auditoriums and meeting halls, fans or air conditioners often keep running in unoccupied areas, even before people arrive. This contributes to a considerable amount of wasted electricity. There are several ways to prevent this wastage, such as installing IR sensors to detect people, but these methods are quite costly and complex for large areas. Hence, we propose a new method of controlling the power supply of auditoriums using image processing. First, a reference image of the empty auditorium is taken; any change from that reference image is detected, and only the equipment corresponding to the changed region is turned on. Thus power wastage is controlled. This is a dual-use system in which a single camera serves both for detecting people and for surveillance. It is a simple, efficient, and inexpensive technique for saving energy. Another big advantage is that it can be extended to applications such as home automation.
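The reference-image differencing idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the zone layout, threshold, and pixel-count rule are all assumptions chosen for the demo, and a real system would work on camera frames rather than toy lists.

```python
# Compare a current grayscale frame against an empty-room reference and
# report which zones changed enough to switch their equipment on.
# Zones, threshold, and min_changed are illustrative assumptions.

def occupied_zones(reference, current, zones, threshold=30, min_changed=4):
    """Return the zone names whose pixels differ noticeably from the reference.

    reference, current: 2-D lists of grayscale values (0-255).
    zones: dict mapping a zone name to (row0, row1, col0, col1).
    """
    active = []
    for name, (r0, r1, c0, c1) in zones.items():
        changed = sum(
            1
            for r in range(r0, r1)
            for c in range(c0, c1)
            if abs(current[r][c] - reference[r][c]) > threshold
        )
        if changed >= min_changed:   # enough pixels differ: someone is there
            active.append(name)
    return active

# Toy 6x6 frames: a "person" appears in the left half of the room.
ref = [[10] * 6 for _ in range(6)]
cur = [row[:] for row in ref]
for r in range(1, 5):
    for c in range(0, 2):
        cur[r][c] = 200

zones = {"left-fans": (0, 6, 0, 3), "right-fans": (0, 6, 3, 6)}
print(occupied_zones(ref, cur, zones))   # ['left-fans']
```

Only the equipment for the changed zone would be powered on; the untouched right half of the hall stays off.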
Improving Image Resolution Through the CRA Algorithm Involved Recycling Proce... — csandit
Image processing concepts are widely used in medical fields. Digital images are prone to a variety of types of noise. Noise is the result of errors in the image acquisition and reconstruction process that produce pixel values which do not reflect the true intensities of the real scene. Many researchers are working on the analysis and processing of multi-dimensional images, and since previous work has not fully resolved the open problems, performance-oriented research continues. In this paper we contribute novel research on analysing and improving image resolution. We propose the Concede Reconstruction Algorithm (CRA) Involved Recycling Process to address the remaining problems in the image-enhancement part of image processing. The CRA algorithm has been well received by researchers.
Introduction to image processing (or signal processing).
Types of image processing.
Applications of image processing.
Applications of digital image processing.
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
In computer science, digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal ...
International Journal of Engineering Research and Applications (IJERA) is an open-access online peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Image fusion is the process of combining two or more images so that specific objects appear with greater precision. It is very common that when one object is in focus, the remaining objects are less highlighted; to obtain an image that is well highlighted in all areas, a different means is necessary, and this is achieved by image fusion. In remote sensing, the increasing availability of spaceborne images and synthetic aperture radar images motivates different kinds of image fusion algorithms. In the literature a number of time-domain image fusion techniques are available, and a few transform-domain fusion techniques have been proposed. In transform-domain techniques, the source images are decomposed, integrated into a single data set, and reconstructed back into the time domain. In this paper, singular value decomposition is utilized as the tool for obtaining transform-domain data for image fusion. In the literature, the quality of fusion techniques is assessed mainly by subjective tests; in this paper, objective quality assessment metrics are calculated for the existing and proposed techniques. It has been found that the new image fusion technique outperformed the existing ones.
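To make the fusion idea concrete, here is the simplest time-domain rule the abstract alludes to: pixel-wise choose-max, which keeps, for each location, the value from whichever source image is brighter there. This is a baseline sketch on toy data, not the paper's SVD-based technique.

```python
# Choose-max fusion: for each pixel, keep the larger value of the two
# source images. A crude but classic time-domain fusion rule.

def fuse_choose_max(img_a, img_b):
    return [
        [max(a, b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

# Two toy images, each "in focus" (bright detail) in a different half.
a = [[200, 200, 10, 10]]
b = [[10, 10, 180, 180]]
print(fuse_choose_max(a, b))  # [[200, 200, 180, 180]]
```

The fused row carries the in-focus detail from both sources, which is exactly the goal stated above; transform-domain methods refine this by choosing coefficients instead of raw pixels.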
An image sensor or imaging sensor is a device that converts an optical image into an electric signal. It is used mostly in digital cameras and other imaging devices. This paper presents a high-speed simulation methodology to address the long simulation times of traditional CMOS image sensors. A method based on a SPICE model in the Cadence design platform is proposed to reduce the simulation time; the results show the simulation time reduced from 16 ms to 0.225 microseconds.
Adaptive denoising technique for colour images — eSAT Journals
Abstract
In digital image processing, noise removal or noise filtering plays an important role, because for meaningful and useful processing, images should not be corrupted by noise. In recent years, high-quality televisions have become very popular, but noise often affects TV broadcasts. Impulse noise corrupts video during the transmission and acquisition of signals. A number of denoising techniques have been introduced to remove impulse noise from images. Linear noise filtering techniques do not work well when the noise is non-adaptive in nature, and hence a number of non-linear filtering techniques were introduced. In non-linear filtering, median filters and their modifications were used to remove noise, but they resulted in blurring of images. Therefore, we propose an adaptive digital signal processing approach that can efficiently remove impulse noise from colour images. The algorithm is based on a threshold that is adaptive in nature: it replaces a pixel only if that pixel is found to be noisy; otherwise the original pixel is retained. This results in better filtering when compared with median filters and their modified variants.
Keywords: impulse noise, adaptive threshold, noise detection, colour video
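The detect-then-filter idea in the abstract can be sketched as below: a pixel is replaced by the median of its neighbourhood only when it deviates from that median by more than a threshold, so clean pixels are left untouched. The window size and threshold here are illustrative assumptions, not the paper's exact values.

```python
# Threshold-gated median filter: replace only the pixels detected as
# impulse noise; retain all others (the key difference from a plain
# median filter, which blurs everything).

def adaptive_impulse_filter(img, threshold=60):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(
                img[rr][cc]
                for rr in (r - 1, r, r + 1)
                for cc in (c - 1, c, c + 1)
            )
            med = window[4]                       # median of the 3x3 window
            if abs(img[r][c] - med) > threshold:  # detected as impulse noise
                out[r][c] = med                   # replace only noisy pixels
    return out

# A flat patch with one salt-noise pixel in the middle.
img = [[50] * 5 for _ in range(5)]
img[2][2] = 255
print(adaptive_impulse_filter(img)[2][2])  # 50: the impulse is removed
print(adaptive_impulse_filter(img)[1][1])  # 50: clean pixels are untouched
```

Because untouched pixels keep their original values, edges and texture survive where a plain median filter would smear them.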
An introduction to Digital Image Processing as a continuation of a classic Digital Signal Processing course delivered at the University of Plymouth (2011)
AN EMERGING TREND OF FEATURE EXTRACTION METHOD IN VIDEO PROCESSING — cscpconf
Recent progress in technology and flourishing applications open up new prospects and challenges for the image and video processing community. Compared to still images, video sequences afford more information about how objects and scenarios change over time. The quality of a video is very significant before applying any kind of processing technique. This paper deals with two major problems in video processing: noise reduction and object segmentation on video frames. Object segmentation using foreground-based segmentation and fuzzy c-means clustering segmentation is compared with the proposed method, improvised fuzzy c-means segmentation based on colour, which is applied to the video frames to segment the various objects in the current frame. The proposed technique is a powerful method for image segmentation, and it works for both single- and multiple-feature data with spatial information. Experiments were conducted using various noises and filtering methods to show which is best suited, and the proposed segmentation approach generates good-quality segmented frames.
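The fuzzy c-means clustering the abstract builds on can be illustrated on 1-D data. Real frame segmentation runs this on colour feature vectors per pixel; here two clusters are fit to toy intensity values. The fuzzifier m and iteration count are conventional defaults, not values from the paper.

```python
# Minimal 1-D fuzzy c-means: alternate between computing soft memberships
# of each point in each cluster and recomputing cluster centers as
# membership-weighted means.

def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    centers = xs[:c]                        # crude initialisation
    for _ in range(iters):
        # membership of each point in each cluster
        u = []
        for x in xs:
            dists = [abs(x - v) or 1e-12 for v in centers]
            u.append([
                1.0 / sum((dists[i] / dists[j]) ** (2 / (m - 1))
                          for j in range(c))
                for i in range(c)
            ])
        # recompute cluster centers from weighted memberships
        centers = [
            sum(u[k][i] ** m * xs[k] for k in range(len(xs)))
            / sum(u[k][i] ** m for k in range(len(xs)))
            for i in range(c)
        ]
    return sorted(centers)

# Two intensity groups: dark pixels near 10, bright pixels near 200.
print(fuzzy_c_means([8, 10, 12, 198, 200, 202]))
```

Each pixel is then assigned to the cluster in which its membership is highest, which is what yields the segmented frame.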
International Journal of Computational Engineering Research (IJCER) — ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Cassandra-Based Image Processing: Two Case Studies (Kerry Koitzsch, Kildane) ... — DataStax
In this presentation, we will detail two image processing applications that rely on a Cassandra-centric architecture to achieve distributed, high-accuracy analysis of a variety of image formats, types, and qualities, and that require different kinds of metadata processing as well as feature extraction from the images themselves. We will outline the architecture choices made for the two case studies, and how we found Cassandra to be the ideal choice for the persistence-layer implementation technology. In conclusion we will discuss extensions to the two use cases and some of the lessons learned from the two implementation projects.
About the Speaker
Kerry Koitzsch Project Lead, Kildane Software Technologies, Inc
Kerry Koitzsch is a software engineer and architect specializing in big data applications, NoSQL databases, and image processing. He currently works for Correlli Software Systems, a big data analytics company in Sunnyvale CA.
Vehicle detection by using rear parts and tracking system — eSAT Journals
Abstract: The Indian government's vision of making 100 smart cities draws attention to intelligent transport systems. Traffic flow analysis is a part of an intelligent transport system; it mainly contains three parts: vehicle detection, classification, and vehicle tracking. Recently, there have been different detection and tracking methods, such as computer-vision-based and magnetic-frequency-wave-based approaches. With the rapid development of computer vision techniques, visual detection has become increasingly popular in the transportation field. In urban traffic video monitoring systems, traffic congestion is a common scene that causes vehicle occlusion and is a challenge for current vehicle detection methods. In practical traffic scenarios, occlusion between vehicles often occurs; therefore, it is unreasonable to treat the vehicle as a whole. To overcome this problem we can use a part-based detection model. In our system the vehicle is treated as an object composed of multiple salient parts, including the license plate and rear lamps. These parts are localized using their distinctive colour, texture, and region features. Furthermore, the detected parts are treated as graph nodes to construct a probabilistic graph using a Markov random field model. After that, the marginal posterior of each part is inferred using loopy belief propagation to obtain the final vehicle detection. Finally, the vehicles' trajectories are estimated using a Kalman filter, and a tracking-based detection technique is realized. This method can be used in the daytime as well as at night, and in bad weather conditions.
Keywords: vehicle detection, Kalman filter, Markov model, tracking, rear lamps
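The Kalman-filter trajectory estimation mentioned in the abstract can be sketched in one dimension. This is a generic constant-velocity filter on noisy position measurements, not the paper's implementation; the process and measurement noise variances are illustrative assumptions.

```python
# 1-D constant-velocity Kalman filter: state is (position, velocity),
# the measurement is a noisy position from the detector. Predict with
# x <- x + v, then correct toward each new measurement.

def kalman_track(measurements, q=0.01, r=4.0):
    x, v = measurements[0], 0.0        # initial state
    p = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    estimates = []
    for z in measurements[1:]:
        # predict (dt = 1); covariance grows by process noise q
        x, v = x + v, v
        p = [[p[0][0] + p[1][0] + p[0][1] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # update with measurement z of the position
        s = p[0][0] + r                       # innovation variance
        k0, k1 = p[0][0] / s, p[1][0] / s     # Kalman gain
        y = z - x                             # innovation
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        estimates.append(x)
    return estimates

# A vehicle moving ~2 px/frame with jittery detections.
zs = [0, 2.3, 3.8, 6.2, 8.1, 9.9, 12.2]
est = kalman_track(zs)
print(est[-1])   # a smoothed position close to the last detection
```

In the paper's setting the filter would also carry the second coordinate, and its prediction is what lets tracking bridge frames where the part-based detector is occluded.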
The Pohlig-Hellman Exponentiation Cipher as a Bridge Between Classical and Mo... — Joshua Holden
The Pohlig-Hellman exponentiation cipher is a symmetric-key cipher that uses some of the same mathematical operations as the better-known RSA and Diffie-Hellman public-key cryptosystems. First published in 1978, the Pohlig-Hellman cipher was never of practical importance due to its slow speed compared to ciphers such as DES and AES. Its theoretical importance comes from the fact that it relies on the discrete logarithm problem for its resistance against known-plaintext attacks, as do RSA and several other modern cryptosystems. For this reason, the Pohlig-Hellman system can play a very important role pedagogically, since it also shares many features in common with classical ciphers such as shift ciphers and Hill ciphers. Thus, it allows the instructor to introduce the important concepts of the discrete logarithm and known-plaintext attacks separately from the more conceptually difficult idea of public-key cryptography.
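The cipher itself is short enough to sketch. Encryption is c = m^e mod p for a shared prime p and secret exponent e; decryption uses d = e^(-1) mod (p - 1), so that (m^e)^d ≡ m (mod p) by Fermat's little theorem. The tiny prime below is for illustration only; a real key would use a large prime.

```python
# Pohlig-Hellman exponentiation cipher (symmetric-key): both e and d
# are kept secret, unlike RSA. e must be invertible modulo p - 1.

def pohlig_hellman_keys(p, e):
    d = pow(e, -1, p - 1)   # modular inverse (Python 3.8+)
    return e, d

def encrypt(m, e, p):
    return pow(m, e, p)     # c = m^e mod p

def decrypt(c, d, p):
    return pow(c, d, p)     # m = c^d mod p, since e*d = 1 mod (p - 1)

p = 2579                    # small demo prime; real keys are much larger
e, d = pohlig_hellman_keys(p, 765)
c = encrypt(1234, e, p)
print(decrypt(c, d, p))     # 1234
```

The pedagogical point above is visible here: the code is structurally identical to textbook RSA, but with a prime modulus and both exponents secret, and breaking a known-plaintext pair (m, c) requires solving the discrete logarithm c = m^e mod p for e.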
Intelligent analysers for control and optimization of wastewater treatment pl... — CLIC Innovation Ltd
MMEA (The Measurement, Monitoring and Environmental Efficiency Assessment) research program final seminar presentation by Dr. Esko Juuso, University of Oulu
Online Payment System using Steganography and Visual Cryptography — IJCERT
In recent times there has been rapid growth in the e-commerce market. Major concerns for customers in online shopping are debit or credit card fraud and personal information security. Identity theft and phishing are common threats to online shopping. Phishing is a method of stealing personal confidential information, such as usernames, passwords, and credit card details, from victims; it is a social engineering technique used to deceive users. In this paper a new method is proposed that uses text-based steganography and visual cryptography. It represents a new approach that provides only the limited information needed for the fund transfer. This method secures the customer's data, increases the customer's confidence, and prevents identity theft.
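The secret-sharing side of visual cryptography can be sketched as follows. This is a simplified XOR-based splitting of a binary image into two random-looking shares, shown only to convey the idea; the paper's actual scheme uses printed transparencies with pixel expansion, and the share layout here is an assumption for the demo.

```python
import random

# Split a row of binary pixels into two shares: each share alone is
# uniformly random (reveals nothing), but XOR-ing the shares recovers
# the secret exactly.

def make_shares(bits):
    share1 = [random.randint(0, 1) for _ in bits]
    share2 = [b ^ s for b, s in zip(bits, share1)]   # share2 = secret XOR share1
    return share1, share2

def combine(share1, share2):
    return [a ^ b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]     # one row of a binary image
s1, s2 = make_shares(secret)
print(combine(s1, s2) == secret)      # True: combining shares recovers it
```

In the payment scheme above, one share could stay with the customer and the other with the bank, so neither party's stored data alone leaks the confidential information.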
Strong cryptography is the use of systems or components that are considered highly resistant to cryptanalysis, the study of methods of cracking codes. In this talk I present the use of strong cryptography in PHP. Security is a very important aspect of web applications, especially when they manipulate data like passwords, credit card numbers, or sensitive data (such as health, financial activities, sexual behaviour or orientation, social security numbers, etc.). In particular I present the mcrypt, Hash, and OpenSSL extensions, which have been improved in the latest version of PHP. These are the slides presented during my talk at the PHP Dutch Conference 2011.
Data Steganography for Optical Color Image Cryptosystems — CSCJournals
In this paper, an optical color image cryptosystem with a data hiding scheme is proposed. In the proposed system, a confidential color image is embedded into a host image of the same size. The stego-image is then encrypted using the double random phase encoding algorithm, and the seeds used to generate the random phase data are hidden in the encrypted stego-image by a content-dependent, low-distortion data embedding technique. Confidential image and secret data delivery is thus accomplished by hiding the image in the host image and embedding the data in the encrypted stego-image. Experimental results show that the proposed steganographic cryptosystem provides large data-hiding capacity and high reconstructed image quality.
Intelligent Traffic light detection for individuals with CVD — Swaroop Aradhya M C
This is a technical seminar presentation on mobile-standards-based traffic light detection, which can be used as an assistive device in vehicles for individuals with colour vision deficiency.
Source: "Mobile Standards-Based Traffic Light Detection in Assistive Devices for Individuals with Color-Vision Deficiency", IEEE Transactions on Intelligent Transportation Systems, 2014.
Modeling Design and Analysis of Intelligent Traffic Control System Based on S... — Yasar Abbas
UrRehman, Yasar Abbas, Adam Khan, and Muhammad Tariq. "Modeling, design and analysis of intelligent traffic control system based on integrated statistical image processing techniques." Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on. IEEE, 2015.
This paper describes and implements an authentication solution using biometrics, digital certificates, and smart cards to solve the security problem in the authentication process. The first part is a general introduction to the subject; the second is a brief overview of biometrics, more precisely the hand vein pattern. The third part presents a way of extracting the vein pattern of the back of the hand, as well as a way to match two templates. The fourth presents the two necessary phases in any authentication system: enrolment and authentication; a proposed authentication protocol is described too. The fifth part summarizes the possible attacks on, and vulnerabilities of, a biometric identification system, and shows how our system is able to avoid them. The sixth part discusses the implementation of the application. Finally, in the conclusion, we summarize our work and argue the benefits of using this technique.
This paper introduces an efficient multi-resolution watermarking method for copyright protection of digital images. By adapting the watermark signal to the wavelet coefficients, the proposed method is highly image-adaptive, and the watermark signal can be strengthened in the most significant parts of the image. As this property also increases watermark visibility, a model of the human visual system is incorporated to prevent perceptual visibility of the embedded watermark signal. Experimental results show that the proposed system preserves image quality and is robust against the most common image processing distortions. Furthermore, the hierarchical nature of the wavelet transform allows detection of the watermark at various resolutions, reducing the computational load needed for watermark detection depending on the noise level. The performance of the proposed system is shown to be superior to that of other schemes reported in the literature.
Parameterized Image Filtering Using Fuzzy Logic — Editor IJCATR
The principal sources of blur in digital images arise during image acquisition (digitization) or transmission. The performance of imaging sensors is affected by a variety of factors, such as the environmental conditions during image acquisition. Blurry images are the result of movement of the camera during shooting (not holding it still) or of the camera not being capable of choosing a fast enough shutter speed to freeze the action under the light conditions. For instance, in acquiring images with a camera, light levels and sensor temperature are major factors affecting the amount of blur in the resulting image. Blur was implemented by first creating a PSF filter in MATLAB that approximates linear motion blur. This PSF was then convolved with the original image to produce the blurred image. Convolution is a mathematical process by which a signal, in this case the image, is acted on by a system, the filter, in order to find the resulting signal. The amount of blur added to the original image depended on two parameters of the PSF: the length of the blur (in pixels) and the angle of the blur. This thesis work provides a new, faster, and more efficient noise reduction method for images corrupted with motion blur. The new filter has two separate steps or phases: the detection phase and the filtering phase. The detection phase uses fuzzy rules to determine whether an image is blurred or not; when a blurry image is detected, the fuzzy filtering technique focuses only on the genuinely blurred pixels.
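The blur model described above (a linear-motion PSF convolved with the image) can be sketched in one dimension. This is a generic illustration of what MATLAB's motion PSF does, reduced to a horizontal blur on a single row; the bar values and blur length are arbitrary demo choices.

```python
# A length-L uniform PSF models horizontal linear motion blur: each
# output pixel is the average of L neighbouring input pixels.

def motion_psf(length):
    return [1.0 / length] * length    # equal weight on every tap

def convolve(signal, kernel):
    n, k = len(signal), len(kernel)
    out = []
    for i in range(n - k + 1):        # 'valid' convolution, no padding
        out.append(sum(signal[i + j] * kernel[j] for j in range(k)))
    return out

row = [0, 0, 0, 100, 100, 100, 0, 0, 0]   # a sharp bright bar
blurred = convolve(row, motion_psf(3))
print(blurred)   # edges are smeared across neighbouring pixels
```

The detection phase in the paper essentially looks for this smearing: sharp transitions become ramps whose width grows with the PSF length, which is the cue the fuzzy rules pick up.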
A Flexible Scheme for Transmission Line Fault Identification Using Image Proc... — IJEEE
This paper describes a methodology that aims to find and diagnose faults in transmission lines using image processing techniques. Image processing techniques have been widely used to solve problems across many areas. The methodology also uses a digital image processing wavelet shrinkage function for fault identification and diagnosis. In other words, the purpose is to extract the faulty image from the source, together with the separation and the coordinates of the transmission lines. The objective of segmentation is to divide the image into its constituent parts and objects, distinguishing them from others in the scene, which is the key to an improved result in fault identification. The experimental results indicate that the proposed method provides promising results and is advantageous both in terms of PSNR and in visual quality.
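The wavelet shrinkage step referred to above can be illustrated with the standard soft-thresholding rule: coefficients with magnitude below a threshold are treated as noise and zeroed, while larger ones are shrunk toward zero. The threshold value here is an arbitrary demo choice, not one from the paper.

```python
# Soft thresholding of wavelet coefficients: the core of wavelet
# shrinkage denoising. Small coefficients vanish; large ones survive,
# reduced in magnitude by the threshold.

def soft_threshold(coeffs, t):
    return [
        (abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
        for c in coeffs
    ]

print(soft_threshold([9.0, -0.4, 0.2, -7.5, 0.1], t=1.0))
# [8.0, 0.0, 0.0, -6.5, 0.0]
```

In practice the image is first transformed into the wavelet domain, this rule is applied to the detail coefficients, and the inverse transform yields the denoised image on which fault segmentation then runs.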
Basic Video-Surveillance with Low Computational and Power Requirements Using ... — uberticcd
V. Caglioti, A. Giusti: "Basic Video-Surveillance with Low Computational and Power Requirements Using Long-Exposure Frames".
Proc. of Advanced Concepts for Intelligent Vision Systems (ACIVS) 2008.
An Application of Second Generation Wavelets for Image Denoising using Dual T... — IDES Editor
The lifting scheme of the discrete wavelet transform (DWT) is now quite well established as an efficient technique for image denoising. The lifting-scheme factorization of biorthogonal filter banks is carried out with linear-adaptive, delay-free, and faster decomposition arithmetic. This adaptive factorization aims to achieve a transparent, more generalized, complexity-free fast decomposition process while preserving the features that an ordinary wavelet decomposition offers. This work targets a considerable reduction in the computational complexity and power required for decomposition. The striking demerits of the DWT structure, namely shift sensitivity and poor directionality, have already been shown to be overcome by the dual-tree complex wavelet transform (DT-CWT). The well-established features of the DT-CWT and the robust lifting scheme are suitably combined to achieve image denoising with a prolific rise in computational speed and directionality, together with a desirable drop in computation time, power, and algorithmic complexity compared to other techniques.
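The lifting scheme the abstract builds on can be shown with the simplest case, the Haar wavelet: split the signal into even and odd samples, predict each odd sample from its even neighbour, and update the evens to preserve the running average. The inverse just replays the steps backwards, which is why lifting is cheap and delay-free. This is a generic textbook sketch, not the paper's adaptive factorization.

```python
# One level of the Haar wavelet via lifting: split / predict / update.

def haar_lift(signal):
    evens, odds = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odds, evens)]          # predict step
    approx = [e + d / 2 for e, d in zip(evens, detail)]    # update step
    return approx, detail

def haar_unlift(approx, detail):
    evens = [a - d / 2 for a, d in zip(approx, detail)]    # undo update
    odds = [d + e for d, e in zip(detail, evens)]          # undo predict
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out

x = [4, 6, 10, 12, 8, 8, 2, 0]
a, d = haar_lift(x)
print(a, d)                     # pair averages and pair differences
print(haar_unlift(a, d) == x)   # True: lifting is perfectly invertible
```

Denoising then amounts to thresholding the detail coefficients before inverting, and adaptive lifting variants like the one above swap in smarter predict/update operators without losing this invertibility.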
The Journal of MC Square Scientific Research is published by MC Square Publication on a monthly basis. It aims to publish original research papers devoted to wide areas of various disciplines of science and engineering and their applications in industry. The journal is devoted to interdisciplinary research in science, engineering, and technology that can improve the technology used in industry. Real-life problems involve multi-disciplinary knowledge, and thus a strong interdisciplinary approach is needed in research.
A Real Time Image Processing Based Fire Safety Intensive Automatic Assistance...IJMTST Journal
Fire usually cause serious disasters. Thus, fire detection has been an important issue to protect human life
and property. In this project, I propose a fast and practical real-time image-based fire flame detection method
based on colour pair analysis and intensity level algorithm. Then, based on the above fire flame colour
features model, regions with fire-like colours are roughly separated from each frame of the test videos.
Besides segmenting fire flame regions, background objects with similar fire colours or caused by colour shift
resulted from the reflection of fire flames are also extracted from the image during the above colour
separation process. To remove these spurious fire-like regions, the image difference method and the invented
colour masking technique are applied. The device can detect fire by using Artificial Neural Network (ANN).
Finally device automatically control the fire safety assistance. This method was tested with Raspberry pi B+
Board interface with camera module.
Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...paperpublications3
Abstract: An edge in an image is a contour across which the brightness of the image changes abruptly. In image processing, an edge is often interpreted as one class of singularities. Edge detection is an important task in image processing. It is a main tool in pattern recognition, image segmentation, and scene analysis. An edge detector is basically a high pass filter that can be applied to extract the edge points in an image. This topic has attracted many researchers and many achievements have been made. Many researchers provided different approaches based on mathematical calculations which some of them are either robust or cost effective. A new algorithm will be proposed to detect the edges of image with increased robustness and throughput. Using this algorithm we will reduce the time complexity problem which is faced by previous algorithm. We will also propose hardware unit for proposed algorithm which will reduce the area, power and speed problem. We will compare our proposed algorithm with previous approach. For image quality measurement we will use some scientific parameters those are PSNR, SSIM, FSIM. Implementation of proposed algorithm will be done by Matlab and hardware implementation will be done by using of Verilog on Xilinx 14.1 simulator. Verification will be done on Model sim.
Now-a-days, Internet has become an important part of human’s life, a person
can shop, invest, and perform all the banking task online. Almost, all the organizations have
their own website, where customer can perform all the task like shopping, they only have to
provide their credit card details. Online banking and e-commerce organizations have been
experiencing the increase in credit card transaction and other modes of on-line transaction.
Due to this credit card fraud becomes a very popular issue for credit card industry, it causes
many financial losses for customer and also for the organization. Many techniques like
Decision Tree, Neural Networks, Genetic Algorithm based on modern techniques like
Artificial Intelligence, Machine Learning, and Fuzzy Logic have been already developed for
credit card fraud detection. In this paper, an evolutionary Simulated Annealing algorithm is
used to train the Neural Networks for Credit Card fraud detection in real-time scenario.
This paper shows how this technique can be used for credit card fraud detection and
present all the detailed experimental results found when using this technique on real world
financial data (data are taken from UCI repository) to show the effectiveness of this
technique. The algorithm used in this paper are likely beneficial for the organizations and
for individual users in terms of cost and time efficiency. Still there are many cases which are
misclassified i.e. A genuine customer is classified as fraud customer or vise-versa.
Wireless sensor networks (WSN) have been widely used in various applications.
In these networks nodes collect data from the attached sensors and send their data to a base
station. However, nodes in WSN have limited power supply in form of battery so the nodes
are expected to minimize energy consumption in order to maximize the lifetime of WSN. A
number of techniques have been proposed in the literature to reduce the energy
consumption significantly. In this paper, we propose a new clustering based technique
which is a modification of the popular LEACH algorithm. In this technique, first cluster
heads are elected using the improved LEACH algorithm as usual, and then a cluster of
nodes is formed based on the distance between node and cluster head. Finally, data from
node is transferred to cluster head. Cluster heads forward data, after applying aggregation,
to the cluster head that is closer to it than sink in forward direction or directly to the sink.
This reduction in distance travelled improves the performance over LEACH algorithm
significantly.
The next generation wireless networks comprises of mobile users moving
between heterogeneous networks, using terminals with multiple access interfaces and
services. The most important issue in such environment is ABC (Always Best Connected) i.e.
allowing the best connectivity to applications anywhere at any time. For always best
connectivity requirement various vertical handover strategies for decision making have
been proposed. This paper provides an overview of the most interesting and recent
strategies.
This paper presents the design and performance comparison of a two stage
operational amplifier topology using CMOS and BiCMOS technology. This conventional op
amp circuit was designed by using RF model of BSIM3V3 in 0.6 μm CMOS technology and
0.35 μm BiCMOS technology. Both the op amp circuits were designed and simulated,
analyzed and performance parameters are compared. The performance parameters such as
gain, phase margin, CMRR, PSRR, power consumption etc achieved are compared. Finally,
we conclude the suitability of CMOS technology over BiCMOS technology for low power
RF design.
In Cognitive Radio Networks (CRN), Cooperative Spectrum Sensing (CSS) is
used to improve performance of spectrum sensing techniques used for detection of licensed
(Primary) user’s signal. In CSS, the spectrum sensing information from multiple unlicensed
(Secondary) users are combined to take final decision about presence of primary signal. The
mixing techniques used to generate final decision about presence of PU’s signal are also
called as Fusion techniques / rules. The fusion techniques are further classified as data
fusion and decision fusion techniques. In data fusion technique all the secondary users
(SUs) share their raw information of spectrum detection like detected energy or other
statistical information, while in decision fusion technique all the SUs take their local
decisions and share the decision by sending ‘0’ or ‘1’ corresponding to absence and presence
of PU’s signal respectively. The rules used in decision fusion techniques are OR rule, AND
rule and K-out-of-N rule. The CSS is further classified as distributed CSS and centralized
CSS. In distributed CSS all the SUs share the spectrum detection information with each
other and by mixing the shared information; all the SUs take final decision individually. In
centralized CSS all the SUs send their detected information to a secondary base station /
central unit which combines the shared information and takes final decision. The secondary
base station shares the final decision with all the SUs in the CRN. This paper covers
overview of information fusion methods used for CSS and analysis of decision fusion rules
with simulation results.
ZigBee has been developed to support lower data rates and low power consuming
applications. This paper targets to analyze various parameters of ZigBee physical (PHY).
Performance of ZigBee PHY is evaluated on the basis of energy consumption in
transmitting and receiving mode and throughput. Effect of variation in network size is
studied on these performance attributes. Some modulation schemes are also compared and
the best modulation scheme is suggested with tradeoffs between different performance
metrics.
This paper gives a brief idea of the moving objects tracking and its application.
In sport it is challenging to track and detect motion of players in video frames. Task
represents optical flow analysis to do motion detection and particle filter to track players
and taking consideration of regions with movement of players in sports video. Optical flow
vector calculation gives motion of players in video frame. This paper presents improved
Luacs Kanade algorithm explained for optical flow computation for large displacement and
more accuracy in motion estimation.
A rapid progress is seen in the field of robotics both in educational and industrial
automation sectors. The Robotics education in particular is gaining technological advances
and providing more learning opportunities. In automotive sector, there is a necessity and
demand to automate daily human activities by robot. With such an advancement and
demand for robotics, the realization of a popular computer game will help students to learn
and acquire skills in the field of robotics. The computer game such as Pacman offers
challenges on both software and hardware fronts. In software, it provides challenges in
developing algorithms for a robot to escape from the pool of attacking robots and to develop
algorithms for multiple ghost robots to attack the Pacman. On the hardware front, it
provides a challenge to integrate various systems to realize the game. This project aims to
demonstrate the pacman game in real world as well as in simulation. For simulation
purpose Player/Stage is used to develop single-client and multi-client architectures. The
multi- client architecture in player/stage uses one global simulation proxy to which all the
robot models are connected. This reduces the overhead to manage multiple robots proxy.
The single-client architecture enables only two robot models to connect to the simulation
proxy. Multi-client approach offers flexibility to add sensors to each port which will be used
distinctly by the client attached to the respective robot. The robots are named as Pacman
and Ghosts, which try to escape and attack respectively. Use of Network Camera has been
done to detect the global positions of the robots and data is shared through inter-process
communication.
In Content-Based Image Retrieval (CBIR) systems, the visual contents of the
images in the database are took out and represented by multi-dimensional characteristic
vectors. A well known CBIR system that retrieves images by unsupervised method known
as cluster based image retrieval system. For enhancing the performance and retrieval rate
of CBIR system, we fuse the visual contents of an image. Recently, we developed two
cluster-based CBIR systems by fusing the scores of two visual contents of an image. In this
paper, we analyzed the performance of the two recommended CBIR systems at different
levels of precision using images of varying sizes and resolutions. We also compared the
performance of the recommended systems with that of the other two existing CBIR systems
namely UFM and CLUE. Experimentally, we find that the recommended systems
outperform the other two existing systems and one recommended system also comparatively
performed better in every resolution of image.
Information Systems and Networks are subjected to electronic attacks. When
network attacks hit, organizations are thrown into crisis mode. From the IT department to
call centers, to the board room and beyond, all are fraught with danger until the situation is
under control. Traditional methods which are used to overcome these threats (e.g. firewall,
antivirus software, password protection etc.) do not provide complete security to the system.
This encourages the researchers to develop an Intrusion Detection System which is capable
of detecting and responding to such events. This review paper presents a comprehensive
study of Genetic Algorithm (GA) based Intrusion Detection System (IDS). It provides a
brief overview of rule-based IDS, elaborates the implementation issues of Genetic Algorithm
and also presents a comparative analysis of existing studies.
Step by step operations by which we make a group of objects in which attributes
of all the objects are nearly similar, known as clustering. So, a cluster is a collection of
objects that acquire nearly same attribute values. The property of an object in a cluster is
similar to other objects in same cluster but different with objects of other clusters.
Clustering is used in wide range of applications like pattern recognition, image processing,
data analysis, machine learning etc. Nowadays, more attention has been put on categorical
data rather than numerical data. Where, the range of numerical attributes organizes in a
class like small, medium, high, and so on. There is wide range of algorithm that used to
make clusters of given categorical data. Our approach is to enhance the working on well-
known clustering algorithm k-modes to improve accuracy of algorithm. We proposed a new
approach named “High Accuracy Clustering Algorithm for Categorical datasets”.
Brain tumor is a malformed growth of cells within brain which may be
cancerous or non-cancerous. The term ‘malformed’ indicates the existence of tumor. The
tumor may be benign or malignant and it needs medical support for further classification.
Brain tumor must be detected, diagnosed and evaluated in earliest stage. The medical
problems become grave if tumor is detected at the later stage. Out of various technologies
available for diagnosis of brain tumor, MRI is the preferred technology which enables the
diagnosis and evaluation of brain tumor. The current work presents various clustering
techniques that are employed to detect brain tumor. The classification involves classification
of images into normal and malformed (if detected the tumor). The algorithm deals with
steps such as preprocessing, segmentation, feature extraction and classification of MR brain
images. Finally, the confirmatory step is specifying the tumor area by technique called
region of interest.
A Proxy signature scheme enables a proxy signer to sign a message on behalf of
the original signer. In this paper, we propose ECDLP based solution for chen et. al [1]
scheme. We describe efficient and secure Proxy multi signature scheme that satisfy all the
proxy requirements and require only elliptic curve multiplication and elliptic curve addition
which needs less computation overhead compared to modular exponentiations also our
scheme is withstand against original signer forgery and public key substitution attack.
Water marking has been proposed as a method to enhance data security. Text
water marking requires extreme care when embedding additional data within the images
because the additional information must not affect the image quality. Digital water marking
is a method through which we can authenticate images, videos and even texts. Add text
water mark and image water mark to your photos or animated image, protect your
copyright avoid unauthorized use. Water marking functions are not only authentication, but
also protection for such documents against malicious intentions to change such documents
or even claim the rights of such documents. Water marking scheme that hides water
marking in method, not affect the image quality. In this paper method of hiding a data using
LSB replacement technique is proposed.
Today among various medium of data transmission or storage our sensitive data
are not secured with a third-party, that we used to take help of. Cryptography plays an
important role in securing our data from malicious attack. This paper present a partial
image encryption based on bit-planes permutation using Peter De Jong chaotic map for
secure image transmission and storage. The proposed partial image encryption is a raw data
encryption method where bits of some bit-planes are shuffled among other bit-planes based
on chaotic maps proposed by Peter De Jong. By using the chaotic behavior of the Peter De
Jong map the position of all the bit-planes are permuted. The result of the several
experimental, correlation analysis and sensitivity test shows that the proposed image
encryption scheme provides an efficient and secure way for real-time image encryption and
decryption.
This paper presents a survey of Dependency Analysis of Service Oriented
Architecture (SOA) based systems. SOA presents newer aspects of dependency analysis due
to its different architectural style and programming paradigm. This paper surveys the
previous work taken on dependency analysis of service oriented systems. This study shows
the strengths and weaknesses of current approaches and tools available for dependency
analysis task in context of SOA. The main motivation of this work is to summarize the
recent approaches in this field of research, identify major issue and challenges in
dependency analysis of SOA based systems and motivate further research on this topic.
In this paper, proposed a novel implementation of a Soft-Core system using
micro-blaze processor with virtex-5 FPGA. Till now Hard-Core processors are used in
FPGA processor cores. Hard cores are a fixed gate-level IP functions within the FPGA
fabrics. Now the proposed processor is Soft-Core Processor, this is a microprocessor fully
described in software, usually in an HDL. This can be implemented by using EDK tool. In
this paper, developed a system which is having a micro-blaze processor is the combination
of both hardware & Software. By using this system, user can control and communicate all
the peripherals which are in the supported board by using Xilinx platform to develop an
embedded system. Implementing of Soft-Core process system with different peripherals like
UART interface, SPA flash interface, SRAM interface has to be designed using Xilinx
Embedded Development Kit (EDK) tools.
The article presents a simple algorithm to construct minimum spanning tree and
to find shortest path between pair of vertices in a graph. Our illustration includes the proof
of termination. The complexity analysis and simulation results have also been included.
Wimax technology has reshaped the framework of broadband wireless internet
service. It provides the internet service to unconnected or detached areas such as east South
Africa, rural areas of America and Asia region. Full duplex helpers employed with one of
the relay stations selection and indexing method that is Randomized Distributed Space Time
are used to expand the coverage area of primary Wimax station. The basic problem was
identified at cell edge due to weather conditions (rain, fog), insertion of destruction because
of multiple paths in the same communication channel and due to interference created by
other users in that communication. It is impractical task for the receiver station to decode
the transmitted signal successfully at the cell edges, which increases the high packet loss and
retransmissions. But Wimax is a outstanding technology which is used for improving the
quality of internet service and also it offers various services like Voice over Internet
Protocol, Video conferencing and Multimedia broadcast etc where a little delay in packet
transmission can cause a big loss in the communication. Even setup and initialization of
another Wimax station nearer to each other is not a good alternate, where any mobile
station can easily handover to another base station if it gets a strong signal from other one.
But in rural areas, for few numbers of customers, installation of base station nearer to each
other is costlier task. In this review article, we present a scheme using R-DSTC technique to
choose and select helpers (relay nodes) randomly to expand the coverage area and help to
mobile station as a helper to provide secure communication with base station. In this work,
we use full duplex helpers for better utilization of bandwidth.
Radio Frequency identification (RFID) technology has become emerging
technique for tracking and items identification. Depend upon the function; various RFID
technologies could be used. Drawback of passive RFID technology, associated to the range
of reading tags and assurance in difficult environmental condition, puts boundaries on
performance in the real life situation [1]. To improve the range of reading tags and
assurance, we consider implementing active backscattering tag technology. For making
mobiles of multiple radio standards in 4G network; the Software Defined Radio (SDR)
technology is used. Restrictions in Existing RFID technologies and SDR technology, can be
eliminated by the development and implementation of the Software Defined Radio (SDR)
active backscattering tag compatible with the EPC global UHF Class 1 Generation 2 (Gen2)
RFID standard. Such technology can be used for many of applications and services.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Acetabularia Information For Class 9 .docxvaibhavrinwa19
Acetabularia acetabulum is a single-celled green alga that in its vegetative state is morphologically differentiated into a basal rhizoid and an axially elongated stalk, which bears whorls of branching hairs. The single diploid nucleus resides in the rhizoid.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
using a well-known technique called Image Processing.
Using this technique, we monitor changes in the auditorium through a sequence of images, and the power
supply is controlled accordingly. Image processing is a form of signal processing in which the input is an
image and the output may be either an image or a set of characteristics or parameters related to the image
[2]. Most image-processing techniques treat the image as a two-dimensional signal and apply standard
signal-processing techniques to it. The implementation of power supply control using image processing is
relatively simple. An image of the empty auditorium is taken as the reference image, using a digital camera
in an elevated view. The image is converted to grayscale and enhanced using image enhancement techniques,
and edge detection is then performed. The captured real-time image is similarly enhanced and edge-detected.
The two images are compared, and from the comparison results the respective control signals are generated
by a hardware prototype. The reference and real-time images undergo the following stages: acquisition,
grayscale conversion, partitioning, edge detection, comparison, and finally generation of the control signals.
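The comparison-and-control stage described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the images are assumed to be already grayscale, the partition grid (2x2), the change measure (mean absolute pixel difference), and the threshold value are all assumptions chosen for demonstration.

```python
import numpy as np

def control_signals(reference, current, n_rows=2, n_cols=2, threshold=10.0):
    """Compare a real-time grayscale frame against the empty-room
    reference, partition by partition, and return one flag per
    partition (True = change detected, switch that zone's equipment on)."""
    h, w = reference.shape
    flags = []
    for r in range(n_rows):
        for c in range(n_cols):
            # Slice the same partition out of both images.
            rs, re = r * h // n_rows, (r + 1) * h // n_rows
            cs, ce = c * w // n_cols, (c + 1) * w // n_cols
            ref_block = reference[rs:re, cs:ce].astype(float)
            cur_block = current[rs:re, cs:ce].astype(float)
            # Mean absolute difference as a simple change measure.
            diff = np.abs(cur_block - ref_block).mean()
            flags.append(bool(diff > threshold))
    return flags

# Toy 4x4 "images": a person appears only in the bottom-right partition.
ref = np.zeros((4, 4), dtype=np.uint8)
cur = ref.copy()
cur[2:, 2:] = 200
print(control_signals(ref, cur))  # [False, False, False, True]
```

In a real deployment each flag would drive one relay (fan, light, or air-conditioner zone) through the hardware prototype mentioned above.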
II. METHODOLOGY
The general framework is given as a block diagram in Fig. 1.
Figure 1. General Framework
For convenience, throughout this paper we consider a classroom rather than an auditorium as an example.
A. Image Acquisition
The first stage is image acquisition; only after it can any processing techniques be applied. Image acquisition means creating digital images from a physical scene, and it includes processing, compressing, storing, printing and displaying the images [2]. The most usual method is digital photography with a digital camera, but other methods, such as using image sensors, can also be employed. Here we use a digital camera. The camera should be installed at a suitable position so that it covers the entire auditorium or hall, and it is interfaced with a computer or a microcontroller. The first image of the auditorium is captured when there are no people in it. This empty auditorium's image is saved as the reference image at a particular location specified in the program (Fig. 2a). The image resolution may vary from camera to camera, but a fixed resolution must be maintained for a given application. In this illustration, the image resolution is 2592 pixels in width and 1944 pixels in height. Note that the reference image is taken only once, whereas the real-time images are captured at regular intervals; here we take the real-time images at intervals of 10 seconds (Fig. 2b). In this example, a person occupies a seat in the last row. The camera angle is a very important parameter: an aerial view is the most recommended one, and the camera should remain fixed and stationary throughout the process. The captured images are fed as inputs to the main program through certain algorithms.
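The periodic acquisition described above can be sketched as follows. Here `capture_fn` is a hypothetical placeholder for the actual camera driver call, which the paper does not specify:

```python
import time

def capture_loop(capture_fn, interval_s=10, max_frames=3):
    """Grab real-time frames at a fixed interval (10 s in this paper).

    capture_fn is a hypothetical stand-in for the real camera call;
    it should return frames at the fixed resolution chosen for the
    application (2592 x 1944 in this illustration).
    """
    frames = []
    for i in range(max_frames):
        frames.append(capture_fn())
        if i < max_frames - 1:
            # wait before the next real-time capture
            time.sleep(interval_s)
    return frames
```

In practice this loop would run indefinitely; `max_frames` is only there to keep the sketch finite.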
Figure 2a. Reference Image
Figure 2b. Real Time Image
The real-time image captured is a color (RGB) image, but grayscale images are more convenient to process. A grayscale image carries each pixel as a single sample; in other words, it carries only intensity information. Such images are also known as black-and-white images and are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. The grayscale image contains 256 intensity levels, ranging from 0 to 255. RGB-to-gray conversion is performed for both the reference and captured images (Fig. 3a and Fig. 3b). The purpose of this conversion is that the analysis of the image is easier in grayscale mode than in RGB mode.
Figure 3a. Grayscale Reference Image
Figure 3b. Grayscale Real Time Image
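The RGB-to-gray step can be sketched as below. The paper does not state which conversion formula is used, so the standard ITU-R BT.601 luminance weights are assumed here:

```python
import numpy as np

def rgb_to_gray(img):
    """Convert an RGB image (H x W x 3, uint8) to a single-channel
    grayscale image with 256 intensity levels (0-255)."""
    # BT.601 luminance weights -- an assumption, not from the paper.
    weights = np.array([0.299, 0.587, 0.114])
    gray = img @ weights
    return gray.astype(np.uint8)
```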
B. Image Partitioning
An image is understood as a collection of regions that totally covers it (a partition). Regions are
homogeneous in the selected feature space and connected in the image space. Such an image representation
enables region-baseduser interaction. In it, the user can interact with the underlying partition(s) that represent
the image [3]. After partitioning the features are the regions can be parallel processed. Now in our case,
auditorium is installed with many fans and lights. Each fan or a light has its own coverage area. According to
the coverage area we split the image into many cells, with each cell is simply the area covered by a fan. This
is because; during the image comparison we have to know the place where the humans exist. So initially the
cells are split and given a unique name or label. In this example if a hall has 4 fans, we will divide the image
into four regions (Fig. 4). Each region is the coverage area of each fan. Using these regions further
processing is carried out. Totally there are twelve regions. But out of them only four regions are going to be
occupied by humans. Hence those four regions are alone considered. They are indicated by numbers in the
Fig.4. The resolutions for these cells are given in the TABLE 1. These are the cells that are going to be
processed. Note that both the reference and real time images are partitioned in a same manner. Field study is
required to know the exact coverage areas. These areas are carefully specified in the main program.
TABLE I. RESOLUTION FOR VARIOUS CELLS

Cell Name | Width (Pixels) | Height (Pixels) | Corresponding Equipment
Cell 1    | 370            | 1140            | Fan 1
Cell 2    | 370            | 1022            | Fan 2
Cell 3    | 880            | 1140            | Fan 3
Cell 4    | 880            | 1022            | Fan 4
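Partitioning amounts to slicing the grayscale frame into the cells of TABLE I. The paper gives only the cell sizes, not their offsets within the 2592 x 1944 frame, so the offsets below are purely illustrative assumptions:

```python
import numpy as np

def partition(gray):
    """Split a grayscale frame into named cells, one per fan.

    Cell sizes follow TABLE I (width x height); the positions of the
    cells inside the frame are assumed, since the paper does not
    give them.
    """
    cells = {
        "Cell 1": gray[0:1140, 0:370],      # Fan 1: 370 x 1140
        "Cell 2": gray[0:1022, 370:740],    # Fan 2: 370 x 1022
        "Cell 3": gray[0:1140, 740:1620],   # Fan 3: 880 x 1140
        "Cell 4": gray[0:1022, 1620:2500],  # Fan 4: 880 x 1022
    }
    return cells
```

Both the reference and the real-time frame would be passed through the same function so that corresponding cells line up.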
Figure 4. Image Partitioning Illustration
C. Edge Detection
Edge detection is a basic tool in image processing used for feature detection and attributes extraction. The
edge is detected by any abrupt change in intensity levels of an image. Using this technique the amount of data
to be analyzed is reduced and hence the response time will be reduced. The main objective of edge detection
is to find out the variations in the real time captured image from the reference image. There are many
detectors for edge detection like sobel, prewitt, canny etc. Here we go with the canny edge detector. It is one
of the most widely used algorithms. First, it smoothens the image and detects the image gradient to highlight
regions with high spatial derivatives. It then tracks along these regions to suppress any pixel that is not at the
maximum. Finally, through hysteresis, it uses two thresholds and if the magnitude is below the first
threshold, it is set to zero. If the magnitude is above the high threshold, it is made an edge and if the
magnitude is between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with
a gradient above the second threshold. That is to say that the two thresholds are used to detect strong and
weak edges, and include the weak edges in the output only if they are connected to strong edges [4]. Here, we
find edge detected images for each and every cell. A typical edge detected cell in both reference image and
real time image is shown in the Fig. 5a and
Fig. 5b respectively. When the images are directly taken for
any processing, the analysis time and the process data will be very high. But, here after the edge detection,
only the edges appear in the images. So the calculation time will be reduced.
Figure 5a. Edge Detected Reference Image of Cell 1
Figure 5b. Edge Detected Real Time Image of Cell 1
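A much-simplified sketch of the edge detection step is given below. It keeps only the gradient-magnitude and double-threshold ideas from Canny; the Gaussian smoothing and proper non-maximum suppression are omitted, and the weak-edge linking is only a crude 8-neighbour check, so this is not a full Canny implementation:

```python
import numpy as np

def edge_map(gray, lo=50, hi=100):
    """Binary edge map from gradient magnitude with double
    thresholding. Strong pixels (>= hi) are edges; weak pixels
    (>= lo) are kept only if an 8-neighbour is strong -- a crude
    stand-in for Canny's hysteresis edge linking."""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal central difference
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)

    strong = mag >= hi
    near_strong = np.zeros_like(strong)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # np.roll wraps at the border; acceptable for a sketch
            near_strong |= np.roll(np.roll(strong, dy, axis=0), dx, axis=1)
    return (strong | ((mag >= lo) & near_strong)).astype(np.uint8)
```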
D. Image Comparison
In this step, the two edge-detected images are compared by simple subtraction, and the intensity values of the resulting image are calculated. Image subtraction is a type of image segmentation: we need to extract the human shapes from the background. Hence, the real-time images are subtracted from the reference image. The subtraction indicates the places that have been modified; in other words, the regions occupied by humans stand out clearly (Fig. 6). The summation of all values in the resultant matrix is then obtained.
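The comparison can be sketched as follows. An absolute difference is used here so that the sign of the subtraction cannot cancel edge pixels; this is an implementation assumption, as the paper only states that the real-time image is subtracted from the reference:

```python
import numpy as np

def cell_change(ref_edges, live_edges):
    """Compare the edge maps of one cell from the reference and the
    real-time image, and return the summation of the difference.
    A large sum means the cell's contents changed, i.e. the cell is
    likely occupied."""
    diff = np.abs(live_edges.astype(int) - ref_edges.astype(int))
    return int(diff.sum())
```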
E. Generating Control Signals
Now all the changes are identified: the cells occupied by humans are detected in the previous step, and the modified values are summed for each cell separately. If the sum for a particular cell exceeds its threshold value, the fan or light corresponding to that cell is turned ON. Determining the threshold value is the important step here: various test cases are considered, and the threshold must be carefully chosen. Generally, it should be the minimum change that can be detected when a human being enters the cell. The threshold values vary from cell to cell: cells closer to the camera have larger threshold values than cells farther away. Here, for the four cells, the threshold values range from 1500 to 2500. The actual switching can be done using separate microcontroller circuitry interfaced with the programming system.
Figure 6. Subtracted Image
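The decision rule described above can be sketched as follows, using the threshold values from TABLE II; driving the actual relays through the microcontroller circuitry is outside the scope of this sketch:

```python
# Per-cell thresholds from TABLE II. Cells nearer the camera
# (cells 3 and 4) get larger thresholds, because a person appears
# bigger there.
THRESHOLDS = {"Cell 1": 1500, "Cell 2": 1500,
              "Cell 3": 2500, "Cell 4": 2500}
FAN_FOR_CELL = {"Cell 1": "Fan 1", "Cell 2": "Fan 2",
                "Cell 3": "Fan 3", "Cell 4": "Fan 4"}

def control_signals(cell_sums):
    """Return the set of fans to switch ON, given the per-cell
    summation values from the image comparison step."""
    return {FAN_FOR_CELL[cell]
            for cell, total in cell_sums.items()
            if total > THRESHOLDS[cell]}
```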
III. RESULTS AND DISCUSSIONS
The various results are compared over several test cases (Figures 7a, 7b, 7c and 7d); Figures 8a, 8b, 8c and 8d are the respective edge-detected subtracted images. The number of possible seating arrangements is very large: people can occupy areas closer to the camera or areas farther from it. When people occupy the cells at the bottom of the image matrix (cells 3 and 4), the threshold value is higher; on the other hand, if people occupy the cells at the top of the image matrix (cells 1 and 2), the threshold level is lower. This is because a person who occupies a seat farther from the camera appears smaller in the captured image, while a person nearer to the camera appears larger. The minimum change produced when a human being enters a cell must therefore be found and used as the minimum threshold level; refer to TABLE 2 for the threshold values of these cells. In Fig. 7a a man occupies cell 1; his presence makes the image-subtraction sum exceed the threshold value, and hence fan 1 is turned ON (TABLE 2 and TABLE 3). Similarly, in Fig. 7b all the cells are occupied, resulting in all four fans being switched ON. If a person occupies a place near the border of two cells, so that his presence is detected in both, then both corresponding fans are turned on, since the summation exceeds the threshold in each. Figures 7c and 7d are examples of this case: fan 1 and fan 3 are turned ON.
Figure 7a. Cell 1 is occupied
Figure 7b. All the cells are occupied
Figure 7c. Cell 3 is occupied
Figure 7d. Group of people occupying cell 3
Figure 8a. Subtracted Image for Fig. 7a
Figure 8b. Subtracted Image for Fig. 7b
Figure 8c. Subtracted Image for Fig. 7c
Figure 8d. Subtracted Image for Fig. 7d
Various test images were given as real-time inputs, and the minimum threshold for each cell has been tabulated as follows:
TABLE II. THRESHOLD VALUES FOR VARIOUS CELLS

Cell Number | Minimum Estimated Threshold Value
Cell 1      | 1500
Cell 2      | 1500
Cell 3      | 2500
Cell 4      | 2500

TABLE III. OBTAINED SUMMATION VALUES FOR VARIOUS CELLS

Test Figure | Cell 1 | Cell 2 | Cell 3 | Cell 4 | Fans Turned ON
Fig. 7a     | 2557   | 0      | 0      | 0      | Fan 1
Fig. 7b     | 29248  | 9050   | 13413  | 10686  | Fan 1, Fan 2, Fan 3 and Fan 4
Fig. 7c     | 8060   | 0      | 5478   | 0      | Fan 1 and Fan 3
Fig. 7d     | 47703  | 0      | 34987  | 0      | Fan 1 and Fan 3
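As a quick cross-check, applying the thresholding rule of Section II-E to the summation values of TABLE III reproduces the reported fan states:

```python
# Thresholds from TABLE II, for cells 1 through 4.
THRESHOLDS = [1500, 1500, 2500, 2500]

# Summation values from TABLE III, one row per test figure.
SUMS = {
    "Fig. 7a": [2557, 0, 0, 0],
    "Fig. 7b": [29248, 9050, 13413, 10686],
    "Fig. 7c": [8060, 0, 5478, 0],
    "Fig. 7d": [47703, 0, 34987, 0],
}

def fans_on(sums):
    """Fans whose cell sum exceeds that cell's threshold."""
    return [f"Fan {i + 1}"
            for i, (s, t) in enumerate(zip(sums, THRESHOLDS))
            if s > t]
```

Each row of TABLE III then maps to exactly the fan set listed in its last column.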
IV. CONCLUSION
The study showed that image processing is an effective technique for controlling the power supply in auditoriums. It can reduce the wastage of electricity and avoids the free running of electrical equipment. It is also consistent in detecting the presence of people, because it uses real-time images. Overall, the system works well, but it still needs improvement to achieve one hundred percent accuracy. Once that is achieved, the application can be extended to many places, such as theaters, and even to home automation.
V. FUTURE WORK
The main drawback of this system is that it can be used only in places whose seating orientation or arrangement never changes. This can be overcome by resetting the reference image whenever the arrangement is altered; the main program need not be modified. Another way of overcoming this limitation is to use face detection techniques, which are expected to give much more flexibility and simplicity to the overall system.
VI. ACKNOWLEDGEMENT
Our deepest thanks to our professors J. Augustin Jacob and J. Prabin Jose for guiding us in developing this idea. We also thank our project mates S. Mohan and S. Thalavai Shanmuga Balaji.
REFERENCES
[1] Sunil Kumar Matangi and Sateesh Prathapani, "Design of Smart Power Controlling and Saving System in Auditorium by using MCS 51 Microcontrollers", Advanced Engineering and Applied Sciences: An International Journal, 2013; 3(1): 5-9.
[2] G. Lloyd Singh, M. Melbern Parthido, R. Sudha, "Embedded based Implementation: Controlling of Real Time Traffic Light using Image Processing", National Conference on Advances in Computer Science and Applications (NCACSA 2012), proceedings published in International Journal of Computer Applications (IJCA).
[3] F. Marqués, B. Marcotegui, F. Zanoguera, P. Correia, R. Mech, M. Wollborn, "Partition-based Image Representation as Basis for User-assisted Segmentation", IEEE, 2000.
[4] Vikramaditya Dangi, Amol Parab, Kshitij Pawar and S. S. Rathod, "Image Processing Based Intelligent Traffic Controller", Undergraduate Academic Research Journal (UARJ), ISSN: 2278-1129, Volume 1, Issue 1, 2012.