IJERD (www.ijerd.com) International Journal of Engineering Research and Development
1. The document discusses a technique called document image segmentation to extract text like the title, author name, and publication name from scanned images of documents like book covers.
2. The extracted text is classified and stored in a database for further library operations, avoiding the need for manual data entry.
3. A wavelet-based approach is proposed for segmenting text and non-text regions, using multi-scale wavelet analysis to decompose images into different frequency bands for analysis.
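The multi-scale decomposition in point 3 can be illustrated with a one-level Haar transform, the simplest wavelet. The paper's actual wavelet family, decomposition depth, and classifier are not given here, so the band-energy rule below is only an illustrative assumption: text regions tend to concentrate energy in the detail (high-frequency) sub-bands.

```python
def haar1d(v):
    # One level of the 1-D Haar transform: pairwise averages form the
    # low-pass band, pairwise differences the high-pass (detail) band.
    low = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    high = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return low, high

def haar2d(img):
    # Transform rows, then columns, yielding the LL, LH, HL, HH sub-bands.
    rows = [haar1d(r) for r in img]
    L, H = [r[0] for r in rows], [r[1] for r in rows]
    def split_cols(m):
        lows, highs = [], []
        for col in zip(*m):
            lo, hi = haar1d(list(col))
            lows.append(lo)
            highs.append(hi)
        return [list(r) for r in zip(*lows)], [list(r) for r in zip(*highs)]
    LL, LH = split_cols(L)
    HL, HH = split_cols(H)
    return LL, LH, HL, HH

def detail_energy(LH, HL, HH):
    # Text-heavy blocks tend to score high on this measure;
    # smooth background regions score near zero.
    return sum(x * x for band in (LH, HL, HH) for row in band for x in row)
```

A block whose detail energy clears a threshold would be labelled text; choosing that threshold (and the wavelet) is where the actual method does its work.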
Reconstructing the Path of the Object based on Time and Date OCR in Surveilla...ijtsrd
The recognition of time and date stamps in CCTV video enables time-based queries in video indexing applications. In this paper, we propose a system for reconstructing the path of an object in surveillance footage based on optical character recognition of time and date stamps. Since the time and date region has no explicit boundary, a Discrete Cosine Transform (DCT) method is applied to locate it. After the region is located, it is segmented, and features for the time and date symbols are extracted. A back-propagation neural network recognizes the features, and the results are stored in a database. Using this database, the system reconstructs the object's path over time. The proposed system is implemented in MATLAB. Pyae Phyo Thu | Mie Mie Tin | Ei Phyu Win | Cho Thet Mon, "Reconstructing the Path of the Object based on Time and Date OCR in Surveillance System", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27981.pdf | Paper URL: https://www.ijtsrd.com/home-science/education/27981/reconstructing-the-path-of-the-object-based-on-time-and-date-ocr-in-surveillance-system/pyae-phyo-thu
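The DCT-based localisation idea rests on a simple observation: sharp character strokes concentrate spectral energy at high frequencies, smooth background does not. A minimal sketch, with a naive 1-D DCT-II (the paper's block size, thresholds, and 2-D details are not given here, so this is illustrative only):

```python
import math

def dct(block):
    # Naive un-normalised DCT-II, O(N^2) -- fine for a sketch.
    N = len(block)
    return [sum(block[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N)) for k in range(N)]

def highfreq_energy(block):
    # Energy in the upper half of the spectrum: overlaid timestamp
    # text (sharp edges) scores high, flat background scores near zero.
    coeffs = dct(block)
    return sum(c * c for c in coeffs[len(coeffs) // 2:])
```

Scanning an image in fixed-size blocks and keeping the blocks whose high-frequency energy clears a threshold gives a crude locator for an overlaid timestamp region.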
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
With the rise of multimedia in recent years, there has been a tremendous increase in the use of images. A good example is the web, where most documents contain images. Images are also used in applications such as weather forecasting, medical diagnosis, and police work. In the R-tree implementation of an image database, images are made available to the program and then stored in the database. The image database is represented using an R-tree and stored in a separate file. This R-tree implementation supports both efficient updates and efficient retrieval of images from disk [1][2][4]. We use similarity-based retrieval to return the required number of images similar to the user's query [3][5][6]. A distance-matrix approach is used to measure image similarity [7]. The Sobel edge detection algorithm is used to form sketches: if a sketch is submitted as the query, sketches of the stored images are formed and compared with the input sketch using the distance-matrix approach [8][9].
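The sketch-comparison pipeline can be illustrated in a few lines: the standard Sobel kernels produce a gradient-magnitude "sketch", and a summed absolute difference serves as a simple distance-matrix-style dissimilarity. The exact distance used in the paper is not specified here, so the sum-of-absolute-differences below is an illustrative assumption.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(img):
    # Gradient magnitude at each interior pixel; borders are left at 0.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def sketch_distance(a, b):
    # Element-wise absolute difference, summed: a simple dissimilarity
    # between two edge sketches of the same size.
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))
```

At query time, every stored image's sketch would be compared with the query sketch and the smallest distances returned.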
Spectral Density Oriented Feature Coding For Pattern Recognition ApplicationIJERDJOURNAL
ABSTRACT:- The significance of multi-spectral band resolution is explored for selecting feature coefficients based on their energy density. For feature representation in the transformed domain, multi-wavelet transforms were used for finer spectral representation. However, due to the large feature count, these features are not optimal on low-resource computing systems. For recognition units running with low resources, a new feature-selection coding approach that considers band spectral density is developed. Effective selection of feature elements based on spectral density achieves two objectives of pattern recognition: the feature-coefficient representation is minimized, leading to lower resource requirements, and dominant features are retained, resulting in higher retrieval performance.
Report medical image processing image slice interpolation and noise removal i...Shashank
This document is a project report submitted by Shashank Singh to the Indian Institute of Information Technology. The project involved developing modules for image slice interpolation and noise removal in medical images. Shashank describes developing algorithms for interpolating between image slices and removing noise while preserving true image data. He provides details on implementing the algorithms in Matlab and creating a GUI for noise removal. The document also covers common medical imaging modalities and techniques like CT, MRI, and image processing filters.
This document compares the performance of three lossless image compression techniques: Run Length Encoding (RLE), Delta encoding, and Huffman encoding. It tests these algorithms on binary, grayscale, and RGB images to evaluate compression ratio, storage savings percentage, and compression time. The results found that Delta encoding achieved the highest compression ratio and storage savings, while Huffman encoding had the fastest compression time. In general, the document evaluates and compares the performance of different lossless image compression algorithms.
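The two simplest of the three schemes can be sketched in a few lines (Huffman coding needs a tree and is shown later in this listing); on smooth image rows the deltas cluster near zero, which is what makes delta encoding effective there:

```python
def rle_encode(data):
    # Run-length encoding: collapse each run into a (count, value) pair.
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def delta_encode(data):
    # Delta encoding: keep the first value, then store successive
    # differences, which are small on smooth data.
    return [data[0]] + [data[i] - data[i - 1] for i in range(1, len(data))]
```

The compression ratio reported in such comparisons is simply original size over encoded size, so which scheme wins depends entirely on the image statistics.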
Neural network based image compression with lifting scheme and rlceSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document discusses image fusion techniques for enhancing images. It begins with an introduction to image fusion, which combines relevant information from multiple images of the same scene into a single enhanced image. It then discusses discrete wavelet transform (DWT) based image fusion in more detail. Several image fusion rules for combining coefficient data during the DWT process are described, including maximum selection, weighted average, and window-based verification schemes. The importance of image fusion for applications like object identification, classification, and change detection is highlighted. Finally, the document reviews related work on different image fusion methods and algorithms proposed by other researchers.
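The maximum-selection rule mentioned above is the simplest fusion rule to state: at each coefficient position, keep whichever source coefficient has the larger magnitude. The sketch below assumes the two inputs are already same-shaped DWT coefficient arrays; the transform itself is omitted.

```python
def fuse_max(a, b):
    # Maximum-selection fusion: per position, keep the coefficient with
    # the larger magnitude (large DWT coefficients mark salient detail).
    return [[x if abs(x) >= abs(y) else y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

The weighted-average and window-based rules replace the per-coefficient comparison with a blend or a local consistency check, but have the same shape: a per-position decision in the transform domain, followed by an inverse DWT.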
DCT based Steganographic Evaluation parameter analysis in Frequency domain by...IOSR Journals
This document analyzes DCT-based steganography using a modified JPEG luminance quantization table to improve evaluation parameters like PSNR, mean square error, and capacity. The authors propose modifying the default 8x8 quantization table by adjusting frequency values in 4 bands to increase image quality for the embedded stego image. Experimental results on test images show that using the modified table improves PSNR, decreases mean square error, and increases maximum embedding capacity compared to the default table. Therefore, the proposed method allows more secret data to be hidden with less distortion and improved image quality.
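The evaluation parameters named above are standard and easy to reproduce; a minimal Python version, with the conventional peak of 255 for 8-bit images, is:

```python
import math

def mse(a, b):
    # Mean square error between two equal-sized images.
    n = sum(len(row) for row in a)
    return sum((p - q) ** 2
               for ra, rb in zip(a, b) for p, q in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB; identical images give infinity.
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * math.log10(peak * peak / e)
```

In a steganography evaluation like the one above, higher PSNR between the cover and stego image means the embedding is less visible; capacity is measured separately as the number of bits that can be hidden.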
4 ijaems jun-2015-5-hybrid algorithmic approach for medical image compression...INFOGAIN PUBLICATION
This document summarizes a research paper that proposes a hybrid algorithm for medical image compression using discrete wavelet transform (DWT) and Huffman coding techniques. The algorithm performs multilevel decomposition of medical images using DWT, quantizes the coefficients, assigns Huffman codes, and compresses the images. Simulation results on test medical images showed that the algorithm achieved excellent reconstruction quality with better compression ratios compared to other techniques. The algorithm is well-suited for compressing and transmitting large volumes of medical images over cloud platforms.
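The Huffman stage of such a hybrid can be sketched with the standard heap-based construction; the DWT decomposition and quantiser are omitted here, so the input is just a stream of quantised symbols. (The degenerate single-symbol input would get an empty code and would need special handling in a real coder.)

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    # Build a Huffman code table from symbol frequencies. Each heap
    # entry is [weight, tiebreak, {symbol: code}]; the tiebreak keeps
    # list comparison away from the dicts.
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two least frequent subtrees
        hi = heapq.heappop(heap)
        table = {s: '0' + c for s, c in lo[2].items()}
        table.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tick, table])
        tick += 1
    return heap[0][2]
```

Frequent symbols get short codes, so after quantisation (which makes many coefficients identical) the coded stream shrinks substantially.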
Content Based Image Retrieval Using 2-D Discrete Wavelet TransformIOSR Journals
This document proposes a content-based image retrieval system using 2D discrete wavelet transform and texture features. The system decomposes images using 2D DWT, extracts texture features from low frequency coefficients using GLCM, and retrieves similar images by calculating Euclidean distances between feature vectors. Experimental results on Wang's database show the proposed approach achieves 89.8% average retrieval accuracy.
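The retrieval step reduces to a nearest-neighbour ranking of feature vectors; the GLCM feature extraction itself is omitted here, so the vectors below are placeholders for whatever texture features the system computes.

```python
def euclidean(u, v):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def retrieve(query, db, k=3):
    # Rank (name, feature_vector) entries by distance to the query
    # vector and return the k closest.
    return sorted(db, key=lambda item: euclidean(query, item[1]))[:k]
```

Retrieval accuracy figures like the 89.8% above are then computed by checking how many of the top-k results belong to the query image's category.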
Image processing is among the most rapidly growing technologies today, with applications in many aspects of business. It also forms a core research area within the electronics engineering and computer science disciplines. Image processing is a set of techniques for enhancing raw images received from satellites, space probes, aircraft, and military reconnaissance flights, or pictures taken in day-to-day life with ordinary cameras. The field is becoming powerful and popular because of technically capable personal computers, large memory in available devices, and the graphics software and tools that come with those devices and gadgets. Image acquisition, pre-processing, segmentation, representation, recognition, and interpretation are the basic steps through which image processing is carried out [3][4].
Color image analyses using four different transformationsAlexander Decker
This document discusses and compares four different image transformations: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete wavelet transform (DWT), and discrete multiwavelet transform (DMWT). It analyzes the effectiveness of each transform for processing color images in terms of noise reduction, enhancement, brightness, compression, and resolution. The performance of the techniques is evaluated using computer simulations in Visual Basic 6.
Color image compression based on spatial and magnitude signal decomposition IJECEIAES
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most significant and least significant. Because the most significant value (MSV) is strongly affected by even simple modifications, an adaptive lossless compression system is proposed for it, using bit-plane (BP) slicing, delta pulse code modulation (delta PCM), adaptive quadtree (QT) partitioning, and an adaptive shift encoder. A lossy compression system, based on an adaptive error-bounded coding scheme using DCT, handles the least significant value (LSV). The performance of the developed system was analyzed and compared with the universal JPEG standard; the results indicate that its performance is comparable to or better than JPEG.
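The MSV/LSV decomposition is a bit-plane split of each pixel value. The sketch below assumes 8-bit pixels and an even 4/4 split; the abstract does not state where the split point actually falls, so `k` is an illustrative parameter.

```python
def split_planes(value, k=4):
    # Split an 8-bit pixel into its k most significant bits (MSV)
    # and the remaining 8-k least significant bits (LSV).
    msv = value >> (8 - k)
    lsv = value & ((1 << (8 - k)) - 1)
    return msv, lsv

def merge_planes(msv, lsv, k=4):
    # Recombine the two parts into the original pixel value.
    return (msv << (8 - k)) | lsv
```

The lossless path would then code the MSV stream exactly, while the lossy DCT path compresses the perceptually less important LSV stream within an error bound.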
This document outlines the syllabus for the Data Structures course in the second year, fourth semester of the B.Sc Computer Science program at Sri Krishnadevaraya University in Ananthapuramu, Andhra Pradesh, India. The syllabus covers five units: (1) concepts of abstract data types and linear/non-linear data structures, (2) stacks, queues, trees, (3) binary trees, binary search trees, graphs, (4) graph traversals, searching techniques, and (5) sorting algorithms. The course aims to teach students fundamental data structures and algorithms.
A SURVEY ON DOCUMENT IMAGE ANALYSIS AND RETRIEVAL SYSTEMSIJCI JOURNAL
The digitization of documents and their availability over the network demand solutions for content-based document image analysis, indexing, searching, and retrieval. Signatures, logos, and the layout of documents present convincing evidence and provide an important form of indexing for effective document image retrieval in a variety of applications. This paper describes methods and techniques developed for document image analysis and retrieval by researchers.
A broad ranging open access journal Fast and efficient online submission Expe...ijceronline
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
IMAGE TRANSFORMATION & DWT BASED IMAGE DECOMPOSITION FOR COVERT COMMUNICATION Editor Jacotech
Computers are widely used and require large-scale data storage and transfer, so efficient methods of data storage have become necessary. Image compression reduces the number of bytes in an image file without degrading the image quality to an unacceptable level. Reducing the file size allows more images to be stored in a given amount of memory or disk space, and it reduces the time needed to transmit or download an image from a website. A 256 × 256 gray image contains 65,536 pixels, and a typical 640 × 480 colour image contains nearly one million; downloading such files from the Internet can be very time-consuming. Image data makes up a significant portion of multimedia data and occupies the major portion of the communication bandwidth used for multimedia communication. Therefore, the development of effective techniques for image compression has become quite necessary [9]. The basic goal of image compression is to find an image representation that uses fewer bits. Two basic principles used in image compression are redundancy and irrelevance: redundant pixel values are eliminated at the source, and irrelevant detail that the human eye cannot detect is omitted. International standardization of image compression began in the late 1970s, when the CCITT (now ITU-T) required specification of a binary image compression algorithm for facsimile communications. Image compression standards bring many benefits: (1) different image files can be exchanged easily between devices and applications; (2) existing hardware and software products can be re-used more widely; (3) benchmarks and benchmark data sets exist for developing new alternatives.
This document discusses a hand gesture recognition system for underprivileged individuals. It begins by outlining the key steps in hand gesture recognition systems: image capture, pre-processing, segmentation, feature extraction and gesture recognition. It then goes into more detail on specific techniques for each step, such as thresholding and edge detection for segmentation. The document also covers applications like access control, sign language translation and future areas like biometric authentication. In conclusion, it proposes that hand gesture recognition can help disabled individuals communicate through accessible human-computer interaction.
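Of the segmentation techniques mentioned, global thresholding is the easiest to sketch. The mean-intensity threshold below is an illustrative choice, not necessarily the one the document uses; real systems more often use Otsu's method or skin-colour models.

```python
def mean_threshold(img):
    # Binary segmentation: mark pixels brighter than the global mean
    # as foreground (1), the rest as background (0).
    pixels = [p for row in img for p in row]
    t = sum(pixels) / len(pixels)
    return [[1 if p > t else 0 for p in row] for row in img]
```

The binary mask produced here is what the later stages (feature extraction, gesture classification) would consume.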
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi...IJERA Editor
This document proposes and evaluates a near lossless image compression algorithm that divides color images into red, green, and blue channels. It classifies pixels in each channel row-by-row and records the results in mask images. The image data is then decomposed into sequences based on the classification and the mask images are hidden in the least significant bits of the sequences. Different encoding schemes like LZW, Huffman, and RLE are applied and compared. Experimental results on test images show the proposed algorithm achieves smaller bits per pixel than simple encoding schemes. PSNR values also indicate very little difference between original and reconstructed images.
International Journal on Soft Computing ( IJSC )ijsc
This document provides a summary of various wavelet-based image coding schemes. It discusses the basics of image compression including transformation, quantization, and encoding. It then reviews the wavelet transform approach and several popular wavelet coding techniques such as EZW, SPIHT, SPECK, EBCOT, WDR, ASWDR, SFQ, EPWIC, CREW, SR and GW coding. These techniques exploit the multi-resolution properties of wavelets for superior energy compaction and compression performance compared to DCT-based methods like JPEG. The document provides details on how each coding scheme works and compares their features.
This document discusses image mosaicing, which is the process of combining multiple overlapping images into a single image with a larger field of view. It describes image mosaicing models and algorithms, including feature extraction, image registration, homographic refinement, image warping and blending. Two main algorithms are presented: unidirectional scanning and bidirectional scanning. The document also discusses applications of image mosaicing like creating panoramic images and immersive virtual environments, and limitations such as difficulties mosaicing more than four images.
This document discusses a proposed approach for multi-focus image fusion using a discrete cosine wavelet sharpness criterion. Multi-focus image fusion combines information from multiple images of the same scene to produce an "all-in-focus" image. The proposed approach uses a discrete cosine transform to calculate sharpness values for sub-blocks of the input images and selects the sharpest sub-blocks to include in the fused image. Experimental results on images of a clock, bottle, and book show the discrete cosine wavelet criterion produces fused images with higher quality than a bilateral gradient-based sharpness criterion, as measured by mutual information metrics.
This document contains the class schedule for a school called Unidad Educativa Internacional Liceo Iberoamericano. It lists the courses taught each day of the week during each class period, with class periods ranging from 7:00-14:20 and courses taught in Spanish including L.E. (Lengua Española), M. (Matemáticas), C.C.NN. (Ciencias Naturales), among others. The schedule is broken into four pages with minor variations each page.
The document summarizes microinsurance as a tool for disaster risk management. It provides details on the devastating 2010 Haiti earthquake that killed over 222,000 people and caused $8 billion in economic losses. It then outlines the Microinsurance Catastrophe Risk Organisation (MiCRO) partnership model where Swiss Re provides reinsurance coverage to local insurers through a parametric policy, allowing faster payouts following a triggering event. The model aims to lower public costs from disasters, reduce reliance on foreign aid, and increase population resilience through expanded insurance access. Lessons learned include ongoing challenges of basis risk but also opportunities to improve disaster preparation and response.
The document presents a series of multiple-choice questions on various topics such as geometry, geography, history, and general knowledge. The questions cover the sides of a square, the capital of Brazil, the inventor of the telephone, El Cid's horse, the egg's "wife", Brad Pitt's wife, Colombia's sex symbol, the highest mountain in the world, and the best football team in Colombia.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document discusses methods for calculating semantic similarity between terms in ontologies. It summarizes the Wu-Palmer algorithm, which calculates similarity based on the depth of terms from their closest common ancestor. The document also describes a modified "tbk" algorithm that adds a penalization factor for terms in neighboring hierarchies to address limitations of Wu-Palmer. The paper proposes a new algorithm that calculates similarity based on the direct distance between terms rather than their distance from the root node. It argues this approach could provide better results than existing edge-based algorithms like Wu-Palmer and tbk.
The document describes the wedding of Cleopatra, queen of Egypt, including details about the guests, such as Berenice IV, Mark Antony, and Arsinoe IV; the religious ceremony to be held at Abu Simbel; and a honeymoon after the wedding.
Our opinions and views are the product of many years of experience, and we are confident they are well founded and worth sharing with you. In addition to specialist articles, the magazine is enriched with texts that give the main topic a broader context. To offer readers a point of view other than our own, we include columns by independent consultants.
DCT based Steganographic Evaluation parameter analysis in Frequency domain by...IOSR Journals
This document analyzes DCT-based steganography using a modified JPEG luminance quantization table to improve evaluation parameters like PSNR, mean square error, and capacity. The authors propose modifying the default 8x8 quantization table by adjusting frequency values in 4 bands to increase image quality for the embedded stego image. Experimental results on test images show that using the modified table improves PSNR, decreases mean square error, and increases maximum embedding capacity compared to the default table. Therefore, the proposed method allows more secret data to be hidden with less distortion and improved image quality.
4 ijaems jun-2015-5-hybrid algorithmic approach for medical image compression...INFOGAIN PUBLICATION
This document summarizes a research paper that proposes a hybrid algorithm for medical image compression using discrete wavelet transform (DWT) and Huffman coding techniques. The algorithm performs multilevel decomposition of medical images using DWT, quantizes the coefficients, assigns Huffman codes, and compresses the images. Simulation results on test medical images showed that the algorithm achieved excellent reconstruction quality with better compression ratios compared to other techniques. The algorithm is well-suited for compressing and transmitting large volumes of medical images over cloud platforms.
Content Based Image Retrieval Using 2-D Discrete Wavelet TransformIOSR Journals
This document proposes a content-based image retrieval system using 2D discrete wavelet transform and texture features. The system decomposes images using 2D DWT, extracts texture features from low frequency coefficients using GLCM, and retrieves similar images by calculating Euclidean distances between feature vectors. Experimental results on Wang's database show the proposed approach achieves 89.8% average retrieval accuracy.
Image processing is among rapidly growing technologies today, with its applications in various aspects of a business. Image Processing forms core research area within electronics engineering and computer science disciplines too. Image Processing is a technique to enhance raw images received from satellites, space probes, aircrafts, military reconnaissance flights or pictures taken in normal day-to-day life from normal cameras. The field is becoming powerful and popular because of technically powerful personal computers, large memories of available devices as well as graphic softwares and tools available with that devices and gadgets. Image acquisition, pre-processing, segmentation, representation, recognition and interpretation are the different basic steps through which image processing is carried out. [3][4].
Color image analyses using four deferent transformationsAlexander Decker
This document discusses and compares four different image transformations: discrete Fourier transform (DFT), discrete cosine transform (DCT), wavelet transform (DWT), and discrete multiwavelet transform (DMWT). It analyzes the effectiveness of each transform for processing color images in terms of noise reduction, enhancement, brightness, compression, and resolution. The performance of the techniques is evaluated using computer simulations in Visual Basic 6.
Color image compression based on spatial and magnitude signal decomposition IJECEIAES
In this paper, a simple color image compression system has been proposed using image signal decomposition. Where, the RGB image color band is converted to the less correlated YUV color model and the pixel value (magnitude) in each band is decomposed into 2-values; most and least significant. According to the importance of the most significant value (MSV) that influenced by any simply modification happened, an adaptive lossless image compression system is proposed using bit plane (BP) slicing, delta pulse code modulation (Delta PCM), adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV), it is based on an adaptive, error bounded coding system, and it uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with those attained from the universal standard JPEG, and the results of applying the proposed system indicated its performance is comparable or better than that of the JPEG standards.
This document outlines the syllabus for the Data Structures course in the second year, fourth semester of the B.Sc Computer Science program at Sri Krishnadevaraya University in Ananthapuramu, Andhra Pradesh, India. The syllabus covers five units: (1) concepts of abstract data types and linear/non-linear data structures, (2) stacks, queues, trees, (3) binary trees, binary search trees, graphs, (4) graph traversals, searching techniques, and (5) sorting algorithms. The course aims to teach students fundamental data structures and algorithms.
A SURVEY ON DOCUMENT IMAGE ANALYSIS AND RETRIEVAL SYSTEMS IJCI JOURNAL
The digitization of documents and their availability over the network demand solutions for content-based document image analysis, indexing, searching and retrieval. Signatures, logos and the layout of documents present convincing evidence and provide an important form of indexing for effective document image retrieval in a variety of applications. This paper describes methods and techniques developed by researchers for document image analysis and retrieval.
A broad ranging open access journal Fast and efficient online submission Expe...ijceronline
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
IMAGE TRANSFORMATION & DWT BASED IMAGE DECOMPOSITION FOR COVERT COMMUNICATION Editor Jacotech
Computers are now in widespread use, requiring large-scale data storage and transfer, so efficient methods of data storage have become necessary. Image compression reduces the number of bytes in an image file without degrading image quality to an unacceptable level. Reducing file size allows more images to be stored in a given amount of memory or disk space, and reduces the time needed to transmit an image from a website or download it over the Internet. A 256 × 256 grayscale image contains 65,536 pixels, and a typical 640 × 480 colour image nearly one million; downloading such files from the Internet can be very time-consuming. Image data makes up a significant portion of multimedia data and occupies the major portion of the communication bandwidth used for multimedia communication, so the development of effective techniques for image compression has become quite necessary [9]. The basic goal of image compression is to find an image representation that uses fewer bits. Two basic principles used in image compression are redundancy and irrelevancy: redundant pixel values are eliminated at the source, while irrelevant ones, those the human eye cannot detect, are omitted. International standardization of image compression began in the late 1970s, when the CCITT (now ITU-T) required specification of a binary image compression algorithm for facsimile communication. Image compression standards bring many benefits: (1) image files are easily exchanged between different devices and applications; (2) existing hardware and software products can be reused more widely; (3) benchmarks and benchmark data sets exist for developing new alternatives.
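The storage figures quoted above follow directly from the pixel counts; as a quick sanity check of the uncompressed sizes:

```python
# Uncompressed sizes implied by the abstract's figures
gray_pixels = 256 * 256        # 65,536 pixels, one byte each for 8-bit gray
color_bytes = 640 * 480 * 3    # RGB at 3 bytes per pixel: 921,600 bytes
color_mb = color_bytes / (1024 * 1024)   # just under one megabyte
```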
This document discusses a hand gesture recognition system for underprivileged individuals. It begins by outlining the key steps in hand gesture recognition systems: image capture, pre-processing, segmentation, feature extraction and gesture recognition. It then goes into more detail on specific techniques for each step, such as thresholding and edge detection for segmentation. The document also covers applications like access control, sign language translation and future areas like biometric authentication. In conclusion, it proposes that hand gesture recognition can help disabled individuals communicate through accessible human-computer interaction.
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi...IJERA Editor
This document proposes and evaluates a near lossless image compression algorithm that divides color images into red, green, and blue channels. It classifies pixels in each channel row-by-row and records the results in mask images. The image data is then decomposed into sequences based on the classification and the mask images are hidden in the least significant bits of the sequences. Different encoding schemes like LZW, Huffman, and RLE are applied and compared. Experimental results on test images show the proposed algorithm achieves smaller bits per pixel than simple encoding schemes. PSNR values also indicate very little difference between original and reconstructed images.
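The mask-hiding step described above relies on least-significant-bit embedding; the minimal sketch below, with made-up data, shows the basic idea of writing one bit per pixel into the LSB and reading it back.

```python
import numpy as np

# Hide one mask bit per pixel in the channel's least significant bit.
# The pixel values and mask are invented; the paper's actual mask layout
# is more involved than this.
channel = np.array([100, 101, 102, 103], dtype=np.uint8)
mask_bits = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = (channel & 0xFE) | mask_bits   # clear the LSB, then write the mask bit
recovered = stego & 1                  # the decoder reads the LSB back out
```

Each pixel changes by at most 1 gray level, which is why the abstract reports PSNR values showing very little difference between original and reconstructed images.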
International Journal on Soft Computing ( IJSC )ijsc
This document provides a summary of various wavelet-based image coding schemes. It discusses the basics of image compression including transformation, quantization, and encoding. It then reviews the wavelet transform approach and several popular wavelet coding techniques such as EZW, SPIHT, SPECK, EBCOT, WDR, ASWDR, SFQ, EPWIC, CREW, SR and GW coding. These techniques exploit the multi-resolution properties of wavelets for superior energy compaction and compression performance compared to DCT-based methods like JPEG. The document provides details on how each coding scheme works and compares their features.
This document discusses image mosaicing, which is the process of combining multiple overlapping images into a single image with a larger field of view. It describes image mosaicing models and algorithms, including feature extraction, image registration, homographic refinement, image warping and blending. Two main algorithms are presented: unidirectional scanning and bidirectional scanning. The document also discusses applications of image mosaicing like creating panoramic images and immersive virtual environments, and limitations such as difficulties mosaicing more than four images.
This document discusses a proposed approach for multi-focus image fusion using a discrete cosine wavelet sharpness criterion. Multi-focus image fusion combines information from multiple images of the same scene to produce an "all-in-focus" image. The proposed approach uses a discrete cosine transform to calculate sharpness values for sub-blocks of the input images and selects the sharpest sub-blocks to include in the fused image. Experimental results on images of a clock, bottle, and book show the discrete cosine wavelet criterion produces fused images with higher quality than a bilateral gradient-based sharpness criterion, as measured by mutual information metrics.
This document contains the class schedule for a school called Unidad Educativa Internacional Liceo Iberoamericano. It lists the courses taught each day of the week during each class period, with class periods ranging from 7:00-14:20 and courses taught in Spanish including L.E. (Lengua Española), M. (Matemáticas), C.C.NN. (Ciencias Naturales), among others. The schedule is broken into four pages with minor variations each page.
The document summarizes microinsurance as a tool for disaster risk management. It provides details on the devastating 2010 Haiti earthquake that killed over 222,000 people and caused $8 billion in economic losses. It then outlines the Microinsurance Catastrophe Risk Organisation (MiCRO) partnership model where Swiss Re provides reinsurance coverage to local insurers through a parametric policy, allowing faster payouts following a triggering event. The model aims to lower public costs from disasters, reduce reliance on foreign aid, and increase population resilience through expanded insurance access. Lessons learned include ongoing challenges of basis risk but also opportunities to improve disaster preparation and response.
The document presents a series of multiple-choice questions on various topics such as geometry, geography, history and general knowledge. The questions concern the sides of a square, the capital of Brazil, the inventor of the telephone, El Cid's horse, the egg's wife, Brad Pitt's wife, Colombia's sex symbol, the highest mountain in the world and the best football team in Colombia.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document discusses methods for calculating semantic similarity between terms in ontologies. It summarizes the Wu-Palmer algorithm, which calculates similarity based on the depth of terms from their closest common ancestor. The document also describes a modified "tbk" algorithm that adds a penalization factor for terms in neighboring hierarchies to address limitations of Wu-Palmer. The paper proposes a new algorithm that calculates similarity based on the direct distance between terms rather than their distance from the root node. It argues this approach could provide better results than existing edge-based algorithms like Wu-Palmer and tbk.
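The Wu-Palmer measure discussed above has a compact definition: sim(a, b) = 2·depth(lcs) / (depth(a) + depth(b)), where lcs is the closest common ancestor of the two terms. A minimal sketch on a made-up toy taxonomy (the animal hierarchy is not from the paper):

```python
# Toy is-a hierarchy: child -> parent, root has parent None
parent = {'dog': 'canine', 'cat': 'feline', 'canine': 'carnivore',
          'feline': 'carnivore', 'carnivore': 'animal', 'animal': None}

def ancestors(node):
    chain = []
    while node is not None:
        chain.append(node)
        node = parent[node]
    return chain              # the node itself first, the root last

def depth(node):
    return len(ancestors(node))   # depth counted from the root, root depth 1

def wu_palmer(a, b):
    b_anc = set(ancestors(b))
    lcs = next(n for n in ancestors(a) if n in b_anc)  # closest common ancestor
    return 2 * depth(lcs) / (depth(a) + depth(b))

sim = wu_palmer('dog', 'cat')   # lcs is 'carnivore'
```

For 'dog' and 'cat' the depths are 4 and 4 with a common ancestor at depth 2, giving a similarity of 0.5, which shows how the measure rewards a deep shared ancestor relative to the terms' own depths.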
The document describes the wedding of Cleopatra, Queen of Egypt, including details about the guests, such as Berenice IV, Mark Antony and Arsinoe IV; the religious ceremony to be held at Abu Simbel; and a honeymoon after the wedding.
Our opinions and views are the product of many years of experience, and we are certain they are well founded and worth your attention. Besides specialist articles, the magazine is enriched with texts that give the lead topic a broader context. To give readers a point of view other than our own, we offer columns by independent consultants.
Severe accidents of nuclear power plants in Europe: possible consequences and...Global Risk Forum GRFDavos
Petra SEIBERT1, Delia ARNOLD1,4, Gabriele MRAZ3, Nikolaus ARNOLD2, Klaus GUFLER2, Helga KROMP-KOLB1, Wolfgang KROMP2, Philipp SUTTER3, Antonia WENISH3
1Institute of Meteorology, BOKU, Austria, Republic of; 2Institute for Security and Risk Sciences, BOKU, Austria, Republic of; 3Austrian Institute of Ecology, Austria, Republic of; 4Institute of Energy Technologies (INTE), Technical University of Catalonia (UPC), Barcelona, Spain
This document outlines the details of a mentoring program called SmartLife Mentoring Program. It provides background on the origins of mentoring from Greek mythology. It then defines what a mentor is and lists 10 reasons for someone to become a mentor and 10 reasons for someone to become a mentee. The document outlines commitments expected of mentors and mentees. It provides guidelines for effective mentoring conversations and preparations for initial meetings. It discusses the agenda, roles of parties, deliverables, results and dos/don'ts of the mentoring relationship.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
The document describes a wireless monitoring system for electric drive systems using ZigBee technology. The system includes intelligent sensor modules that measure parameters like temperature, voltage, and current. The sensor modules transmit data via ZigBee to a coordinator device, which can connect to the internet using GSM/GPRS or Ethernet to send the data to a database server. The monitoring system allows remote monitoring of electric drives in various applications.
What is marketing and the marketing mixteacherhall
The document discusses marketing and the marketing mix. It defines marketing as activities involved in planning, pricing, promoting, distributing, and selling goods and services to satisfy consumer needs. The purpose of marketing is to generate brand awareness, increase sales/profits for businesses, and generate awareness/increase donations for non-profits. The marketing mix, also called the four Ps, consists of product, place, price, and promotion. Marketers must consider the right combination of these elements to differentiate their offering. The document also discusses how the importance of each P can depend on the specific product or service, and no single P is most important on its own.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document proposes a modified design for a three-loop lateral missile autopilot based on linear quadratic regulator (LQR) and a reduced order observer. The design improves on previous three-loop autopilot designs by 1) reducing the initial negative peak in response due to non-minimum phase properties, 2) reaching a steady state flight path rate of 1 for a unit step input command, eliminating steady state error. The modified design uses a reduced order Das & Ghosal observer, output feedback, and LQR state feedback to optimally place closed loop poles for desired time domain performance. Simulation results demonstrate the improved performance over classical two-loop and three-loop autopilot designs.
Freedom is singing the song of the sun in the rain.Rhea Myers
The document contains a collection of short quotes and passages on various topics ranging from technology, privacy, money, existence, and the future. It discusses issues like how technology has exceeded humanity, the challenges of understanding complexity outside of human perception, and whether new technologies will enable moneymaking opportunities or punish content buyers. The document explores many profound ideas in a concise yet thought-provoking manner.
The document proposes establishing a national earthquake insurance program for Greece. Greece experiences a strong earthquake every two years on average and earthquakes have caused over 10 billion euros in damages in the last three decades. However, less than 20% of buildings are insured for earthquake damage. The 1999 Athens earthquake caused over 3.5 billion euros in losses but insurance only covered 4% of losses. The proposal calls for compulsory earthquake insurance available to all homeowners with risk-based and affordable pricing. Insurance limits, first loss coverage, and ceding some state liability to private insurers could help establish an effective national catastrophe insurance program. Modeling indicates insurance premiums from 50-140 euros depending on location, age of home
Powerpoint class 2 historical avant garde debussyrebakim
The document defines key terms related to the historical avant-garde such as avant-garde, Belle époque, Impressionism, and various musical scales and techniques. It also lists influential figures of the time including Claude Debussy, Claude Monet, Georges Seurat, Josef Danhauser, and provides examples of their notable works like A Sunday Afternoon on the Island of La Grande Jatte, Wild Poppies, and Impression, Sunrise that helped advance Impressionism. Musical excerpts from Balinese gamelan are also mentioned.
The Grant Thornton International Business Report (IBR) surveys over 11,500 businesses globally each year. The 2012 report focuses on Russian businesses and their outlook. It found that while the Russian economy grew 4.2% in 2011, businesses were slightly less optimistic than in 2010. Regulations and red tape were cited as major constraints. Optimism around employment, revenues and profits improved in 2011 but dipped in the last quarter compared to earlier in the year.
Pain nourishes courage. You can't be brave if you've only had wonderful thing...Rhea Myers
The document discusses several topics including technology, information, context, and money. It notes that information needs context to have meaning, and that advancing technology can seem like magic. It also references that money alone does not necessarily bring happiness or success.
Grant Thornton - Facing an uncertain future: Government intervention threaten...Grant Thornton
The document discusses increasing government intervention in the global mining sector that is adding complexity and uncertainty. It poses threats such as higher taxes, restrictive regulations, and potential nationalization of mining assets. This raises risks for commodity prices, valuations, and investment. Government interventions are motivated by desires for more revenue and responding to public pressures around environmental issues. The key areas discussed are taxation increases, nationalization/indigenization policies, and stricter environmental regulations being implemented around the world.
Grant Thornton - Facing an uncertain future: Government intervention threaten...
Similar to IJERD (www.ijerd.com) International Journal of Engineering Research and Development IJERD : hard copy of journal, Call for Papers 2012, publishing of journal, journal of science and technology, research paper publishing, where to publish research paper,
Texture based feature extraction and object trackingPriyanka Goswami
This document provides a project report on texture-based feature extraction and object tracking. It discusses using various texture analysis techniques like Local Binary Pattern (LBP), Local Derivative Pattern (LDP), and Local Ternary Pattern (LTP) to extract features from images for tasks like cloud tracking. It implements these techniques in MATLAB and evaluates them on standard datasets to extract features and represent images with histograms for tasks like image recognition and analysis while reducing computational requirements compared to using raw images. The techniques are then applied to track cloud motion in weather satellite images by analyzing differences in texture histograms over time.
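The Local Binary Pattern mentioned above can be computed for one pixel in a few lines: threshold the eight neighbours against the centre and pack the comparison bits into a code. The neighbour ordering and bit weights below are one common convention, and the patch values are made up.

```python
import numpy as np

patch = np.array([[6, 7, 9],
                  [5, 6, 8],
                  [1, 2, 7]])
center = patch[1, 1]
# Eight neighbours read clockwise from the top-left corner
neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
              patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
bits = [1 if n >= center else 0 for n in neighbours]
lbp_code = sum(b << i for i, b in enumerate(bits))   # pack bits into 0..255
```

Sliding this over a whole image and histogramming the codes yields the compact texture descriptor the report uses in place of raw pixels.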
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
This document summarizes a research paper on using discrete wavelet transform for medical image retrieval. It discusses extracting texture features like energy, entropy, contrast and correlation from images using DWT. Haar wavelet is used to analyze texture features. The texture features of images in a database are calculated and compared to an input image to retrieve similar images from the database. Local binary patterns are also extracted as features for classification and retrieval of medical images.
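The Haar analysis mentioned above can be sketched with one level of the 2-D Haar transform; the orthonormal form used here preserves total energy, one of the texture features listed, and the input image is a made-up example.

```python
import numpy as np

def haar_level(img):
    # One level of the 2-D Haar transform (orthonormal form); even sizes assumed.
    a = (img[:, ::2] + img[:, 1::2]) / np.sqrt(2)   # horizontal averages
    d = (img[:, ::2] - img[:, 1::2]) / np.sqrt(2)   # horizontal details
    LL = (a[::2] + a[1::2]) / np.sqrt(2)
    LH = (a[::2] - a[1::2]) / np.sqrt(2)
    HL = (d[::2] + d[1::2]) / np.sqrt(2)
    HH = (d[::2] - d[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
subbands = haar_level(img)
# Energy per sub-band, one candidate texture feature per the summary above
energies = [float((b ** 2).sum()) for b in subbands]
```

The per-sub-band energies form a short feature vector that can be compared between a query image and database images for retrieval.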
The document summarizes an automatic text extraction system for complex images. The system uses discrete wavelet transform for text localization. Morphological operations like erosion and dilation are used to enhance text identification and segmentation. Text regions are segmented using connected component analysis and properties like area and bounding box shape. The extracted text is recognized and shown in a text file. The system allows modifying the recognized text and shows better performance than existing techniques.
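Dilation, one of the morphological operations mentioned above, can be sketched directly with array shifts; the structuring element size and the test mask below are illustrative choices.

```python
import numpy as np

def dilate(binary, k=3):
    # Binary dilation with a k x k square structuring element (zero-padded edges):
    # a pixel becomes 1 if any pixel in its k x k neighbourhood is 1.
    pad = k // 2
    p = np.pad(binary, pad)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + h, dx:dx + w]
    return out

text_mask = np.zeros((5, 5), dtype=np.uint8)
text_mask[2, 2] = 1           # a lone foreground pixel
grown = dilate(text_mask)     # grows into a 3x3 blob, merging nearby strokes
```

In text localization, dilation like this fuses adjacent character fragments into connected components that the area and bounding-box tests can then filter.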
Annotated Bibliography On Multimedia SecurityBrenda Higgins
This document discusses an efficient distributed arithmetic architecture for implementing the discrete wavelet transform (DWT) in JPEG2000 encoders. Distributed arithmetic is implemented to achieve multiplier-less computation in DWT filtering using a lookup table approach, which can reduce power consumption and hardware complexity compared to other DWT implementations. The main goals are to design high-speed digital filters for DWT using a parallel distributed arithmetic approach to speed up the computation.
Feature Extraction and Feature Selection using Textual Analysisvivatechijri
After pre-processing the images in character recognition systems, the images are segmented based on certain characteristics known as "features". The feature space identified for character recognition, however, spans a huge dimensionality. To solve this problem, feature selection and feature extraction methods are used. In this paper we discuss the different techniques for feature extraction and feature selection, and how these techniques are used to reduce the dimensionality of the feature space to improve the performance of text categorization.
AVC based Compression of Compound Images Using Block Classification SchemeDR.P.S.JAGADEESH KUMAR
The document discusses a proposed method for compressing compound images using block classification and different compression schemes for different block types. The method classifies blocks of a compound image as background, text, hybrid, or picture blocks using a histogram-based approach. Different compression algorithms are then applied to different block types, including run-length encoding for background blocks, wavelet coding for text blocks, H.264 AVC with CABAC entropy coding for hybrid blocks, and JPEG coding for picture blocks. Experimental results showed that this block classification and compression scheme approach improved the compression ratio for compound images over single compression methods but increased computational complexity.
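The histogram-based block typing described above might look roughly like the following sketch; the thresholds and the three-way rule are my own illustrative guesses, not the paper's actual classifier.

```python
import numpy as np

def classify_block(block):
    # Rough heuristic: background blocks are nearly uniform, text blocks are
    # strongly bimodal, everything else is treated as picture content.
    # The 0.95 and 0.9 cut-offs are illustrative, not the paper's values.
    hist = np.bincount(block.ravel(), minlength=256)
    frac = np.sort(hist)[::-1] / hist.sum()   # bin fractions, largest first
    if frac[0] > 0.95:
        return 'background'
    if frac[0] + frac[1] > 0.9:
        return 'text'
    return 'picture'

bg = np.full((8, 8), 255, dtype=np.uint8)
txt = np.where(np.arange(64).reshape(8, 8) % 3 == 0, 0, 255).astype(np.uint8)
```

Once blocks are typed this way, each type can be routed to the coder that suits it, e.g. run-length for background and a DCT-family coder for pictures, as the abstract describes.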
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Enhancement and Segmentation of Historical Recordscsandit
Document Analysis and Recognition (DAR) aims to extract automatically the information in a document and also to aid human comprehension. The automatic processing of degraded historical documents is an application of the document image analysis field that faces many difficulties due to storage conditions and the complexity of the scripts. The main interest in enhancing historical documents is to remove undesirable artifacts that appear in the background and to highlight the foreground, so as to enable automatic recognition of documents with high accuracy. This paper addresses pre-processing and segmentation of ancient scripts as an initial step in automating the task of an epigraphist in reading and deciphering inscriptions. Pre-processing involves enhancement of degraded ancient document images, achieved through four different spatial filters for smoothing or sharpening, namely Median, Gaussian blur, Mean and Bilateral, with different mask sizes. This is followed by binarization of the enhanced image using Otsu's thresholding algorithm to highlight the foreground information. In the second phase, segmentation is carried out using Drop Fall and Water Reservoir approaches to obtain sampled characters, which can be used in later stages of OCR. The system showed good results when tested on nearly 150 samples of degraded epigraphic images of varying quality, giving better enhanced output with a 4x4 mask for the Median filter, a 2x2 mask for Gaussian blur, and 4x4 masks for the Mean and Bilateral filters. The system can effectively sample characters from enhanced images, giving a segmentation rate of 85%-90% for both the Drop Fall and Water Reservoir techniques.
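Otsu's thresholding, used in the binarization step above, exhaustively picks the 8-bit threshold that maximises the between-class variance of foreground and background. A self-contained sketch on a tiny made-up patch:

```python
import numpy as np

def otsu_threshold(img):
    # Otsu's method: maximise w0*(1-w0)*(m0-m1)^2 over all 256 thresholds.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float((np.arange(256) * hist).sum())
    best_t, best_var = 0, -1.0
    cum = cum_sum = 0.0
    for t in range(256):
        cum += hist[t]
        cum_sum += t * hist[t]
        if cum == 0 or cum == total:
            continue                      # one class empty, skip
        w0 = cum / total                  # weight of the lower class
        m0 = cum_sum / cum                # mean of the lower class
        m1 = (sum_all - cum_sum) / (total - cum)   # mean of the upper class
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

patch = np.array([[10, 12, 11], [200, 205, 199]], dtype=np.uint8)  # ink vs paper
t = otsu_threshold(patch)
binary = (patch > t).astype(np.uint8)
```

On this bimodal patch any threshold between the two clusters works, and Otsu lands inside that gap, cleanly separating dark ink from bright background.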
Multi Wavelet for Image Retrieval Based On Texture and Color Queries IOSR Journals
This document summarizes a research paper on using multi-wavelet transforms for content-based image retrieval. The paper proposes extracting multi-wavelet features from images in a database and query images to measure similarity. It calculates energy levels from multi-wavelet sub-bands and uses Canberra distance between feature vectors to retrieve similar images. The method achieves 98.5% accuracy and is faster than using Gabor wavelets. In conclusion, multi-wavelet transforms provide good performance for content-based image retrieval applications.
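The Canberra distance used above for comparing energy feature vectors has a short definition: the sum over components of |x_i - y_i| / (|x_i| + |y_i|). A minimal sketch, with made-up feature vectors:

```python
import numpy as np

def canberra(x, y):
    # Canberra distance, skipping components where both entries are zero
    # (those terms are 0/0 and conventionally contribute nothing).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    denom = np.abs(x) + np.abs(y)
    keep = denom > 0
    return float((np.abs(x - y)[keep] / denom[keep]).sum())

# Toy "energy feature vectors" for a query image and a database image
d = canberra([1.0, 2.0, 0.0], [1.0, 4.0, 0.0])
```

Because each component is normalised by its own magnitude, small sub-band energies count as much as large ones, which suits energy vectors whose components differ by orders of magnitude.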
Low Memory Low Complexity Image Compression Using HSSPIHT EncoderIJERA Editor
Due to its large memory requirement and high computational complexity, JPEG2000 cannot be used in many settings, especially in memory-constrained equipment. The line-based wavelet transform was proposed and accepted because it requires less memory without affecting the result of the wavelet transform. In this paper, an improved lifting scheme is introduced to perform the wavelet transform in place of the Mallat method used in the original line-based wavelet transform; a three-adder unit is adopted to realize the lifting scheme. It can perform the wavelet transform with less computation and less memory than the Mallat algorithm. A corresponding HS_SPIHT coder is designed so that the proposed algorithm is more suitable for such equipment. We propose a highly scalable image compression scheme based on the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. Our algorithm, called Highly Scalable SPIHT (HS_SPIHT), supports high compression efficiency and spatial and SNR scalability, and provides a bit stream that can be easily adapted to given bandwidth and resolution requirements by a simple transcoder (parser). HS_SPIHT adds the spatial scalability feature without sacrificing the SNR embeddedness property found in the original SPIHT bit stream. It does so through the introduction of multiple resolution-dependent lists and a resolution-dependent sorting pass, while keeping the important features of the original SPIHT algorithm such as compression efficiency, full SNR scalability and low complexity.
EFFICIENT IMAGE RETRIEVAL USING REGION BASED IMAGE RETRIEVALsipij
1) The document describes an efficient region-based image retrieval system that uses discrete wavelet transform and k-means clustering. It segments images into regions, each characterized by features like size, mean, and covariance.
2) The system pre-processes images by resizing, converting to HSV color space, performing DWT, and using k-means clustering on DWT coefficients to generate regions. It extracts features for each region and stores them in a database.
3) For retrieval, it pre-processes the query image similarly and calculates similarities between the query regions and database regions based on their features, returning similar images.
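The k-means clustering step above can be sketched with a plain Lloyd's iteration on scalar values, standing in for clustering DWT coefficients into regions; the data, seed and iteration count are made up.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20, seed=0):
    # Plain Lloyd's k-means on scalars: assign each value to its nearest
    # centre, then move each centre to the mean of its members.
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)   # init from data points
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

coeffs = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])   # two obvious clusters
labels, centers = kmeans_1d(coeffs)
```

In the retrieval system the same idea runs on multi-dimensional DWT coefficient vectors, and each resulting cluster becomes one region whose size, mean and covariance are stored as features.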
This document summarizes image indexing and its features. It discusses that image indexing is used to retrieve similar images from a database based on extracted features like color, shape, and texture. Color features can be represented by models like RGB, HSV, and color histograms. Shape features include global properties like roundness and local features like edge segments. Texture is described using statistical, structural, and spectral approaches. Texture feature extraction methods discussed include standard wavelets, Gabor wavelets, and extracting features like entropy and standard deviation. The paper provides an overview of the different features used for image indexing and classification.
Header Based Classification of Journals Using Document Image Segmentation and...CSCJournals
Document image segmentation plays an important role in the classification of journals, magazines, newspapers, etc. It is the process of splitting a document into distinct regions. Document layout analysis is a key process of identifying and categorizing the regions of interest in the scanned image of a text document. A reading system requires the segmentation of text zones from non-textual ones and their arrangement in the correct reading order. Detection and labelling identify the different logical roles text zones play inside the document, such as titles, captions, footnotes, etc. This research work proposes a new approach to segment the document and classify journals based on the header block. Documents collected from different journals are used as input images. The image is segmented into blocks such as heading, header, author name and footer using a Particle Swarm Optimization algorithm, and features are extracted from the header block using the Gray Level Co-occurrence Matrix. An Extreme Learning Machine is used for classification based on the header blocks, obtaining 82.3% accuracy.
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run length encoding and Huffman encoding, and lossy techniques like transformation coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and applications of JPEG 2000. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (VQ) for image compression.
Binarization of Degraded Text documents and Palm Leaf ManuscriptsIRJET Journal
This document proposes a technique for binarizing degraded text documents and palm leaf manuscripts. It involves taking the average pixel value of the image as a threshold to distinguish foreground from background. The algorithm first computes the average value of the original image and uses it to set pixels above the threshold to black, removing background. It then computes the average of the remaining image, excluding black pixels, and uses that value as a new threshold to set remaining pixels above it to white, extracting the foreground. The technique is tested on old documents and manuscripts, showing improvement over existing methods based on metrics like peak signal-to-noise ratio. While effective for documents, it needs improvement for palm leaf manuscripts with non-uniform degradation.
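The two-pass averaging scheme summarised above might be sketched as follows on a tiny strip of pixel values; the comparison directions and the sample data are my reading of the summary, not the paper's exact algorithm.

```python
import numpy as np

# Bright paper background with darker ink pixels (values invented)
strip = np.array([220, 210, 60, 50, 215, 40], dtype=float)

t1 = strip.mean()            # pass 1: global average as the first threshold
work = strip.copy()
work[work > t1] = 0          # pixels above it are background, set to black

remaining = work[work > 0]   # pass 2: average only the surviving pixels,
t2 = remaining.mean()        # giving a refined foreground-aware threshold
```

The second average is computed only over likely-foreground pixels, so it tracks the ink's own intensity range instead of being dragged upward by the bright background, which is the improvement the abstract measures with PSNR.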
Comparison of various Image Registration Techniques with the Proposed Hybrid ...idescitation
Image registration is the method of transforming different sets of image data into one coordinate system. Registration is an important part of image processing, used for matching pictures obtained at different times or from different sensors. A broad range of registration techniques has been developed for the various types of image data; these techniques have been studied independently for many applications, resulting in a large body of results. Vision is the most advanced of the human senses, so images naturally play one of the most important roles in human perception. Image registration is one of the branches encompassed by the diverse field of digital image processing. Due to its importance in many application areas, as well as its complicated nature, image registration is the topic of much recent research. Registration algorithms compute transformations that establish correspondence between two images. This paper surveys various image registration techniques and compares them with the proposed system of the project.
Image compression techniques by using wavelet transformAlexander Decker
This document discusses image compression techniques using wavelet transforms. It begins with an introduction to image compression and discusses lossless and lossy compression methods. It then focuses on wavelet transforms, which decompose images into different frequency components, allowing for better compression. The document describes how wavelet-based compression avoids blocking artifacts seen in other methods like DCT. It details an image compression program called MinImage that implements various wavelet types and the embedded zerotree wavelet coding algorithm to achieve good compression ratios while maintaining image quality. In conclusion, wavelet transforms combined with entropy coding provide effective lossy compression of digital images.
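The coefficient-thresholding idea behind wavelet compression, as described above, can be sketched with one un-normalised Haar level on a short signal; the signal values and the 0.2 cut-off are made up for illustration.

```python
import numpy as np

signal = np.array([10.0, 10.2, 9.9, 10.1, 2.0, 2.1, 1.9, 2.0])
avg = (signal[::2] + signal[1::2]) / 2   # one Haar level: pairwise averages
det = (signal[::2] - signal[1::2]) / 2   # and pairwise details
det[np.abs(det) < 0.2] = 0               # zero small details (the lossy step)

recon = np.empty_like(signal)            # invert the transform from kept data
recon[::2] = avg + det
recon[1::2] = avg - det
max_err = float(np.abs(recon - signal).max())
```

All the detail coefficients here fall below the cut-off and are discarded, yet the reconstruction error stays at 0.1: the zeroed coefficients compress to almost nothing under entropy coding, which is the trade-off zerotree-style coders exploit, and unlike block-DCT schemes there are no block boundaries to produce artifacts.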
A High Performance Modified SPIHT for Scalable Image Compression - CSCJournals
In this paper, we present a novel extension technique to the Set Partitioning in Hierarchical Trees (SPIHT) based image compression with spatial scalability. The present modification and the preprocessing techniques provide significantly better quality (both subjectively and objectively) reconstruction at the decoder with little additional computational complexity. There are two proposals for this paper. Firstly, we propose a pre-processing scheme, called Zero-Shifting, that brings the spatial values in signed integer range without changing the dynamic ranges, so that the transformed coefficient calculation becomes more consistent. For that reason, we have to modify the initialization step of the SPIHT algorithms. The experiments demonstrate a significant improvement in visual quality and faster encoding and decoding than the original one. Secondly, we incorporate the idea to facilitate resolution scalable decoding (not incorporated in original SPIHT) by rearranging the order of the encoded output bit stream. During the sorting pass of the SPIHT algorithm, we model the transformed coefficient based on the probability of significance, at a fixed threshold of the offspring. Calling it a fixed context model and generating a Huffman code for each context, we achieve comparable compression efficiency to that of arithmetic coder, but with much less computational complexity and processing time. As far as objective quality assessment of the reconstructed image is concerned, we have compared our results with popular Peak Signal to Noise Ratio (PSNR) and with Structural Similarity Index (SSIM). Both these metrics show that our proposed work is an improvement over the original one.
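The Zero-Shifting pre-processing can be illustrated as follows. The exact scheme in the paper may differ; this only sketches the stated idea, namely moving unsigned spatial values into the signed integer range without changing the dynamic range, which reduces to a fixed offset:

```python
# Hypothetical sketch of a "Zero-Shifting" pre-processing step: unsigned
# 8-bit pixel values in [0, 255] are shifted into the signed range
# [-128, 127] by subtracting 2**(bitdepth - 1), so transform coefficients
# fed to SPIHT are centred on zero. Illustration only, not the paper's code.

def zero_shift(pixels, bitdepth=8):
    offset = 1 << (bitdepth - 1)
    return [p - offset for p in pixels]

def zero_unshift(pixels, bitdepth=8):
    offset = 1 << (bitdepth - 1)
    return [p + offset for p in pixels]

row = [0, 64, 128, 200, 255]
shifted = zero_shift(row)
print(shifted)                       # [-128, -64, 0, 72, 127]
assert zero_unshift(shifted) == row  # lossless and range-preserving
```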
A Novel Method for Prevention of Bandwidth Distributed Denial of Service Attacks - IJERD Editor
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet. The traditional architecture of the Internet is vulnerable to attacks like DDoS. An attacker first acquires an army of zombies; that army is then instructed by the attacker when to start an attack and on whom the attack should be carried out. In this paper, the different techniques used to perform DDoS attacks, the tools used to perform them, and the countermeasures for detecting attackers and eliminating Bandwidth Distributed Denial of Service (B-DDoS) attacks are reviewed. DDoS attacks are carried out using various flooding techniques.
The main purpose of this paper is to design an architecture that can reduce Bandwidth Distributed Denial of Service attacks and keep the victim site or server available for normal users by eliminating the zombie machines. The primary focus of this paper is to discuss how normal machines are turned into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To demonstrate this, we implemented a simulated environment with Cisco switches, routers, a firewall, some virtual machines, and some attack tools to display a real DDoS attack. By using time scheduling, resource limiting, system logs, access control lists, and the Modular Policy Framework, we stopped the attack and identified the attacker (bot) machines.
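As a rough illustration of the rate-limiting countermeasure mentioned above (not the paper's Cisco configuration), a token-bucket policer drops traffic that exceeds a configured rate:

```python
# Token-bucket rate limiter: the bucket refills at a fixed rate and each
# packet consumes one token; packets arriving with the bucket empty are
# dropped. An illustrative stand-in for router/firewall traffic policing.

import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per second
        self.capacity = burst     # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False              # bucket empty: drop the packet

# A burst of 10 back-to-back packets from one "zombie": only 5 pass.
bucket = TokenBucket(rate=1.0, burst=5)
results = [bucket.allow() for _ in range(10)]
print(results.count(True))  # 5
```

Per-source buckets (one per IP) approximate how an ACL plus policing policy confines each suspected bot without taking the whole service offline.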
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electro-mechanical-systems (MEMS) audio transducers and ultra-low-power, power-scalable analog-to-digital converters (ADCs), which enable a very low form factor, energy-efficient implementation for next-generation DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
Influence of tensile behaviour of slab on the structural behaviour of shear c... - IJERD Editor
A composite beam is composed of a steel beam and a slab connected by means of shear connectors
like studs installed on the top flange of the steel beam to form a structure behaving monolithically. This study
analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection like slip
stiffness and maximum shear force in composite beams subjected to hogging moment. The results show that the
shear studs located in the crack-concentration zones due to large hogging moments sustain significantly smaller
shear force and slip stiffness than the other zones. Moreover, the reduction of the slip stiffness in the shear
connection appears also to be closely related to the change in the tensile strain of rebar according to the increase
of the load. Further experimental and analytical studies shall be conducted considering variables such as the
reinforcement ratio and the arrangement of shear connectors to achieve efficient design of the shear connection
in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing ‘A case study of Sudan’ - IJERD Editor
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian crystalline rocks on the flanks of the Red Sea. The crystalline rocks are mostly Neoproterozoic in age. The ANS includes parts of the nations of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia. The Arabian-Nubian Shield consists of juvenile continental crust that formed between 900 and 550 Ma, when intra-oceanic arcs welded together along ophiolite-decorated sutures. Primary Au mineralization probably developed in association with the growth of the intra-oceanic arcs and the evolution of back-arcs. Multiple episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they originated as seafloor massive sulphide deposits.
The Red Sea Hills region is a vast span of rugged, harsh and inhospitable terrain with an inimical, moon-like landscape; nevertheless, since ancient times it has been famed as an abode of gold and was a major source of wealth for the Pharaohs of ancient Egypt. The Pharaohs' old workings have been periodically rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the discovery of a score of occurrences of gold and massive sulphide mineralization. In the 1990s, the Geological Research Authority of Sudan (GRAS), in cooperation with BRGM, utilized Landsat TM satellite data and the spectral ratio technique to map possible mineralized zones in the Red Sea Hills of Sudan. The outcome of the study mapped a gossan-type gold mineralization. The band ratio technique was applied to the Arbaat area, and a signature of an alteration zone was detected. Alteration zones are commonly associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in the alteration zone. Another type of gold mineralization that was discovered using remote sensing is the gold associated with metachert in the Atmur Desert.
Reducing Corrosion Rate by Welding Design - IJERD Editor
This document summarizes a study on reducing corrosion rates in steel through welding design. The researchers tested different welding groove designs (X, V, 1/2X, 1/2V) and preheating temperatures (400°C, 500°C, 600°C) on ferritic malleable iron samples. Testing found that X and V groove designs with 500°C and 600°C preheating had corrosion rates of 0.5-0.69% weight loss after 14 days, compared to 0.57-0.76% for 400°C preheating. Higher preheating reduced residual stresses, which decreased corrosion. Residual stresses were 1.7 MPa for the optimal combination of an X groove and 600°C preheating.
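The corrosion rates above are weight-loss percentages; a minimal illustration of the calculation, with hypothetical specimen masses (the paper reports only the resulting percentages):

```python
# Weight-loss corrosion measurement: the specimen is weighed before and
# after exposure and the loss is expressed as a percentage of the initial
# mass. The masses below are invented for illustration.

def percent_weight_loss(initial_g, final_g):
    return (initial_g - final_g) / initial_g * 100.0

# e.g. a 50 g specimen that loses 0.25 g over 14 days of exposure
loss = percent_weight_loss(50.0, 49.75)
print(round(loss, 2))  # 0.5, the low end of the reported 0.5-0.69% range
```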
Router 1X3 – RTL Design and Verification - IJERD Editor
Routing is the process of moving a packet of data from source to destination and enables messages
to pass from one computer to another and eventually reach the target machine. A router is a networking device
that forwards data packets between computer networks. It is connected to two or more data lines from different
networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and shows how the various sub-modules of the router, i.e. the register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected to the top module.
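One of the sub-modules listed above, the FIFO, can be sketched behaviourally in Python (the actual design would be RTL, e.g. Verilog; this model only mirrors the full/empty handshake):

```python
# Behavioural model of a synchronous FIFO buffer like the one used between
# router sub-modules: writes are ignored when full, reads return None when
# empty, and data leaves in arrival order.

class Fifo:
    def __init__(self, depth):
        self.depth = depth
        self.mem = []

    def full(self):
        return len(self.mem) == self.depth

    def empty(self):
        return len(self.mem) == 0

    def write(self, packet):
        if self.full():
            return False          # write rejected when full
        self.mem.append(packet)
        return True

    def read(self):
        if self.empty():
            return None           # nothing to pop when empty
        return self.mem.pop(0)    # first in, first out

fifo = Fifo(depth=2)
print(fifo.write("pkt0"), fifo.write("pkt1"), fifo.write("pkt2"))  # True True False
print(fifo.read())  # pkt0
```

In the real design a synchronizer module would guard the read/write pointers crossing clock domains; that concern disappears in a single-threaded software model.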
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha... - IJERD Editor
This paper presents a component within the flexible ac-transmission system (FACTS) family, called
distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC)
with an eliminated common dc link. The DPFC has the same control capabilities as the UPFC, which comprise
the adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange
between the shunt and series converters, which is through the common dc link in the UPFC, is now through the
transmission lines at the third-harmonic frequency. The DPFC employs multiple small-size single-phase converters, which reduces equipment cost, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and the corresponding simulation results, carried out on a scaled prototype, are also shown.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR - IJERD Editor
Power quality has become an increasingly pivotal issue from the industrial electricity consumer's point of view in recent times. Modern industries employ sensitive power electronic equipment, control devices and non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage disturbances are the most common power quality problem, as the use of large numbers of sophisticated and sensitive electronic devices in industrial systems has increased. This paper discusses the design and simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic distortion of sensitive loads. Power quality problems occur at non-standard voltage, current and frequency, and electronic devices are very sensitive loads. In power systems, voltage sag, swell, flicker and harmonics are some of the problems affecting sensitive loads. The compensation capability of a DVR depends primarily on its maximum voltage injection ability and the amount of stored energy available within the restorer. The device is connected in series with the distribution feeder at medium voltage. A fuzzy logic controller is used to produce the gate pulses for the control circuit of the DVR, and the circuit is simulated using MATLAB/SIMULINK software.
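The series compensation described above reduces to injecting the difference between the reference and the measured voltage once a sag or swell is detected. A minimal sketch with illustrative per-unit values and a hypothetical 0.9 pu threshold (the paper's fuzzy controller and switching stage are not reproduced):

```python
# Toy DVR compensation logic: if the measured feeder voltage leaves the
# tolerance band, the restorer injects the series voltage needed to restore
# the reference. All values are per-unit and purely illustrative.

def dvr_injection(v_reference_pu, v_measured_pu, sag_threshold_pu=0.9):
    """Return the series voltage the DVR must inject (0 if within tolerance)."""
    swell_threshold_pu = 2.0 - sag_threshold_pu   # symmetric band, e.g. 1.1 pu
    if v_measured_pu < sag_threshold_pu or v_measured_pu > swell_threshold_pu:
        return round(v_reference_pu - v_measured_pu, 3)
    return 0.0

print(dvr_injection(1.0, 0.70))  # sag: inject +0.3 pu in series
print(dvr_injection(1.0, 1.25))  # swell: inject -0.25 pu
print(dvr_injection(1.0, 0.98))  # within tolerance: no injection
```

A fuzzy controller replaces the hard threshold with graded membership functions, which smooths the gate-pulse response near the band edges.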
Study on the Fused Deposition Modelling In Additive Manufacturing - IJERD Editor
Additive manufacturing, also popularly known as 3-D printing, is a process in which a product is created in a succession of layers. It is based on a novel materials-incremental manufacturing philosophy. Unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the final shape of a product, 3-D printing builds the product from scratch, obviating the necessity to cut away material and preventing wastage of raw materials. Commonly used raw materials for the process are ABS plastic, PLA and nylon; recently the use of gold, bronze and wood has also been implemented. Complexity imposes essentially no constraint on this process, as an object of any shape and size can be manufactured.
Spyware triggering system by particular string value - IJERD Editor
This computer programme can be used for good or bad purposes, in hacking or in general use. It can be seen as the next step beyond hacking techniques such as keyloggers and spyware. In this system, once the user or hacker stores a particular string as input, the software continually compares the user's typing activity with that stored string and, if it matches, launches the spyware programme.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a... - IJERD Editor
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic schemes, i.e. Jsteg, F5, Outguess and DWT-based. The proposed method exploits the correlations between block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of the characteristic functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-D array. A Support Vector Machine with cross-validation is implemented for the classification. The proposed scheme gives improved outcomes in attacking these schemes.
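The cross-validation protocol used in the classification stage can be sketched as follows. A trivial nearest-centroid classifier stands in for the SVM so the example stays dependency-free, and the "features" are synthetic; only the fold bookkeeping mirrors the paper's setup:

```python
# k-fold cross-validation sketch: hold out every k-th sample, train on the
# rest, and score the held-out fold. The classifier here is a stand-in
# nearest-centroid model, not an SVM.

def nearest_centroid_fit(X, y):
    cents = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def nearest_centroid_predict(cents, x):
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda label: dist(cents[label], x))

def cross_val_accuracy(X, y, k=4):
    scores = []
    for fold in range(k):
        test_idx = set(range(fold, len(X), k))   # every k-th sample held out
        Xtr = [x for i, x in enumerate(X) if i not in test_idx]
        ytr = [l for i, l in enumerate(y) if i not in test_idx]
        model = nearest_centroid_fit(Xtr, ytr)
        hits = sum(nearest_centroid_predict(model, X[i]) == y[i]
                   for i in test_idx)
        scores.append(hits / len(test_idx))
    return scores

# Two well-separated synthetic "feature" clusters (cover vs. stego, say)
X = ([[0.1 * i, 0.1 * i] for i in range(8)] +
     [[5 + 0.1 * i, 5 + 0.1 * i] for i in range(8)])
y = [0] * 8 + [1] * 8
print(cross_val_accuracy(X, y))  # [1.0, 1.0, 1.0, 1.0]
```

Cross-validation guards against reporting accuracy that only reflects one lucky train/test split, which matters when the stego and cover feature distributions overlap heavily.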
Secure Image Transmission for Cloud Storage System Using Hybrid Scheme - IJERD Editor
Data in the cloud is transferred or transmitted between servers and users. The privacy of that data is very important, as it includes personal information. If the data gets hacked by a hacker, it can be used to defame a person's social standing. Sometimes delays occur during data transmission, e.g. in mobile communication, where bandwidth is low. Hence compression algorithms are proposed for fast and efficient transmission, encryption is used for security purposes, and blurring is used to provide an additional layer of security. These algorithms are hybridized to achieve robust and efficient security and transmission over a cloud storage system.
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i... - IJERD Editor
A thorough review of existing literature indicates that the Buckley-Leverett equation only analyzes waterflood practices directly, without any adjustments for real reservoir scenarios. By doing so, quite a number of errors are introduced into these analyses. Also, for most waterflood scenarios, a radial investigation is more appropriate than a simplified linear system. This study investigates the adoption of the Buckley-Leverett equation to estimate the radius of invasion of the displacing fluid during waterflooding. The model is also adopted for a microbial flood, and a comparative analysis is conducted for both waterflooding and microbial flooding. The results of the analysis not only record success in determining the radial distance of the leading edge of water during the flooding process, but also give a clearer understanding of the applicability of microbes in enhancing oil production through the in-situ production of bio-products like biosurfactants, biogenic gases and bio-acids.
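The Buckley-Leverett analysis rests on the water fractional-flow curve, f_w = 1 / (1 + (k_ro * mu_w) / (k_rw * mu_o)). A sketch with Corey-type relative permeabilities and hypothetical fluid properties; the paper's radial extension is not reproduced here:

```python
# Water fractional flow for a horizontal waterflood (gravity and capillary
# pressure neglected), using Corey relative-permeability correlations.
# All endpoint and viscosity values are illustrative, not from the paper.

def corey_krw(sw, swc=0.2, sor=0.2, krw_max=0.4, n=2.0):
    s = (sw - swc) / (1 - swc - sor)     # normalised water saturation
    return krw_max * s ** n

def corey_kro(sw, swc=0.2, sor=0.2, kro_max=1.0, n=2.0):
    s = (sw - swc) / (1 - swc - sor)
    return kro_max * (1 - s) ** n

def fractional_flow(sw, mu_w=1.0, mu_o=5.0):
    """f_w = 1 / (1 + (k_ro * mu_w) / (k_rw * mu_o))."""
    krw, kro = corey_krw(sw), corey_kro(sw)
    if krw == 0.0:
        return 0.0                       # no mobile water yet
    return 1.0 / (1.0 + (kro * mu_w) / (krw * mu_o))

for sw in (0.3, 0.5, 0.7):
    print(sw, round(fractional_flow(sw), 3))
```

The S-shaped curve this produces is what the Welge tangent construction is applied to; the frontal saturation from that tangent then feeds the (linear or radial) position of the leading edge of water.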
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera - IJERD Editor
Gesture gaming is a method by which users with a laptop, PC or Xbox play games using natural or bodily gestures. This paper presents a way of playing free flash games on the Internet using an ordinary webcam with the help of open-source technologies. Emphasis in human activity recognition is placed on pose estimation and the consistency of the player's pose. These are estimated with the help of an ordinary web camera at resolutions ranging from VGA to 20 megapixels. Our work involved showing a 10-second tutorial to the user on how to play a particular game using gestures and what kinds of gestures can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short tutorial, before the game is loaded. The system then opens the concerned game on popular flash game sites like Miniclip, Games Arcade or GameStop, loads it by clicking at the appropriate places, and brings the state to a point where the user only has to perform gestures to start playing. At any point in time, the user can call off the game by hitting the Esc key, and the program will release all of the controls and return to the desktop. It was noted that the results obtained using an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free flash games on the net. Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And... - IJERD Editor
The LLC resonant frequency converter is basically a combination of series and parallel resonant circuits. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower resonant frequency lies in the ZCS region [5]; for this application, we cannot design the converter to work at that resonant frequency. The LLC resonant converter has existed for a very long time, but because its characteristics were not well understood, it was used as a series resonant converter with an essentially passive (resistive) load. Here, it is designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of Lr and Cr, where the converter behaves very much like a series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control circuit plays a very important role: the 555 timer used here provides a clean square wave, since the control circuit introduces no slew rate, which keeps the square-wave edges sharp. The dead-band circuit provides a dead band of microseconds so as to avoid the simultaneous firing of the two pairs of IGBTs when one pair switches off and the other switches on within the slightest period of time. An isolator circuit is associated with every circuit board, as it acts as a driver, and isolation for each IGBT is provided by a dedicated transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean DC waveform. The basic goal of this analysis is to observe the waveforms and characteristics of converters with differently positioned passive elements in the form of tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu... - IJERD Editor
The LLC resonant frequency converter is basically a combination of series and parallel resonant circuits. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower resonant frequency lies in the ZCS region [5]; for this application, we cannot design the converter to work at that resonant frequency. The LLC resonant converter has existed for a very long time, but because its characteristics were not well understood, it was used as a series resonant converter with an essentially passive (resistive) load. Here, it is designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of Lr and Cr, where the converter behaves very much like a series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control circuit plays a very important role: the 555 timer used here provides a clean square wave, since the control circuit introduces no slew rate, which keeps the square-wave edges sharp. The dead-band circuit provides a dead band of microseconds so as to avoid the simultaneous firing of the two pairs of IGBTs when one pair switches off and the other switches on within the slightest period of time. An isolator circuit is associated with every circuit board, as it acts as a driver, and isolation for each IGBT is provided by a dedicated transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean DC waveform. The basic goal of this analysis is to observe the waveforms and characteristics of converters with differently positioned passive elements in the form of tank circuits. The supporting simulation is done using the PSIM 6.0 software tool.
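The two resonant frequencies referred to above follow directly from the tank components: the series resonance of Lr-Cr, and the lower resonance when the magnetising inductance Lm also participates. A sketch with hypothetical component values (not taken from the paper):

```python
# The two characteristic frequencies of an LLC resonant tank:
#   f1 = 1 / (2*pi*sqrt(Lr*Cr))          series resonance (higher)
#   f2 = 1 / (2*pi*sqrt((Lr+Lm)*Cr))     lower resonance (ZCS region boundary)
# Component values below are illustrative only.

import math

def series_resonance_hz(Lr, Cr):
    return 1.0 / (2.0 * math.pi * math.sqrt(Lr * Cr))

def lower_resonance_hz(Lr, Lm, Cr):
    return 1.0 / (2.0 * math.pi * math.sqrt((Lr + Lm) * Cr))

Lr, Lm, Cr = 58e-6, 290e-6, 11.7e-9   # hypothetical H, H, F
f1 = series_resonance_hz(Lr, Cr)
f2 = lower_resonance_hz(Lr, Lm, Cr)
print(round(f1 / 1e3, 1), "kHz")      # higher (series) resonant frequency
print(round(f2 / 1e3, 1), "kHz")      # lower resonant frequency
```

Operating the switching frequency above f1, as the abstract describes, keeps the converter out of the ZCS region that sits below f2.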
An amateur radio operator, also known as a HAM, communicates with other HAMs through radio waves. Wireless communication in which the Moon is used as a natural satellite reflector is called Moon-bounce or EME (Earth-Moon-Earth) communication. Long-distance communication (DXing) using Very High Frequency (VHF) amateur HAM radio used to be difficult. Yet even with a modest setup comprising a good transceiver, a power amplifier and a high-gain antenna with high directivity, VHF DXing is possible. Generally, a 2×11 Yagi antenna along with a rotor to set the horizontal and vertical angles is used. Moon-tracking software gives the exact location and visibility of the Moon at both stations, and other vital data needed to acquire the real-time position of the Moon.
“MS-Extractor: An Innovative Approach to Extract Microsatellites on 'Y' Chrom... - IJERD Editor
Simple Sequence Repeats (SSRs), also known as microsatellites, have been extensively used as molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of polymorphic forms of the same gene should be 99.9% identical, so extracting microsatellites from a gene is crucial. When microsatellite repeat counts are compared, a large difference may indicate a disorder. The Y chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only males have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and development. Several microsatellite extractors exist, but they fail to extract microsatellites from large data sets gigabytes or terabytes in size. The proposed tool, "MS-Extractor: An Innovative Approach to Extract Microsatellites on 'Y' Chromosome", can extract both perfect and imperfect microsatellites from large data sets of the human 'Y' genome. The proposed system uses string matching with a sliding-window approach to locate microsatellites and extract them.
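The sliding-window string matching named above can be sketched for perfect repeats: at each position, try motif lengths 1-6, extend while the motif repeats, and report runs above a minimum repeat count. The tool's imperfect-repeat handling and large-file streaming are omitted from this sketch:

```python
# Perfect-microsatellite (SSR) finder using string matching over a sliding
# window: for each start position, test candidate motifs of length 1..6 and
# count consecutive repeats.

def find_ssrs(seq, min_repeats=3, max_motif=6):
    hits, i = [], 0
    while i < len(seq):
        best = None
        for m in range(1, max_motif + 1):
            motif = seq[i:i + m]
            if len(motif) < m:
                break
            reps = 1
            while seq[i + reps * m : i + (reps + 1) * m] == motif:
                reps += 1
            # keep the candidate covering the longest stretch of sequence
            if reps >= min_repeats and (best is None or reps * m > best[2] * best[1]):
                best = (motif, m, reps)
        if best:
            motif, m, reps = best
            hits.append((i, motif, reps))
            i += m * reps          # jump past the repeat run
        else:
            i += 1
    return hits

print(find_ssrs("GGTATATATATACCAGAGAGAGTT"))
# [(2, 'TA', 5), (14, 'AG', 4)]  -- a TA run and an AG run
```

For gigabyte-scale chromosome files, the same loop would be run over buffered chunks with a small overlap so repeat runs spanning a chunk boundary are not missed.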
Importance of Measurements in Smart Grid - IJERD Editor
Driven by the need for a reliable supply, independence from fossil fuels, and the capability to provide clean energy at a fixed and lower cost, the existing power grid structure is transforming into the Smart Grid. The development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have new capabilities such as self-healing, high reliability, energy management and real-time pricing. This new era of the smart future grid will lead to major changes in existing technologies at the generation, transmission and distribution levels. The incorporation of renewable energy resources and distributed generators into the existing grid will increase the complexity, optimization problems and instability of the system. This will lead to a paradigm shift in the instrumentation and control requirements of Smart Grids for a high-quality, stable and reliable supply of electric power. The monitoring of the grid system's state and stability relies on the availability of reliable measurement data. In this paper, the measurement areas that highlight new measurement challenges, the development of Smart Meters, and the critical parameters of electric energy to be monitored for improving the reliability of power systems are discussed.
Study of Macro level Properties of SCC using GGBS and Limestone powder - IJERD Editor
The document summarizes a study on the use of ground granulated blast furnace slag (GGBS) and limestone powder to replace cement in self-compacting concrete (SCC). Tests were conducted on SCC mixes with 0-50% replacement of cement with GGBS and 0-20% replacement with limestone powder. The results showed that replacing 30% of cement with GGBS and 15% with limestone powder produced SCC with the highest compressive strength of 46 MPa, while meeting fresh-property requirements. The study concluded that this ternary blend of cement, GGBS and limestone powder can improve SCC properties while reducing costs.
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Webinar: Designing a schema for a Data Warehouse - Federico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
HCL Notes and Domino licence cost reduction in the world of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licences under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and licence fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, e.g. when a person document is used instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licence model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing licence costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licences really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 3, Issue 2 (August 2012), PP. 47-53
Document Image Segmentation for Analyzing of Data in Raster Image
Dr. P. Sengottuvelan, Mr. R. Arulmurugan, Mr. R. Lokeshkumar
Department of Information Technology, Bannari Amman Institute of Technology, Sathyamangalam, India
Abstract––This paper addresses the need for an automated digital library management system. The purpose is to automate the analysis of data contained in raster-image documents for intelligent information retrieval in a digital library. An efficient and computationally fast method for segmenting the text and graphics parts of document images, based on multi-scale wavelet analysis and statistical pattern recognition, is presented. The extracted text is further classified into title, author name, name of the publication, etc., and stored in the database for further library-related operations. Our segmentation scheme assumes no a priori information about the font size, scanning resolution, type of layout, etc. of the document.
Keywords–––Document segmentation, Daubechies wavelet, multi-scale wavelet analysis, a priori information, Fourier transform.
I. INTRODUCTION
In today's world, automated processing and reading of documents has become an imperative need. Efforts have been made to store documents in digitized form, but that requires an enormous amount of storage space, even after compression using modern techniques. Documents can be represented more effectively by separating the text and the graphics/image parts, storing the text as an ASCII (character) set and the graphics/image part as bitmaps. Document image segmentation plays an important role here because it facilitates efficient searching and storage of the text part of documents, as required in large databases. Consequently, several researchers have attempted different techniques to segment the text and graphics parts of document images [1]. Several useful techniques for text–graphics segmentation have been reported, the most popular being the top-down and bottom-up approaches [2]-[4].
The most common top-down techniques are run-length smoothing and projection profiles. Top-down approaches first split the document into blocks, which are then identified and subdivided appropriately: into columns first, and then into paragraphs, text lines, and possibly words [5]-[8]. Some assume these blocks to be rectangular only; because of this restriction, top-down methods are not suitable for skewed text. Bottom-up methods, by contrast, are typically variants of connected-component analysis, which iteratively groups together components of the same type, starting from the pixel level, to form higher-level descriptions of the printed regions of the document (words, text lines, paragraphs, etc.). The drawback of the connected-components method is that it is sensitive to character size, scanning resolution, and inter-line and inter-character spacing. A wavelet-based tool has been designed for distinguishing text from non-text regions and for characterizing font sizes [8]. Some of the common difficulties that occur in documents are given below:
- Differences in font size, column layout, orientation, and other textual attributes.
- Skewed documents and text regions with different orientations.
- Degraded documents due to improper scanning.
- Combinations of varying text and background gray levels.
- Text regions touching or overlapping with non-text regions.
- Irregular layout structures with non-convex or overlapping object boundaries.
- Multi-column documents with misaligned text lines and different languages.
Thus, to develop a full-fledged system, all of the above difficulties must be overcome.
II. PROPOSED SYSTEM
In the proposed system, a technique called document image segmentation is used: text such as the title, author name, name of the publication, etc. is extracted from the scanned image (the front cover of the book), classified accordingly, and stored in the database for further library-related operations. The proposed system thus avoids the need for manual entry of this information into the database.
Fig.1: Raster Image
A. Working principle of the proposed system
- The image is scanned from the book using a good-quality scanner.
- The textual regions are separated and classified accordingly.
- The segmented text is then stored in the database for further library-related operations.
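The working principle above can be sketched as a small pipeline. This is only an illustrative skeleton: `scan_cover`, `segment_text`, `classify_fields`, and `store_record` are hypothetical stand-ins for the scanning, wavelet segmentation, classification, and database stages, and the returned values are placeholder data, not the paper's actual output.

```python
# Hypothetical pipeline sketch; all function names and return values are
# illustrative placeholders, not the paper's implementation.

def scan_cover(path):
    # Placeholder: a real system would return the scanned raster image.
    return {"source": path, "pixels": [[0] * 4 for _ in range(4)]}

def segment_text(image):
    # Placeholder for the wavelet-based text/graphics segmentation stage.
    return ["Document Image Segmentation", "P. Sengottuvelan"]

def classify_fields(text_regions):
    # Placeholder classification of segmented text into library fields.
    return {"title": text_regions[0], "author": text_regions[1]}

def store_record(database, record):
    # The classified fields are stored for later library operations.
    database.append(record)
    return database

db = []
image = scan_cover("front_cover.jpg")
record = classify_fields(segment_text(image))
store_record(db, record)
print(db[0]["title"])
```

The point of the sketch is only the data flow: scan, segment, classify, store, with no manual data entry anywhere in the chain.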
III. WAVELET THEORY
The wavelet transform has many unique features that have made it a popular method for image processing. It achieves a high degree of decorrelation between neighboring pixels and provides distinct localization of the image in the spatial as well as the frequency domain. The transform also provides an elegant sub-band framework in which both the high- and low-frequency components of the image can be analyzed separately.
A. Wavelet transform vs. Fourier transform
1. Similarities
The fast Fourier transform (FFT) and the discrete wavelet transform (DWT) are both linear operations that generate a data structure containing log2(n) segments of various lengths, usually filling it and transforming it into a data vector of length 2^n. The mathematical properties of the matrices involved in the transforms are similar as well. The inverse transform matrix for both the FFT and the DWT is the transpose of the original. As a result, both transforms can be viewed as a rotation in function space to a different domain. For the FFT, this new domain contains basis functions that are sines and cosines. For the wavelet transform, this new domain contains more complicated basis functions called wavelets, mother wavelets, or analyzing wavelets.
The transforms share another similarity: the basis functions are localized in frequency, making mathematical tools such as power spectra useful for picking out frequencies and calculating power distributions.
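The "rotation in function space" remark can be checked numerically: the DFT matrix, once normalized by 1/sqrt(n), is unitary, so its inverse is its conjugate transpose. A small pure-Python illustration for n = 4 (the variable names are ours, not from the paper):

```python
import cmath

n = 4
# Normalized DFT matrix: F[j][k] = exp(-2*pi*i*j*k/n) / sqrt(n)
F = [[cmath.exp(-2j * cmath.pi * j * k / n) / cmath.sqrt(n) for k in range(n)]
     for j in range(n)]

# Multiply F by its conjugate transpose; the result should be the identity.
P = [[sum(F[j][m] * F[k][m].conjugate() for m in range(n)) for k in range(n)]
     for j in range(n)]

identity_ok = all(abs(P[j][k] - (1 if j == k else 0)) < 1e-12
                  for j in range(n) for k in range(n))
print(identity_ok)
```

The same unitarity property holds for an orthonormal DWT matrix, which is why both transforms behave like rotations into a new basis.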
2. Dissimilarities
The most interesting dissimilarity between these two kinds of transforms is that individual wavelet functions are localized in space; the Fourier sine and cosine functions are not. This localization in space, along with the wavelets' localization in frequency, makes many functions and operators sparse when transformed into the wavelet domain. This sparseness, in turn, results in a number of useful applications such as data compression, detecting features in images, and removing noise from time series.
Another advantage of wavelet transforms is that the width of the analysis windows varies. In order to isolate signal discontinuities, one would like to have some very short basis functions; at the same time, in order to obtain detailed frequency analysis, one would like to have some very long basis functions. A way to achieve both is to use short high-frequency basis functions and long low-frequency ones. Wavelet analysis thus provides immediate access to information that can be obscured by other time-frequency methods such as Fourier analysis.
B. Wavelet Transform Applied on Images
To use the wavelet transform for image processing, we must implement a 2-D version of the analysis and synthesis filter banks. In the 2-D case, the 1-D analysis filter bank is first applied to the columns of the image and then to the rows. If the image has N1 rows and N2 columns, then after applying the 1-D analysis filter bank to each column we have two sub-band images, each with N1/2 rows and N2 columns; after applying the 1-D analysis filter bank to each row of both sub-band images, we have four sub-band images, each with N1/2 rows and N2/2 columns. The 2-D synthesis filter bank combines the four sub-band images to recover the original image of size N1 by N2. This is shown in Fig.2.
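The separable column-then-row scheme just described can be sketched as follows. This is a minimal illustration using the two-tap Haar (db1) filters on a small 4x4 array, not the paper's actual eight-tap filter bank; `analyze_1d` and `analyze_2d` are our illustrative names.

```python
import math

S = 1 / math.sqrt(2)

def analyze_1d(v):
    # One level of the 1-D Haar analysis filter bank: lowpass and highpass
    # outputs, each half the input length.
    low = [(v[2*i] + v[2*i + 1]) * S for i in range(len(v) // 2)]
    high = [(v[2*i] - v[2*i + 1]) * S for i in range(len(v) // 2)]
    return low, high

def transpose(m):
    return [list(r) for r in zip(*m)]

def analyze_2d(img):
    # Columns first: two sub-band images of size N1/2 x N2.
    cols = transpose(img)
    low_cols, high_cols = zip(*(analyze_1d(c) for c in cols))
    L, H = transpose(low_cols), transpose(high_cols)
    # Then rows: four sub-band images, each of size N1/2 x N2/2.
    def rows(m):
        lo, hi = zip(*(analyze_1d(r) for r in m))
        return [list(r) for r in lo], [list(r) for r in hi]
    LL, LH = rows(L)
    HL, HH = rows(H)
    return LL, LH, HL, HH

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
LL, LH, HL, HH = analyze_2d(img)
print(len(LL), len(LL[0]))  # each sub-band is N1/2 x N2/2
```

For this smooth ramp image the diagonal band HH comes out identically zero, while LL carries the coarse approximation, which matches the sub-band interpretation above.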
Fig.2: Multiresolution wavelet decomposition of an image.
Figure 3 shows the decomposition of an image into four frequency bands using the wavelet transform, as explained above.
Fig.3: Decomposition of an image into four frequency bands by the wavelet transform.
The continuous wavelet transform of a function f is given as

    W_f(a, b) = ∫ f(x) ψ_{a,b}(x) dx,

where ψ_{a,b}(x) = (1/√a) ψ((x − b)/a) denotes the dilated and translated mother wavelet.
Applying the wavelet transform to our sample image: in this paper we have taken a sample image (the front cover of a book). A good-quality scanner must be used so that the scanned image is free from noise and distortion; any residual noise is eliminated using suitable filtering techniques. The scanned image can be in any format, such as JPEG, BMP, or TIFF; here we have used the JPEG format. Image decomposition can be carried out to many levels, and many types of wavelets, such as Haar, Daubechies, coiflets, and symlets, can be used. Here we have used the Daubechies wavelet to decompose the image into various levels. The following figure illustrates the wavelet transform applied to the sample image.
Level 1 Filter type: db1
Level 2 Filter type: db1
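The two decomposition levels shown above (filter type db1) can be sketched in 1-D: at each level the lowpass (approximation) output is decomposed again, halving its length. The signal values below are arbitrary illustrative data, not taken from the paper's image.

```python
import math

S = 1 / math.sqrt(2)

def haar_step(v):
    # One db1 (Haar) analysis step: approximation and detail coefficients.
    low = [(v[2*i] + v[2*i + 1]) * S for i in range(len(v) // 2)]
    high = [(v[2*i] - v[2*i + 1]) * S for i in range(len(v) // 2)]
    return low, high

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]

# Level 1: approximation a1 and detail d1, each of length 4.
a1, d1 = haar_step(signal)
# Level 2: decompose a1 again into a2 and d2, each of length 2.
a2, d2 = haar_step(a1)
print(len(a1), len(a2))
```

On an image, the same iteration is applied to the LL sub-band at each level, which is what the Level 1 and Level 2 panels above depict.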
C. Wavelet Scale–Space Features
The feature-extraction scheme that we have used consists of a multi-channel filtering stage and a subsequent nonlinear stage followed by a smoothing filter (the latter two constitute the local energy estimator) [8], [9]. The objectives of the filtering and of the local energy estimator are to transform the edges between textures into detectable discontinuities.
The filter bank, in essence, is a set of bandpass filters with frequency- and orientation-selective properties. In the filtering stage, we use an eight-tap, four-band, orthogonal and linear-phase wavelet transform to decompose the textured images into M×M channels corresponding to different directions and scales [14]. The one-dimensional (1-D) four-band wavelet filter impulse responses, denoted by ψr, are given in Table I, and their corresponding transfer functions are denoted Hr for r = 1, 2, ..., 4. In this paper, we extend the decomposition to the 2-D case by successively applying the M-band transform separably in the horizontal and vertical directions without downsampling (i.e., an overcomplete representation). The size of the filter is an important factor: the filter length increases with the level of decomposition, since the sequences of low-pass and bandpass filters of increasing width are expanded by inserting an appropriate number of zeros between the taps. If the filter length becomes too large, it may bias the decomposition of the image. We have chosen an eight-tap filter to suit the size of the image considered in this study (i.e., 512×512).
Table I - Coefficients of the eight-tap four-band wavelet filters.

n    ψ1(n)          ψ2(n)          ψ3(n)          ψ4(n)
0    -0.067371764   -0.09419511    -0.09419511    -0.067371764
1     0.09419511     0.067371764   -0.067371764   -0.09419511
2     0.40580489     0.56737176     0.56737176     0.40580489
3     0.56737176     0.40580489    -0.40580489    -0.56737176
4     0.56737176    -0.40580489    -0.40580489     0.56737176
5     0.40580489    -0.56737176     0.56737176    -0.40580489
6     0.09419511    -0.067371764    0.067371764    0.09419511
7    -0.067371764    0.09419511    -0.09419511     0.067371764
The objective of the filtering is to locate the discontinuities that exist within the image. The spectral response is strongest along the direction perpendicular to an edge and decreases as the direction of the filter approaches that of the edge [16]. Therefore, we can perform edge detection by 2-D filtering of the image as follows:
- Horizontal edges are detected by high-pass filtering on columns and low-pass filtering on rows.
- Vertical edges are detected by low-pass filtering on columns and high-pass filtering on rows.
- Diagonal edges are detected by high-pass filtering on both columns and rows.
- Horizontal–diagonal edges are detected by high-pass filtering on columns and low-pass filtering on rows.
- Vertical–diagonal edges are detected by low-pass filtering on columns and high-pass filtering on rows.
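The directional rules above can be illustrated with the simplest possible pair, the two-tap Haar lowpass [1,1]/√2 and highpass [1,-1]/√2 (the paper's actual filters are the eight-tap bank in Table I). For an image containing only a vertical edge, high-pass filtering along the rows responds strongly, while high-pass filtering along the columns gives no response:

```python
import math

S = 1 / math.sqrt(2)

def filt(v, kernel):
    # Valid-mode 1-D filtering of v with a two-tap kernel.
    return [kernel[0] * v[i] + kernel[1] * v[i + 1] for i in range(len(v) - 1)]

low, high = [S, S], [S, -S]

# A 4x4 image containing a single vertical edge between columns 1 and 2.
img = [[0, 0, 1, 1] for _ in range(4)]

# High-pass along rows: responds at the vertical edge.
row_high = [filt(r, high) for r in img]
# High-pass along columns: no response, since every column is constant.
cols = [list(c) for c in zip(*img)]
col_high = [filt(c, high) for c in cols]

row_energy = sum(x * x for r in row_high for x in r)
col_energy = sum(x * x for c in col_high for x in c)
print(row_energy > 0 and col_energy == 0)
```

This is exactly the "response perpendicular to the edge" behavior the text describes: the filter oriented across the edge carries all the energy.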
A typical edge-detection filter corresponding to a particular direction covers a certain region of the 2-D spatial-frequency domain. Based on this concept, several wavelet-decomposition filters are designed, given by the summations Σ_Reg H_{r,c}, where Reg denotes the frequency sector of a certain direction and scale.
Fig.4: Frequency bands corresponding to decomposition filters.
The basic decomposition scheme followed in this paper is given in Fig.5.
D. Local Energy Estimation
The next step is to estimate the energy of the filter responses in a local region around each pixel. The local energy estimate is used to identify, in each channel, the areas where the bandpass frequency components are strong (yielding a high energy value) and the areas where they are weak (yielding a low energy value). Although energy is usually defined in terms of a squaring nonlinearity, other alternatives are also used in a generalized energy function.
We have studied several nonlinear operators, including the magnitude operation, the average absolute deviation, and the standard deviation, calculated over small overlapping windows around each pixel. This nonlinear operator is parameter-free, i.e., independent of the dynamic range of the input image and of the filter amplification.
Fig.5: Basic decomposition scheme
The images resulting from these operations are the features, denoted Feat_hor,i, Feat_ver,i, etc., for i = 1, 2, 3, as shown in Fig.5.
Fig.6: (a) Test image with the document skewed and text regions with different orientations. (b) Segmented result.
It should be noted that all of our experiments were performed with no a priori knowledge about the input image: we had no information about the font size or format of the text. While such knowledge can definitely improve the segmentation results, exploiting it would require supervised segmentation.
IV. CONCLUSION
We have segmented the text regions from the scanned image (the front cover of a book). The segmented text must be subjected to further analysis to identify the characters, which facilitates the intelligent information retrieval used in our system. There is clearly a need to digitize documents to make them easily accessible via computers and networks, but it is not absolutely necessary to align the document in the raster direction.
The system thus overcomes the difficulties of conventional bar-code scanning, in which the user must manually enter the details of every cover image. This method of automation is more efficient and requires less human labour. An automated image information system has thus been developed that minimizes manual interference, considerably saving time and money; the system nevertheless has its own difficulties, such as recognizing text with different font sizes and styles.
REFERENCES
[1]. S. N. Srihari, "Document image understanding," Proc. IEEE Computer Society Fall Joint Computer Conf., pp. 87–96, Nov. 1986.
[2]. P. Chauvet, J. Lopez-Krahe, E. Taflin, and H. Maitre, "System for an intelligent office document analysis, recognition and description," Signal Processing, vol. 32, no. 1–2, pp. 161–190, 1993.
[3]. F. M. Wahl, K. Y. Wong, and R. G. Casey, "Block segmentation and text extraction in mixed text/image documents," Comput. Graph. Image Process., vol. 20, pp. 375–390, 1982.
[4]. F. Shih, S.-S. Chen, D. Hung, and P. Ng, "A document image segmentation, classification and recognition system," in Proc. Int. Conf. Systems Integration, 1992, pp. 258–267.
[5]. O. Iwaki, H. Kida, and H. Arakawa, "A segmentation method based on office document hierarchical structure," in Proc. Int. Conf. Systems, Man and Cybernetics, 1987, pp. 375–390.
[6]. D. Wang and S. N. Srihari, "Classification of newspaper image blocks using texture analysis," Comput. Vis., Graph., Image Process., vol. 47, pp. 327–352, 1989.
[7]. G. Nagy, S. Seth, and M. Viswanathan, "A prototype document image analysis system for technical journals," Computer, vol. 25, no. 7, pp. 10–22, 1992.
[8]. M. Acharyya and M. K. Kundu, "Document image segmentation using wavelet scale-space features," IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 12, Dec. 2002.
[9]. M. Krishnamoorthy, G. Nagy, S. Seth, and M. Viswanathan, "Syntactic segmentation and labeling of digitized pages from technical journals," IEEE Trans. Pattern Anal. Machine Intell., vol. 15, no. 7, pp. 737–747, 1993.
[10]. G. Loum, P. Provent, J. Lemoine, and E. Petit, "A new method for texture classification based on wavelet transform," in Proc. 3rd Int. Symp. Time-Frequency and Time-Scale Analysis, June 1996, pp. 29–32.
[11]. S. G. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, July 1989.
[12]. O. Rioul and M. Vetterli, "Wavelets and signal processing," IEEE Signal Processing Mag., vol. 8, pp. 14–38, Oct. 1991.
[13]. O. Alkin and H. Caglar, "Design of efficient M-band coders with linear phase and perfect reconstruction properties," IEEE Trans. Signal Processing, vol. 43, pp. 1579–1590, July 1995.
[14]. F. Y. Shih and S. Chen, "Adaptive document block segmentation and classification," IEEE Trans. Syst., Man, Cybern. B, vol. 26, pp. 797–802, Oct. 1996.
[15]. M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall, 1995, ch. 7.
[16]. D. Wang and S. N. Srihari, "Classification of newspaper image blocks using texture analysis," Comput. Vis., Graph., Image Process., vol. 47, pp. 327–352, 1989.
[17]. Y. Meyer, Wavelets and Operators. Cambridge, U.K.: Cambridge Univ. Press, 1992.
[18]. O. J. Kwon and R. Chellappa, "Segmentation based image compression," Opt. Eng., vol. 32, pp. 1581–1587, 1993.
[19]. M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. Image Processing, vol. 4, pp. 1549–1560, Nov. 1995.
[20]. A. Laine and J. Fan, "Frame representation for texture segmentation," IEEE Trans. Image Processing, vol. 5, pp. 771–779, May 1996.
[21]. T. Chang and C.-C. J. Kuo, "Texture analysis and classification with tree-structured wavelet transform," IEEE Trans. Image Processing, vol. 2, pp. 42–44, Apr. 1993.