This document compares two methods for compressing 3D animation data: Frame-based Animated Mesh Compression (FAMC) and Optimized Mesh Traversal (OMT). FAMC was adopted in the MPEG-4 standard but has known weaknesses, while OMT applies principal component analysis and optimizes the mesh traversal order. Research has found that OMT achieves higher compression efficiency than FAMC, especially for irregular meshes, because its emphasis on triangle regularity lets vectors be defined more efficiently and the traversal order be optimized.
This document discusses GPU-based image compression and interpolation using anisotropic diffusion. It presents a method for image compression using binary tree triangular coding to store pixel coordinates. For decompression and interpolation, a partial differential equation (PDE) method called Perona and Malik diffusion is used. Performance of the PDE-based interpolation algorithm is evaluated on CPU and GPU architectures, demonstrating that the GPU implementation significantly reduces computation time, especially for higher resolution images.
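The paper's GPU code is not reproduced here, but the Perona-Malik diffusion it relies on can be sketched on the CPU with NumPy. The iteration count, `kappa`, and time step below are illustrative choices, not values from the paper:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, dt=0.2):
    """Sketch of Perona-Malik anisotropic diffusion on a 4-neighbour stencil.

    kappa controls edge sensitivity; dt is the time step (dt <= 0.25
    keeps the explicit scheme stable). Boundary handling here is
    periodic via np.roll, a simplification.
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g(|grad|) = exp(-(|grad|/kappa)^2):
        # diffusion is damped across strong edges, preserving them.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because the update is a convex combination of a pixel and its neighbours, the smoothing never overshoots the original intensity range.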
The document presents a new efficient color image compression technique that aims to improve the quality of decompressed images while achieving higher compression ratios. It does this by compressing important edge parts of the image differently than non-edge background parts. Specifically, it applies low-quality lossy compression to non-edge parts and high-quality lossy compression to edge parts. The technique uses edge detection, adaptive thresholding based on local variance and mean, and discrete cosine transform followed by quantization and entropy encoding. Experimental results on various images show it achieves better compression ratios, lower bit rates, and higher peak signal to noise ratios compared to non-adaptive methods.
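As a rough illustration of the edge/non-edge split described above, the following sketch labels blocks by local variance against a mean-based threshold. The block size and the `alpha` factor are assumptions for illustration, not parameters from the paper:

```python
import numpy as np

def classify_blocks(img, block=8, alpha=1.0):
    """Label each block as edge (True) or background (False).

    A block counts as an edge block when its variance exceeds alpha
    times the mean block variance; 'alpha' is an assumed knob.
    """
    h, w = img.shape
    h -= h % block
    w -= w % block
    # Reshape into a (rows, cols, block, block) grid of blocks.
    blocks = (img[:h, :w]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2))
    variances = blocks.var(axis=(2, 3))
    threshold = alpha * variances.mean()
    return variances > threshold
```

Edge blocks would then be routed to the high-quality path and background blocks to the low-quality path.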
11.0003www.iiste.org call for paper_d_discrete cosine transform for image com... - Alexander Decker
This document summarizes a research paper on 3D discrete cosine transform (DCT) for image compression. It discusses how 3D-DCT video compression works by dividing video streams into groups of 8 frames treated as 3D images with 2 spatial and 1 temporal component. Each frame is divided into 8x8 blocks and each 8x8x8 cube is independently encoded using 3D-DCT, quantization, and entropy encoding. It achieves better compression ratios than 2D JPEG by exploiting correlations across multiple frames. The document provides details on the 3D-DCT compression and decompression process. It reports that testing on a set of 8 images achieved a compression ratio of around 27 with this technique.
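The 8x8x8 encoding step can be illustrated with a separable 3D DCT built from an orthonormal 1D DCT-II matrix. This is a generic sketch of the transform, not the paper's implementation:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct3(cube):
    """Separable 3D DCT-II of a cube: the 1D transform applied along
    each of the three axes (two spatial, one temporal) in turn."""
    C = dct_matrix(cube.shape[0])
    out = np.tensordot(C, cube, axes=([1], [0]))                  # axis 0
    out = np.tensordot(C, out, axes=([1], [1])).transpose(1, 0, 2)  # axis 1
    out = np.tensordot(C, out, axes=([1], [2])).transpose(1, 2, 0)  # axis 2
    return out
```

Quantization and entropy coding would follow on the resulting coefficient cube, exactly as in the 2D JPEG pipeline but with an extra temporal axis.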
This document discusses parallel processing and compound image compression techniques. It examines the computational complexity and quantitative optimization of various image compression algorithms like BTC, DCT, DWT, DTCWT, SPIHT and EZW. The performance is evaluated in terms of coding efficiency, memory usage, image quality and quantity. Block Truncation Coding and Discrete Cosine Transform compression methods are described in more detail.
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES - ijcnac
This project presents a new image compression technique for coding retinal and fingerprint images. Retinal images are used to detect diseases such as diabetes or hypertension, while fingerprint images are used for security purposes. In this work, the contourlet transform of the retinal or fingerprint image is taken first. The contourlet coefficients are then quantized using an adaptive multistage vector quantization scheme, in which the number of code vectors depends on the dynamic range of the input image.
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM... - VLSICS Design
This document presents a pipelined architecture for 2D discrete cosine transform (DCT), quantization, and zigzag processing for JPEG image compression implemented on an FPGA. The 2D-DCT is computed using separability into two 1D-DCTs separated by a transpose buffer. A quantizer divides each DCT coefficient by a value from a quantization table stored in ROM. A zigzag buffer reorders the coefficients in zigzag format. The design was synthesized to Xilinx Spartan-3E FPGA and achieved a maximum frequency of 101.35MHz, processing an 8x8 block in 6604ns with a pipeline latency of 140 cycles.
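The zigzag reordering stage is standard JPEG behaviour and can be sketched in software; the hardware design uses a buffer, but the scan order it produces is the same:

```python
import numpy as np

def zigzag_indices(n=8):
    """(row, col) order of the standard JPEG zigzag scan for an n x n block.

    Coefficients on each anti-diagonal d = r + c are visited together;
    odd diagonals run top-right to bottom-left, even ones the reverse.
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag(block):
    """Serialize a square coefficient block in zigzag order."""
    return np.array([block[r, c] for r, c in zigzag_indices(block.shape[0])])
```

The scan groups low-frequency coefficients first, so the long runs of zeros after quantization land at the end of the stream, where run-length coding is most effective.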
Tissue Segmentation Methods Using 2D Histogram Matching in a Sequence of MR B... - Vladimir Kanchev
This presentation provides a detailed description of the methodology for segmenting brain tissues in MR image sequences using 2D histogram matching.
Tissue Segmentation Methods using 2D Histogram Matching in a Sequence of MR ... - Vladimir Kanchev
Methodology of the suggested method for tissue segmentation in MR brain images using 2D histogram matching. Each algorithmic step is given in detail and analyzed.
Performance Analysis of CRT for Image Encryption - ijcisjournal
With the fast advancement of information technology, securing image data transmitted or stored over the internet has become very difficult. An effective way to hide the details is encryption, so that only authorized persons with the keys can decrypt the image. Because of the inherent characteristics of digital images, such as high data capacity, large redundancy, and strong similarity among neighbouring pixels, conventional encryption algorithms such as AES, DES, 3DES, and Blowfish are not suitable for real-time image encryption. This paper presents the performance of CRT for image encryption to secure the storage and transmission of images over the internet.
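The abstract does not spell the scheme out, but assuming CRT here is the Chinese Remainder Theorem, a minimal residue-based pixel encryption could look like the following. The moduli choice is an illustrative assumption; the moduli play the role of the key:

```python
def crt(residues, moduli):
    """Reconstruct x mod prod(moduli) from its residues
    (Chinese Remainder Theorem, pairwise coprime moduli)."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # Modular inverse of Mi mod m (Python 3.8+ three-argument pow).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def encrypt_pixel(p, moduli):
    """Split a pixel value into CRT residues."""
    return [p % m for m in moduli]

def decrypt_pixel(residues, moduli):
    return crt(residues, moduli)
```

As long as the product of the moduli exceeds 255, every 8-bit pixel value round-trips exactly.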
Tissue segmentation methods using 2D histogram matching in a sequence of mr b... - Vladimir Kanchev
This presentation presents the segmentation results of the suggested method for tissue segmentation in MR brain images. For that purpose, we give benchmark results and additional details of our method's implementation.
Repeat-Frame Selection Algorithm for Frame Rate Video Transcoding - CSCJournals
To realize frame rate transcoding, a forward frame repeat mechanism is usually adopted to compensate for the skipped frames in the end-user's video decoder. However, based on our observation, repeating all skipped frames only in the forward direction is not always appropriate, and backward repeat sometimes achieves better results. To address this, we propose a new reference frame selection method that determines the repeat-frame direction for skipped Predictive (P) and Bidirectional (B) frames. For P-frames, the non-zero transformed coefficients and the magnitudes of the motion vectors are considered in deciding between forward and backward repeat. For B-frames, the motion vector magnitudes and the reference directions of the blocks are used as the decision criteria. Experimental results show that, compared with forward frame repeat, the proposed method provides average PSNR improvements of 1.34 dB for P-frames and 1.31 dB for B-frames.
Image morphing has been the subject of much attention in recent years. It has proven to be a powerful visual effects tool in film and television, depicting the fluid transformation of one digital image into another. This paper reviews the growth of this field and describes recent advances in image morphing in terms of three areas: feature specification, warp generation methods, and transition control. These areas relate to the ease of use and quality of results. We describe the role of radial basis functions, thin plate splines, energy minimization, and multilevel free-form deformations in advancing the state of the art in image morphing. Various techniques for morphing one digital image into another, such as feature-based, mesh-based, and thin-plate-spline-based image morphing, are compared on attributes including computational time, visual quality of the morphs obtained, and the complexity involved in selecting features. We demonstrate the pros and cons of each technique so that users can make an informed decision to suit their particular needs. Recent work on a generalized framework for morphing among multiple images is also described.
The document proposes a method to improve security for LSB2 steganography. It involves:
1. Using LSB2 to hide a message in a cover image, generating a holding image.
2. Encrypting the holding image by dividing it into blocks, reordering the blocks, and using the reordering sequence as a private key.
3. To extract the message, the encrypted image is decrypted using the private key, then LSB2 is applied to extract the message.
Experimental results show the proposed encryption reduces the peak signal-to-noise ratio and increases the mean squared error compared to LSB2 alone, improving security. Larger cover images give better results for LSB2 hiding.
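A minimal sketch of the LSB2 hide/extract step (without the block-reordering encryption) might look like this. The 2-bit chunk ordering is an assumption, not taken from the paper:

```python
import numpy as np

def lsb2_embed(cover, message_bytes):
    """Hide message bytes in the two least significant bits of each pixel.

    Each byte is split into four 2-bit chunks, most significant first.
    """
    flat = cover.flatten()  # flatten() copies, so the cover is untouched
    chunks = []
    for b in message_bytes:
        chunks.extend([(b >> s) & 0b11 for s in (6, 4, 2, 0)])
    if len(chunks) > flat.size:
        raise ValueError("cover image too small for message")
    for i, c in enumerate(chunks):
        flat[i] = (flat[i] & ~np.uint8(0b11)) | c
    return flat.reshape(cover.shape)

def lsb2_extract(stego, n_bytes):
    """Read the 2-bit chunks back and reassemble the message bytes."""
    flat = stego.flatten()
    out = bytearray()
    for i in range(n_bytes):
        q = flat[4 * i: 4 * i + 4] & 0b11
        out.append(int((q[0] << 6) | (q[1] << 4) | (q[2] << 2) | q[3]))
    return bytes(out)
```

Only the two low bits of each used pixel change, which is why the distortion stays small and why reducing PSNR further (via the block encryption) is what adds security.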
Copy Move Forgery Detection Using GLCM Based Statistical Features - ijcisjournal
Gray Level Co-occurrence Matrix (GLCM) features have mostly been explored in face recognition and CBIR. Here, the GLCM technique is explored for copy-move forgery detection. GLCMs are extracted from all the images in the database, and statistics such as contrast, correlation, homogeneity, and energy are derived. These statistics form the feature vector. A Support Vector Machine (SVM) is trained on these features, and the authenticity of each image is decided by the SVM classifier. The proposed work is evaluated on the CoMoFoD database; in total, 1200 forged and processed images are tested. The performance of the present work is compared with recent methods.
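A plain-NumPy sketch of the GLCM extraction and two of the named statistics (contrast and energy). The offset and normalisation are generic choices, not the paper's exact settings:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=256):
    """Normalised gray level co-occurrence matrix for one (dx, dy) offset.

    img must hold integer gray levels in [0, levels).
    """
    h, w = img.shape
    i = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    j = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (i.ravel(), j.ravel()), 1)  # count co-occurring pairs
    return m / m.sum()

def glcm_stats(p):
    """Contrast and energy, two of the statistics in the feature vector."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```

Correlation and homogeneity follow the same pattern of weighting the matrix entries; the resulting statistics per image would then be stacked into the SVM feature vector.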
This document presents a new approach for multiclass image segmentation and categorization using Bayesian networks and spatial Markov kernels. It first constructs an over-segmented image and Bayesian network to model relationships between image elements. Interactive segmentation is performed to match pixels to an outline provided by the user. The segmented image is then categorized using a spatial Markov kernel algorithm based on visual keywords assigned to image blocks. The approach achieves 93.5% accuracy on test images. It provides a probabilistic way to model image segmentation and allows new knowledge to be incorporated through the Bayesian network framework.
This document proposes a low bandwidth and low power video encoding method called MMSQ-EC. It uses scalar quantization to compress reference frames before storing them in external memory. For motion estimation, compressed reference frames are used, while only error data is fetched for motion compensation to recreate accurate reference pixels. This reduces external memory bandwidth and power compared to uncompressed reference frames. Experimental results on test videos show the method reduces required bandwidth by over 3x with minimal PSNR quality loss. The best compression performance is achieved with 8x8 pixel blocks.
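The reference-frame compression step can be illustrated with plain uniform scalar quantization. The step size and the exact error-data format are assumptions for illustration, not the MMSQ-EC design:

```python
import numpy as np

def quantize(frame, step=8):
    """Uniform scalar quantization of a reference frame.

    Storing only the quantized indices cuts the bits written to external
    memory; the residual ('error data') is what a decoder would fetch
    to rebuild exact reference pixels for motion compensation.
    """
    idx = np.round(frame.astype(np.int32) / step).astype(np.int32)
    recon = np.clip(idx * step, 0, 255).astype(np.uint8)
    error = frame.astype(np.int16) - recon.astype(np.int16)
    return idx, recon, error
```

Motion estimation can run on `recon` alone; adding `error` back recovers the frame exactly, so only the blocks actually referenced need their error data fetched.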
Halftoning-based BTC image reconstruction using patch processing with border ... - TELKOMNIKA JOURNAL
This paper presents a new halftoning-based block truncation coding (HBTC) image reconstruction using a sparse representation framework. HBTC is a simple yet powerful image compression technique that can effectively remove the typical blocking effect and false contours. Two types of HBTC methods are discussed in this paper: ordered dither block truncation coding (ODBTC) and error diffusion block truncation coding (EDBTC). The proposed sparsity-based method suppresses the impulsive noise in ODBTC and EDBTC decoded images with a coupled dictionary containing an HBTC image component dictionary and a clean image component dictionary. A sparse coefficient is estimated from the HBTC decoded image by means of the HBTC image dictionary, and the reconstructed image is subsequently built from the clean (non-compressed) image dictionary and the predicted sparse coefficient. To further reduce the blocking effect, each image patch is first classified as "border" or "non-border" before the sparse representation framework is applied. Adding Laplacian prior knowledge of the HBTC decoded image yields better reconstructed image quality. The experimental results demonstrate the effectiveness of the proposed HBTC image reconstruction, which also outperforms earlier schemes in terms of reconstructed image quality.
In this technical article, we present a novel algorithm for lossy compression in which performance and storage are controlled with a hardware description language (HDL).
This document presents a study on medial axis transformation (MAT) based skeletonization of image patterns using image processing techniques. It discusses how the MAT of an image can be extracted by first computing the Euclidean distance transform of the binary image. Local maxima in the distance transform image correspond to the MAT. Several performance evaluation metrics for analyzing skeletonized images are also introduced, such as connectivity number, thinness measurement and sensitivity. The technique is demonstrated on sample images and results show it can effectively extract the skeleton with good computational speed.
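The MAT extraction described above (distance transform, then local maxima) can be sketched as follows. The brute-force distance transform is only meant for small demo images:

```python
import numpy as np

def euclidean_dt(binary):
    """Brute-force Euclidean distance transform (fine for small demos)."""
    fg = np.argwhere(binary)
    bg = np.argwhere(~binary)
    dist = np.zeros(binary.shape)
    for y, x in fg:
        dist[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return dist

def medial_axis(binary):
    """Keep foreground pixels that are local maxima of the distance transform."""
    d = np.pad(euclidean_dt(binary), 1)
    # Stack the 8 neighbour values of every pixel (padding keeps borders safe).
    neigh = np.stack([np.roll(np.roll(d, dy, 0), dx, 1)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    core = d[1:-1, 1:-1]
    return binary & (core >= neigh.max(axis=0)[1:-1, 1:-1]) & (core > 0)
```

Metrics such as connectivity number or thinness would then be computed on the resulting skeleton mask.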
This document discusses the use of block coordinate descent (BCD) for training convolutional neural networks on computer vision tasks. The authors trained a CNN on MNIST and FashionMNIST datasets using both BCD and SGD. They found that BCD achieved slightly better accuracy than SGD (0.1% on average) when optimizing 2500 parameters per batch with a batch size of 50. BCD requires smaller batch sizes for stochasticity to improve convergence. While BCD did not significantly outperform SGD in terms of speed or accuracy on CPU, the authors believe improvements to the BCD implementation could make it faster than SGD, especially on GPUs.
This document contains summaries of interviews Spencer Hollandsworth conducted with entrepreneurs including Jeff Schwarting of Pivot Labs, Brad Cahoon of MRKT pop, Jordan Monroe of Phone Soap and Owlet, Eric Childs of FiberFix, and Corbin Church who has founded 6 companies. It also discusses lessons learned from a book on meeting and communicating with important people, and payoffs from the class including internships at ZenApply through Derek Miner and at Vidangel through Jeff Harmon.
This document presents the curriculum of the Bachelor's degree in Actuarial Science. It consists of 10 semesters covering subjects in mathematics, statistics, computing, economics, and finance. The subjects are organized into prerequisites and general and formative training components.
This document summarizes a webinar about converting iOS code to Android code using MyAppConverter. It outlines the agenda which includes an introduction to MyAppConverter, how to do a successful conversion, and how to use the Sprite4Droid plugin. It provides details on signing up for MyAppConverter, preparing an iOS project for conversion, performing the conversion process, and getting support. It also covers how to use a converted Android project, including importing into Android Studio or Eclipse, and resolving common issues. Finally, it discusses what Sprite4Droid is, how to install the plugin, and how to create and use Sprite4Droid projects in Android.
Area: Professional Improvement. Reflection on Teaching Practice - scar47
Law of the Professional Teaching Service.
Profile, Parameters and Indicators for Teachers and Technical Teachers, and a proposal of stages, aspects, methods, and evaluation instruments.
Area of professional improvement.
Systematic reflection on one's own professional practice.
Models of Reflective Practice.
Instruments of the Reflective Practice Model
IRJET- Efficient Image Encryption with Pixel Scrambling and Genetic Algorithm - IRJET Journal
The document proposes an efficient image encryption method using pixel scrambling and genetic algorithms. The method involves segmenting the input image into blocks, randomly shuffling the pixels within each block, slicing the scrambled image into bitplanes, and applying genetic algorithm operations of crossover and mutation to further encrypt the data. Experiments on standard images show the method achieves a uniform histogram and low correlation between pixels, indicating high encryption strength. It also has faster encryption speeds compared to other algorithms. The document concludes the hybrid genetic algorithm approach provides an efficient and secure way to encrypt images.
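The block-wise pixel shuffling stage (without the genetic-algorithm crossover and mutation) can be sketched like this. Treating the RNG seed as the key is an illustrative simplification:

```python
import numpy as np

def scramble_blocks(img, block=4, seed=7):
    """Shuffle pixels inside each block; the RNG seed plays the role of the key."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    perms = {}
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block]
            p = rng.permutation(patch.size)
            perms[(y, x)] = p
            out[y:y + block, x:x + block] = patch.ravel()[p].reshape(patch.shape)
    return out, perms

def unscramble_blocks(img, perms, block=4):
    """Invert the shuffle using the stored permutations (the key)."""
    out = img.copy()
    for (y, x), p in perms.items():
        patch = out[y:y + block, x:x + block]
        inv = np.argsort(p)  # argsort of a permutation is its inverse
        out[y:y + block, x:x + block] = patch.ravel()[inv].reshape(patch.shape)
    return out
```

Note that scrambling alone permutes values without changing them, which is why the paper layers bit-plane slicing and genetic operations on top to flatten the histogram.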
IRJET- LS Chaotic based Image Encryption System Via Permutation Models - IRJET Journal
This document proposes an image encryption system using logistic sine map and permutation models. The system works as follows:
1. A plain image is converted to grayscale and decomposed into 8 bit planes.
2. Each bit plane is randomly scrambled.
3. A logistic sine map is used to generate a key to partially encrypt each bit plane.
4. The bit planes are then permuted to obtain the final encrypted image. Logistic sine maps are well-suited to this approach due to their sensitivity to initial parameter values and their ability to generate seemingly random outputs. The system aims to increase security by efficiently scrambling and permuting the bit plane values of the input image.
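The chaotic key generation in step 3 can be illustrated with one common form of the logistic-sine map (the exact map variant, the parameters `x0` and `r`, and the XOR-based partial encryption are illustrative assumptions; published systems differ in these details):

```python
import math

def logistic_sine_keystream(x0, r, n):
    """Generate n key bytes from a logistic-sine chaotic map.

    One common form of the map (an assumption; papers vary):
      x_{k+1} = ( r*x_k*(1-x_k) + (4-r)*sin(pi*x_k)/4 ) mod 1
    """
    x = x0
    keys = []
    for _ in range(n):
        x = (r * x * (1 - x) + (4 - r) * math.sin(math.pi * x) / 4) % 1.0
        keys.append(int(x * 256) % 256)   # quantize chaotic state to a key byte
    return keys

def xor_encrypt(data, keys):
    """Partially encrypt a byte sequence by XOR with the keystream."""
    return [d ^ k for d, k in zip(data, keys)]

plain = [0, 17, 34, 51, 68, 85, 102, 119]
ks = logistic_sine_keystream(x0=0.37, r=3.91, n=len(plain))
cipher = xor_encrypt(plain, ks)
assert xor_encrypt(cipher, ks) == plain   # XOR is its own inverse
```

The sensitivity to `x0` and `r` noted above is what makes these parameters usable as a secret key.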
An improved image compression algorithm based on daubechies wavelets with ar... (Alexander Decker)
This document summarizes an academic article that proposes a new image compression algorithm using Daubechies wavelets and arithmetic coding. It first discusses existing image compression techniques and their limitations. It then describes the proposed algorithm, which applies Daubechies wavelet transform followed by 2D Walsh wavelet transform on image blocks and arithmetic coding. Results show the proposed method achieves higher compression ratios and PSNR values than existing algorithms like EZW and SPIHT. Future work aims to improve results by exploring different wavelets and compression techniques.
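The sub-band decomposition that such algorithms quantize can be illustrated with the simplest member of the Daubechies family, the Haar (db1) wavelet; this is an illustrative sketch, not the paper's exact filter bank:

```python
def haar_dwt_1d(signal):
    """One level of the 1-D Haar wavelet transform (db1 == Haar).
    Returns (approximation, detail) sub-bands of half length."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Invert one Haar level: perfect reconstruction from the two sub-bands."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt_1d(sig)
assert haar_idwt_1d(a, d) == sig                         # lossless before quantization
assert sum(abs(x) for x in d) < sum(abs(x) for x in a)   # detail band is sparse
```

Compression comes from the detail coefficients being small for smooth signals: quantizing or dropping them costs little quality, and the arithmetic coder then exploits the resulting skewed distribution.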
The document describes a system that uses AI technologies like optical character recognition, text summarization, and speech synthesis to automatically recognize text from images, summarize it, and generate an audio podcast of the summary. The system segments text from images using a neural network with differentiable binarization. It recognizes words using a temporal recognition network with connectionist temporal classification. It summarizes the text using either an abstractive transformer model or extractive PageRank algorithm. Finally, it generates a mel-spectrogram from the summary and synthesizes speech from the spectrogram using generative adversarial networks. The system aims to quickly digest lengthy publications for users.
Complex Background Subtraction Using Kalman Filter (IJERA Editor)
This work addresses background subtraction against dynamic backgrounds. At each location in the scene, the system extracts a sequence of regular video bricks, i.e., video volumes spanning both the spatial and temporal domains. Background modeling is thus posed as pursuing subspaces within the video bricks while adapting to scene variations. For each sequence of video bricks, the subspace is pursued with an autoregressive moving average model that jointly characterizes the appearance consistency and temporal coherence of the observations. During online processing, a Kalman filter tracking algorithm performs background/foreground classification, and the subspaces are incrementally updated to cope with disturbances from foreground objects and scene changes.
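The Kalman filtering used for background/foreground classification can be illustrated with a scalar filter tracking a slowly varying background intensity (the noise variances `q` and `r`, and the idea of flagging large residuals as foreground, are illustrative assumptions):

```python
def kalman_1d(z_seq, q=1e-3, r=0.5):
    """Scalar Kalman filter tracking a slowly varying background value.

    q: process noise variance, r: measurement noise variance (assumed values).
    Returns the filtered estimate after each measurement in z_seq.
    """
    x, p = z_seq[0], 1.0              # initial state estimate and covariance
    estimates = []
    for z in z_seq:
        p = p + q                     # predict: covariance grows by process noise
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# A steady background near 100 with one foreground spike at 180:
est = kalman_1d([100, 101, 99, 180, 100, 100])
assert abs(est[2] - 100) < 2   # the filter settles on the background level
assert est[3] > 105            # the spike produces a large, detectable residual
```

In a background-subtraction setting, a pixel whose measurement residual `z - x` exceeds a threshold would be classified as foreground, while small residuals update the background model.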
IRJET- Crowd Density Estimation using Novel Feature Descriptor (IRJET Journal)
This document proposes a new texture feature-based approach for crowd density estimation using Completed Local Binary Pattern (CLBP). The approach divides images into blocks and further divides blocks into cells to compute CLBP features for each cell. A multi-class Support Vector Machine (SVM) classifier is trained to classify each block into one of four crowd density categories (Very Low, Low, Medium, High). Experiments on the PETS 2009 dataset show the proposed CLBP descriptor achieves 95% accuracy and outperforms other texture descriptors for crowd density estimation.
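The texture coding underlying CLBP can be illustrated with the basic LBP operator on a 3x3 cell (CLBP extends this with completed sign/magnitude/center components, which this sketch omits):

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern code: threshold each neighbour
    against the center pixel and pack the results into an 8-bit code."""
    center = patch[1][1]
    # Clockwise neighbour order starting at the top-left corner:
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= center:
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
# Only the top three neighbours (9, 9, 9) are >= 5, setting bits 0..2:
assert lbp_code(patch) == 7
```

Histograms of these codes over each cell form the feature vector that the multi-class SVM classifies into the four density categories.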
Video Stitching using Improved RANSAC and SIFT (IRJET Journal)
1. The document discusses techniques for stitching multiple video frames into a panoramic video using Scale-Invariant Feature Transform (SIFT) and an improved RANSAC algorithm.
2. Key points and feature descriptors are extracted from frames using SIFT to find correspondences between frames. The improved RANSAC algorithm is used to estimate homography matrices between frames and filter outlier matches.
3. Frames are blended together to compensate for exposure differences and misalignments before being mapped to a reference plane to create the panoramic video mosaic. The algorithm aims to produce a high quality panoramic video in real-time.
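The RANSAC loop in step 2 follows the classic sample/score/keep-best pattern. A minimal sketch with a 2-point line model standing in for the 4-point homography model (the iteration count, tolerance, and seed are illustrative assumptions):

```python
import random

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """RANSAC sketch: robustly fit y = m*x + b to points with outliers.
    Returns the largest inlier set found."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                  # degenerate sample
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]  # 2 gross outliers
inliers = ransac_line(pts)
assert len(inliers) == 10   # the two outliers are rejected
```

For homography estimation, the minimal sample would be four point correspondences and the score would be the reprojection error of each SIFT match; the loop structure is unchanged.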
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA... (csandit)
This study proposes 3D modelling of steady-state real-world objects from multiple point-cloud depth scans taken with a sensing camera, together with the application of a smoothing algorithm. The 3D models are represented by a polygon structure built from the point-cloud coordinates (x, y, z) corresponding to the model's position in space, obtained by triangulating the nodal points and connecting them. Gaussian smoothing and the developed methods are applied to the mesh formed by merging these polygons, and a new mesh simplification and augmentation algorithm is proposed for the 3D modelling, so that the merged polygon mesh can be rendered in a more compact, smooth, and fluent way. The study shows that the applied triangulation and smoothing method yields fast and robust mesh structures compared to existing methods, and that no remeshing is necessary for refinement and reduction.
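The smoothing step can be illustrated with plain Laplacian smoothing, a simpler relative of the Gaussian smoothing applied in the study (the damping factor `lam` and iteration count are illustrative assumptions):

```python
def laplacian_smooth(vertices, neighbours, lam=0.5, iterations=10):
    """Move each vertex a fraction lam toward the centroid of its neighbours.

    vertices:   list of (x, y, z) tuples
    neighbours: adjacency list; neighbours[i] = indices connected to vertex i
    """
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            if not neighbours[i]:
                new.append(v)          # isolated vertex stays put
                continue
            centroid = [sum(verts[j][k] for j in neighbours[i]) / len(neighbours[i])
                        for k in range(3)]
            new.append([v[k] + lam * (centroid[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# A noisy spike in the middle of a straight strip is pulled back down:
vs = [(0, 0, 0), (1, 5, 0), (2, 0, 0)]
adj = [[1], [0, 2], [1]]
smoothed = laplacian_smooth(vs, adj)
assert smoothed[1][1] < 5   # the spike height shrinks
```

Gaussian smoothing replaces the uniform centroid with distance-weighted neighbour averages; the iteration structure is the same.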
ROI Based Image Compression in Baseline JPEG (IJERA Editor)
To improve the efficiency of the standard JPEG compression algorithm, an adaptive quantization technique supporting region-of-interest compression is introduced. Since this is a lossy technique, the less important bits are discarded and cannot be restored during decompression. Adaptive quantization applies two different quantization tables to the picture, as specified by the user: the user selects any part of the image and enters the required compression quality. If the subject matters more to the user than the background, the subject is given higher quality than the background, and vice versa. Adaptive quantization in baseline sequential JPEG is carried out by applying the Forward Discrete Cosine Transform (FDCT) and the two user-supplied quantization tables, thereby achieving region-of-interest compression, with the Inverse Discrete Cosine Transform (IDCT) used for decompression. This technique ensures that memory is used efficiently. Moreover, it has been designed specifically for clearly identifying defects in leather samples.
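The two-table adaptive quantization idea can be sketched with uniform scalar quantization per block (the step sizes and the flat coefficient lists are illustrative stand-ins for full 8x8 JPEG quantization tables):

```python
def quantize(coeffs, step):
    """Uniform quantization of transform coefficients."""
    return [round(c / step) for c in coeffs]

def dequantize(q, step):
    return [v * step for v in q]

def roi_quantize(blocks, roi_mask, fine_step=4, coarse_step=32):
    """Apply a fine quantizer inside the ROI and a coarse one outside.

    blocks:   list of coefficient lists (one per block, already DCT'd)
    roi_mask: list of booleans, True where the block lies in the ROI
    (fine_step/coarse_step values are illustrative, not from the paper.)
    """
    return [quantize(b, fine_step if in_roi else coarse_step)
            for b, in_roi in zip(blocks, roi_mask)]

blocks = [[100.0, -37.0, 12.0], [100.0, -37.0, 12.0]]
q = roi_quantize(blocks, roi_mask=[True, False])
roi_err = sum(abs(a - b) for a, b in zip(blocks[0], dequantize(q[0], 4)))
bg_err = sum(abs(a - b) for a, b in zip(blocks[1], dequantize(q[1], 32)))
assert roi_err <= bg_err   # the ROI block is reconstructed more faithfully
```

The coarse step produces smaller quantized indices outside the ROI, which the entropy coder then compresses more aggressively, trading background fidelity for bit rate.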
IRJET- Efficient JPEG Reconstruction using Bayesian MAP and BFMT (IRJET Journal)
This document discusses efficient JPEG reconstruction using Bayesian MAP and BFMT. It proposes using a Bayesian maximum a posteriori probability approach with an alternating direction method of multipliers iterative optimization algorithm. Specifically, it uses a learned frame prior and models the quantization noise as Gaussian. It also proposes using bilateral filter and its method noise thresholding using wavelets for image denoising as part of the JPEG reconstruction process. Experimental results show this approach improves reconstruction quality both visually and in terms of signal-to-noise ratio compared to other existing methods.
An Efficient Block Matching Algorithm Using Logical Image (IJERA Editor)
Motion estimation, widely used in various image sequence coding schemes, plays a key role in transmitting and storing video signals at reduced bit rates. There are two classes of motion estimation methods: block matching algorithms (BMA) and pel-recursive algorithms (PRA). Owing to their implementation simplicity, block matching algorithms have been widely adopted by video coding standards such as CCITT H.261, ITU-T H.263, and MPEG. In BMA, the current image frame is partitioned into fixed-size rectangular blocks, and the motion vector for each block is estimated by finding the best-matching block of pixels within a search window in the previous frame according to a matching criterion. The goal of this work is a fast method for motion estimation and motion segmentation using the proposed model. Present-day communication between endpoints is enabled by developments in wired and wireless networks, and transmitting large data files over a limited-bandwidth channel remains a challenge. Block matching algorithms are very useful for achieving efficient, acceptable compression; they determine the total computation cost and the effective bit budget, and any approach to motion estimation must keep these constraints in mind. This paper presents a novel method for block-based motion estimation using the three-step and diamond search algorithms with a modified search pattern based on a logical image. The proposed algorithm improves PSNR while achieving better (faster) computation time than the original Three-Step Search (3SS/TSS) method. Experimental results on a number of video sequences demonstrate the advantages of the proposed motion estimation technique.
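The block-matching core, a sum-of-absolute-differences (SAD) criterion evaluated over a search window, can be sketched with an exhaustive full search (the proposed method replaces this with three-step/diamond patterns over a logical image, but the matching criterion is the same; block size and search radius here are illustrative):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def full_search(cur_block, ref_frame, top, left, radius=2):
    """Exhaustive block matching: try every displacement in the window
    and return the motion vector (dy, dx) with the lowest SAD."""
    n = len(cur_block)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > len(ref_frame) or x + n > len(ref_frame[0]):
                continue   # candidate block falls outside the frame
            cand = [row[x:x + n] for row in ref_frame[y:y + n]]
            cost = sad(cur_block, cand)
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

# A 2x2 pattern that moved one pixel to the right between frames:
ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
cur_block = [[9, 8], [7, 6]]   # now located at (1, 2) in the current frame
assert full_search(cur_block, ref, top=1, left=2) == (0, -1)
```

Fast patterns such as three-step and diamond search evaluate the same SAD cost, but only at a small, shrinking set of candidate displacements instead of the full window.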
COMPOSITE IMAGELET IDENTIFIER FOR ML PROCESSORS (IRJET Journal)
The document proposes a composite imagelet identifier technique for machine learning processors that uses seam carving to manipulate edges and pixilation in images for processing. It discusses using existing algorithms like SSIM and Dijkstra's algorithm to calculate image energy and identify optimal seam locations for manipulation. The technique is evaluated using test images in MATLAB and is presented as having potential applications in areas like forestry, animal husbandry and safety monitoring.
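The optimal-seam search mentioned above is commonly formulated as dynamic programming over a per-pixel energy map; the sketch below uses the classic DP formulation for illustration (the paper's use of SSIM and Dijkstra's algorithm is not reproduced here):

```python
def min_vertical_seam(energy):
    """Dynamic-programming seam search: return, for each row, the column
    of the minimum-energy 8-connected vertical seam."""
    rows, cols = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for r in range(1, rows):
        prev = cost[-1]
        row = []
        for c in range(cols):
            candidates = [prev[c]]           # straight down
            if c > 0:
                candidates.append(prev[c - 1])   # down-left
            if c < cols - 1:
                candidates.append(prev[c + 1])   # down-right
            row.append(energy[r][c] + min(candidates))
        cost.append(row)
    # Backtrack from the cheapest bottom cell:
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        choices = [cc for cc in (c - 1, c, c + 1) if 0 <= cc < cols]
        seam.append(min(choices, key=lambda cc: cost[r][cc]))
    return list(reversed(seam))

energy = [[9, 1, 9],
          [9, 1, 9],
          [9, 9, 1]]
assert min_vertical_seam(energy) == [1, 1, 2]   # seam follows the low-energy path
```

Removing or duplicating the pixels along such a seam is what lets seam carving manipulate edges and content without uniformly rescaling the image.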
The document discusses implementing the Diamond Search algorithm for motion estimation in video compression using parallel processing on a GPU. Motion estimation is the most computationally expensive part of video compression. The Diamond Search algorithm was implemented on an NVIDIA GeForce 610 GPU using CUDA. Experimental results showed a 4x speedup compared to CPU implementation, demonstrating that GPUs can accelerate motion estimation to reduce video encoding time. Implementing fast motion estimation algorithms in parallel on GPUs is an effective approach for real-time video applications.
Advanced Algorithms for Etching Simulation of 3D MEMS-Tunable Lasers (ijctcm)
The integrated circuits (ICs) industry uses a number of technology computer-aided design (TCAD) software tools to simulate the manufacturing and operation of many ICs at different levels. At a very low level, simulation tools are used to model device fabrication and design. These tools are based on solving the mathematical equations that describe the physics of dopant diffusion, silicon oxidation, etching, deposition, lithography, implantation, and metallization. The simulation of physical etching solves etching equations to calculate the etching rate, and this rate is used to move the geometry of the device; the simulation of non-physical (geometrical) etching is based on geometrical Boolean operations. In this paper, we propose new and advanced geometrical etching algorithms for the process simulation of three-dimensional (3D) micro-electro-mechanical systems (MEMS) and MEMS-tunable vertical cavity semiconductor optical amplifiers (VCSOAs). These algorithms are based on advanced domain decomposition methods, Delaunay meshing algorithms, and surface re-meshing and smoothing techniques. They are simple, robust, and significantly reduce the overall run time of the process simulation of 3D MEMS and MEMS-tunable laser devices. The proposed etching algorithms are described, and numerical simulation results showing their performance are given and analyzed for realistic 3D MEMS and MEMS-tunable laser devices.
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS... (ijma)
ABSTRACT
Image usage over the internet grows more important each day: over 3 billion images are shared daily, which raises the questions of how to protect image copyrights and how to improve the image-sharing experience. This paper proposes a new robust image watermarking algorithm based on compressed sensing (CS) and quantization index modulation (QIM) watermark embedding. The algorithm capitalizes on CS to compress and encrypt images jointly with entropy coding, the Arnold cat map, pseudo-random numbers, and the Advanced Encryption Standard (AES), and works under the JPEG standard umbrella. Watermark embedding is done in three different locations inside the image using QIM, and those locations differ for each 8-by-8 image block. The combination of coefficients used in QIM watermark embedding is selected from a combinations table, generated together with the projection matrices using a 10-digit pseudo-random-number secret key SK1. After the quantization phase, the algorithm shuffles image blocks using Arnold's cat map with a 10-digit pseudo-random-number secret key SK2, followed by a unique method for splitting every 8x8 block into two unequal parts. Part one hosts two QIM watermarks and then goes through an encoding phase using run-length encoding (RLE) followed by Huffman encoding, while part two goes through sparse watermark embedding, a third QIM watermark embedding, a compression phase using CS, and finally Huffman encoding. The algorithm aims to combine image watermarking, compression, and encryption capabilities in one algorithm while balancing how these capabilities work with each other, achieving significant improvements in all three.
Fifteen images commonly used for image-processing benchmarking were used to test the algorithm's capabilities, and experiments show that the proposed algorithm achieves robust watermarking jointly with encryption and compression under the JPEG standard framework.
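The QIM embedding used throughout can be sketched with scalar lattice quantization (the step size `delta` is an illustrative assumption; the paper applies this to selected 8x8 block coefficients):

```python
def qim_embed(coeff, bit, delta=8.0):
    """Quantization Index Modulation: snap a coefficient onto one of two
    interleaved lattices, selected by the watermark bit."""
    offset = delta / 2 if bit else 0.0
    return round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    """Recover the bit by checking which lattice the coefficient is nearest."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

coeffs, bits = [13.7, -5.2, 41.0], [1, 0, 1]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
assert [qim_extract(c) for c in marked] == bits
```

The embedding distortion is bounded by `delta / 2`, and the extractor needs no side information beyond `delta`, which is what makes QIM a blind watermarking scheme.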
Survey paper on image compression techniques (IRJET Journal)
This document summarizes and compares several popular image compression techniques: wavelet compression, JPEG/DCT compression, vector quantization (VQ), fractal compression, and genetic algorithm compression. It finds that all techniques perform satisfactorily at 0.5 bits per pixel, but for very low bit rates like 0.25 bpp, wavelet compression techniques like EZW perform best in terms of compression ratio and quality. Specifically, EZW and JPEG are more practical than others at low bit rates. The document also notes advantages and disadvantages of each technique and concludes hybrid approaches may achieve even higher compression ratios while maintaining image quality.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses image compression using discrete wavelet transform (DWT) and principal component analysis (PCA). It first reviews several related works that use transforms like curvelet, wavelet and discrete cosine transform for image compression. It then describes preprocessing the input image using DWT to decompose it into sub-bands, and applying PCA on the high-frequency sub-bands to reduce dimensions and compress the image while preserving important boundaries. The algorithm is implemented and evaluated based on metrics like peak signal-to-noise ratio, standard deviation and entropy. Results show 95% accuracy in image identification from a database, though processing time increases significantly with database size.
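The PCA dimension-reduction step can be illustrated with power iteration on the covariance matrix of 2-D points (a stand-in for PCA over high-frequency sub-band vectors; the data and iteration count are illustrative assumptions):

```python
def principal_component(data, iters=100):
    """Power iteration on the 2x2 covariance matrix: returns the dominant
    principal axis of 2-D points as a unit vector (vx, vy)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centred = [(x - mx, y - my) for x, y in data]
    # Covariance matrix entries:
    cxx = sum(x * x for x, _ in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        nx, ny = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    return vx, vy

# Points spread along the line y = x: the principal axis is near the diagonal.
pts = [(-2, -2.1), (-1, -0.9), (0, 0.05), (1, 1.1), (2, 1.9)]
vx, vy = principal_component(pts)
assert abs(abs(vx) - abs(vy)) < 0.1
```

Projecting the sub-band coefficients onto the top few such axes discards low-variance directions, which is the source of the compression while important structure is preserved.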
Review of Diverse Techniques Used for Effective Fractal Image Compression (IRJET Journal)
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWTFIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
A VIDEO COMPRESSION TECHNIQUE UTILIZING SPATIO-TEMPORAL LOWER COEFFICIENTS (IAEME Publication)
With recent advances in communication, video compression plays an important role in transmitting information on social networks and in storage with limited memory capacity. Inadequate transmission bandwidth and low quality also make video compression a serious concern in the field of communication. There is a need to improve the video compression process so that video data can be encoded with low computational complexity and better quality while maintaining speed. In this work, a new technique is developed based on block processing that utilizes the lower coefficients between frames.
Low power architecture of logic gates using adiabatic techniques (nooriasukmaningtyas)
The growing significance of portable systems, and the need to limit power consumption in very-high-density ultra-large-scale-integration chips, has recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low-power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct-current diode-based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of the basic gates, one can improve the performance of the whole system. The paper presents proposed low-power circuit designs of OR/NOR, AND/NAND, and XOR/XNOR gates using these approaches, and analyzes the results for power dissipation, delay, power-delay product, and rise time, comparing them with other adiabatic techniques and with conventional complementary metal-oxide-semiconductor (CMOS) designs reported in the literature. The designs with the DC-DB PFAL technique were found to outperform the modified PFAL technique at 10 MHz, with improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM). We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. Our experiments show that the CNN-LSTM method detects smart grid intrusions much better than other deep learning classification algorithms, improving accuracy, precision, recall, and F1 score and achieving a high detection accuracy rate of 99.50%.
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Introduction – e-waste – definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution – global scenario of e-waste – e-waste in India – case studies.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... (University of Maribor)
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT (jpsjournal1)
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on that power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and conventional and nontraditional security are explored and explained. Using Mackinder's Heartland theory, Spykman's Rimland theory, and Hegemonic Stability theory, the study examines China's role in Central Asia. It adheres to an empirical epistemological method with care for objectivity, critically analyzing primary and secondary research documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. The study finds that China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, thanks to the effective use of key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. Deployments of photovoltaic (PV) and electric vehicle (EV) systems have gained stronger momentum due to their numerous advantages over fossil fuel alternatives, advantages that go beyond sustainability to include financial support and stability. This paper introduces a hybrid PV-EV system to support industrial and commercial plants. It covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram that sets the priorities and requirements of the system. The proposed approach allows facilities to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy farmer support the theoretical work and highlight the benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.