This document discusses various algorithms for super-resolution image reconstruction. It begins by introducing the topic and defining super-resolution as producing a high-resolution image from multiple low-resolution images. It then presents a mathematical model to describe the super-resolution problem and shows how existing single-image restoration algorithms like maximum likelihood (ML), maximum a posteriori (MAP), and projection onto convex sets (POCS) can be applied to the super-resolution problem. Finally, it proposes a hybrid algorithm combining POCS and ML to improve performance and ensure convergence for super-resolution image reconstruction.
A Quantitative Comparative Study of Analytical and Iterative Reconstruction T... (CSCJournals)
A special image restoration problem is the reconstruction of an image from its projections, a problem of immense importance in medical imaging, computed tomography, and non-destructive testing of objects [1]. Here a two-dimensional (or higher-dimensional) object is reconstructed from several one-dimensional projections [1]. Reconstruction techniques are broadly classified into three categories: analytical, iterative, and statistical [2]. A comparative study among these is of great importance in medical imaging. This paper quantitatively compares the quality of images reconstructed by analytical and iterative techniques. Parallel-beam projections for the reconstruction are calculated analytically from the Shepp-Logan head phantom, with coverage angles ranging from 0 to ±180° and rotational increments of 2° to 10°. For iterative reconstruction, a coverage angle of ±90° and up to 10 iterations are used. The original image is a grayscale image of size 128 × 128, and the quality of the reconstructed image is measured with six quality parameters. Simple back projection and filtered back projection are implemented as the analytical techniques, and the algebraic reconstruction technique as the iterative one. Experimental results reveal that the quality of the reconstructed image increases as the coverage angle and the number of views increase, and that processing time is a major deciding factor in choosing a reconstruction method. Keywords: Reconstruction algorithm, Simple back projection (SBP), Filtered back projection (FBP), Algebraic reconstruction technique (ART), Image quality, Coverage angle, Computed tomography (CT).
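The simple back projection technique compared above can be illustrated with a minimal sketch. The following is our own toy back projector, not the paper's code; all names, sizes, and angles are illustrative. Each parallel-beam projection is smeared back across the image plane along its acquisition angle, which recovers a blurred version of the object (filtered back projection would additionally ramp-filter each projection first).

```python
import numpy as np

# Hypothetical sketch of unfiltered "simple back projection" (SBP) for
# parallel-beam data; names and sizes are illustrative choices of ours.
def back_project(sinogram, angles_deg, size):
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, ang in zip(sinogram, angles_deg):
        th = np.deg2rad(ang)
        # signed distance of every pixel from the detector axis at this angle
        t = (xs - c) * np.cos(th) + (ys - c) * np.sin(th)
        idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
        recon += proj[idx]          # smear the 1-D projection across the image
    return recon / len(angles_deg)

# Point phantom at the centre: every projection is a delta at the centre bin.
size = 33
angles = np.arange(0, 180, 10)
sino = np.zeros((len(angles), size))
sino[:, 16] = 1.0
recon = back_project(sino, angles, size)
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

The reconstruction peaks at the phantom's location but is smeared along every ray, which is exactly the blur that FBP's ramp filter and ART's iterative corrections are designed to remove.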
Background Estimation Using Principal Component Analysis Based on Limited Mem... (IJECEIAES)
Given a video of M frames of size h × w, the background components are the matrix elements that stay relatively constant over the M frames; in principal component analysis (PCA) these are captured by the "principal components". In video processing, background subtraction means removing the background component from the video, and PCA can be used to obtain that component. The method reshapes the 3-dimensional video (h × w × M) into a 2-dimensional matrix (N × M), where N = h × w is the length of each flattened frame. The principal components are the dominant eigenvectors, which form the basis of an eigenspace. Limited-memory block Krylov subspace optimization is then proposed to speed up the computation. The background estimate is obtained by projecting each input image (the first frame of each image sequence) onto the space spanned by the principal components. The procedure was run on the standard SBI (Scene Background Initialization) dataset of 8 videos, with resolutions between 146 × 150 and 352 × 240 and frame counts between 258 and 500. Performance is reported with 8 metrics; on average over the 8 videos, the percentage of error pixels is 0.24%, the percentage of clustered error pixels is 0.21%, the multiscale structural similarity index is 0.88 (out of a maximum of 1), and the running time is 61.68 seconds.
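The projection step can be sketched in a few lines. This is a simplified stand-in: the paper's limited-memory block Krylov solver is replaced by a plain SVD, and the synthetic "video" below (a static ramp scene with one transient object) is our own illustration.

```python
import numpy as np

# Toy PCA background estimation: columns of `frames` are flattened h*w
# frames (the N x M matrix described above). Plain SVD stands in for the
# paper's limited-memory block Krylov optimization.
h, w, M = 8, 8, 30
background = np.linspace(0.0, 1.0, h * w)        # static scene, flattened
frames = np.tile(background[:, None], (1, M))    # N x M data matrix
frames[:5, 10] += 0.5                            # transient "object" in frame 10

U, s, Vt = np.linalg.svd(frames, full_matrices=False)
basis = U[:, :1]                                 # dominant eigenvector(s)

# Background estimate = projection of the first frame onto the eigenspace.
est = basis @ (basis.T @ frames[:, 0])
err = np.abs(est - background).mean()            # mean absolute deviation
```

Because the static scene dominates the data matrix, the top singular vector is close to the background direction, and the projection of any frame onto it suppresses transient foreground content.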
Hybrid medical image compression method using quincunx wavelet and geometric ... (journalBEEI)
The purpose of this article is to find an efficient and optimal compression method that reduces file size while retaining enough information for good-quality processing and credible pathological reports, based on extracting the characteristic information contained in medical images. We propose a novel medical image compression scheme that combines a geometric active contour model with the quincunx wavelet transform. The method first localizes the region of interest, i.e., all parts containing pathology, using a level-set formulation for optimal reduction; the quincunx wavelet is then coupled with the set partitioning in hierarchical trees (SPIHT) algorithm. After testing several algorithms, we found that the proposed method gives satisfactory results. The experimental comparison is based on standard evaluation parameters.
This document summarizes research on using particle swarm optimization to reconstruct microwave images of two-dimensional dielectric scatterers. It formulates the inverse scattering problem as an optimization problem to find the dielectric parameter distribution that minimizes the difference between measured and simulated scattered field data. Numerical results show that a particle swarm optimization approach can accurately reconstruct the shape and dielectric properties of a test cylindrical scatterer, with lower background reconstruction error than a genetic algorithm approach. The research demonstrates that particle swarm optimization is a suitable technique for high-dimensional microwave imaging problems.
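The optimization loop at the heart of such an approach can be sketched generically. The code below is a standard particle swarm optimizer minimizing a toy objective that merely stands in for the measured-versus-simulated scattered-field misfit; the inertia and acceleration constants are conventional illustrative choices, not the paper's settings.

```python
import numpy as np

# Generic PSO sketch; `objective` is a stand-in for the scattered-field
# misfit over dielectric parameters (all constants are illustrative).
rng = np.random.default_rng(2)

def objective(x):
    return np.sum((x - 0.7) ** 2, axis=-1)   # toy misfit, minimum at 0.7

n, dim, iters = 20, 4, 100
pos = rng.uniform(-1, 1, (n, dim))           # particle positions
vel = np.zeros((n, dim))
pbest = pos.copy()                           # personal bests
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()   # global best

w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, acceleration weights
for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

best_err = objective(gbest)
```

In the inverse-scattering setting, each particle would encode a candidate dielectric parameter distribution and the objective would require a forward electromagnetic solve, which is where the bulk of the cost lies.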
A new hybrid method for the segmentation of the brain MRIs (sipij)
Magnetic resonance imaging is a modality with undeniable qualities of contrast and tissue characterization, making it valuable for the follow-up of various pathologies such as multiple sclerosis. In this work, a new hybrid segmentation method is presented and applied to brain MRIs. The extracted brain image is first preprocessed with the Non-Local Means filter. A theoretical approach is proposed, and the final section presents an experimental study of the model's behavior on textured images. To validate the model, different segmentations were performed on pathological brain MRIs, and the results were compared with those obtained by other models. The results show the effectiveness and robustness of the suggested approach.
This document presents a novel two-step approach for skull stripping of MRI brain images. The first step uses morphological reconstruction operations to generate a mask of the brain; the second applies thresholding to the mask to extract the brain. The method was tested on axial PD and FLAIR MRI images, achieving Jaccard and Dice similarity scores above 0.8 and 0.9 respectively, indicating that it efficiently extracts the brain from the skull.
Multiple Ant Colony Optimizations for Stereo Matching (CSCJournals)
The stereo matching problem, which finds correspondences between the left and right images, can be cast as a search problem. Matching all candidates along the same scan-line forms a two-dimensional (2D) optimization task, and 2D optimization is NP-hard. Stereo matching has two useful characteristics: first, the local optimization along each scan-line can be done concurrently; second, relationships among adjacent scan-lines can be exploited to improve matching correctness. Although many methods based on ground control points (GCPs, GGCPs) have been proposed, these so-called GCPs may not actually be reliable ground truth. Moreover, the relationship among adjacent scan-lines is a posteriori: it can only be discovered after every optimization has finished. Multiple Ant Colony Optimization (MACO) is efficient for large-scale problems and is well suited to the stereo matching task: in the proposed MACO construction, a master layer evaluates the sub-solutions and propagates their reliability after each local optimization finishes. The paper also discusses whether the ordering and uniqueness constraints should be enforced during the optimization, and proves that the proposed algorithm converges to the optimal matched pairs.
A novel approach for efficient skull stripping using morphological reconstruc... (eSAT Journals)
This document presents a novel two-step approach for skull stripping of MRI brain images. The first step uses morphological reconstruction operations including erosion, opening by reconstruction, dilation, and opening-closing by reconstruction to generate a primary segmentation mask. The second step applies thresholding to the primary mask to extract the final skull-stripped brain image. The method is tested on axial PD and FLAIR MRI images and achieves high Jaccard and Dice similarity scores compared to manually stripped images, demonstrating its effectiveness at skull stripping.
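The two-step mask-then-threshold idea can be illustrated on synthetic data. This is not the authors' pipeline: the sketch below uses a toy 2-D slice (a bright "skull" ring around a mid-intensity "brain" disc), hand-rolled 3×3 erosion/dilation instead of full morphological reconstruction, and intensity thresholds chosen for the synthetic image.

```python
import numpy as np

# Toy two-step "skull stripping": morphology builds a brain mask (step 1),
# then thresholding inside the mask extracts the brain (step 2).
def erode(m):
    p = np.pad(m, 1)                       # pad with False so borders erode
    out = np.ones(m.shape, dtype=bool)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

def dilate(m):
    p = np.pad(m, 1)
    out = np.zeros(m.shape, dtype=bool)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

# Synthetic slice: mid-intensity brain disc, bright thin skull ring.
y, x = np.mgrid[0:64, 0:64]
r = np.hypot(y - 32, x - 32)
img = np.where(r < 20, 120.0, 0.0)         # "brain"
img[(r >= 22) & (r < 26)] = 250.0          # "skull"

# Step 1: opening (erode then dilate) removes the thin ring, keeps the disc.
mask = img > 50
for _ in range(3):
    mask = erode(mask)
for _ in range(3):
    mask = dilate(mask)

# Step 2: threshold inside the mask to keep brain-range intensities only.
brain = np.where(mask & (img > 50) & (img < 200), img, 0.0)
```

The erosions destroy any structure thinner than the accumulated structuring element (the skull ring) while the subsequent dilations restore the thick brain region, which is the same intuition behind opening-by-reconstruction in the paper.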
The document describes a method for tracking objects of deformable shapes in images. It proposes representing the matching of a deformable template to an image as a minimum cost cyclic path in a product space of the template and image. An energy functional is introduced that consists of a data term favoring strong image gradients, a shape consistency term favoring similar tangent angles, and an elastic penalty. Optimization is performed using a minimum ratio cycle algorithm parallelized on GPUs. This provides efficient, pixel-accurate segmentation and correspondence between template and image curve. The method can be extended to 4D to segment and track multiple deformable anatomical structures in medical images.
Image segmentation by modified MAP-ML estimations (ijesajournal)
Although numerous algorithms exist for image segmentation, their execution time remains an issue. Image segmentation can be viewed as a label-relabeling problem under a probabilistic framework. To estimate the label configuration, an iterative optimization scheme alternately carries out maximum a posteriori (MAP) estimation and maximum likelihood (ML) estimation. In this paper the technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results are comparable with those of existing algorithms, while execution is faster, giving automatic segmentation without any human intervention. The resulting segmentations match image edges closely, in line with human perception.
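The alternating MAP/ML scheme described above can be sketched as a small iterated-conditional-modes loop. This is our own toy version on a synthetic two-class image, not the paper's implementation: the ML step re-estimates class means from the current labels, and the MAP step reassigns each pixel to minimize a data cost plus a Potts-style smoothness penalty over its 4-neighbourhood (the weight `beta` is an illustrative choice).

```python
import numpy as np

# Toy alternating ML (class means) / MAP (label update) segmentation.
rng = np.random.default_rng(1)
img = np.zeros((16, 16))
img[:, 8:] = 1.0                             # two-class ground truth
img = img + rng.normal(0.0, 0.2, img.shape)  # additive noise

labels = (img > img.mean()).astype(int)      # initial labeling
beta = 0.5                                   # smoothness weight (illustrative)
for _ in range(5):
    # ML step: re-estimate each class mean from the current labels.
    means = np.array([img[labels == k].mean() for k in (0, 1)])
    new = labels.copy()
    # MAP step: per-pixel cost = squared data misfit + Potts penalty.
    for yy in range(16):
        for xx in range(16):
            costs = []
            for k in (0, 1):
                data = (img[yy, xx] - means[k]) ** 2
                smooth = sum(labels[yy + dy, xx + dx] != k
                             for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                             if 0 <= yy + dy < 16 and 0 <= xx + dx < 16)
                costs.append(data + beta * smooth)
            new[yy, xx] = int(np.argmin(costs))
    labels = new

accuracy = (labels == (np.arange(16) >= 8)[None, :]).mean()
```

The smoothness term is what removes isolated noise-induced label flips that a pure per-pixel ML classification would leave behind.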
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domain (john236zaq)
This document summarizes a research paper that presents a novel strategy for high-fidelity image restoration. It establishes a joint statistical model in an adaptive hybrid space-transform domain to characterize both local smoothness and nonlocal self-similarity of natural images. A new minimization functional is formulated using this joint statistical model within a regularization framework. A Split Bregman-based algorithm is developed to efficiently solve the severely underdetermined inverse problem and recover images from degradation while preserving details. Experiments on image inpainting, deblurring and denoising demonstrate the effectiveness of the proposed approach.
Image Denoising Based On Sparse Representation In A Probabilistic Framework (CSCJournals)
Image denoising is an interesting inverse problem: given a noisy image, find the clean one. In this paper, we propose a novel image denoising technique based on the generalized-k density model, as an extension to the probabilistic framework for solving the image denoising problem. The approach uses an overcomplete basis dictionary to sparsely represent the image of interest; the dictionary is learned with ICA based on the generalized-k density model and is then used to denoise speech signals and images. Experimental results confirm the effectiveness of the proposed method, and comparisons with other denoising methods show that it produces the best denoising effect.
This document discusses various methods for interpolating geofield parameters to model the surface of geofields. It analyzes methods like algebraic polynomials, filters, splines, kriging and neural networks. It then focuses on using neural networks to identify parameters for a mathematical model of a geofield by training the network parameters using experimental statistical data. As a result, it finds parameters for a regression equation that satisfy the training data. The application of neural networks is shown to have advantages over traditional statistical methods for modeling geofields, especially when data is limited in the early stages.
Spectral approach to image projection with cubic b spline interpolation (iaemedu)
This document summarizes a research paper that proposes a new method for image projection using spectral interpolation with cubic B-spline interpolation. The key points are:
1) Existing super resolution methods based on Fourier transforms have limitations in providing high visual quality when scaling images.
2) The proposed method first transforms image frames into the frequency domain using FFT. It then interpolates in the spectral domain using cubic B-spline interpolation to project the images onto a higher resolution grid.
3) Experimental results show the proposed method provides higher visual quality projections compared to conventional Fourier-based approaches, while also having faster computation times.
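The Fourier-domain baseline that such spectral methods build on can be sketched with zero-padding interpolation. This is a generic illustration, not the paper's method: the cubic B-spline refinement itself is not reproduced, only the step of embedding the FFT spectrum in a larger grid to project the image onto a higher-resolution lattice.

```python
import numpy as np

# Frequency-domain upscaling by spectral zero-padding: embed the centred
# spectrum of an n x n image in an (n*factor) x (n*factor) grid of zeros,
# then inverse-transform. Exact for band-limited inputs.
def fft_upscale(img, factor):
    n = img.shape[0]
    F = np.fft.fftshift(np.fft.fft2(img))
    m = n * factor
    P = np.zeros((m, m), dtype=complex)
    lo = (m - n) // 2
    P[lo:lo + n, lo:lo + n] = F                 # pad spectrum with zeros
    # factor**2 compensates for the larger ifft2 normalization
    return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * factor ** 2

# Band-limited test image: one sine cycle replicated down the rows.
x = np.linspace(0, 2 * np.pi, 16, endpoint=False)
img = np.sin(x)[None, :] * np.ones((16, 1))
up = fft_upscale(img, 2)
```

For a band-limited signal like this one, the upscaled image agrees exactly with the underlying sinusoid at the finer grid, which is the property spline-refined spectral methods try to approximate for general images at lower cost.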
Abstract: Many applications such as robot navigation, defense, medical imaging, and remote sensing perform processing tasks that become easier when all objects from different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The method derives the large- and small-scale intensity variations from the source images, using guided filtering for this extraction; a Gaussian and Laplacian pyramidal approach is then used to fuse the resulting layers. Experimental results demonstrate that the proposed method obtains better fusion performance across all sets of images, clearly indicating the feasibility of the proposed approach.
Performance Comparison of Image Retrieval Using Fractional Coefficients of Tr... (CSCJournals)
The thirst for better and faster retrieval techniques has always fuelled research in content-based image retrieval (CBIR). This paper presents innovative CBIR techniques whose feature vectors are fractional coefficients of transformed images, using the Discrete Cosine, Walsh, Haar, and Kekre transforms. The energy compaction of these transforms into the low-order coefficients is exploited to greatly reduce the per-image feature vector size. Feature vectors are extracted from the transformed image in fifteen ways: first using all coefficients, and then using fourteen reduced coefficient sets (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012%, and 0.006% of the complete transform). The four transforms are applied to the grayscale equivalents and the colour components of the images to extract Gray and RGB feature sets, respectively. Using the fourteen reduced coefficient sets instead of all coefficients yields better performance at lower computational cost. The proposed CBIR techniques are implemented on a database of 1000 images spread across 11 categories; for each technique, 55 queries (5 per category) are fired on the database, and the net average precision and recall are computed for all feature sets per transform. The results show performance improvement (higher precision and recall) with fractional coefficients compared to the complete transform, at reduced computation and therefore faster retrieval.
Finally, the Kekre transform surpasses all the other transforms discussed, with the highest precision and recall for fractional coefficients (6.25% and 3.125% of all coefficients) and computation lowered by 94.08% compared to DCT.
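The fractional-coefficient idea is easy to demonstrate. The sketch below is our own illustration using a hand-rolled orthonormal DCT-II (one of the four transforms compared): the feature vector keeps only the top-left k × k block of the transformed image, i.e. the energy-compacted low frequencies, so a small fraction of the coefficients carries almost all of the energy.

```python
import numpy as np

# Orthonormal DCT-II matrix (rows are basis vectors).
def dct_matrix(n):
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

# Keep the top-left k x k block of the 2-D DCT as the feature vector,
# where k*k ~ frac of the n*n coefficients (e.g. frac=0.0625 -> 6.25%).
def fractional_features(img, frac):
    n = img.shape[0]
    C = dct_matrix(n)
    T = C @ img @ C.T                       # 2-D DCT of the image
    k = max(1, int(round(n * np.sqrt(frac))))
    return T[:k, :k].ravel()

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth test image
f_full = fractional_features(img, 1.0)               # all 64 coefficients
f_625 = fractional_features(img, 0.0625)             # 4 coefficients (6.25%)
```

For this smooth image, the 4-coefficient feature vector retains well over 95% of the full transform's energy, which is the compaction effect that lets the reduced feature sets match or beat the full ones at a fraction of the comparison cost.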
The Fourier transform for satellite image compression (csandit)
The document presents a new method for compressing satellite images using the Fourier transform and scalar quantization. The method involves taking the Fourier transform of the image, scalar quantizing the amplitude values, and encoding the results with run-length encoding and Huffman coding. Testing on satellite images and Lena showed compression ratios over 65% while maintaining good image quality after reconstruction.
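A toy version of that pipeline is shown below, on a synthetic smooth image of our own choosing. The run-length and Huffman stages are only approximated by counting the zero coefficients that those entropy coders would compress away, and the quantizer step size is an illustrative choice, not the paper's.

```python
import numpy as np

# FFT -> scalar quantization -> sparsity check -> reconstruction.
img = np.add.outer(np.sin(np.linspace(0, np.pi, 32)),
                   np.cos(np.linspace(0, np.pi, 32)))   # smooth test image

F = np.fft.fft2(img)
step = np.abs(F).max() / 64.0          # hypothetical uniform quantizer step
q = np.round(F / step)                 # scalar quantization (real & imag parts)
zero_frac = float(np.mean(q == 0))     # zeros are what RLE/Huffman exploit
recon = np.real(np.fft.ifft2(q * step))
mse = float(np.mean((img - recon) ** 2))
```

For smooth content most high-frequency coefficients quantize to zero, so the quantized spectrum is highly compressible while the dequantized reconstruction stays close to the original.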
Lec10: Medical Image Segmentation as an Energy Minimization Problem (Ulaş Bağcı)
This lecture covers medical image segmentation as an energy minimization problem, within a broader series spanning enhancement, noise reduction and signal processing, medical image registration, segmentation, visualization, machine learning and deep learning in medical imaging (including radiology), and shape modeling/analysis. It reviews fuzzy connectivity (FC): affinity functions, absolute FC, relative FC (and iterative relative FC), and successful applications such as segmentation of airways and airway walls using an RFC-based method. It then introduces the energy functional with its data and smoothness terms, and graph-cut optimization via min-cut/max-flow, with applications to radiology images.
Brain tumor segmentation using asymmetry based histogram thresholding and k m... (eSAT Publishing House)
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method involves several steps: preprocessing the MRI image with sharpening and median filters; computing histograms of the left and right halves of the image; calculating a threshold from the difference between the left and right histograms; applying thresholding and morphological operations to extract the tumor region; and applying k-means clustering, using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images, and the results show that the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
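The asymmetry idea can be demonstrated on a synthetic slice. This sketch is ours, not the paper's code: a bright "tumour" is placed in the right half only, so subtracting the left-half histogram from the right-half histogram exposes the intensity range unique to one side, which then serves as the threshold (bin width and intensities are illustrative).

```python
import numpy as np

# Asymmetry-based thresholding: histogram left vs right halves, take the
# difference, and threshold at the most one-sided intensity bin.
img = np.full((64, 64), 80.0)          # symmetric "healthy" tissue
img[20:36, 44:60] = 200.0              # bright "tumour" in the right half

left, right = img[:, :32], img[:, 32:]
bins = np.arange(0, 257, 8)            # intensity bin edges, width 8
h_l, _ = np.histogram(left, bins)
h_r, _ = np.histogram(right, bins)

diff = h_r - h_l                       # counts unique to the right half
t_bin = np.argmax(diff)                # most asymmetric intensity bin
threshold = bins[t_bin]
tumour_mask = img >= threshold
```

On real MRIs the two halves are only approximately symmetric, which is why the paper follows this step with morphological clean-up and k-means refinement.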
Modeling and experimental analysis of variable speed three phase squirrel 2 (IAEME Publication)
This document summarizes a research paper that proposes a novel three-phase squirrel cage induction generator configuration called two-series-connected-and-one isolated (TSCAOI) that can generate single-phase electricity at constant frequency from variable rotor speeds without an intermediate converter. It presents the generator system, which uses a three-phase induction machine with one isolated winding for excitation and the other two windings connected in series as the power winding. An experimental analysis and mathematical model are also developed to accurately predict the generator's behavior.
This document summarizes and evaluates techniques for identifying adversary attacks in wireless sensor networks. It begins by describing common types of attacks and issues with cryptographic identification methods. It then evaluates existing localization techniques like Received Signal Strength (RSS) and spatial correlation analysis. Specifically, it proposes the Generalized Model for Attack Detection (GMFAD) which uses Partitioning Around Medoids (PaM) clustering on RSS readings to detect multiple attackers. It also presents the Coherent Detection and Localization Model (CDAL-M) which integrates PaM with localization algorithms like RADAR and Bayesian networks to determine attacker locations. The document analyzes these techniques' effectiveness at detecting and localizing multiple adversary attackers in wireless sensor networks.
dFuse: An Optimized Compression Algorithm for DICOM-Format Image Archive (CSCJournals)
Medical images reveal details of the human body for health-science or remedial purposes. DICOM is structured as a multi-part document to facilitate extension, and DICOM-defined information objects cover not only images but also patients, studies, reports, and other data groupings. This rich detail makes DICOM files large, so transferring or communicating them takes considerable time; compressing the files before transfer addresses this. Efficient compression solutions exist and are becoming more critical with the recent intensive growth of data and medical imaging, and recovering the original image at a smaller size requires an effective compression algorithm. Existing algorithms such as DCT, Haar, and Daubechies have their roots in cosine and wavelet transforms. In this paper, we propose a new compression algorithm called "dFuse", which uses a cosine-based three-dimensional transform to compress DICOM files. Efficiency is checked with four parameters: i) file size, ii) PSNR, iii) compression percentage, and iv) compression ratio. The experimental results show that the proposed algorithm works well for compressing medical images.
Google X is a secretive division of Google dedicated to developing major technological advancements. It is located in Mountain View, CA and oversees projects such as self-driving cars, solar-powered drones, Google Glass, smart contact lenses that monitor glucose levels, Project Loon which provides internet access via high-altitude balloons, and Project Tango smartphones that can map 3D environments. The life sciences division of Google X also conducts research on life sciences topics.
Smart Fire Detection System using Image ProcessingIJSRD
Fire is a serious hazard that leads to economic and environmental losses. Flame-edge determination is the process of identifying the boundary between regions where a thermo-chemical reaction is occurring and those where it is not. It is a precursor to image-based fire monitoring, including fire detection, fire evaluation, and the determination of fire and flame parameters. Several conventional edge-detection techniques have been tried for finding flame edges, but the results have been disappointing. Some research works related to fire and flame-edge detection have been reported for various applications; however, those methods do not emphasize the continuity and clarity of the fire and flame edges. To overcome these issues, candidate fire regions are first identified using a background model and a colour model of fire. The proposed system was successfully applied to various tasks in real-world environments and effectively distinguished fire from fire-coloured objects. Experimental results show that the proposed method outperforms other approaches in both flame-target enhancement and background detail.
The document discusses the history and growth of internet usage worldwide. It notes that in 1995 less than 1% of the world's population had internet access, while today around 40% do. The number of internet users increased tenfold from 1999 to 2013. The first billion internet users was reached in 2005, the second billion in 2010, and the third billion in 2014. The project aims to continue expanding internet access to more parts of the world through the use of high-altitude balloons as part of Google's Project Loon.
Project Loon is a network of balloons travelling in the stratosphere and designed by Google to provide internet connectivity worldwide. The balloons float 20 km above the Earth's surface, where winds are steady at 5-20 mph, and each balloon can rise or descend to different wind layers to be steered in desired directions. The balloons are composed of polyethylene envelopes that are inflated to 15m x 12m sizes, solar panels that provide up to 100W of power, and electronic equipment boxes. Users on the ground connect to the balloon network using special antennas that bounce signals between balloons and then down to the global internet. Google aims to use this technology to connect the two-thirds of the world's population that currently lacks internet access.
The document describes a method for tracking objects of deformable shapes in images. It proposes representing the matching of a deformable template to an image as a minimum cost cyclic path in a product space of the template and image. An energy functional is introduced that consists of a data term favoring strong image gradients, a shape consistency term favoring similar tangent angles, and an elastic penalty. Optimization is performed using a minimum ratio cycle algorithm parallelized on GPUs. This provides efficient, pixel-accurate segmentation and correspondence between template and image curve. The method can be extended to 4D to segment and track multiple deformable anatomical structures in medical images.
Image segmentation by modified map ml estimationsijesajournal
Though numerous algorithms exist to perform image segmentation, several issues relate to the execution time of these algorithms. Image segmentation can be posed as a label-relabelling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented that alternately carries out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper, this technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms. The modified algorithm executes faster than the existing one, giving automatic segmentation without any human intervention, and its results match image edges closely, in line with human perception.
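The alternating MAP/ML scheme described above can be sketched as an ICM-style loop in NumPy. This is a minimal illustration under our own assumptions (a Gaussian data term per class and a Potts smoothness term over 4-neighbours; the function name and parameters are ours, not the paper's):

```python
import numpy as np

def segment_map_ml(img, k=2, beta=1.0, iters=10):
    """Alternate ML estimation of class means with MAP label updates (ICM-style)."""
    # Initial labelling: quantile-based split of the intensity range.
    cuts = np.quantile(img, np.linspace(0, 1, k + 1)[1:-1])
    labels = np.digitize(img, cuts)
    h, w = img.shape
    for _ in range(iters):
        # ML step: re-estimate each class mean from the current labelling.
        means = np.array([img[labels == c].mean() if np.any(labels == c)
                          else img.mean() for c in range(k)])
        # MAP step: data term plus Potts penalty counting disagreeing neighbours.
        padded = np.pad(labels, 1, mode='edge')
        cost = np.empty((k, h, w))
        for c in range(k):
            data = (img - means[c]) ** 2
            disagree = ((padded[:-2, 1:-1] != c).astype(float) +
                        (padded[2:, 1:-1] != c) +
                        (padded[1:-1, :-2] != c) +
                        (padded[1:-1, 2:] != c))
            cost[c] = data + beta * disagree
        new_labels = cost.argmin(axis=0)
        if np.array_equal(new_labels, labels):
            break  # converged: no label changed
        labels = new_labels
    return labels
```

On a two-region image with moderate noise, the loop converges in a few iterations; ICM only guarantees a local optimum, which is one reason execution-time trade-offs matter.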
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domainjohn236zaq
This document summarizes a research paper that presents a novel strategy for high-fidelity image restoration. It establishes a joint statistical model in an adaptive hybrid space-transform domain to characterize both local smoothness and nonlocal self-similarity of natural images. A new minimization functional is formulated using this joint statistical model within a regularization framework. A Split Bregman-based algorithm is developed to efficiently solve the severely underdetermined inverse problem and recover images from degradation while preserving details. Experiments on image inpainting, deblurring and denoising demonstrate the effectiveness of the proposed approach.
Image Denoising Based On Sparse Representation In A Probabilistic FrameworkCSCJournals
Image denoising is an interesting inverse problem: by denoising we mean finding a clean image, given a noisy one. In this paper, we propose a novel image denoising technique based on the generalized-k density model as an extension of the probabilistic framework for solving the image denoising problem. The approach uses an overcomplete basis dictionary to sparsely represent the image of interest. To learn the overcomplete basis, we use ICA based on the generalized-k density model. The learned dictionary is then used for denoising speech signals and other images. Experimental results confirm the effectiveness of the proposed method for image denoising, and a comparison with other denoising methods shows that it produces the best denoising effect.
This document discusses various methods for interpolating geofield parameters to model the surface of geofields. It analyzes methods like algebraic polynomials, filters, splines, kriging and neural networks. It then focuses on using neural networks to identify parameters for a mathematical model of a geofield by training the network parameters using experimental statistical data. As a result, it finds parameters for a regression equation that satisfy the training data. The application of neural networks is shown to have advantages over traditional statistical methods for modeling geofields, especially when data is limited in the early stages.
Spectral approach to image projection with cubic b spline interpolationiaemedu
This document summarizes a research paper that proposes a new method for image projection using spectral interpolation with cubic B-spline interpolation. The key points are:
1) Existing super resolution methods based on Fourier transforms have limitations in providing high visual quality when scaling images.
2) The proposed method first transforms image frames into the frequency domain using FFT. It then interpolates in the spectral domain using cubic B-spline interpolation to project the images onto a higher resolution grid.
3) Experimental results show the proposed method provides higher visual quality projections compared to conventional Fourier-based approaches, while also having faster computation times.
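The core idea of projecting onto a higher-resolution grid through the frequency domain can be illustrated with plain FFT zero-padding. Note this is a simplification: the paper interpolates the spectrum with cubic B-splines, which this sketch does not implement, and the function name is ours:

```python
import numpy as np

def fft_upsample(img, factor=2):
    """Upsample by zero-padding the centred spectrum, then inverse FFT."""
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))      # centre the DC component
    H, W = h * factor, w * factor
    padded = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    padded[top:top + h, left:left + w] = spec      # embed spectrum in larger grid
    # Scale so mean intensity is preserved after the larger inverse FFT.
    up = np.fft.ifft2(np.fft.ifftshift(padded)) * factor ** 2
    return up.real
```

Zero-padding is equivalent to ideal sinc interpolation; spline interpolation of the spectrum, as proposed in the paper, trades some of that idealism for smoother behaviour and speed.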
Abstract: Many applications such as robot navigation, defense, medical imaging and remote sensing perform various processing tasks, which can be carried out more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, i.e. large- and small-scale variations, from the source images, employing guided filtering for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance across all sets of images, and clearly indicate the feasibility of the proposed approach.
Performance Comparison of Image Retrieval Using Fractional Coefficients of Tr...CSCJournals
The thirst for better and faster retrieval techniques has always fuelled research in content-based image retrieval (CBIR). The paper presents innovative CBIR techniques based on feature vectors formed from fractional coefficients of transformed images, using the Discrete Cosine, Walsh, Haar and Kekre's transforms. Here the energy compaction of these transforms into the lower-order coefficients is exploited to greatly reduce the feature-vector size per image by taking fractional coefficients of the transformed image. Feature vectors are extracted from the transformed image in several ways: first by considering all the coefficients, and then via fourteen reduced coefficient sets (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.06% of the complete transformed image). The four transforms are applied to the grayscale equivalents and to the colour components of the images to extract Gray and RGB feature sets respectively. Instead of using all the coefficients of the transformed images as the feature vector for image retrieval, these fourteen reduced coefficient sets for Gray as well as RGB feature vectors are used, resulting in better performance and lower computation. The proposed CBIR techniques are implemented on a database of 1000 images spread across 11 categories. For each proposed technique, 55 queries (5 per category) are fired on the database, and the net average precision and recall are computed for all feature sets per transform. The results show performance improvement (higher precision and recall) with fractional coefficients compared to the complete transform of the image, at reduced computation, resulting in faster retrieval.
Finally, Kekre's transform surpasses all the other discussed transforms, with the highest precision and recall for fractional coefficients (6.25% and 3.125% of all coefficients) and computation lowered by 94.08% compared to DCT.
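Extracting a fractional-coefficient feature vector amounts to keeping only the low-frequency corner of the transformed image. A sketch with an orthonormal DCT-II built in NumPy (helper names are ours, a square image is assumed, and the paper's Walsh, Haar and Kekre's transforms are not shown):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)          # DC row scaled for orthonormality
    return m * np.sqrt(2 / n)

def fractional_dct_features(img, fraction=0.0625):
    """Keep the top-left (low-frequency) corner of the 2-D DCT as a feature vector."""
    n = img.shape[0]
    d = dct_matrix(n)
    coeffs = d @ img @ d.T           # 2-D DCT: transform rows and columns
    # Corner side length holding approximately `fraction` of all coefficients.
    side = max(1, int(round(n * np.sqrt(fraction))))
    return coeffs[:side, :side].ravel()
```

For a 128x128 image, `fraction=0.0625` keeps a 32x32 corner, i.e. 1024 values instead of 16384, which is exactly the kind of feature-vector reduction the paper exploits.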
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
The fourier transform for satellite image compressioncsandit
The document presents a new method for compressing satellite images using the Fourier transform and scalar quantization. The method involves taking the Fourier transform of the image, scalar quantizing the amplitude values, and encoding the results with run-length encoding and Huffman coding. Testing on satellite images and Lena showed compression ratios over 65% while maintaining good image quality after reconstruction.
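A toy version of this pipeline might look as follows. The run-length stage is shown, while the Huffman entropy-coding stage and the paper's exact quantizer are omitted; phases are kept unquantized for brevity, and all names are illustrative:

```python
import numpy as np

def run_length_encode(values):
    """Encode a 1-D sequence as [value, count] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def compress_spectrum(img, step=50.0):
    """FFT the image, scalar-quantize magnitudes, run-length encode them."""
    spec = np.fft.fft2(img)
    mags = np.round(np.abs(spec) / step).astype(int)   # scalar quantization
    phases = np.angle(spec)                            # kept exact in this toy
    return run_length_encode(mags.ravel()), phases

def decompress_spectrum(rle, phases, step=50.0):
    """Rebuild the spectrum from quantized magnitudes and invert the FFT."""
    mags = np.concatenate([[v] * c for v, c in rle]).astype(float) * step
    spec = mags.reshape(phases.shape) * np.exp(1j * phases)
    return np.fft.ifft2(spec).real
```

A larger quantization step zeroes more small-magnitude coefficients, producing longer zero runs (hence better compression) at the cost of reconstruction quality.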
Lec10: Medical Image Segmentation as an Energy Minimization ProblemUlaş Bağcı
• Enhancement, Noise Reduction, and Signal Processing
• Medical Image Registration
• Medical Image Segmentation
• Medical Image Visualization
• Machine Learning in Medical Imaging
• Shape Modeling/Analysis of Medical Images
• Deep Learning in Radiology
• Fuzzy Connectivity (FC) – affinity functions: absolute FC, relative FC (and iterative relative FC); successful example applications of FC in medical imaging, e.g. segmentation of airway and airway walls using the RFC-based method
• Energy functional – data and smoothness terms
• Graph Cut – min cut / max flow
• Applications in radiology images
Brain tumor segmentation using asymmetry based histogram thresholding and k m...eSAT Publishing House
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method proceeds in several steps: 1) preprocessing the MRI image using sharpening and median filters, 2) computing histograms of the left and right halves of the image, 3) calculating a threshold value from the difference between the left and right histograms, 4) applying thresholding and morphological operations to extract the tumor region, and 5) applying k-means clustering and using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images, and the results show the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
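The histogram-difference thresholding idea can be sketched as below. This is a simplified illustration that assumes the tumour is hyperintense and appears in only one hemisphere, and it skips the paper's filtering, morphology, and k-means refinement steps:

```python
import numpy as np

def asymmetry_threshold(img, bins=64):
    """Threshold from the histogram difference between the left and right halves.

    Intensities over-represented in one half (e.g. a tumour) produce a large
    absolute difference; the corresponding bin edge is used as the threshold.
    """
    w = img.shape[1] // 2
    lo, hi = img.min(), img.max()
    h_left, edges = np.histogram(img[:, :w], bins=bins, range=(lo, hi))
    h_right, _ = np.histogram(img[:, w:], bins=bins, range=(lo, hi))
    diff = np.abs(h_left - h_right)
    # Assume the tumour is hyperintense: search only above the mean intensity.
    diff[:np.searchsorted(edges, img.mean())] = 0
    t = edges[diff.argmax()]
    return img >= t
```

On a symmetric brain, left and right histograms nearly cancel, so the largest surviving difference points at the asymmetric (tumour) intensity range.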
Modeling and experimental analysis of variable speed three phase squirrel 2IAEME Publication
This document summarizes a research paper that proposes a novel three-phase squirrel cage induction generator configuration called two-series-connected-and-one isolated (TSCAOI) that can generate single-phase electricity at constant frequency from variable rotor speeds without an intermediate converter. It presents the generator system, which uses a three-phase induction machine with one isolated winding for excitation and the other two windings connected in series as the power winding. An experimental analysis and mathematical model are also developed to accurately predict the generator's behavior.
This document summarizes and evaluates techniques for identifying adversary attacks in wireless sensor networks. It begins by describing common types of attacks and issues with cryptographic identification methods. It then evaluates existing localization techniques like Received Signal Strength (RSS) and spatial correlation analysis. Specifically, it proposes the Generalized Model for Attack Detection (GMFAD) which uses Partitioning Around Medoids (PaM) clustering on RSS readings to detect multiple attackers. It also presents the Coherent Detection and Localization Model (CDAL-M) which integrates PaM with localization algorithms like RADAR and Bayesian networks to determine attacker locations. The document analyzes these techniques' effectiveness at detecting and localizing multiple adversary attackers in wireless sensor networks.
Mr. Akshay Ratnakar Pawar has been working at Siddhivinayak Agency, an authorized Exide Care dealer, for the past 3 months. During this time, he has demonstrated excellent experience and skills in accounting. The letter certifies that Mr. Pawar handled accounting duties honestly and sincerely at the car shop.
Your presentation will summarize a business plan for launching an Internet service using Google's Project Loon balloons. The proposed business model involves households subscribing to the Internet service. The primary targeted market is rural areas lacking traditional broadband infrastructure. Specifically, you will launch in a region of South America to test reliability and cost-effectiveness over varied terrain. Pricing will be competitive with other rural Internet options. Your analysis finds that Loon can exploit new markets and potentially capture shares in underserved areas. You will recommend leveraging Google's financial and technical resources to foster Loon's development and penetration of additional markets over time through continuous technological and service improvements.
Project Loon is a Google X project that aims to provide internet access to rural and remote areas using high-altitude balloons placed in the stratosphere. The balloons float in the stratosphere and are maneuvered to different wind layers to remain over desired locations. People in remote areas can connect to the balloon network using special antennas. The signal hops between balloons and then connects to the global internet via base stations. The technology is still in development but could help bring affordable internet access to more parts of the world.
Project Loon is a Google project that aims to provide internet access to rural and remote areas using high-altitude balloons placed in the stratosphere. The balloons create an aerial wireless network with speeds of up to 3G. They are manoeuvred by adjusting their altitude to float on wind currents identified using NOAA wind data. Users connect to the balloon network using a special antenna, and the signal travels between balloons and to ground stations connected to ISPs. If successful, this technology could provide internet access without expensive fiber cable infrastructure.
This presentation covers Paytm's business model, revenue model, marketing campaigns, services offered, supply chain, and web technologies. Through this presentation you will get to know everything about Paytm.
This document discusses micelles and critical micelle concentration (CMC). It defines micelles as aggregates of surfactant molecules that form in solution above the CMC. The CMC is the minimum concentration of surfactant needed for spontaneous micelle formation. Above the CMC, additional surfactant molecules do not affect properties but may change micelle size or shape. The document outlines factors that influence the CMC like temperature, electrolytes, and hydrocarbon chain length. Micelles can solubilize hydrophobic compounds in their cores and increase drug solubility. The formation of micelles allows modification of drug release profiles and improved drug stability.
Google's Project Loon aims to provide internet access to rural and remote areas using high-altitude balloons. Balloons float in the stratosphere, carrying communications equipment and solar panels. They are moved using winds at different altitudes to position them over desired locations. People on the ground connect to the balloon network using special antennas. Signals hop between balloons and back to the ground, providing internet speeds comparable to 3G. The balloons are designed to operate autonomously for months at a time in the stratosphere's harsh conditions.
Project Loon is a network of balloons traveling in the stratosphere designed to connect people in rural areas. An experimental pilot launched 30 balloons over New Zealand in 2013 to test the technology. The balloons float 20 miles above the Earth, using software to ride wind currents to positions that form a communications network. Each balloon provides internet coverage to an area of about 40 square kilometers using solar power and bouncing signals between balloons. The goal is to increase internet access for remote areas around the world.
Project Loon is Google's initiative to provide internet access to rural and remote areas using high-altitude balloons. The balloons float in the stratosphere and work together to connect people on the ground. In 2013, Google launched a pilot test involving 30 balloons over New Zealand that successfully provided internet access to 50 test users. Project Loon aims to continue expanding its pilot program to create continuous connectivity around the 40th parallel south latitude using balloons and renewable energy sources.
The document discusses three common image formats - JPG, GIF, and PNG - and provides guidance on when to use each one. JPG is best for photographs and when file size needs to be small, as it is lossy but supports 24-bit color and continuous tones. GIF supports transparency and animations but is limited to 256 colors, so it works well for simple icons and logos. PNG has lossless compression, small file sizes, and transparency support, making it suitable for complex graphics like logos with drop shadows. The document emphasizes choosing formats wisely based on each image's specific needs and testing images across devices.
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...IRJET Journal
This document proposes a novel blind super resolution method to improve the spatial resolution of real-life video sequences. The key aspects of the proposed method are:
1) It estimates blur without knowing the point spread function or noise statistics using a non-uniform interpolation super resolution method and multi-scale processing.
2) It uses a cost function with fidelity and regularization terms of a Huber-Markov random field to preserve edges and fine details in the reconstructed high resolution frames.
3) It performs masking to suppress artifacts from inaccurate motions, adaptively weighting the fidelity term at each iteration for faster convergence.
The method is tested on real-life videos with complex motions, objects, and brightness changes.
Survey on Single image Super Resolution TechniquesIOSR Journals
Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as ‘low-resolution’ images, to overcome the limitations and/or ill-posed conditions of the image acquisition process and facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight the future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm – iterative back projection. We critique these methods and identify areas which promise performance improvements. Future directions for super-resolution algorithms are discussed, and finally results of available methods are given. Keywords: Super-resolution, POCS, IBP, Canny Edge Detection
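Iterative back projection, the dominant algorithm named above, repeatedly simulates the low-resolution acquisition from the current high-resolution estimate and back-projects the residual. A minimal single-image sketch, assuming a simple box-average acquisition model and nearest-neighbour back-projection (a real SR setting would use several shifted frames and a point spread function; all names are ours):

```python
import numpy as np

def downsample(img, factor):
    """Simulate acquisition: box-average then decimate (dims must divide evenly)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour back-projection of a low-resolution residual."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def iterative_back_projection(lr, factor=2, iters=20, step=1.0):
    """Refine an HR estimate until its simulated LR version matches the input."""
    hr = upsample(lr, factor)                  # initial high-resolution guess
    for _ in range(iters):
        err = lr - downsample(hr, factor)      # residual in LR space
        hr = hr + step * upsample(err, factor) # back-project the residual
    return hr
```

The fixed point satisfies the data-consistency constraint `downsample(hr) == lr`; which of the many consistent HR images is reached depends on the back-projection operator, which is exactly where IBP's well-known non-uniqueness comes from.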
Repairing and Inpainting Damaged Images using Adaptive Diffusion TechniqueIJMTST Journal
Learning good image priors is of utmost importance for the study of vision, computer vision and image
processing applications. Learning priors and optimizing over whole images can lead to tremendous
computational challenges. In contrast, when we work with small image patches, it is possible to learn priors
and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood
to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full
image? Can we learn better patch priors? In this work we answer these questions. We compare the
likelihood of several patch models and show that priors that give high likelihood to data perform better in
patch restoration. Motivated by this result, we propose a generic framework which allows for whole image
restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated.
We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole
images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of
natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other
generic prior methods for image denoising, deblurring and inpainting.
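For a single Gaussian prior, the patch-based MAP idea reduces to a Wiener-style shrinkage of each patch. A sketch under strong simplifying assumptions (one Gaussian fitted to the noisy patches instead of a learned mixture, non-overlapping patches, known noise variance; the function name is ours):

```python
import numpy as np

def map_denoise_patches(noisy, patch=4, noise_var=0.01):
    """MAP-denoise non-overlapping patches under a single Gaussian prior."""
    h, w = noisy.shape
    ps = [noisy[i:i + patch, j:j + patch].ravel()
          for i in range(0, h, patch) for j in range(0, w, patch)]
    P = np.array(ps)                       # one row per patch
    mu, cov = P.mean(axis=0), np.cov(P.T)  # Gaussian prior fit to the patches
    # MAP/Wiener estimate: x = mu + C (C + s^2 I)^-1 (y - mu)
    gain = cov @ np.linalg.inv(cov + noise_var * np.eye(cov.shape[0]))
    den = mu + (P - mu) @ gain.T
    # Reassemble the denoised patches into an image.
    out = np.empty_like(noisy)
    idx = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = den[idx].reshape(patch, patch)
            idx += 1
    return out
```

Replacing the single Gaussian with a mixture (choosing the most responsible component per patch before applying its Wiener gain) is, in rough outline, how a GMM prior of the kind described above is used in practice.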
MULTIFOCUS IMAGE FUSION USING MULTIRESOLUTION APPROACH WITH BILATERAL GRADIEN...cscpconf
The fusion of two or more images is required when images are captured using different sensors, different modalities or different camera settings, to produce an image that is more suitable for computer processing and human visual perception. The optical lenses in cameras have a limited depth of focus, so it is not possible to acquire an image that contains all objects in focus. In this case, a multifocus image fusion technique is needed to create a single image in which all objects are in focus, by combining the relevant information in the two or more images. As sharp images contain more information than blurred images, image sharpness is taken as one of the relevant cues in framing the fusion rule. Many existing algorithms use contrast or high local energy as a measure of local sharpness; in practice, particularly in multimodal image fusion, this assumption is not true. In this paper we propose a method that combines a multiresolution transform with a local phase-coherence measure to assess the sharpness of the images. The performance of the fusion process was evaluated with mutual information, edge association and spatial frequency as quality metrics, and compared with the Laplacian pyramid, DWT (Discrete Wavelet Transform) and bilateral gradient-based sharpness criterion methods. The results showed that the proposed algorithm performs better than the existing ones.
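A common baseline against which such fusion rules are compared picks, per block, the source image with the higher local variance. This sketch uses that much simpler criterion, not the paper's multiresolution phase-coherence measure, and assumes the two inputs are registered:

```python
import numpy as np

def fuse_by_block_variance(a, b, block=8):
    """Fuse two registered images by choosing the sharper (higher-variance) block."""
    out = np.empty_like(a, dtype=float)
    h, w = a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            pa = a[i:i + block, j:j + block]
            pb = b[i:i + block, j:j + block]
            # Higher variance is used here as a crude proxy for sharpness.
            out[i:i + block, j:j + block] = pa if pa.var() >= pb.var() else pb
    return out
```

The hard per-block selection produces visible seams at block boundaries; transform-domain methods like those compared in the paper avoid this by fusing coefficients rather than pixels.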
An adaptive-model-for-blind-image-restoration-using-bayesian-approachCemal Ardil
This document describes an adaptive model for blind image restoration using a Bayesian approach. It discusses image degradation and restoration techniques such as inverse filtering, Kalman filtering, and regularization. It then introduces Bayesian inference for image processing, describing how the posterior distribution of an image can be estimated given observed noisy data. The paper formulates the image restoration problem using a Bayesian model, where the observed image is considered the sum of the true image and noise. It describes estimating the true image distribution given the observed data by calculating the relative probabilities of possible true images. The goal is to find the most suitable estimate of the true image to enhance a degraded observed image and reduce blur and noise.
FusIon - On-Field Security and Privacy Preservation for IoT Edge Devices: Con...jamesinniss
Mr image compression based on selection of mother wavelet and lifting based w...ijma
Magnetic resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of compression schemes. In this paper we extend the commonly used algorithms to image compression and compare their performance. For the compression technique, we combine different wavelet approaches, using traditional mother wavelets and the lifting-based Cohen-Daubechies-Feauveau wavelet with low-pass filters of lengths 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index that highlights the shape of the histogram of the target image is introduced to assess compression quality. The index can be used in place of the traditional Universal Image Quality Index (UIQI) "in one go", offering extra information about the distortion between an original image and a compressed image compared with UIQI. The proposed index models image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion and shape distortion. It is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate that the choice of mother wavelet is very important for achieving superior wavelet compression performance under the proposed image quality indexes. Experimental results show that the proposed index plays a significant role in the quality evaluation of image compression on the open "BrainWeb: Simulated Brain Database (SBD)".
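For reference, the classical UIQI that the proposed index extends factors into exactly three of the four terms listed above: correlation, luminance distortion and contrast distortion. A minimal sketch of that three-factor baseline (the paper's fourth, shape-of-histogram term is not reproduced here):

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index: the product of a correlation term,
    a luminance term and a contrast term. Equals 1 only when y == x."""
    x = np.asarray(x, np.float64).ravel()
    y = np.asarray(y, np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    corr = cov / (np.sqrt(vx * vy) + eps)        # loss of correlation
    lum = 2 * mx * my / (mx**2 + my**2 + eps)    # luminance distortion
    con = 2 * np.sqrt(vx * vy) / (vx + vy + eps) # contrast distortion
    return corr * lum * con
```

A pure brightness shift leaves correlation and contrast untouched but lowers the luminance term, so the index drops below 1, which is the behaviour the factorization is designed to expose.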
Review on Medical Image Fusion using Shearlet Transform (IRJET Journal)
This document reviews medical image fusion using the shearlet transform. It discusses how medical image fusion combines information from multimodality images like CT, MRI, PET into a single image. The shearlet transform allows for more efficient encoding of anisotropic features compared to wavelets. The proposed algorithm involves decomposing registered input images using shearlet transforms, applying fusion rules to select coefficients, and reconstructing the fused image. Medical image fusion using shearlets can improve diagnosis by combining complementary anatomical and functional details from different imaging modalities.
Abstract: Primarily due to progress in super-resolution imagery, methods of segment-based image analysis for generating and updating geographical information are becoming more and more important. This work presents image segmentation based on colour features with K-means clustering. The work is divided into two stages. First, the colour separation of the satellite image is enhanced using decorrelation stretching, and then the regions are grouped into a set of five classes using the K-means clustering algorithm. The spatial information is first gathered around every pixel, and two filtering procedures are added to suppress the effect of pseudo-edges. In addition, a spatial information weight is constructed and clustered with K-means, and the regularization strength in each region is controlled by the cluster-centre value. The experimental results, on both simulated and real datasets, demonstrate that the proposed method can effectively reduce the pseudo-edges produced by total-variation regularization.
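The second stage above (grouping pixels by colour features with K-means) can be sketched with a plain Lloyd's iteration over per-pixel RGB vectors; `kmeans_segment` is our illustrative name, the deterministic initialization is a simplification, and the decorrelation-stretch preprocessing is assumed to have been applied to the input already:

```python
import numpy as np

def kmeans_segment(img, k=5, iters=20):
    """Cluster the pixels of an H x W x 3 colour image into k classes
    with Lloyd's k-means on the raw colour features."""
    h, w, c = img.shape
    X = img.reshape(-1, c).astype(np.float64)
    # Deterministic init: spread k seeds along the brightness ordering.
    idx = np.argsort(X.sum(1))[np.linspace(0, len(X) - 1, k).astype(int)]
    centers = X[idx].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                  # assign to nearest centre
        for j in range(k):                    # update non-empty centres
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels.reshape(h, w)
```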
Analysis of Image Super-Resolution via Reconstruction Filters for Pure Transl... (CSCJournals)
In this work, a special case of the image super-resolution problem where the only type of motion is global translational motion and the blurs are shift-invariant is investigated. The necessary conditions for exact reconstruction of the original image by using finite impulse-response reconstruction filters are investigated and determined. If the number of available low-resolution images is larger than a threshold and the blur functions meet a certain property, a reconstruction filter set for perfect image super-resolution can be generated even in the absence of motion. Given that the conditions are satisfied, a method for exact super-resolution is presented to validate the analysis results and it is shown that for the fully determined case, perfect reconstruction of the original image is achieved. Finally, some realistic conditions that make the super-resolution problem ill-posed are treated and their effects on exact super-resolution are discussed.
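For intuition, the fully determined, blur-free corner of this setting reduces to interleaving the shifted low-resolution samples back onto the high-resolution grid: with decimation factor r and r*r distinct integer shifts, reconstruction is exact. The toy sketch below illustrates only that special case, not the paper's general FIR reconstruction-filter analysis; both function names are ours:

```python
import numpy as np

def decimate(hr, dy, dx, r):
    """Translate the HR scene by (dy, dx) and keep every r-th sample
    (periodic boundary), producing one LR observation."""
    return np.roll(hr, (-dy, -dx), axis=(0, 1))[::r, ::r]

def shift_and_add(lr_images, shifts, r):
    """Exact SR for the fully determined, blur-free case: the r*r LR
    images tile the HR grid, so interleaving recovers it perfectly."""
    h, w = lr_images[0].shape
    hr = np.zeros((h * r, w * r))
    for img, (dy, dx) in zip(lr_images, shifts):
        hr[dy::r, dx::r] = img
    return hr
```

Once blurs or non-integer motion enter, this tiling argument fails, which is exactly why the paper's conditions on the blur functions and the number of LR images matter.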
This document discusses image restoration techniques for images degraded by space-variant blurs. It describes running sinusoidal transforms as a method for space-variant image restoration. Running transforms involve applying a short-time orthogonal transform within a moving window, allowing approximately stationary processing. This addresses limitations of methods that assume space-invariance or require coordinate transformations. The chapter presents running discrete sinusoidal transforms as a way to perform the space-variant restoration by modifying orthogonal transform coefficients within the window to estimate pixel values.
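The core mechanic, modifying orthogonal-transform coefficients inside a local window, can be sketched with a tiled (rather than truly running) 2-D DCT and a hard threshold. This is an illustration of windowed-transform processing under our own simplifications, not the chapter's space-variant estimator:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] *= np.sqrt(0.5)
    return C

def windowed_dct_denoise(img, win=8, thresh=0.0):
    """Transform each win x win tile, zero small coefficients, and
    transform back -- local coefficient modification in miniature."""
    C = dct_matrix(win)
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(0, img.shape[0] - win + 1, win):
        for j in range(0, img.shape[1] - win + 1, win):
            blk = img[i:i + win, j:j + win]
            coef = C @ blk @ C.T                      # forward 2-D DCT
            coef[np.abs(coef) < thresh] = 0.0         # local modification
            out[i:i + win, j:j + win] = C.T @ coef @ C  # inverse 2-D DCT
    return out
```

Because each window gets its own set of modified coefficients, the processing can vary across the image, which is the property the running-transform approach exploits for space-variant blurs.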
THE FOURIER TRANSFORM FOR SATELLITE IMAGE COMPRESSION (cscpconf)
This document presents a new coding scheme for satellite image compression using the Fourier transform and scalar quantization. The proposed method involves taking the fast Fourier transform (FFT) of the input image, scalar quantizing the amplitude results, and entropy encoding the output. Testing on satellite images and Lena picture showed compression ratios over 80% while maintaining good reconstruction quality with minimal distortion. The method provides an effective way to significantly reduce the storage and transmission requirements for large satellite images.
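A crude single-step stand-in for the FFT, scalar quantization and entropy coding pipeline is to keep only the largest-magnitude FFT coefficients and zero the rest; `fft_compress` and the `keep` fraction are our illustrative choices, not the paper's quantizer:

```python
import numpy as np

def fft_compress(img, keep=0.1):
    """Zero all but the largest `keep` fraction of FFT coefficients
    (by magnitude), then invert. Returns the reconstruction and the
    fraction of coefficients retained."""
    img = np.asarray(img, np.float64)
    F = np.fft.fft2(img).ravel()
    k = max(1, int(keep * F.size))
    F[np.argsort(np.abs(F))[:-k]] = 0.0   # drop the smallest coefficients
    rec = np.real(np.fft.ifft2(F.reshape(img.shape)))
    return rec, k / F.size
```

Smooth imagery concentrates its energy in few frequencies, so even an aggressive `keep` preserves the picture, which is the property the paper's coder relies on for its high compression ratios.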
This document discusses image super-resolution techniques in both the spatial and frequency domains. It compares a spatial-domain image registration algorithm by Keren et al. with a frequency-domain algorithm by Vandewalle et al. The spatial-domain algorithm estimates shift and rotation parameters more accurately, especially when images contain strong directionality. For image reconstruction, it compares interpolation, iterative back-projection, and a robust super-resolution algorithm. Experimental results show that the spatial-domain approach works better for image super-resolution.
Visualization of hyperspectral images on parallel and distributed platform: A... (IJECEIAES)
The field of hyperspectral image storage and processing has undergone a remarkable evolution in recent years. Visualizing these images is a challenge because the number of bands exceeds three, so direct visualization with the standard red, green and blue (RGB) or hue, saturation and lightness (HSL) systems is not feasible. One potential solution is to reduce the dimensionality of the image to three dimensions and then assign each dimension to a color. Conventional tools and algorithms have become incapable of producing results within a reasonable time. In this paper, we present a new distributed method for visualizing hyperspectral images based on principal component analysis (PCA) and implemented in a distributed parallel environment (Apache Spark). With the proposed method, large hyperspectral images are visualized in less time and with the same quality as with the classical visualization method.
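The dimensionality-reduction step (PCA down to three components, one per display channel) can be sketched on a single machine with NumPy; the Spark distribution of the computation is not shown, and `pca_to_rgb` is our illustrative name:

```python
import numpy as np

def pca_to_rgb(cube):
    """Project an H x W x B hyperspectral cube onto its first three
    principal components and rescale each to [0, 1] so the result can
    be displayed as an RGB composite."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                      # centre the band features
    # Principal directions = right singular vectors of the pixel matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Y = X @ Vt[:3].T                         # scores on the top 3 PCs
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    Y = (Y - lo) / np.where(hi > lo, hi - lo, 1.0)  # per-channel rescale
    return Y.reshape(h, w, 3)
```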
This document summarizes a method for tracking deformable objects in images. It proposes casting the problem as finding optimal cyclic paths in a product space of the template shape and input image. A cost functional is introduced that considers data fidelity, shape consistency, and elastic deformation. The functional is optimized using a minimum ratio cycle algorithm on graphics cards, allowing real-time segmentation and tracking of deformable objects while guaranteeing a globally optimal solution. The method can be extended to track multiple deformable anatomical structures in medical images.
This document summarizes a method for tracking deformable objects in images. It proposes casting the problem as finding optimal cyclic paths in a product space of the template shape and input image. A cost functional is introduced that consists of three terms: data fidelity favoring strong edges, shape consistency favoring similar tangent angles, and an elastic penalty for stretching. Optimization is performed using simulated annealing for segmentation and iterated conditional modes for tracking. The algorithm provides optimal segmentation and point correspondences between template and image curve in linear time.
This document summarizes a research paper that proposes a new approach for tracking multiple deformable anatomical structures in medical images using geometrically deformable templates (GDTs). The GDTs can deform to match similar shapes based on image forces while minimizing a penalty function that measures deformation from the template's equilibrium shape. This allows simultaneous segmentation of multiple objects using intra- and inter-shape information. Simulated annealing is used for segmentation while iterated conditional modes is used for tracking. The paper also reviews previous work on image segmentation, tracking deformable objects, and shape-based image segmentation.
Image restoration model with wavelet based fusion (Alexander Decker)
1. The document discusses various techniques for image restoration, which aims to recover a sharp original image from a degraded one using mathematical models of degradation and restoration.
2. It analyzes techniques such as deconvolution with the Lucy-Richardson algorithm, the Wiener filter, the regularized filter, and blind image deconvolution on different image formats, based on metrics such as PSNR, MSE, and RMSE.
3. Previous studies have applied techniques like Wiener filtering, wavelet-based fusion, and iterative blind deconvolution for motion blur restoration and compared their performance.
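The metrics named in point 2 are standard and easy to state precisely; a minimal NumPy sketch:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between reference and restored image."""
    ref = np.asarray(ref, np.float64)
    img = np.asarray(img, np.float64)
    return np.mean((ref - img) ** 2)

def rmse(ref, img):
    """Root mean squared error."""
    return np.sqrt(mse(ref, img))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ref."""
    m = mse(ref, img)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Because PSNR is a log transform of MSE against a fixed peak value, the three metrics always rank a set of restorations consistently; they differ only in scale and interpretability.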
PROJECT STATEMENT: Morphological Medical Image Indexing and Cl... (Eugen Zaharescu)
This document outlines PhD. Assoc. Professor Eugen ZAHARESCU's project on developing an advanced mathematical model for analyzing and processing medical images using mathematical morphology methods. The project aims to 1) implement modern mathematical methods for solving problems in medical imaging, 2) collect, index and classify medical images and store them in a digital library, and 3) develop a software system for medical imaging using a formal model based on mathematical morphology. ZAHARESCU has done research on extending mathematical morphology to logarithmic image processing and developing new morphological operators for medical image enhancement.
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...IAEME Publication
White layer thickness (WLT) formed and surface roughness in wire electric discharge turning (WEDT) of tungsten carbide composite has been made to model through response surface methodology (RSM). A Taguchi’s standard Design of experiments involving five input variables with three levels has been employed to establish a mathematical model between input parameters and responses. Percentage of cobalt content, spindle speed, Pulse on-time, wire feed and pulse off-time were changed during the experimental tests based on the Taguchi’s orthogonal array L27 (3^13). Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the parameters of the factors considered. There was a good agreement between the experimental and predicted values in this study.
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
The study explores the reasons for a transgender to become entrepreneurs. In this study transgender entrepreneur was taken as independent variable and reasons to become as dependent variable. Data were collected through a structured questionnaire containing a five point Likert Scale. The study examined the data of 30 transgender entrepreneurs in Salem Municipal Corporation of Tamil Nadu State, India. Simple Random sampling technique was used. Garrett Ranking Technique (Percentile Position, Mean Scores) was used as the analysis for the present study to identify the top 13 stimulus factors for establishment of trans entrepreneurial venture. Economic advancement of a nation is governed upon the upshot of a resolute entrepreneurial doings. The conception of entrepreneurship has stretched and materialized to the socially deflated uncharted sections of transgender community. Presently transgenders have smashed their stereotypes and are making recent headlines of achievements in various fields of our Indian society. The trans-community is gradually being observed in a new light and has been trying to achieve prospective growth in entrepreneurship. The findings of the research revealed that the optimistic changes are taking place to change affirmative societal outlook of the transgender for entrepreneurial ventureship. It also laid emphasis on other transgenders to renovate their traditional living. The paper also highlights that legislators, supervisory body should endorse an impartial canons and reforms in Tamil Nadu Transgender Welfare Board Association.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
Since ages gender difference is always a debatable theme whether caused by nature, evolution or environment. The birth of a transgender is dreadful not only for the child but also for their parents. The pain of living in the wrong physique and treated as second class victimized citizen is outrageous and fully harboured with vicious baseless negative scruples. For so long, social exclusion had perpetuated inequality and deprivation experiencing ingrained malign stigma and besieged victims of crime or violence across their life spans. They are pushed into the murky way of life with a source of eternal disgust, bereft sexual potency and perennial fear. Although they are highly visible but very little is known about them. The common public needs to comprehend the ravaged arrogance on these insensitive souls and assist in integrating them into the mainstream by offering equal opportunity, treat with humanity and respect their dignity. Entrepreneurship in the current age is endorsing the gender fairness movement. Unstable careers and economic inadequacy had inclined one of the gender variant people called Transgender to become entrepreneurs. These tiny budding entrepreneurs resulted in economic transition by means of employment, free from the clutches of stereotype jobs, raised standard of living and handful of financial empowerment. Besides all these inhibitions, they were able to witness a platform for skill set development that ignited them to enter into entrepreneurial domain. This paper epitomizes skill sets involved in trans-entrepreneurs of Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking determination to sightsee various skills incorporated and the impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace. The economy focused on transforming the banking sector's system into a cashless, paperless, and faceless one. The researcher wants to evaluate the user's intention for utilising a mobile banking application. The study also examines the variables affecting the user's behaviour intention when selecting specific applications for financial transactions. The researcher employed a well-structured questionnaire and a descriptive study methodology to gather the respondents' primary data utilising the snowball sampling technique. The study includes variables like performance expectations, effort expectations, social impact, enabling circumstances, and perceived risk. Each of the aforementioned variables has a major impact on how users utilise mobile banking applications. The outcome will assist the service provider in comprehending the user's history with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
Technology upgradation in banking sector took the economy to view that payment mode towards online transactions using mobile applications. This system enabled connectivity between banks, Merchant and user in a convenient mode. there are various applications used for online transactions such as Google pay, Paytm, freecharge, mobikiwi, oxygen, phonepe and so on and it also includes mobile banking applications. The study aimed at evaluating the predilection of the user in adopting digital transaction. The study is descriptive in nature. The researcher used random sample techniques to collect the data. The findings reveal that mobile applications differ with the quality of service rendered by Gpay and Phonepe. The researcher suggest the Phonepe application should focus on implementing the application should be user friendly interface and Gpay on motivating the users to feel the importance of request for money and modes of payments in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
The prototype of a voice-based ATM for visually impaired using Arduino is to help people who are blind. This uses RFID cards which contain users fingerprint encrypted on it and interacts with the users through voice commands. ATM operates when sensor detects the presence of one person in the cabin. After scanning the RFID card, it will ask to select the mode like –normal or blind. User can select the respective mode through voice input, if blind mode is selected the balance check or cash withdraw can be done through voice input. Normal mode procedure is same as the existing ATM.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
There is increasing acceptability of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence as the ability to build capacity, empathize, co-operate, motivate and develop others cannot be divorced from both effective performance and human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and walking across both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, conflict and diversity management to keep the global enterprise within the paths of productivity and sustainability. Using the exploratory research design and 255 participants the result of this original study indicates strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions on further studies between emotional intelligence and human capital development and recommends for conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in 21st century in itself reminds all of us the necessity of police and its administration. As more and more we are entering into the modern society and culture, the more we require the services of the so called ‘Khaki Worthy’ men i.e., the police personnel. Whether we talk of Indian police or the other nation’s police, they all have the same recognition as they have in India. But as already mentioned, their services and requirements are different after the like 26th November, 2008 incidents, where they without saving their own lives has sacrificed themselves without any hitch and without caring about their respective family members and wards. In other words, they are like our heroes and mentors who can guide us from the darkness of fear, militancy, corruption and other dark sides of life and so on. Now the question arises, if Gandhi would have been alive today, what would have been his reaction/opinion to the police and its functioning? Would he have some thing different in his mind now what he had been in his mind before the partition or would he be going to start some Satyagraha in the form of some improvement in the functioning of the police administration? Really these questions or rather night mares can come to any one’s mind, when there is too much confusion is prevailing in our minds, when there is too much corruption in the society and when the polices working is also in the questioning because of one or the other case throughout the India. It is matter of great concern that we have to thing over our administration and our practical approach because the police personals are also like us, they are part and parcel of our society and among one of us, so why we all are pin pointing towards them.
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
The goal of this study was to see how talent management affected employee retention in the selected IT organizations in Chennai. The fundamental issue was the difficulty to attract, hire, and retain talented personnel who perform well and the gap between supply and demand of talent acquisition and retaining them within the firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, investigate talent management strategies that IT companies could use to improve talent acquisition, performance management, career planning and formulate retention strategies that the IT firms could use. The respondents were given a structured close-ended questionnaire with the 5 Point Likert Scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly. The Statistical Package for Social Sciences (SPSS) was used to collect and analyse the questionnaire responses. Hypotheses that were formulated for the various areas of the study were tested using a variety of statistical tests. The key findings of the study suggested that talent management had an impact on employee retention. The studies also found that there is a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
Globally, Millions of dollars were spent by the organizations for employing skilled Information Technology (IT) professionals. It is costly to replace unskilled employees with IT professionals possessing technical skills and competencies that aid in interconnecting the business processes. The organization’s employment tactics were forced to alter by globalization along with technological innovations as they consistently diminish to remain lean, outsource to concentrate on core competencies along with restructuring/reallocate personnel to gather efficiency. As other jobs, organizations or professions have become reasonably more appropriate in a shifting employment landscape, the above alterations trigger both involuntary as well as voluntary turnover. The employee view on jobs is also afflicted by the COVID-19 pandemic along with the employee-driven labour market. So, having effective strategies is necessary to tackle the withdrawal rate of employees. By associating Emotional Intelligence (EI) along with Talent Management (TM) in the IT industry, the rise in attrition rate was analyzed in this study. Only 303 respondents were collected out of 350 participants to whom questionnaires were distributed. From the employees of IT organizations located in Bangalore (India), the data were congregated. A simple random sampling methodology was employed to congregate data as of the respondents. Generating the hypothesis along with testing is eventuated. The effect of EI and TM along with regression analysis between TM and EI was analyzed. The outcomes indicated that employee and Organizational Performance (OP) were elevated by effective EI along with TM.
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
By implementing talent management strategy, organizations would have the option to retain their skilled professionals while additionally working on their overall performance. It is the course of appropriately utilizing the ideal individuals, setting them up for future top positions, exploring and dealing with their performance, and holding them back from leaving the organization. It is employee performance that determines the success of every organization. The firm quickly obtains an upper hand over its rivals in the event that its employees having particular skills that cannot be duplicated by the competitors. Thus, firms are centred on creating successful talent management practices and processes to deal with the unique human resources. Firms are additionally endeavouring to keep their top/key staff since on the off chance that they leave; the whole store of information leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among the selected IT organizations in Chennai. The study recommends that talent management limitedly affects performance. On the off chance that this talent is appropriately management and implemented properly, organizations might benefit as much as possible from their maintained assets to support development and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
Banking regulations act of India, 1949 defines banking as “acceptance of deposits for the purpose of lending or investment from the public, repayment on demand or otherwise and withdrawable through cheques, drafts order or otherwise”, the major participants of the Indian financial system are commercial banks, the financial institution encompassing term lending institutions. Investments institutions, specialized financial institution and the state level development banks, non banking financial companies (NBFC) and other market intermediaries such has the stock brokers and money lenders are among the oldest of the certain variants of NBFC and the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing Non- Performing Assets (NPAs). The NPAs growth directly and indirectly affects the quality of assets and profitability of banks. It also shows the efficiency of banks credit risk management and the recovery effectiveness. NPA do not generate any income, whereas, the bank is required to make provisions for such as assets that why is a double edge weapon. This paper outlines the concept of quality of bank loans of different types like Housing, Agriculture and MSME loans in state Haryana of selected public and private sector banks. This study is highlighting problems associated with the role of commercial bank in financing Small and Medium Scale Enterprises (SME). The overall objective of the research was to assess the effect of the financing provisions existing for the setting up and operations of MSMEs in the country and to generate recommendations for more robust financing mechanisms for successful operation of the MSMEs, in turn understanding the impact of MSME loans on financial institutions due to NPA. 
There are many research conducted on the topic of Non- Performing Assets (NPA) Management, concerning particular bank, comparative study of public and private banks etc. In this paper the researcher is considering the aggregate data of selected public sector and private sector banks and attempts to compare the NPA of Housing, Agriculture and MSME loans in state Haryana of public and private sector banks. The tools used in the study are average and Anova test and variance. The findings reveal that NPA is common problem for both public and private sector banks and is associated with all types of loans either that is housing loans, agriculture loans and loans to SMES. NPAs of both public and private sector banks show the increasing trend. In 2010-11 GNPA of public and private sector were at same level it was 2% but after 2010-11 it increased in many fold and at present there is GNPA in some more than 15%. It shows the dark area of Indian banking sector.
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
An experiment conducted in this study found that BaSO4 changed Nylon 6's mechanical properties. By changing the weight ratios, BaSO4 was used to make Nylon 6. This Researcher looked into how hard Nylon-6/BaSO4 composites are and how well they wear. Experiments were done based on Taguchi design L9. Nylon-6/BaSO4 composites can be tested for their hardness number using a Rockwell hardness testing apparatus. On Nylon/BaSO4, the wear behavior was measured by a wear monitor, pinon-disc friction by varying reinforcement, sliding speed, and sliding distance, and the microstructure of the crack surfaces was observed by SEM. This study provides significant contributions to ultimate strength by increasing BaSO4 content up to 16% in the composites, and sliding speed contributes 72.45% to the wear rate
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
The majority of the population in India lives in villages. The village is the back bone of the country. Village or rural industries play an important role in the national economy, particularly in the rural development. Developing the rural economy is one of the key indicators towards a country’s success. Whether it be the need to look after the welfare of the farmers or invest in rural infrastructure, Governments have to ensure that rural development isn’t compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of rural masses. Village or rural industries play an important role in the national economy, particularly in the rural development. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in the rural areas and accelerates a unique blend of resources either inside or outside of agriculture. Rural entrepreneurship brings an economic value to the rural sector by creating new methods of production, new markets, new products and generate employment opportunities thereby ensuring continuous rural development. Social Entrepreneurship has the direct and primary objective of serving the society along with the earning profits. So, social entrepreneurship is different from the economic entrepreneurship as its basic objective is not to earn profits but for providing innovative solutions to meet the society needs which are not taken care by majority of the entrepreneurs as they are in the business for profit making as a sole objective. So, the Social Entrepreneurs have the huge growth potential particularly in the developing countries like India where we have huge societal disparities in terms of the financial positions of the population. 
Twenty-two percent of the Indian population is still below the poverty line (BPL), and there is a disparity between the rural and urban populations in terms of families living under BPL: 25.7 percent of the rural population and 13.7 percent of the urban population are under BPL, which clearly shows the concentration of poor people in rural areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems, including low living standards, unemployment, and social tension; these factors led to the emergence of the practice of social entrepreneurship. The research problem lies in disclosing the importance of the role of social entrepreneurship in the rural development of India. The paper examines the tendencies of social entrepreneurship in India and presents successful examples of such businesses in order to provide recommendations on how to improve the situation in rural areas in terms of social entrepreneurship development. The Indian government has taken some steps towards the development of social enterprises, social entrepreneurship, and social innovation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET... (IAEME Publication)
The distribution system is a critical link between the electric power distributor and the consumers. The network most commonly used by electric utilities is the radial distribution network. However, this type of network suffers from technical issues such as large power losses, which affect the quality of supply. Nowadays, the introduction of Distributed Generation (DG) units into the system helps improve and support the voltage profile of the network as well as the performance of the system components through power-loss mitigation. In this study, network reconfiguration was performed using a hybrid of two meta-heuristic algorithms, Particle Swarm Optimization and the Gravitational Search Algorithm (PSO-GSA), to enhance power quality and the voltage profile of the system when applied simultaneously with DG units. The Backward/Forward Sweep method was used for the load-flow analysis and simulated in MATLAB. Five cases were considered in the reconfiguration based on the contribution of DG units. The proposed method was tested on the IEEE 33-bus system. Based on the results, the voltage profile of the system improved from 0.9038 p.u. to 0.9594 p.u. The integration of DG into the network also reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are presented to show the performance of each case.
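The Backward/Forward Sweep load flow mentioned above alternates a backward pass, which accumulates branch currents from the load ends towards the substation, and a forward pass, which updates bus voltages from the substation outward. A minimal sketch on a hypothetical 3-bus radial feeder follows; the impedances and loads are illustrative per-unit values, not the IEEE 33-bus data.

```python
# Minimal backward/forward sweep load flow on a 3-bus radial feeder.
# Bus 0 is the slack (substation). All values are illustrative, in p.u.
V_slack = 1.0 + 0j
Z = [0.02 + 0.04j, 0.03 + 0.05j]        # branch impedances: bus 0-1, bus 1-2
S_load = [0.10 + 0.05j, 0.08 + 0.04j]   # complex power demands at buses 1, 2

V = [V_slack, V_slack, V_slack]          # flat start
for _ in range(20):                      # iterate until voltages settle
    # Backward sweep: load currents, then accumulate branch currents
    I_load = [(S_load[i] / V[i + 1]).conjugate() for i in range(2)]
    I_branch = [I_load[0] + I_load[1], I_load[1]]
    # Forward sweep: update voltages from the slack bus outward
    V[1] = V[0] - Z[0] * I_branch[0]
    V[2] = V[1] - Z[1] * I_branch[1]

print(abs(V[2]))  # voltage magnitude at the far end of the feeder
```

Each candidate network configuration evaluated by the PSO-GSA search would be scored by running a load flow like this and reading off the losses and the minimum bus voltage.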
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF... (IAEME Publication)
Manufacturing industries have witnessed an outburst in productivity. For productivity improvement, manufacturing industries are taking various initiatives using lean tools and techniques. In different manufacturing industries, however, the frugal approach is applied in product design and services as a tool for improvement. The frugal approach has helped prove that less is more and seems to contribute indirectly to productivity; hence, there is a need to understand the status of its application in manufacturing industries. All manufacturing industries are trying hard and putting in continuous effort to remain competitive, and for productivity improvement they are coming up with effective and efficient solutions in manufacturing processes and operations. To overcome current challenges, manufacturing industries have started using the frugal approach in product design and services. The methodology adopted for this study draws on both primary and secondary sources of data: interviews and observation for the primary source, and a review of the available literature on websites and in printed magazines, manuals, etc. for the secondary source. An attempt has been made to understand the application of the frugal approach through the study of a manufacturing-industry project; the company selected for this study is Mahindra and Mahindra Ltd. This paper will help researchers find the connections between the two concepts of productivity improvement and the frugal approach, understand the significance of the frugal approach for productivity improvement in manufacturing, and understand the current scenario of the frugal approach in the manufacturing industry. Manufacturing involves various processes to deliver the final product, and in converting input into output through the manufacturing process, productivity plays a very critical role.
Hence, this study will help establish the status of the frugal approach in productivity-improvement programmes. The notion of frugality can be viewed as an approach towards productivity improvement in manufacturing industries.
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT (IAEME Publication)
In this paper, we investigate a multiple-channel queuing model (M/M/C):(∞/FCFS) in a fuzzy environment and study its performance under realistic conditions. A nonagonal fuzzy number is applied to analyse the relevant performance measures of the model. Based on the sub-interval average ranking method for nonagonal fuzzy numbers, the fuzzy parameters are converted to crisp ones. Numerical results reveal the efficiency of this method. Intuitively, the fuzzy environment adapts very well to the multiple-channel queuing model (M/M/C):(∞/FCFS).
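Once the fuzzy arrival and service rates have been ranked down to crisp values, the standard M/M/C (FCFS, infinite capacity) formulas yield the performance measures. A minimal sketch with assumed crisp rates follows; λ, μ, and C follow Kendall notation, and the numbers are illustrative, not taken from the paper.

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Crisp M/M/C (FCFS) measures; requires lam < c*mu for stability."""
    rho = lam / (c * mu)          # server utilisation
    a = lam / mu                  # offered load (Erlangs)
    # P0: probability that the system is empty
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1.0 - rho)))
    Lq = p0 * a**c * rho / (factorial(c) * (1.0 - rho) ** 2)  # mean queue length
    Wq = Lq / lam                 # mean wait in queue (Little's law)
    return Lq, Wq, Lq + a, Wq + 1.0 / mu  # Lq, Wq, L, W

# Illustrative crisp rates after defuzzification: lambda = 4, mu = 3, C = 2
Lq, Wq, L, W = mmc_metrics(4.0, 3.0, 2)
print(L, W)  # L ≈ 2.4 customers in system, W ≈ 0.6 time units
```

In the fuzzy setting, the sub-interval average ranking step would supply the crisp λ and μ fed into this function; Little's law (L = λW) holds for the crisp results and serves as a quick consistency check.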