IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document discusses accelerating the seam carving algorithm for image resizing using CUDA (Compute Unified Device Architecture) on a GPU (graphics processing unit). Seam carving is a content-aware image resizing technique that identifies paths of least importance (seams) through an image that can be removed or inserted to change the image size. The document explains that seam carving involves large matrix calculations that can be significantly accelerated by implementing them in parallel on a CUDA-enabled GPU. It presents the seam carving algorithm, describes CUDA and GPU architecture, proposes implementing seam carving using CUDA to achieve speed-ups, and concludes that parallelizing seam carving calculations on a GPU exploits its massive parallelism for faster execution compared to a sequential CPU implementation.
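The seam-cost step described above is a dynamic program: each cell accumulates its own energy plus the cheapest of the three neighbours in the row above, and the seam is recovered by backtracking. A minimal sketch (grayscale image as a list of rows; the energy function and all names are illustrative, not taken from the paper):

```python
def energy(img):
    """Simple gradient-magnitude energy map using absolute differences."""
    h, w = len(img), len(img[0])
    e = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            dx = abs(img[i][min(j + 1, w - 1)] - img[i][max(j - 1, 0)])
            dy = abs(img[min(i + 1, h - 1)][j] - img[max(i - 1, 0)][j])
            e[i][j] = dx + dy
    return e

def min_vertical_seam(img):
    """Cumulative cost M[i][j] = e[i][j] + min of the three cells above,
    then backtrack from the cheapest bottom-row cell."""
    e = energy(img)
    h, w = len(e), len(e[0])
    M = [row[:] for row in e]
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 1, w - 1)
            M[i][j] += min(M[i - 1][lo:hi + 1])
    seam = [min(range(w), key=lambda j: M[h - 1][j])]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 1, w - 1)
        seam.append(min(range(lo, hi + 1), key=lambda k: M[i][k]))
    return seam[::-1]  # one column index per row, top to bottom
```

The inner loop over `j` is what the paper parallelizes on the GPU: every cell in a row depends only on the previous row, so a whole row can be computed by one kernel launch.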
The Computation Complexity Reduction of 2-D Gaussian Filter - IRJET Journal
This document discusses reducing the computational complexity of a 2D Gaussian filter for image smoothing. It begins with an abstract that notes 2D Gaussian filters are commonly used for image smoothing but require heavy computational resources. It then proposes using fixed-point arithmetic rather than floating point to implement the filter on an FPGA, which can increase efficiency and decrease area and complexity. The document is divided into sections that cover the theory behind image filtering, image smoothing and sharpening, quality metrics for evaluation, and an energy scalable Gaussian smoothing filter architecture. It concludes by discussing results and benefits of implementing the filter using fixed-point arithmetic on an FPGA.
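The fixed-point idea above can be sketched as follows: quantise the floating-point Gaussian coefficients to integers scaled by a power of two, convolve with integer arithmetic only, and divide by a bit shift at the end. The Q8 format and all names here are illustrative assumptions, not the paper's FPGA design:

```python
import math

SHIFT = 8  # Q8 fixed point: coefficients scaled by 2**8 = 256

def gaussian_kernel_fixed(radius, sigma):
    """Normalised 1-D Gaussian taps quantised to integer Q8 coefficients."""
    taps = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(taps)
    return [round(t / s * (1 << SHIFT)) for t in taps]

def smooth_row_fixed(row, kernel):
    """Integer-only horizontal convolution with edge clamping."""
    r = (len(kernel) - 1) // 2
    out = []
    for j in range(len(row)):
        acc = 0
        for k, c in enumerate(kernel, start=-r):
            acc += c * row[min(max(j + k, 0), len(row) - 1)]
        out.append(acc >> SHIFT)  # divide by 2**SHIFT with a shift, no floats
    return out
```

On an FPGA the multiply-and-shift maps to cheap integer DSP slices, which is where the area and complexity savings the abstract mentions come from.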
Iaetsd a low power and high throughput re-configurable bip for multipurpose a... - Iaetsd Iaetsd
This document presents a reconfigurable low power binary image processor for image processing applications. The processor consists of a reconfigurable binary processing module with a mixed-grained architecture providing flexibility, efficiency and performance. Line memories are selected for lower power consumption, and a clock-gating technique is used to reduce their power. The processor supports real-time binary image processing operations like morphological transformations and is suitable for applications like object recognition and tracking.
A review on automatic wavelet based nonlinear image enhancement for aerial ... - IAEME Publication
This document summarizes an article from the International Journal of Electronics and Communication Engineering & Technology about improving aerial imagery through automatic wavelet-based nonlinear image enhancement. It discusses how aerial images often have low clarity due to atmospheric effects and the limited dynamic range of cameras. The proposed method uses wavelet-based dynamic range compression to enhance aerial images while preserving local contrast and tonal rendition. It applies techniques like nonlinear processing, selective enhancement based on the human visual system, and Gabor filters for high-pass filtering to generate an enhanced image. The results of applying this algorithm to various aerial images show strong robustness and improved image quality.
In recent years, with the development of graphics processors, graphics cards have been widely used to perform general-purpose calculations. Especially since the release of the CUDA C programming language in 2007, many researchers have used CUDA C for processes that need high-performance computing. In this paper, a scaling approach for image segmentation using level sets is carried out with GPU programming techniques. Level-set methods are mainly based on the solution of partial differential equations; the proposed method does not require solving a partial differential equation. Instead, a scaling approach that uses basic geometric transformations is employed, which reduces the required computational cost. CUDA programming on the GPU improves on classic CPU programming in both execution time and performance, so results are obtained faster, and the GPU enables real-time processing. The application developed in this study is used to find tumors in MRI brain images.
Generating higher accuracy digital data products by model parameter - IAEME Publication
This document discusses an approach for generating higher accuracy digital data products from Indian satellites. The approach uses automatic image matching between an input image and reference image to refine the satellite's geometric sensor model and estimate biases. Estimated biases from selected images are then used to improve the geo-location accuracy of multiple other scenes, saving significant processing time. The approach was implemented using the IRS Core Preprocessing Engine library, which provides sensor modeling capabilities and allows dynamic configuration of image processing chains. This bias estimation technique ensures improved location accuracy of data products and insights into satellite behavior over its lifetime.
Cuda Based Performance Evaluation Of The Computational Efficiency Of The Dct ... - acijjournal
Recent advances in computing, such as massively parallel GPUs (Graphical Processing Units), coupled with the need to store and deliver large quantities of digital data, especially images, have brought a number of challenges for computer scientists, the research community and other stakeholders. These challenges, such as the prohibitively large cost of manipulating digital data, have been the focus of the research community in recent years and have led to the investigation of image compression techniques that can achieve excellent results. One such technique is the Discrete Cosine Transform (DCT), which separates an image into parts of differing frequencies and has the advantage of excellent energy compaction. This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model to implement the Cordic-based Loeffler DCT algorithm for efficient image compression. The computational efficiency is analyzed and evaluated on both the CPU and the GPU. PSNR (Peak Signal to Noise Ratio) is used to evaluate image reconstruction quality. The results are presented and discussed.
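The PSNR metric the paper evaluates reconstruction with is a standard formula: ten times the base-10 logarithm of the squared peak value over the mean squared error. A sketch for 8-bit images (flattened pixel lists; names are illustrative):

```python
import math

def psnr(original, reconstructed, max_val=255):
    """Peak Signal to Noise Ratio in dB; infinite for identical inputs."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```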
This document discusses content-based image mining techniques for image retrieval. It provides an overview of image mining, describing how image mining goes beyond content-based image retrieval by aiming to discover significant patterns in large image collections according to user queries. The document reviews several existing image mining techniques, including those using color histograms, texture analysis, clustering algorithms like k-means, and association rule mining. It discusses challenges in developing universal image retrieval methods and proposes combining low-level visual features with high-level semantic features. Overall, the document surveys the state of the art in content-based image mining and retrieval.
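The histogram-plus-clustering pipeline surveyed above can be sketched in miniature: reduce each image to a coarse colour histogram, then group the histograms with k-means. The 4-bin quantisation, the naive spread initialisation, and all names are illustrative assumptions, not any specific surveyed method:

```python
def colour_histogram(pixels, bins=4):
    """pixels: list of (r, g, b) in 0..255 -> normalised bins**3 histogram."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def kmeans(points, k, iters=10):
    """Plain Lloyd iterations; initial centroids spread across the input."""
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            best = min(range(k),
                       key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[best].append(p)
        centroids = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters
```

Association-rule mining and semantic features would sit on top of groupings like these; the histogram is only the low-level visual feature the survey starts from.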
The Graphics Processing Unit (GPU) is a processor or electronic chip for graphics. GPUs are massively parallel processors widely used for 3D graphics and many non-graphics applications. As the demand for graphics applications increases, the GPU has become indispensable, and its use has now matured to the point where there are countless industrial applications. This paper provides a brief introduction to GPUs, their properties, and their applications. Matthew N. O. Sadiku | Adedamola A. Omotoso | Sarhan M. Musa "Graphics Processing Unit: An Introduction" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1, December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29647.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29647/graphics-processing-unit-an-introduction/matthew-n-o-sadiku
IRJET- Kidney Stone Classification using Deep Neural Networks and Facilitatin... - IRJET Journal
The document discusses classifying kidney stone images using deep neural networks and facilitating diagnosis using IoT. Kidney stone images are acquired and preprocessed by converting to grayscale, enhancing, and segmenting the area of interest. Texture features are extracted using active contour segmentation and classified using a deep neural network model. The results, including stone type and treatment recommendations, are sent to the cloud where doctors and patients can access them, allowing automated diagnosis without human intervention.
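The pipeline above begins with grayscale conversion. A common way to do this is the Rec. 601 luma weighting; this is a standard formula, though the exact preprocessing in the paper may differ:

```python
def to_grayscale(pixels):
    """pixels: list of (r, g, b) tuples in 0..255 -> list of 0..255 luma values.
    Weights reflect the eye's higher sensitivity to green."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]
```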
Prediction based lossless medical image compression - IAEME Publication
This document discusses lossless medical image compression using the CALIC algorithm. It begins with an introduction to the need for medical image compression to reduce storage requirements and transmission times. It then provides details on the CALIC lossless compression algorithm, which uses context-based adaptive prediction and arithmetic coding. The algorithm is described as having two modes (binary and continuous-tone) to improve compression of images with different content. Experimental results on medical images show that CALIC achieves higher compression ratios than JPEG-LS. The document concludes that CALIC's context-based prediction allows it to achieve good lossless compression performance for medical images.
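CALIC's gradient-adjusted predictor is fairly involved; as a hedged illustration of the context-based prediction idea, here is the simpler median edge detector used by JPEG-LS, the codec the paper compares against. Prediction residuals cluster near zero on smooth medical images, which is what makes the subsequent arithmetic coding effective:

```python
def med_predict(w, n, nw):
    """JPEG-LS median edge detector: predict a pixel from its west, north,
    and north-west neighbours, snapping to an edge when one is detected."""
    if nw >= max(w, n):
        return min(w, n)
    if nw <= min(w, n):
        return max(w, n)
    return w + n - nw

def residuals(img):
    """Row-major prediction residuals; out-of-image neighbours default to 0."""
    out = []
    for i in range(len(img)):
        for j in range(len(img[0])):
            west = img[i][j - 1] if j else 0
            north = img[i - 1][j] if i else 0
            nw = img[i - 1][j - 1] if i and j else 0
            out.append(img[i][j] - med_predict(west, north, nw))
    return out
```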
Intensity Enhancement in Gray Level Images using HSV Color Coding Technique - IRJET Journal
This document discusses techniques for enhancing the intensity of gray scale images using HSV color space coding. It begins with an abstract discussing the motivation to increase image clarity and reduce errors from fatigue. Section 1 provides an introduction to image processing and enhancement. Section 1.1 discusses digital images, including types such as black and white, color, binary, and indexed color images. Section 2 covers hardware used in image processing, such as lighting. Section 3 discusses linear filters that can perform operations like smoothing and sharpening through convolution.
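The core HSV trick described above is to scale only the V (value) channel and leave hue and saturation untouched. A minimal sketch using the standard library; the gain of 1.2 is an illustrative choice, not a value from the paper:

```python
import colorsys

def enhance_value(pixels, gain=1.2):
    """Boost brightness of (r, g, b) pixels (0..255) via the HSV value channel."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        v = min(v * gain, 1.0)  # clamp at white
        rr, gg, bb = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(rr * 255), round(gg * 255), round(bb * 255)))
    return out
```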
IRJET - Plant Disease Detection using ML and UAV - IRJET Journal
This document describes a proposed system to detect plant diseases using machine learning and unmanned aerial vehicles. The system has two parts: 1) building a UAV equipped with a camera to automatically capture images of plant leaves, and 2) training a neural network model to detect diseases by classifying the images. The UAV would be programmed to autonomously survey farm areas and capture images. These images would then be used to train a neural network model to accurately identify plant diseases from leaf images. The goal is to automate disease detection to help farmers identify infections early and minimize crop failures.
A study on the importance of image processing and its applications - eSAT Publishing House
This document describes a hybrid genetic algorithm fuzzy neural network controller (GAFLNNC) for controlling a servo motor. The GAFLNNC tunes the parameters of a fuzzy neural network controller for the servo motor using a genetic algorithm. This allows the membership functions, rule outputs, and other parameters of the fuzzy neural network controller to be optimized to minimize error for the servo motor system. The controller architecture uses the fuzzy neural network controller in a PID structure. Simulation results showed the GAFLNNC improved settling time and rise time, and reduced overshoot and error metrics compared to a traditional controller when subjected to load variations.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Overall fuzzy logic control strategy of direct driven PMSG wind turbine conne... - IJECEIAES
The fuzzy logic strategies reported in the literature for the control of a direct-drive permanent magnet synchronous generator (PMSG) connected to the grid are limited in terms of inclusiveness and efficiency, so an overall control scheme based on fuzzy logic and anti-windup compensation is proposed in this paper. To address the inadequacy of hill-climb search (HCS) MPPT with a fixed step size, fuzzy logic is introduced in the "generating rotor speed reference" stage to overcome the oscillations and slowness of the traditional method. PI controllers are replaced by anti-windup fuzzy logic controllers in the "machine side control" and "grid side control" stages to regulate the reference parameters. Comparison tests against classical methods are then carried out under varying climatic conditions. The results demonstrate that the developed control is superior to other methods in response time (less than 4.528E-04 s), precision (an overshoot of about 0.41%) and quality of the produced energy (91% efficiency), verifying the feasibility and effectiveness of this algorithm for a grid-connected PMSG wind turbine.
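The fixed-step hill-climb search the paper improves upon can be sketched as a perturb-and-observe loop: nudge the rotor-speed reference, and reverse direction whenever power drops. The fuzzy variant in the paper adapts the step size to suppress the oscillation this fixed step causes; the power-curve callable and all names here are illustrative:

```python
def hcs_mppt(power_curve, speed0, step=0.1, iters=50):
    """Fixed-step hill-climb search: returns the final rotor-speed reference.
    power_curve is a callable P(speed) standing in for the measured power."""
    speed, direction = speed0, 1.0
    last_p = power_curve(speed)
    for _ in range(iters):
        speed += direction * step
        p = power_curve(speed)
        if p < last_p:            # power dropped: reverse the perturbation
            direction = -direction
        last_p = p
    return speed
```

Because the step never shrinks, the reference ends up oscillating around the maximum power point, which is exactly the behaviour the fuzzy step-size rule is introduced to remove.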
Power Usage Effectiveness (PUE) is a metric used to measure data center infrastructure efficiency. While PUE provides a simple and useful ratio, it does not capture many important factors like resilience, load diversity, and server utilization. Additional metrics are needed to fully understand efficiency opportunities and benchmark the performance of the IT equipment itself. PUE should be considered as just one aspect of data center efficiency measurement and management.
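The ratio itself is simple arithmetic: total facility energy divided by the energy delivered to IT equipment, with 1.0 as the (unreachable) ideal where no energy goes to cooling or distribution losses:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: >= 1.0; lower means less overhead energy."""
    return total_facility_kwh / it_equipment_kwh
```

A facility drawing 1500 kWh to deliver 1000 kWh to its servers has a PUE of 1.5; as the abstract notes, the number says nothing about whether those servers are doing useful work.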
The document describes an image exploitation system developed for airborne surveillance using unmanned aerial vehicles. The system processes real-time video and flight data acquired by the UAV. It uses image processing algorithms like enhancement, filtering, target tracking, and geo-referencing of targets. The system displays mission parameters in real-time on multiple displays. It was found to be qualified for surveillance applications. Sample results are presented.
International Journal of Computational Engineering Research (IJCER) - ijceronline
This document presents a novel D.C. motor protection system developed using NI LabVIEW. The system is able to protect D.C. motors within fractions of a second against different types of faults, and provides real-time protection and monitoring of motors. The LabVIEW software is divided into four sections: a test panel to input values, a preset value section, a fault indication section, and a waveform section. The system was tested on a prototype that uses NI ELVIS instruments and sensors to acquire test values and compare them to preset thresholds. If a test value exceeds the preset value, the software stops the motor and identifies the type of fault. This provides a powerful and accurate solution for D.C. motor protection.
GPUs are highly parallel and programmable processors that were originally designed for graphics processing but are now commonly used for general purpose computing. GPU power is increasing much faster than CPU power, with estimates of a 570x increase in GPU power versus a 3x increase in CPU power over 6 years. GPUs excel at applications with large datasets, high parallelism, and minimal data dependencies, such as molecular dynamics, physics simulations, and database operations. Programming models for GPUs follow a single-program multiple-data approach to maximize parallel processing across many cores.
Medical Image Processing on NVIDIA TK1/TX1 - NVIDIA Taiwan
This document discusses using NVIDIA TK1/TX1 platforms for medical image processing. It summarizes segmenting brain MRI images using fuzzy c-means clustering on these platforms. Performance is evaluated for single and multiple GPU implementations using different precision formats and memory copy modes. Texture-based image processing for mammographic images is also discussed.
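The fuzzy c-means step used for the MRI segmentation above alternates two updates: memberships from distances to the centres, then centres as membership-weighted means. A small hedged sketch on 1-D intensities (the real pipeline runs these updates in parallel on the GPU; the two-centre initialisation and all names are illustrative):

```python
def fcm(values, c=2, m=2.0, iters=20):
    """Fuzzy c-means on scalar intensities; m > 1 is the usual fuzzifier.
    Initialisation at the extremes assumes c == 2."""
    centers = [min(values), max(values)][:c]
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        u = []
        for x in values:
            d = [abs(x - ctr) or 1e-9 for ctr in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for i in range(c)])
        # Centre update: mean of the data weighted by u^m
        centers = [
            sum((u[j][i] ** m) * values[j] for j in range(len(values))) /
            sum(u[j][i] ** m for j in range(len(values)))
            for i in range(c)
        ]
    return centers, u
```

Both updates are independent per pixel, which is why the clustering maps well onto the single and multiple GPU configurations the document benchmarks.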
High performance low leakage power full subtractor circuit design using rate ... - eSAT Publishing House
Graphics Processing Unit GPU is a processor or electronic chip for graphics. GPUs are massively parallel processors used widely used for 3D graphic and many non graphic applications. As the demand for graphics applications increases, GPU has become indispensable. The use of GPUs has now matured to a point where there are countless industrial applications. This paper provides a brief introduction on GPUs, their properties, and their applications. Matthew N. O. Sadiku | Adedamola A. Omotoso | Sarhan M. Musa "Graphics Processing Unit: An Introduction" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1 , December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29647.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29647/graphics-processing-unit-an-introduction/matthew-n-o-sadiku
IRJET- Kidney Stone Classification using Deep Neural Networks and Facilitatin...IRJET Journal
The document discusses classifying kidney stone images using deep neural networks and facilitating diagnosis using IoT. Kidney stone images are acquired and preprocessed by converting to grayscale, enhancing, and segmenting the area of interest. Texture features are extracted using active contour segmentation and classified using a deep neural network model. The results, including stone type and treatment recommendations, are sent to the cloud where doctors and patients can access them, allowing automated diagnosis without human intervention.
Prediction based lossless medical image compressionIAEME Publication
This document discusses lossless medical image compression using the CALIC algorithm. It begins with an introduction to the need for medical image compression to reduce storage requirements and transmission times. It then provides details on the CALIC lossless compression algorithm, which uses context-based adaptive prediction and arithmetic coding. The algorithm is described as having two modes (binary and continuous-tone) to improve compression of images with different content. Experimental results on medical images show that CALIC achieves higher compression ratios than JPEG-LS. The document concludes that CALIC's context-based prediction allows it to achieve good lossless compression performance for medical images.
Intensity Enhancement in Gray Level Images using HSV Color Coding TechniqueIRJET Journal
This document discusses techniques for enhancing the intensity of gray scale images using HSV color space coding. It begins with an abstract discussing the motivation to increase image clarity and reduce errors from fatigue. Section 1 provides an introduction to image processing and enhancement. Section 1.1 discusses digital images, including types such as black and white, color, binary, and indexed color images. Section 2 covers hardware used in image processing like lights. Section 3 discusses linear filters that can perform operations like smoothing and sharpening through convolution.
IRJET - Plant Disease Detection using ML and UAVIRJET Journal
This document describes a proposed system to detect plant diseases using machine learning and unmanned aerial vehicles. The system has two parts: 1) building a UAV equipped with a camera to automatically capture images of plant leaves, and 2) training a neural network model to detect diseases by classifying the images. The UAV would be programmed to autonomously survey farm areas and capture images. These images would then be used to train a neural network model to accurately identify plant diseases from leaf images. The goal is to automate disease detection to help farmers identify infections early and minimize crop failures.
A study on the importance of image processing and its applications (eSAT Publishing House)
This document describes a hybrid genetic algorithm fuzzy neural network controller (GAFLNNC) for controlling a servo motor. The GAFLNNC tunes the parameters of a fuzzy neural network controller for the servo motor using a genetic algorithm, allowing the membership functions, rule outputs, and other parameters to be optimized to minimize error for the servo motor system. The controller architecture uses the fuzzy neural network controller in a PID structure. Simulation results showed the GAFLNNC improved settling time and rise time, and reduced overshoot and error metrics compared to a traditional controller when subjected to load variations.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Overall fuzzy logic control strategy of direct driven PMSG wind turbine conne... (IJECEIAES)
The fuzzy logic strategies reported in the literature on the control of a direct-drive permanent magnet synchronous generator (PMSG) connected to the grid are limited in inclusiveness and efficiency, so this paper proposes an overall control based on fuzzy logic and anti-windup compensation. To address the inadequacy of hill climb search (HCS) MPPT with a fixed step size, fuzzy logic is introduced in the "generating rotor speed reference" stage to overcome the oscillations and slowness of the traditional method. PI controllers are replaced by anti-windup fuzzy logic controllers in the "machine side control" and "grid side control" stages to regulate the reference parameters. Comparison tests against classical methods were then implemented under varying climatic conditions. The results demonstrate that the developed control is superior to other methods in response time (less than 4.528E-04 s), precision (an overshoot of about 0.41%), and quality of produced energy (91% efficiency), verifying the feasibility and effectiveness of this algorithm for a grid-connected PMSG wind turbine.
Power Usage Effectiveness (PUE) is a metric used to measure data center infrastructure efficiency. While PUE provides a simple and useful ratio, it does not capture many important factors like resilience, load diversity, and server utilization. Additional metrics are needed to fully understand efficiency opportunities and benchmark the performance of the IT equipment itself. PUE should be considered as just one aspect of data center efficiency measurement and management.
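The PUE ratio described above is simple enough to state directly. A minimal sketch (the function name is illustrative, not from the document) shows the metric and, by the same token, its blind spot: it says nothing about how efficiently the IT load itself is being used.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT load.

    1.0 is the theoretical ideal; every watt of cooling, lighting, or
    power-distribution overhead pushes the ratio above 1.0.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw


# A facility drawing 1500 kW to power 1000 kW of IT equipment:
print(pue(1500.0, 1000.0))  # 1.5
```

Note that a data center running its servers nearly idle can report the same PUE as one running them fully utilized, which is exactly why the document argues for additional metrics.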
The document describes an image exploitation system developed for airborne surveillance using unmanned aerial vehicles. The system processes real-time video and flight data acquired by the UAV. It uses image processing algorithms like enhancement, filtering, target tracking, and geo-referencing of targets. The system displays mission parameters in real-time on multiple displays. It was found to be qualified for surveillance applications. Sample results are presented.
International Journal of Computational Engineering Research (IJCER), ijceronline
This document presents a novel D.C. motor protection system developed using NI LabVIEW. The system is able to protect D.C. motors within fractions of seconds against different types of faults. It provides real-time protection and monitoring of motors. The LabVIEW software is divided into four sections - a test panel to input values, a preset value section, a fault indication section, and a waveform section. The system was tested on a prototype that uses NI ELVIS instruments and sensors to acquire test values and compare them to preset thresholds. If a test value exceeds the preset value, the software stops the motor and identifies the type of fault. This provides a powerful and accurate solution for D.C. motor protection
GPUs are highly parallel and programmable processors that were originally designed for graphics processing but are now commonly used for general purpose computing. GPU power is increasing much faster than CPU power, with estimates of a 570x increase in GPU power versus a 3x increase in CPU power over 6 years. GPUs excel at applications with large datasets, high parallelism, and minimal data dependencies, such as molecular dynamics, physics simulations, and database operations. Programming models for GPUs follow a single-program multiple-data approach to maximize parallel processing across many cores.
Medical Image Processing on NVIDIA TK1/TX1 (NVIDIA Taiwan)
This document discusses using NVIDIA TK1/TX1 platforms for medical image processing. It summarizes segmenting brain MRI images using fuzzy c-means clustering on these platforms. Performance is evaluated for single and multiple GPU implementations using different precision formats and memory copy modes. Texture-based image processing for mammographic images is also discussed.
High performance low leakage power full subtractor circuit design using rate ... (eSAT Publishing House)
Extracting interesting knowledge from versions of dynamic xml documents (eSAT Publishing House)
This document evaluates the performance of different node types in a mobile ad hoc network (MANET) using the dynamic source routing (DSR) protocol. It simulates static nodes, dynamic nodes that move randomly, and dynamic nodes following trajectory paths. The OPNET simulator is used to compare performance based on data dropped, load, media access delay, and network load under an FTP traffic load. The results found that dynamic nodes generally had better performance than static nodes across the metrics, while dynamic nodes following trajectories performed best, experiencing lower data dropped, load, delays and network load.
The document discusses a study on the properties of banana fiber reinforced composites. Banana fibers were treated with sodium hydroxide and used to reinforce both epoxy and vinyl ester resin matrices. Coconut shell powder was also used along with the fibers. Various mechanical properties including tensile strength, flexural strength, and impact strength were tested for the untreated and treated fiber composites. The treated fiber composites showed improved mechanical properties compared to the untreated fibers due to better adhesion between the fiber and matrix. A hybrid composite containing both banana fiber and coconut shell powder showed higher strength values than the banana fiber only composite.
New optimization scheme for cooperative spectrum sensing taking different snr... (eSAT Publishing House)
Contractual implications of cash flow on owner and contractor in villa constr... (eSAT Publishing House)
This document summarizes a study analyzing the contractual implications of cash flow on owners and contractors for villa construction projects in Oman. Data from 25 villa projects was used to calculate the minimum fund contractors require to continue work in cases of delayed interim payments. The analysis found that for a maximum 4-week payment delay, contractors require on average 8.5% of the contract value in minimum funds. Delayed payments can result in work stoppages, delayed project completion, penalties for the contractor, and even project incompletion. Ensuring adequate minimum funds mitigates these risks for contractors while also helping owners receive projects on schedule.
This document summarizes the performance analysis of VRLA batteries under continuous operation. It discusses testing various capacity VRLA battery banks to analyze electrical and thermal characteristics. The batteries were tested with 80% depth of discharge over 32-43 hours. A battery regenerator was used to reduce sulfation and a battery measurement system monitored individual cell voltages. Testing showed battery capacity and lifespan increased after regeneration, with backups extending 1-2 hours. Larger 550Ah-682Ah batteries showed greater improvements than the 300Ah batteries tested. Regenerating existing batteries can save significant power compared to replacing them.
Abstract
The purpose of this review paper is to compare executing the seam carving algorithm sequentially on a traditional CPU (central processing unit) with executing it in parallel on a modern CUDA (Compute Unified Device Architecture) enabled GPU (graphics processing unit). Seam carving is a content-aware image resizing method proposed by Avidan and Shamir of MERL.[1] It works by identifying seams, or paths of least importance, through an image; these seams can be removed or inserted to change the size of the image. The success of this algorithm depends on several factors: the number of objects in the picture, the size of the monotonous background, and the energy function used. The purpose of the algorithm is to reduce image distortion in applications where images cannot be displayed at their original size. CUDA is a parallel architecture for GPUs, introduced by the Nvidia Corporation in 2007. Besides their primary function, rendering graphics, GPUs can also be used for general-purpose computing (GPGPU). A CUDA-enabled GPU lets its user harness massive parallelism in regular computations. If an algorithm can be parallelised, using a GPU significantly improves performance and reduces the load on the CPU. The implementation of seam carving involves large matrix calculations that can be performed in parallel to speed up execution of the algorithm as a whole. The entire algorithm cannot be run in parallel, however, so some parts still require sequential computation on a CPU.
Keywords: Seam Carving, CUDA, Parallel Processing, GPGPU, CPU, GPU, Parallel Computing.
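The matrix computations the abstract refers to can be sketched sequentially in a few lines. The function and variable names below are illustrative rather than taken from the paper, and the gradient-based energy function is just one common choice; a CUDA implementation would parallelise the per-pixel work of the energy map and the per-row relaxation of the dynamic-programming table, while the backtracking step remains essentially sequential.

```python
import numpy as np

def energy_map(gray):
    """Simple gradient-magnitude energy (one of many possible energy functions)."""
    dx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    dy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    return dx + dy

def min_vertical_seam(energy):
    """Dynamic programming: build the cumulative minimal-energy table M,
    then backtrack the cheapest top-to-bottom 8-connected path."""
    h, w = energy.shape
    M = energy.astype(float).copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            M[i, j] += M[i - 1, lo:hi].min()
    seam = [int(np.argmin(M[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(M[i, lo:hi])))
    return seam[::-1]  # column index of the seam in each row, top to bottom

def remove_seam(gray, seam):
    """Delete one pixel per row, narrowing the image by one column."""
    h, w = gray.shape
    return np.array([np.delete(gray[i], seam[i]) for i in range(h)])
```

Removing k columns repeats this energy/DP/remove cycle k times, which is why the repeated matrix passes dominate the runtime and are the natural target for GPU acceleration.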
Study of Energy Efficient Images with Just Noticeable Difference Threshold Ba... (ijtsrd)
This document presents a novel method for producing energy-efficient images using a Feature Transform based Just Noticeable Difference Threshold (FTJNDT) model. The proposed method aims to reduce image energy consumption on displays like OLED by lowering pixel luminance below the just-noticeable difference threshold while maintaining perceptual quality. The FTJNDT model determines individual luminance thresholds for each image block based on visual saliency and non-linear modulation functions. An optimization framework is used to estimate modulation parameters and feature values using an objective image quality assessment. Experimental results showed the method reduced image energy consumption by an average of 4.31% compared to original images.
This document presents Jeevn-Net, a new neural network architecture for brain tumor segmentation and overall survival prediction. Jeevn-Net uses a cascaded U-Net structure with two U-Nets and applies auto-encoder regularization. It takes in MRI scans and outputs a segmented tumor image with extracted features. Random forest regression is then used to predict survival based on these features. The network achieves state-of-the-art performance for brain tumor segmentation and survival prediction.
The development of supervisory system for electricity control in smgg, guntur... (eSAT Journals)
Abstract The main intention of this paper is to build a prototype model to supervise electricity control and monitoring for ease of operation, power saving, and manpower reduction. The paper also aims to show how to build application-oriented designs that simplify life using open-source hardware and software tools. The scarcity of power is well known, and many techniques exist to monitor and control it; what makes this paper new is that the work is developed entirely with open-source hardware and software, following lumped-matter approaches and fundamental rules of physics. Commercial SCADA software is priced in multiples of thousands of Indian rupees; the authors instead developed SCADA software using open-source tools that interacts with the hardware in real time, a much simpler and more economical approach. The authors built a prototype model for St. Marys Group of Institutions, Guntur, AP, India. The paper is developed in the following manner: 1) problem identification in the organization, 2) practical concept development and supplementary mathematical work, 3) flow chart of the working mechanism and architectural development, 4) hardware and software development according to the architecture, and 5) results and discussion. SCADA stands for Supervisory Control and Data Acquisition. The paper combines several fields of engineering and physical science: sensor technology, system-on-chip, microcontrollers, electromagnetism, and software application development. The entire electrification routing in the organization is connected to the SCADA software through a serial connection to a centralized personal computer. Sensors interfaced in the server rooms monitor critical room temperature; they feed their data to SCADA, and if the readings are out of range, the observer switches the AC on. Power control takes place in the same manner. Keywords: SCADA, Open source, Hardware, Software, Mathematics, Electricity, Monitoring, etc.
Design of power and delay efficient 32 bit x 32 bit multi precision multiplie... (eSAT Journals)
Multiplication is one of the most frequently encountered arithmetic operations in DSP applications. The proposed multi-precision (MP) multiplier incorporates variable precision, parallel processing (PP), and dedicated MP operand scheduling to provide optimum performance under various operating conditions. The building blocks of the proposed reconfigurable multiplier can either work as independent smaller-precision multipliers or operate in parallel to form higher-precision multipliers. To reduce power consumption and delay, the design replaces the razor flip-flop and voltage-scaling unit: a look-up table (LUT), together with a dynamic voltage and frequency management system, configures the multiplier to operate at low power consumption. The LUT stores the minimum voltages required for 8-bit, 16-bit, and 32-bit multiplications. The carry-propagation adder in the multiplier is replaced by a carry-select adder, achieving a considerable delay reduction. The MP multiplier also contains a frequency management unit that makes the multiplier operate at the proper frequency. Finally, the proposed MP multiplier can further benefit from an operand scheduler that rearranges the input data to determine the optimum voltage and frequency operating conditions for minimum power consumption. Experimental results show that the proposed MP multiplier provides a 14.55% reduction in power consumption and a 9.67% reduction in delay compared with a conventional razor-based DVS MP multiplier. When the MP design is combined with the LUT, parallel processing, and the operand scheduler, delay and power can be reduced to a great extent. This paper demonstrates that the MP multiplier architecture allows more aggressive frequency/supply-voltage scaling for improved power and delay efficiency.
Keywords: Computer arithmetic, low power design, multi-precision multiplier, Input Operands Scheduler (IOS), Look Up Table.
A novel and innovative method for designing of rf mems devices (eSAT Journals)
Abstract
The design complexity of RF MEMS devices is increasing at a fast rate, which requires more accurate design and simulation techniques. With upcoming technologies, accurate simulation capability is a necessity for designing smaller chips. Designing devices at the nano scale presents a number of unique challenges. As the scale of the individual device decreases and the complexity of the physical structure increases, the device characteristics depart from those predicted by many classically held modeling concepts. Furthermore, the difficulty of performing measurements on these devices means more emphasis must be placed on results obtained from theoretical characteristics. Modeling also allows new device structures to be rigorously investigated prior to fabrication. This paper reports a novel method of designing and simulating a MEMS-based inductor by co-simulation.
Keywords: Micro Electro Mechanical System, Radio Frequency, Co-Simulation, COMSOL Multiphysics, SOLIDWORKS, Computer Aided Design, Integrated Circuit
Performance analysis of sobel edge filter on heterogeneous system using OpenCL (eSAT Publishing House)
This document discusses performance analysis of the Sobel edge detection filter on heterogeneous systems using OpenCL. It begins with an introduction to OpenCL and describes its architecture, including the platform model, execution model, memory model, and programming model. It then provides an overview of GPUs and CPUs, comparing their architectures and number of cores. It also gives mathematical representations of image convolution and describes how the Sobel filter works. The document analyzes the performance of implementing Sobel edge detection using OpenCL on CPUs and GPUs and finds that GPUs provide much higher performance compared to CPUs.
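As a point of reference for the convolution and kernels such a filter applies, here is a minimal sequential NumPy sketch (function names are illustrative; the document's implementation is in OpenCL, and a real GPU kernel would assign one output pixel per work-item). Technically this sliding-window sum is correlation rather than flipped convolution, but the gradient magnitude is unaffected.

```python
import numpy as np

# Standard 3x3 Sobel masks for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
KY = KX.T

def convolve2d(img, kernel):
    """Valid-mode 2-D sliding-window sum (correlation) of img with kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    """Gradient magnitude sqrt(gx^2 + gy^2) from the two Sobel responses."""
    gx = convolve2d(img, KX)
    gy = convolve2d(img, KY)
    return np.hypot(gx, gy)
```

Because every output pixel depends only on a 3x3 neighbourhood of the input, the computation is embarrassingly parallel, which is why the document observes much higher throughput on the GPU than on the CPU.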
Real time energy data acquisition and alarming system for monitoring power co... (eSAT Publishing House)
Real time energy data acquisition and alarming system for monitoring power co... (eSAT Journals)
Abstract Manufacturing and the processes involved consume substantial amounts of energy, so energy management techniques are needed in the power-tool manufacturing industry. The maintenance department in such an industry may have different load centers, such as the machine shop, winding shop, utility, and assembly shop. Readings on these meters are taken manually, a process that is time consuming and error-prone, so a computerized energy data acquisition system is needed. By providing an alarming system, energy losses can be monitored. Through this system design, man-made errors are automatically eliminated, and energy consumption reports can be read through Microsoft Excel as well as sent by email. Keywords: Energy losses, Microcontroller, RS 232 and PC
A METHODOLOGY FOR IMPROVEMENT OF ROBA MULTIPLIER FOR ELECTRONIC APPLICATIONS (VLSICS Design)
This paper proposes an approximate multiplier that is high speed yet energy efficient. The approach is to round the operands to the nearest power of two, so the computation-intensive part of the multiplication is omitted, improving speed and energy consumption. The efficiency of the proposed multiplier is evaluated by comparing its performance with those of some approximate and exact multipliers using different design parameters. The proposed approach combines the conventional RoBA multiplier with a Kogge-Stone parallel prefix adder. The results revealed that, in most (all) cases, the newly designed RoBA multiplier architectures outperformed the corresponding approximate (exact) multipliers. The improved RoBA multiplier can be used in voice or image smoothing applications in DSP.
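The rounding idea behind RoBA can be sketched in software. This is a simplified unsigned-integer sketch with illustrative function names, not the paper's hardware design: it uses the identity a*b = ar*b + a*br - ar*br + (a-ar)*(b-br), drops the last term, and notes that every surviving product involves a power of two, i.e. a shift in hardware.

```python
def round_to_pow2(x: int) -> int:
    """Round a non-negative integer to the nearest power of two (ties round up).
    A hardware RoBA rounder differs in detail; this captures the idea."""
    if x == 0:
        return 0
    lower = 1 << (x.bit_length() - 1)   # largest power of two <= x
    upper = lower << 1                  # smallest power of two > x
    return upper if (upper - x) <= (x - lower) else lower

def roba_multiply(a: int, b: int) -> int:
    """Approximate product a*b ~= ar*b + a*br - ar*br, where ar and br are the
    operands rounded to powers of two, so each product is shift-friendly.
    The approximation error is exactly -(a - ar)*(b - br)."""
    ar, br = round_to_pow2(a), round_to_pow2(b)
    return ar * b + a * br - ar * br
```

When either operand is already a power of two the result is exact, and the error term (a-ar)*(b-br) stays small relative to the product, which is why such multipliers are acceptable in error-tolerant DSP paths like smoothing filters.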
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
The interconnecting mechanism for monitoring regular domestic conditioneSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Design, development and implementation of novel ideaeSAT Journals
Abstract Low cost low power RF modem is very efficient in providing excellent communication at various ranges and also for low power
applications. A novel idea of motor control using PWM technique has been developed and controlled using wireless RF
transreceiver. The combination of transmitter and receiver is termed as RF transreceiver is used to exchange information in half –
duplex mode. The control of potentiometer from 0 – 5 volts controls the speed of dc motor.PIC microcontroller is used for
programming and it is downloaded to PIC microcontroller circuit used for wide range of applications the Max 232 is a driver used
to convert TTL/CMOS logic to RS 232 levels when microcontroller is serially communicated with PC. Wireless technology using
RF modem controls the motor in 30 meters range is very useful in remote and rural areas where we cannot visit directly and
frequently and monitor certain parameters.
Keywords: RF Modem, DC Motor, PIC Microcontroller, PWM
Design of wheelchair using finger operation with image processing algorithmseSAT Journals
Abstract This paper may be helpful in making the living of handicapped people easy which will be helpful for the handicapped people to drive their wheelchair through figure operation. It is an opportunity to be a handicapped as independent. System comprises of integration of finger operation using digital image processing and embedded technology.According to the World Health Organization (WHO), between the 7 and 10% of the population worldwide suffer from some physical disability. In Latin-America the physically disabled are estimated in 55 million people, which represent the 9% of its total population. This census indicates that the most common disability is motor, followed by blindness, deafness, intellectual and language. All alone, the motor-disabled achieve 20 million in Latin-America and there is an anticipated continued growth due to increasing aging, longevity and accident related injuries. Wheelchairs make up a significant portion of the mobility assistive devices in use for those. Since the 1930s, the design concept of a wheelchair has been the same: a main frame, two large rear wheels and two small front wheels called casters. This basic design has been used to develop many of today’ wheelchairs by applying slight design modifications to produce lighter, more durable and more comfortable wheelchairs for people who use them on a daily basis. Finally it means that is affordable and beneficial by the people. Keywords: Digital Image Processing, Embedded Technology, Finger operated Wheelchair
Design of wheelchair using finger operation with image processing algorithmseSAT Publishing House
The document describes the design of a finger-operated wheelchair using digital image processing and embedded technology. The system would allow handicapped individuals to control a wheelchair through finger gestures detected by a camera and processed by a microcontroller. The wheelchair would be navigated based on finger position analysis and gesture recognition algorithms to move the wheelchair forward, backward, left, and right.
This document summarizes a research paper that proposes a new approach called DiffP for more energy efficient ad hoc reprogramming of sensor networks. DiffP aims to mitigate the effects of program layout modifications and maximize similarity between old and new software. It also organizes global variables in a novel way to eliminate the effect of variable shifting. The document provides background on challenges with reprogramming deployed sensor networks due to limited energy, processing and memory resources. It reviews related work on dissemination protocols and reprogramming schemes, noting limitations such as producing large patches from layout changes or variable shifts. DiffP is presented as a potential improvement over existing approaches.
Similar to Medical imaging computing based on graphical processing units for high performance computing (20)
Hudhud cyclone caused extensive damage in Visakhapatnam, India in October 2014, especially to tree cover. This will likely impact the local environment in several ways: increased air pollution as trees absorb less; higher temperatures without tree canopy; increased erosion and landslides. It also created large amounts of waste from destroyed trees. Proper management of solid waste is needed to prevent disease spread. Suggested measures include restoring damaged plants, building fountains to reduce heat, mandating light-colored buildings, improving waste management, and educating public on health risks. Overall, changes are needed to water, land, and waste practices to rebuild the environment after the cyclone removed green cover.
Impact of flood disaster in a drought prone area – case study of alampur vill...eSAT Publishing House
1) In September-October 2009, unprecedented heavy rainfall and dam releases caused widespread flooding in Alampur village in Mahabub Nagar district, a historically drought-prone area.
2) The flood damaged or destroyed homes, buildings, infrastructure, crops, and documents. It displaced many residents and cut off the village.
3) The socioeconomic conditions and mud-based construction of homes in the village exacerbated the flood's impacts, making damage more severe and recovery more difficult.
The document summarizes the Hudhud cyclone that struck Visakhapatnam, India in October 2014. It describes the cyclone's formation, rapid intensification to winds of 175 km/h, and landfall near Visakhapatnam. The cyclone caused extensive damage estimated at over $1 billion and at least 109 deaths in India and Nepal. Infrastructure like buildings, bridges, and power lines were destroyed. Crops and fishing boats were also damaged. The document then discusses coping strategies and improvements needed to disaster management plans to better prepare for future cyclones.
Groundwater investigation using geophysical methods a case study of pydibhim...eSAT Publishing House
This document summarizes the results of a geophysical investigation using vertical electrical sounding (VES) methods at 13 locations around an industrial area in India. The VES data was interpreted to generate geo-electric sections and pseudo-sections showing subsurface resistivity variations. Three main layers were typically identified - a high resistivity topsoil, a weathered middle layer, and a basement rock. Pseudo-sections revealed relatively more weathered areas in the northwest and southwest. Resistivity sections helped identify zones of possible high groundwater potential based on low resistivity anomalies sandwiched between more resistive layers. The study concluded the electrical resistivity method was useful for understanding subsurface geology and identifying areas prospective for groundwater exploration.
Flood related disasters concerned to urban flooding in bangalore, indiaeSAT Publishing House
1. The document discusses urban flooding in Bangalore, India. It describes how factors like heavy rainfall, population growth, and improper land use have contributed to increased flooding in the city.
2. Flooding events in 2013 are analyzed in detail. A November rainfall caused runoff six times higher than the drainage capacity, inundating low-lying residential areas.
3. Impacts of urban flooding include disrupted daily life, damaged infrastructure, and decreased economic activity in affected areas. The document calls for improved flood management strategies to better mitigate urban flooding risks in Bangalore.
Enhancing post disaster recovery by optimal infrastructure capacity buildingeSAT Publishing House
This document discusses enhancing post-disaster recovery through optimal infrastructure capacity building. It presents a model to minimize the cost of meeting demand using auxiliary capacities when disaster damages infrastructure. The model uses genetic algorithms to select optimal capacity combinations. The document reviews how infrastructure provides vital services supporting recovery activities and discusses classifying infrastructure into six types. When disaster reduces infrastructure services, a gap forms between community demands and available support, hindering recovery. The proposed research aims to identify this gap and optimize capacity selection to fill it cost-effectively.
Effect of lintel and lintel band on the global performance of reinforced conc...eSAT Publishing House
This document analyzes the effect of lintels and lintel bands on the seismic performance of reinforced concrete masonry infilled frames through non-linear static pushover analysis. Four frame models are considered: a frame with a full masonry infill wall; a frame with a central opening but no lintel/band; a frame with a lintel above the opening; and a frame with a lintel band above the opening. The results show that the full infill wall model has 27% higher stiffness and 32% higher strength than the model with just an opening. Models with lintels or lintel bands have slightly higher strength and stiffness than the model with just an opening. The document concludes lintels and lintel
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...eSAT Publishing House
1) A cyclone with wind speeds of 175-200 kph caused massive damage to the green cover of Gitam University campus in Visakhapatnam, India. Thousands of trees were uprooted or damaged.
2) A study assessed different types of damage to trees from the cyclone, including defoliation, salt spray damage, damage to stems/branches, and uprooting. Certain tree species were more vulnerable than others.
3) The results of the study can help in selecting more wind-resistant tree species for future planting and reducing damage from future storms.
Wind damage to buildings, infrastrucuture and landscape elements along the be...eSAT Publishing House
1) A visual study was conducted to assess wind damage from Cyclone Hudhud along the 27km Visakha-Bheemli Beach road in Visakhapatnam, India.
2) Residential and commercial buildings suffered extensive roof damage, while glass facades on hotels and restaurants were shattered. Infrastructure like electricity poles and bus shelters were destroyed.
3) Landscape elements faced damage, including collapsed trees that damaged pavements, and debris in parks. The cyclone wiped out over half the city's green cover and caused beach erosion around protected areas.
1) The document reviews factors that influence the shear strength of reinforced concrete deep beams, including compressive strength of concrete, percentage of tension reinforcement, vertical and horizontal web reinforcement, aggregate interlock, shear span-to-depth ratio, loading distribution, side cover, and beam depth.
2) It finds that compressive strength of concrete, tension reinforcement percentage, and web reinforcement all increase shear strength, while shear strength decreases as shear span-to-depth ratio increases.
3) The distribution and amount of vertical and horizontal web reinforcement also affects shear strength, but closely spaced stirrups do not necessarily enhance capacity or performance.
Role of voluntary teams of professional engineers in dissater management – ex...eSAT Publishing House
1) A team of 17 professional engineers from various disciplines called the "Griha Seva" team volunteered after the 2001 Gujarat earthquake to provide technical assistance.
2) The team conducted site visits, assessments, testing and recommended retrofitting strategies for damaged structures in Bhuj and Ahmedabad. They were able to fully assess and retrofit 20 buildings in Ahmedabad.
3) Factors observed that exacerbated the earthquake's impacts included unplanned construction, non-engineered buildings, improper prior retrofitting, and defective materials and workmanship. The professional engineers' technical expertise was crucial for effective post-disaster management.
This document discusses risk analysis and environmental hazard management. It begins by defining risk, hazard, and toxicity. It then outlines the steps involved in hazard identification, including HAZID, HAZOP, and HAZAN. The document presents a case study of a hypothetical gas collecting station, identifying potential accidents and hazards. It discusses quantitative and qualitative approaches to risk analysis, including calculating a fire and explosion index. The document concludes by discussing hazard management strategies like preventative measures, control measures, fire protection, relief operations, and the importance of training personnel on safety.
Review study on performance of seismically tested repaired shear wallseSAT Publishing House
This document summarizes research on the performance of reinforced concrete shear walls that have been repaired after damage. It begins with an introduction to shear walls and their failure modes. The literature review then discusses the behavior of original shear walls as well as different repair techniques tested by other researchers, including conventional repair with new concrete, jacketing with steel plates or concrete, and use of fiber reinforced polymers. The document focuses on evaluating the strength retention of shear walls after being repaired with various methods.
Monitoring and assessment of air quality with reference to dust particles (pm...eSAT Publishing House
This document summarizes a study on monitoring and assessing air quality with respect to dust particles (PM10 and PM2.5) in the urban environment of Visakhapatnam, India. Sampling was conducted in residential, commercial, and industrial areas from October 2013 to August 2014. The average PM2.5 and PM10 concentrations were within limits in residential areas but moderate to high in commercial and industrial areas. Exceedance factor levels indicated moderate pollution for residential areas and moderate to high pollution for commercial and industrial areas. There is a need for management measures like improved public transport and green spaces to combat particulate air pollution in the study areas.
Low cost wireless sensor networks and smartphone applications for disaster ma...eSAT Publishing House
This document describes a low-cost wireless sensor network and smartphone application system for disaster management. The system uses an Arduino-based wireless sensor network comprising nodes with various sensors to monitor the environment. The sensor data is transmitted to a central gateway and then to the cloud for analysis. A smartphone app connected to the cloud can detect disasters from the sensor data and send real-time alerts to users to help with early evacuation. The system aims to provide low-cost localized disaster detection and warnings to improve safety.
Coastal zones – seismic vulnerability an analysis from east coast of indiaeSAT Publishing House
This document summarizes an analysis of seismic vulnerability along the east coast of India. It discusses the geotectonic setting of the region as a passive continental margin and reports some moderate seismic activity from offshore in recent decades. While seismic stability cannot be assumed given events like the 2004 tsunami, no major earthquakes have been recorded along this coast historically. The document calls for further study of active faults, neotectonics, and implementation of improved seismic building codes to mitigate vulnerability.
Can fracture mechanics predict damage due disaster of structureseSAT Publishing House
This document discusses how fracture mechanics can be used to better predict damage and failure of structures. It notes that current design codes are based on small-scale laboratory tests and do not account for size effects, which can lead to more brittle failures in larger structures. The document outlines how fracture mechanics considers factors like size effect, ductility, and minimum reinforcement that influence the strength and failure behavior of structures. It provides examples of how fracture mechanics has been applied to problems like evaluating shear strength in deep beams and investigating a failure of an oil platform structure. The document argues that fracture mechanics provides a more scientific basis for structural design compared to existing empirical code provisions.
This document discusses the assessment of seismic susceptibility of reinforced concrete (RC) buildings. It begins with an introduction to earthquakes and the importance of vulnerability assessment in mitigating earthquake risks and losses. It then describes modeling the nonlinear behavior of RC building elements and performing pushover analysis to evaluate building performance. The document outlines modeling RC frames and developing moment-curvature relationships. It also summarizes the results of pushover analyses on sample 2D and 3D RC frames with and without shear walls. The conclusions emphasize that pushover analysis effectively assesses building properties but has limitations, and that capacity spectrum method provides appropriate results for evaluating building response and retrofitting impact.
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...eSAT Publishing House
1) A 6.0 magnitude earthquake occurred off the coast of Paradip, Odisha in the Bay of Bengal on May 21, 2014 at a depth of around 40 km.
2) Analysis of magnetic and bathymetric data from the area revealed the presence of major lineaments in NW-SE and NE-SW directions that may be responsible for seismic activity through stress release.
3) Movements along growth faults at the margins of large Bengal channels, due to large sediment loads, could also contribute to seismic events by triggering movements along the faults.
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...eSAT Publishing House
This document discusses the effects of Cyclone Hudhud on the development of Visakhapatnam as a smart and green city through a case study and preliminary surveys. The surveys found that 31% of participants had experienced cyclones, 9% floods, and 59% landslides previously in Visakhapatnam. Awareness of disaster alarming systems increased from 14% before the 2004 tsunami to 85% during Cyclone Hudhud, while awareness of disaster management systems increased from 50% before the tsunami to 94% during Hudhud. The surveys indicate that initiatives after the tsunami improved awareness and preparedness. Developing Visakhapatnam as a smart, green city should consider governance
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Medical imaging computing based on graphical processing units for high performance computing
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 03 Special Issue: 05 | May-2014 | NCEITCS-2014, Available @ http://www.ijret.org 17
MEDICAL IMAGING COMPUTING BASED ON GRAPHICAL PROCESSING UNITS FOR HIGH PERFORMANCE COMPUTING
K. Suresh¹, L. Gangadhar², M. Vidya³
¹Assistant Professor, Department of IT, AITS, Autonomous Institution, Rajampet, Kadapa (Dt.), AP
²Student, Department of IT, AITS, Autonomous Institution, Rajampet, Kadapa (Dt.), AP
³Student, Department of IT, AITS, Autonomous Institution, Rajampet, Kadapa (Dt.), AP
Abstract
The design of the GPU (Graphical Processing Unit) is well suited to expressing data-parallel computations, because the GPU is specialized for parallel processing. Huge volumes of digital medical images are collected every day, and medical imaging therefore creates demand for improved diagnosis and procedures. This survey covers the graphical processing computations and the hardware required to compute and deliver better information for the diagnosis of cancer and its treatment using radiation therapy. Such techniques are important for increasing the quality of clinical medical image data, producing enriched data, improving the accuracy of treatment, and reducing complications.
Keywords— High Performance Computing, CPU, GPU, Medical Image Computing.
----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
Computer-assisted medical treatment has a growing impact on lifesaving patient diagnosis today. Visualization techniques help doctors minimize overheads and reach an accurate understanding of each case; the most challenging role of a doctor is to perform operations successfully, taking potentially lifesaving decisions with the help of medical imaging. Medical imaging plays a vital role in reducing the workload of clinicians all over the world, yet clinicians are also under pressure from its rapid evolution to understand patients' problems accurately. This can be clearly observed in the area of medical image computing and the associated diagnosis and treatment: the visualization system needs to be clear and deliver quality imaging so that operations can be performed as intended. The Central Processing Unit (CPU) is a serial von Neumann processor, highly optimized to perform a series of computations one after another. Today's computer systems are taking a new turn, complementing traditional processors with the Graphics Processing Unit (GPU), because the GPU [2] exploits massive parallelism as a high-performance accelerator platform for parallel computing, offering an efficient framework and excellent performance on demanding tasks. The GPU can increase performance compared with the CPUs currently serving in medical imaging, because it delivers quality digital images that help doctors in treatment.
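The contrast between serial CPU-style and data-parallel GPU-style execution can be sketched with a simple per-pixel operation. In this illustrative sketch (the image and threshold values are invented, and NumPy's vectorized form merely stands in for the GPU's one-thread-per-pixel model), both versions compute the same result, but the second expresses the computation in the data-parallel shape a GPU accelerates:

```python
import numpy as np

# Illustrative 4x4 "medical image" of intensity values.
image = np.array([[10, 50,  90, 130],
                  [20, 60, 100, 140],
                  [30, 70, 110, 150],
                  [40, 80, 120, 160]], dtype=np.float32)

def threshold_serial(img, t):
    """CPU-style: visit one pixel at a time, in sequence."""
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 255.0 if img[i, j] > t else 0.0
    return out

def threshold_parallel(img, t):
    """GPU-style: conceptually one thread per pixel; the same
    operation is applied to every element at once."""
    return np.where(img > t, 255.0, 0.0)

# Both formulations agree; only the execution model differs.
assert np.array_equal(threshold_serial(image, 100.0),
                      threshold_parallel(image, 100.0))
```

Because each output pixel depends only on the corresponding input pixel, the parallel form scales with the number of processing units rather than with image size alone.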
The GPU graphics pipeline originally offered fixed functionality with limited configurability, a restriction that made the hardware efficient to implement. Programmers were later allowed to program individual stages of the pipeline in high-level languages, enabling each application to implement its own style at the different stages of the rendering pipeline [3]. These high-level programming languages are associated with graphics application programming interfaces such as OpenGL and DirectX [12][13][14].
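The notion of a programmable pipeline stage can be sketched as an ordinary function applied independently to every vertex or pixel. The two stages below are hypothetical stand-ins, not OpenGL or DirectX API calls; all names and values are illustrative:

```python
import numpy as np

def vertex_stage(vertices, scale, offset):
    """A 'vertex shader' stage: the same small program runs on
    every vertex (scale then translate each 2-D vertex)."""
    return vertices * scale + offset

def pixel_stage(color, brightness):
    """A 'pixel shader' stage: brighten each fragment, clamped
    to the displayable range [0, 1]."""
    return np.clip(color * brightness, 0.0, 1.0)

triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
transformed = vertex_stage(triangle, scale=2.0, offset=np.array([0.5, 0.5]))
# transformed is [[0.5, 0.5], [2.5, 0.5], [0.5, 2.5]]
```

Since each vertex and each pixel is processed by the same program with no dependence on its neighbours, the hardware is free to run thousands of such invocations concurrently.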
Modern medical imaging technologies are among the most important tools for physicians [4]. To analyse modalities with varying dimensions or common attributes, the required transformations must be extracted from the imaging data within the common clinical workflow so that the patient's condition can be understood for diagnostic purposes. Scanning an image for quick diagnostic decisions is a routine part of treatment, which drives the need for more efficient image computation techniques [1]. This requires devising algorithms that are computationally efficient and that make optimal use of the computational power of the hardware.
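One example of the kind of per-pixel transform such algorithms apply is intensity windowing, a standard step when viewing scan data, which maps a chosen range of raw intensities onto the display range. Every output pixel depends only on its input pixel, which is exactly the data-parallel shape a GPU exploits. This is a minimal sketch; the window centre, width, and raw values are illustrative:

```python
import numpy as np

def window_level(img, center, width):
    """Map intensities in [center - width/2, center + width/2]
    linearly onto [0, 255], clamping values outside the window.
    Each pixel is independent, so the transform parallelizes trivially."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    scaled = (img - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0.0, 255.0)

# Illustrative raw scan values.
raw = np.array([-1000.0, 0.0, 40.0, 80.0, 3000.0])
display = window_level(raw, center=40.0, width=80.0)
# Values below the window clamp to 0, values above clamp to 255,
# and the window centre maps to the middle of the display range.
```

On a GPU, each pixel of the windowed image would be produced by its own thread, so the cost per image stays nearly flat as resolution grows until the device is saturated.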
The rest of the paper is organized as follows: Section 2 discusses graphical computing via the Graphics Processing Unit, Section 3 considers the medical image context and the medical applications that benefit from the GPU, and finally GPU-based medical imaging is presented.
2. GPU COMPUTING
Intel introduced the technology which allows CPU to slow
down, suspend or shut down [3]. The demand of hand held
devices also increased in early 90’s. To minimize power
consumption scientists started paying attention on sequential
improvement of hardware and software both. It has combined to
increase the performance and battery lifetime of hand held
devices [4].In terms of development of personal computers
power management there are three specifications power
2. IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
__________________________________________________________________________________________
Volume: 03 Special Issue: 05 | May-2014 | NCEITCS-2014, Available @ http://www.ijret.org 18
management, system design, and application. These three interlocked tracks helped drive that development [5]. In computing systems, power and performance management mainly concerns computer architecture, operating systems, CAD, compilers, and similar areas. The description here starts with the operating system. Operating systems have two major roles: providing an abstraction and managing resources such as CPU time, memory allocation, and disk quota. At the OS level, power performance can be enhanced by power management of I/O devices, low-power scheduling for processors, or by reducing memory power [6].
The era in which the principles of Moore's Law could be exploited by a single-core processor is rapidly closing, owing among other things to memory and clock-rate limits. To overcome these limitations, industry trends have moved to multicore and many-core processors. The continued development of multicore and massively multiprocessing architectures in recent years holds great promise: vendors such as IBM, AMD [19] and Oracle-SUN have presented prototypes with as many as 80 cores that can theoretically achieve more than 1 teraflop of computing performance, and 100-core processors are now available for general-purpose and high-performance computing. Over the years, the GPU has evolved from a highly specialized pixel processor into a more flexible, highly programmable architecture that can perform a wide range of data-parallel operations [4]. Later, the task of transforming geometry was also moved from the CPU to the GPU, one of the first steps toward the modern graphics pipeline. Because the processing of vertices and pixels is inherently parallel, the number of dedicated processing units increased rapidly, allowing commodity PCs to render ever more complex 3-D scenes [22] in tens of milliseconds.
Fig 1a. Comparison of CPU and GPU in terms of gigaflops
Fig 1b. Performance comparison in terms of memory bandwidth
As shown in Fig. 1(a) and 1(b), since 2000 the number of compute cores in GPU processors has doubled roughly every 16 months. Over the same period, GPU cores have become increasingly sophisticated and versatile, enriching their instruction set with a wide variety of control-flow mechanisms, support for double-precision floating-point arithmetic, built-in mathematical functions, a shared-memory model for inter-thread communication, atomic operations, and so forth. To sustain the increased computation throughput, GPU memory bandwidth has doubled every 19 months, and recent GPUs can achieve a peak memory bandwidth of 408 GB/s. With more computing cores, the peak performance of GPUs, measured in billions of floating-point operations per second (GFLOPS) [5], has been steadily increasing.
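The doubling periods quoted above imply concrete growth factors. The following back-of-the-envelope sketch (the 120-month span and the function names are illustrative choices, not vendor data) computes them:

```python
# Implied growth from the doubling periods quoted above:
# compute cores double every ~16 months, bandwidth every ~19 months.

def doublings(months: float, period: float) -> float:
    """Number of doublings that fit into a time span of `months`."""
    return months / period

def scale_factor(months: float, period: float) -> float:
    """Multiplicative growth after `months`, doubling every `period` months."""
    return 2.0 ** doublings(months, period)

# Over a decade (120 months):
core_growth = scale_factor(120, 16)       # roughly 181x more cores
bandwidth_growth = scale_factor(120, 19)  # roughly 80x more bandwidth

print(f"cores: ~{core_growth:.0f}x, bandwidth: ~{bandwidth_growth:.0f}x")
```

The gap between the two exponents is why arithmetic throughput outpaces memory bandwidth, and why latency hiding (discussed later) matters so much.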
Fig 2 GPU based Technologies for Medical Image
2.1 Objectives
1. To develop a high-level parallel design for reducing the radiation dose from CT scans.
2. To implement efficiently the reconstruction patterns that make low-dose CT practical: reconstruction that reduces radiation dose takes around 2 hours on a CPU, but about 2 minutes with CUDA [11][22], a 35-70x speedup that makes it clinically practical.
3. To develop adaptation mechanisms that operate dynamically, for example on a beating heart: only about 2% of surgeons can operate on a beating heart, and a patient stands to lose 1 point of IQ for every 10 minutes the heart is stopped. GPU-enabled real-time motion compensation can virtually "stop" the beating heart for the surgeon.
4. To define a virtualisation of the parallel software with low energy consumption.
The GPU programming model differs from CPU programming because of its unique hardware support. Even when a GPU port achieves speedup over CPU code, using the hardware resources efficiently enough to reach scalable high performance remains a difficult task. The main subtlety is that the GPU's massively threaded architecture is used to hide memory latency.
3. MEDICAL IMAGING CONTEXT
Upcoming surgical-intervention and clinical-diagnostic systems will benefit from modern modalities and visualization tools based on GPU technologies. For example, planning conventional radiotherapy for cancer treatment used to take days, but with GPU technology it takes hours or less.
Throughout the years, medical science has evolved towards
providing a better and longer life for every human being.
Medicine has embraced technological advances that make new tools and methods available. Medical imaging is one important field in fruitful use nowadays. Doctors can now have visualization equipment that gives them more information for diagnosis. Ultrasound visualization and magnetic resonance imaging are broadly used examples of medical imaging. Such techniques require intensive computational power, which may imply trading off quality to achieve reasonable performance, as illustrated in Figure 2 and Figure 3. However, the surfacing of
general purpose graphic processing units leads to a new
software paradigm that can handle a larger bulk of intense
computation requirements. Medical imaging is broadly used as
a diagnosis tool. X-rays and MRI give doctors a better
understanding of the condition of the patient and help them
decide the best way to treat them. There is still a lot of research
currently focused on improving these techniques. Engineers are
working on new and improved algorithms for processing the
information and providing doctors with better visualization
tools[8].
The continued development of multicore and massively
multiprocessing architectures in recent years holds great
promise for interventional setups. In particular, massively
multiprocessing graphics units with general-purpose
programming capabilities have emerged as front runners for
low-cost high-performance processing. High-performance computing, on the order of 1 TFLOPS, is available on commodity single-chip graphics processing units (GPUs) with power requirements not much greater than an office computer's. Multi-GPU systems with up to eight GPUs can be built in a single host and can provide a nominal processing capacity of 8 TFLOPS with less than 1,500 W power consumption under full load. GPU implementations, however, tend to be more challenging than multicore CPU implementations, both in development effort and in terms of achievable performance gain [6].
Hardware and architectural complexities in designing multicore
systems aside, perhaps as big a challenge is an overhaul of
existing application design methodologies to allow efficient
implementation on a range of massively multicore architectures.
As one quickly finds, direct adaptation of existing serial algorithms is more often than not either impossible due to hardware constraints or not computationally justified.
A significant share of the available algorithms require very intense computation over huge amounts of data. When these algorithms run as CPU implementations, the time consumed is long because of the sequential nature of the CPU architecture: all computations are executed consecutively on each piece of data, even when the pieces have no dependencies between them. This approach may still be adequate for procedures such as MRI visualization, where the time to get results is not critical. For interventions, however, response time plays an important role.
A fresher approach takes advantage of the intrinsically parallel nature of the processing, since the same computations can be performed on different pieces of data at the same time. A GPGPU implementation is therefore a promising way to achieve medical imaging with better time performance and no quality sacrifices, exploiting the parallel nature of image processing.
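The data-independence argument above can be made concrete with a toy per-pixel operation. `window_level` below is a hypothetical intensity mapping (not taken from any cited system), and a short list stands in for an image:

```python
# A per-pixel operation touches each pixel in isolation, so the
# iterations could all run at once on a GPU -- one thread per pixel.

def window_level(pixel: int, center: int = 128, width: int = 64) -> int:
    """Map a raw intensity into display range [0, 255] independently
    of every other pixel: no data dependencies between iterations."""
    lo, hi = center - width // 2, center + width // 2
    if pixel <= lo:
        return 0
    if pixel >= hi:
        return 255
    return (pixel - lo) * 255 // (hi - lo)

image = [90, 100, 128, 150, 200]       # toy 1-D "image"

# Sequential CPU style: one pixel after another.
serial = [window_level(p) for p in image]

# The same computation expressed as an order-independent map;
# on a GPU each element would be processed by its own thread.
parallel = list(map(window_level, image))

assert serial == parallel              # order truly does not matter
```

The final assertion simply demonstrates the order-independence that makes a parallel mapping legal.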
The performance improvement from using GPGPUs makes it possible to obtain results in shorter periods of time. The reduction in processing time may give room for larger data sets containing information at a higher resolution. In conclusion, introducing a GPGPU [16] implementation of the medical imaging algorithms can produce better-quality images in less time. Furthermore, medical imaging for interventions requires a real-time (RT) application with hard real-time constraints and low latency.
RT medical imaging poses a major challenge for engineers. While diagnostic visualizations (e.g. radiographs) are only softly constrained in time, visualization tools for interventions must be very precise. Visualization tools that help doctors during medical procedures have very tight constraints, as human lives are at stake during the procedures. There is no room for delays or
imprecise timing and doctors expect a good visualization
quality. In this scenario, a GPGPU implementation becomes
more than an alternative for getting results in a shorter time; it
becomes the option for meeting the real-time constraints that
medical interventions require while keeping a good quality.
Medical imaging techniques such as CT, MRI and x-ray
imaging are a crucial component of modern diagnostics and
treatment [20]. As a result, many automated methods involving digital image processing have been developed for the medical field. Image segmentation is the process of finding the boundaries of one or more objects or regions of interest in an image.
Modern graphics processing units provide a high-performance platform for accelerating the variational level-set method, which, in its simplest form, consists of a large number of parallel computations over a grid. NVIDIA's CUDA framework for general-purpose computation [15] on GPUs, used in conjunction with three different NVIDIA GPUs, reduced processing time by a factor of about 11. This speedup was sufficient to allow real-time segmentation at moderate cost.
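The grid-parallel pattern underlying such solvers can be sketched as follows. This is a generic Jacobi-style relaxation step, not the variational level-set update from the cited work; `step` and the grid size are illustrative:

```python
# Every grid cell computes its new value from the *old* values of its
# neighbours, so all interior cells can update in parallel -- the same
# structure a GPU level-set solver exploits, one thread per cell.

def step(phi):
    """One Jacobi-style relaxation step over a 2-D grid (list of lists)."""
    rows, cols = len(phi), len(phi[0])
    new = [row[:] for row in phi]      # double-buffer: read old, write new
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            new[i][j] = 0.25 * (phi[i-1][j] + phi[i+1][j] +
                                phi[i][j-1] + phi[i][j+1])
    return new

phi = [[0.0] * 5 for _ in range(5)]
phi[2][2] = 1.0                        # a single "hot" cell
phi = step(phi)
print(phi[1][2], phi[2][1])            # value has spread to the neighbours
```

Because each new cell depends only on the previous iteration's values, the double loop maps directly onto a 2-D computational grid of GPU threads.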
Geometry, usually a collection of triangles, is generated by an application on the host processor and handed to a graphics processor to be rendered. The first stage of the pipeline operates on triangle vertices, mapping them from three-dimensional scene space to two-dimensional display space. Since scenes can consist of millions of triangles, and each mapping operation is completely independent, parallel hardware is important.
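The per-vertex mapping can be sketched with a toy pinhole projection. A real pipeline applies a full 4x4 model-view-projection matrix and a viewport transform; `project` and the focal length `f` here are illustrative simplifications:

```python
# Each vertex is mapped from 3-D scene space to 2-D display space
# independently of all others -- exactly why GPUs parallelize this stage.

def project(vertex, f=1.0):
    """Perspective-divide one vertex (x, y, z) to screen (x', y')."""
    x, y, z = vertex
    return (f * x / z, f * y / z)

triangle = [(1.0, 1.0, 2.0), (-1.0, 1.0, 2.0), (0.0, -1.0, 4.0)]

# Every mapping is independent, so these could run as parallel threads.
screen = [project(v) for v in triangle]
print(screen)   # [(0.5, 0.5), (-0.5, 0.5), (0.0, -0.25)]
```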
Early GPUs consisted entirely of fixed function hardware and
thus served little purpose when they were not being used to
generate graphics. The only programmability was in what scene
and texture data were passed to them [12]. Over time,
programmability has been Apply texture (optional) Rasterizing
(convert geometry to pixels) Composite (combine fragments
into image) added to GPUs to enable more inventive and life-
like effects[8]. Both the vertex and fragment processing engines
were made programmable to allow dynamic scene
transformation and innovative lighting techniques. In each new
hardware release, the features of the programmable portion of
the hardware were expanded, allowing longer programs, more
complex control, and higher precision. NVIDIA's G80
hardware was significant on this front, as it used programmable
execution cores as its basis, almost entirely eliminating fixed-
function units [10]. Additionally, it debuted a unified shader architecture: rather than having separate programmable units
for vertex and fragment operations, the same collection of
processing elements could operate on any data. This unified
pool of processing resources makes GPUs much more capable
of general-purpose workloads than they were previously.
4. GPU COMPUTING TRENDS
The GPU now carries a large part of high-performance computing: it is flexible and accurate, though it demands careful design of the high-performance portion of an application. The GPU and the CPU are different types of processor that can operate in parallel and asynchronously, i.e. simultaneously, which enables heterogeneous computing. Splitting work between CPU and GPU allows better resource management, with each task assigned to the resource on which it performs best.
The CPU then continues to perform CPU-side calculations
simultaneously as the GPU processes its operations, and only
synchronizes with the GPU when its results are needed. There
is also support for independent streams, which can execute their operations simultaneously as long as they obey their own stream's order. Current GPUs support up to 16 concurrent
kernel launches [18], which means that we can both have data
parallelism, in terms of a computational grid of blocks, and task
parallelism, in terms of different concurrent kernels. GPUs
furthermore support overlapping memory copies between the
CPU and the GPU and kernel execution. This means that we
can simultaneously copy data from the CPU to the GPU,
execute 16 different kernels, and copy data from the GPU back
to the CPU if all these operations are scheduled properly to
different streams.
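The ordering rule can be illustrated host-side. The sketch below simulates streams as in-order work queues under a round-robin scheduler; it is a conceptual model of the semantics, not the CUDA runtime API:

```python
# Operations in *different* streams may interleave freely on the
# timeline, but each stream's own operations execute in issue order.

def interleave(streams):
    """Round-robin scheduler over several in-order work queues."""
    done, cursors = [], [0] * len(streams)
    while any(c < len(s) for c, s in zip(cursors, streams)):
        for i, s in enumerate(streams):
            if cursors[i] < len(s):
                done.append(s[cursors[i]])   # "launch" next op of stream i
                cursors[i] += 1
    return done

s0 = ["H2D copy A", "kernel A", "D2H copy A"]   # stream 0, in order
s1 = ["H2D copy B", "kernel B", "D2H copy B"]   # stream 1, in order
timeline = interleave([s0, s1])

print(timeline)  # copies and kernels from the two streams overlap...

# ...yet each stream's internal order is preserved:
for s in (s0, s1):
    idx = [timeline.index(op) for op in s]
    assert idx == sorted(idx)
```

This is the property that lets a host-to-device copy, 16 kernels, and a device-to-host copy proceed concurrently when they are scheduled onto different streams.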
Medical imaging techniques such as CT and MRI scans generate very large data sets. This large volume of data results in high computation times, so GPUs have frequently been used to reduce them. Computed Tomography (CT) scans are a method of capturing three-dimensional images of body structures: an x-ray capture system is rotated around the target, capturing a large sequence of two-dimensional radiographic images, from which computer systems then construct a volumetric image.
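The acquisition step can be sketched as line integrals over a tiny 2-D slice. The example below uses only two gantry angles and no filtering, so it is a conceptual illustration rather than a working reconstructor:

```python
# Each radiographic projection is a set of line integrals through the
# body; here, row and column sums of a toy 2-D slice stand in for
# projections at gantry angles 0 and 90 degrees. Real CT acquires many
# angles and reconstructs with filtered back-projection.

slice_ = [
    [0, 1, 0],
    [1, 2, 1],
    [0, 1, 0],
]

# Projection at 0 degrees: integrate along each row.
p0 = [sum(row) for row in slice_]                        # [1, 4, 1]

# Projection at 90 degrees: integrate along each column.
p90 = [sum(row[j] for row in slice_) for j in range(3)]  # [1, 4, 1]

# Total attenuation is conserved in every projection.
assert sum(p0) == sum(p90) == sum(map(sum, slice_))
```

Each detector reading (each sum) is independent of the others, which is why both the forward projection and the reconstruction parallelize so well on a GPU.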
Fig 3 GPU based Radiographs
As shown in Fig. 3, GPU-based radiographs turn voxels into digital pixels, linking traditional radiography with the new GPU-based era; a GPU-based reconstruction requires roughly 1/10 of the time of a CPU-
based construction, as in the following Cone-Beam CT reconstruction application.
This application used an NVIDIA GeForce 8800GT GPU [9] and was able to compute a complete 512³-voxel reconstruction in 12.61 seconds. This compares favourably with an earlier CPU-based result of 201 seconds, roughly a 16x speedup.
Another application of GPU-based Cone-Beam CT reconstruction [23] demonstrated a speedup from 178 seconds with a CPU implementation to 53 seconds using an NVIDIA Quadro FX 4500 GPU; this GPU used an earlier architecture and thus had lower performance than the GeForce 8800GT above. An interactive random-walk-based image segmentation algorithm has also been implemented on both a GPU and a standard CPU. In this approach, a user places several seeds at locations inside and outside the segmentation target. A pixel is classified as part of the segmentation target if a random walk from that pixel is more likely to arrive at a target seed than at a non-target seed. The algorithm's performance varies with the number and placement of seeds, and this is reflected in the segmentation time: CPU-based segmentation times vary from 3 to 83 seconds across differing images, segmentation targets, and seed placements, while the equivalent GPU-based segmentation times vary from 0.3 to 1.5 seconds.
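The classification rule just described can be sketched on a 1-D toy image. Real implementations solve an equivalent linear system on the weighted image graph rather than simulating walks, so everything below (`walk_hits_target`, the uniform-image assumption, the seed positions) is illustrative:

```python
import random

# Seeds sit at positions 0 (background) and n-1 (target) of a 1-D
# "image"; a pixel receives the label of the seed its random walks
# reach more often, as in the rule described above.

def walk_hits_target(start, n, rng):
    """One unbiased walk on 0..n-1; True if it reaches n-1 before 0."""
    pos = start
    while 0 < pos < n - 1:
        pos += rng.choice((-1, 1))
    return pos == n - 1

def label(start, n, trials=2000, seed=0):
    rng = random.Random(seed)           # fixed seed for reproducibility
    hits = sum(walk_hits_target(start, n, rng) for _ in range(trials))
    return "target" if hits * 2 > trials else "background"

# On a uniform image the hit probability is start/(n-1), so pixels
# nearer the target seed are labelled "target".
print(label(1, 10), label(8, 10))
```

Since every pixel's walks are independent of every other pixel's, the per-pixel work maps naturally onto one GPU thread (or thread block) per pixel.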
Table 1 Different clinical tasks for the human body and the techniques used

Clinical Task                                                | Techniques
Image acquisition                                            | CT, MRI, US
Visualization of scans in their entirety                     | Direct volume rendering
Multi-modality rendering, segmentation of tissues or organs  | Tagged volume rendering
Visualization of fibrous tissue                              | Fiber tracking
Grouping of anatomically similar structures                  | Clustering
Visualization of vessels                                     | Model-based reconstruction, implicit visualization
Associate preoperative data with patient in operating room   | Tracking, registration algorithms, intra-operative imaging
Medical simulations, training                                | Virtual and mixed reality
5. CONCLUSIONS
Research in this area continues to be motivated by the need to minimize the overhead of mismatched diagnoses. As the race for higher clock rates within the CPU industry has come to an end and processors have started to become "wider" rather than faster, research results in general-purpose computation on graphics hardware are an important factor towards a possible adoption of GPU technology by CPU manufacturers. Novel processor designs with a multitude of cores have become available (the IBM Cell processor) or been announced, and these will be interesting platforms for algorithms that require a rather hybrid processor design. At this point, developers still have to decide on a platform for massively parallel processing, be it a single GPU or multiple processors organized in clusters. As OpenCL [21] receives broad support from all processor manufacturers, it appears to be an emerging standard for programming parallel architectures. Such standardization might reduce processor-specific programming effort.
REFERENCES
[1] U. Pietrzyk, K. Herholz, A. Schuster, H.-M. v. Stockhausen, H. Lucht, W.-D. Heiss, Clinical applications of registration and fusion of multimodality brain images from PET, SPECT, CT, and MRI, Eur. J. Radiol. 21 (3) (1996) 174–182.
[2] R. Shams, P. Sadeghi, R.A. Kennedy, R.I. Hartley, A
survey of medical image registration on multicore and
the GPU, IEEE Signal Process. Mag. 27 (2) (2010) 50–
60.
[3] J. Tsao, Interpolation artifacts in multimodality image registration based on maximization of mutual information, IEEE Trans. Med. Imaging 22 (7) (2003) 854–864, doi:10.1109/TMI.2003.815077.
[4] J.P.W. Pluim, J.B.A. Maintz, M.A. Viergever, Interpolation artefacts in mutual information-based image registration, Comput. Vis. Image Underst. 77 (9) (2000) 211–232.
[5] GPGPU, website of general-purpose computation on the GPU, http://www.gpgpu.org.
[6] J.D. Owens, D. Luebke, N. Govindaraju, M. Harris, J. Krüger, A.E. Lefohn, T.J. Purcell, A survey of general-purpose computation on graphics hardware, Comput. Graph. Forum 26 (1) (2007) 80–113.
[7] A. Thall, Extended-precision floating-point numbers for GPU computation, in: SIGGRAPH'06: ACM SIGGRAPH 2006 Research Posters, ACM, New York, NY, USA, 2006, p. 52, doi:10.1145/1179622.1179682.
[8] C. Vetter, C. Guetter, C. Xu, R. Westermann, Non-rigid
multi-modal registration on the GPU, in: Proceedings of
SPIE Medical Imaging, vol. 6512, 2007, pp. 651228-1–
651228-8.
[9] Z. Fan, C. Vetter, C. Guetter, D. Yu, R. Westermann, A. Kaufman, C. Xu, Optimized GPU implementation of
learning-based non-rigid multi-modal registration, in: Proceedings of SPIE Medical Imaging, 2008, pp. 69142Y-1–69142Y-10.
[10] V. Podlozhnyuk, Histogram calculation in CUDA, Tech. rep., NVidia, 2007.
[11] R. Shams, N. Barnes, Speeding up mutual information
computation using NVIDIA CUDA hardware, in: DICTA'07: Proceedings of the 9th Biennial Conference
of the Australian Pattern Recognition Society on Digital
Image Computing Techniques and Applications, IEEE
Computer Society, Washington, DC, USA, 2007, pp.
555–560.
[12] R.J. Rost, OpenGL Shading Language, 2nd edition,
Addison-Wesley Professional, 2006.
[13] J. Neider, T. Davis, OpenGL Programming Guide: The
Official Guide to Learning OpenGL, Release 1,
Addison-Wesley Longman Publishing Co., Inc., Boston,
MA, USA, 1993.
[14] D. Blythe, The Direct3D 10 system, ACM Trans. Graph.
25 (3) (2006) 724–734,
doi:http://doi.acm.org/10.1145/1141911.1141947.
[15] W.R. Mark, R.S. Glanville, K. Akeley, M.J. Kilgard, Cg: a system for programming graphics hardware in a C-like language, in: SIGGRAPH'03: ACM SIGGRAPH, ACM Press, New York, NY, USA, 2003, pp. 896–907, doi:10.1145/1201775.882362.
[16] I. Buck, T. Foley, D. Horn, J. Sugerman, K. Fatahalian, M. Houston, P. Hanrahan, Brook for GPUs: stream computing on graphics hardware, ACM Trans. Graph. 23 (3) (2004) 777–786, doi:10.1145/1015706.1015800.
[17] J. Owens, Streaming architectures and technology trends, in: GPU Gems 2, Addison-Wesley Professional, 2005, pp. 457–470; AMD, CAL SDK, http://ati.amd.com/developer/.
[18] G.C. Sharp, N. Kandasamy, H. Singh, M. Folkert, GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration, Phys. Med. Biol. 52 (19) (2004) 5771–5783.
[19] Khronos Group, The OpenCL specification,
http://www.khronos.org/opencl/.
[20] R. Strzodka, M. Droske, M. Rumpf, Fast image
registration in DX9 graphics hardware, J. Med. Inform.
Technol. 6 (2003) 43–49.
[21] A. Köhn, J. Drexl, F. Ritter, M. Koenig, H.-O. Peitgen,
GPU accelerated image registration in two and three
dimensions, in: Bildverarbeitung für die Medizin,
Informatik Aktuell, Springer, 2006, pp. 261–265.
[22] N. Courty, P. Hellier, Accelerating 3D non-rigid
registration using graphics hardware, Int. J. Image Graph.
8 (1) (2008) 1–18.
[23] P. Muyan-Özçelik, J.D. Owens, J. Xia, S.S. Samant, Fast deformable registration on the GPU: a CUDA implementation of demons, in: The 2008 International Conference on Computational Science and its Applications, ICCSA 2008, IEEE Computer Society, 2008, pp. 223–233.