High Performance Computing (HPC) is a recently developed technology in computer science that evolved to meet increasing demands for processing speed and for analysing and processing very large data sets. HPC brings together several technologies, such as computer architecture, algorithms, programs, and system software, under one canopy to solve advanced, complex problems quickly and effectively. Today it is a crucial element in gathering and processing large amounts of satellite (remote sensing) data. In this paper, we review recent developments in HPC technology (parallel, distributed, and cluster computing) for satellite data processing and analysis. We discuss the fundamentals of HPC for satellite data processing and analysis in a way that is easy to understand without much prior background. We sketch the various HPC approaches, such as parallel, distributed, and cluster computing, and the corresponding satellite data processing and analysis methods, such as geo-referencing, image mosaicking, image classification, image fusion, and morphological/neural approaches for hyperspectral satellite data. Collectively, these works deliver a snapshot, tables, and algorithms of recent developments in those sectors and offer a thoughtful perspective on the potential and challenges of satellite data processing and analysis using HPC paradigms.
Satellite image processing techniques enhance raw images received from cameras or sensors placed on satellites, space probes, and aircraft, or pictures taken in everyday life, for use in various applications.
This material is highly useful for geography students in the field of remote sensing; it is kept simple and explanatory, with relevant images, in this presentation.
The presentation briefly describes digital image processing and its various procedures and techniques, including image correction/rectification of remote sensing data/images, as well as various image classification techniques.
A Digital Elevation Model (DEM) is the digital representation of land surface elevation with respect to a reference datum. The term DEM is frequently used for any digital representation of a topographic surface; it is the simplest form of digital representation of topography. Today, GIS applications depend heavily on DEMs.
Digital Ortho Image Creation of Hall County Aerial Photos (mpadams77)
Powerpoint Presentation that I presented at the Florida Academy of Science and Georgia Academy of Science Joint Conference held in Jacksonville, FL March 14th and 15th of 2008
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Digital Ortho Image Creation of Hall County Aerial Photos Paper (mpadams77)
Special Topics Project Paper “Digital Ortho Image Creation of Hall County Aerial Photos” which I presented at the Florida Academy of Science and Georgia Academy of Science Joint Conference held in Jacksonville, FL March 14th and 15th of 2008
International Refereed Journal of Engineering and Science (IRJES) (irjes)
International Refereed Journal of Engineering and Science (IRJES) is a leading international journal for the publication of new ideas, state-of-the-art research results, and fundamental advances in all aspects of engineering and science. IRJES is an open access, peer reviewed international journal whose primary objective is to provide the academic community and industry a venue for the submission of original research and applications.
GIS-Based Satellite Image Denoising (IAES IJEECS)
Generally, satellite images contain very significant information about geographical features of the earth, such as rivers, roads, buildings, and bridges. A Geographic Information System (GIS) requires these features for automatic detection, but the imagery is often corrupted by various types of noise. In the proposed system, the Curvelet Transform (CT) is used to denoise the images; it offers the advantages of multi-resolution analysis, such as line representation, compatibility with the human visual system, and edge detection. After this preprocessing, K-Means clustering is used for segmentation: first the K-Means algorithm segments background and water, then bridges are extracted based on pixel intensity differences.
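A minimal sketch of the K-Means intensity-clustering step described above, assuming a simplified 1-D view of pixel intensities with two clusters (dark water vs. bright background); the pixel values, initialisation, and iteration limit are illustrative, not the paper's implementation.

```python
# Minimal 1-D K-means sketch for intensity-based segmentation.
# Pixel values and parameters below are hypothetical.

def kmeans_1d(values, k=2, iters=20):
    # Initialise the two centroids at the min and max intensities.
    centroids = [min(values), max(values)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each pixel to the nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters

# Hypothetical intensities: dark water pixels vs. bright background.
pixels = [12, 15, 10, 200, 210, 198, 14, 205]
centroids, clusters = kmeans_1d(pixels)
print(sorted(round(c) for c in centroids))  # two well-separated centroids
```

The bridge-extraction step described in the abstract would then compare pixel intensity differences between the two segmented classes.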
RSDC (Reliable Scheduling Distributed in Cloud Computing) (IJCSEA Journal)
In this paper we present a reliable scheduling algorithm for the cloud computing environment. The algorithm uses a new technique that classifies jobs and considers their request and acknowledge times in a qualification function. Evaluating previous algorithms, we find that job scheduling has been performed using parameters associated with a failure rate. Therefore, in the proposed algorithm, some additional important parameters are used alongside the previous ones, so jobs can be scheduled differently based on these parameters. The work relies on the following mechanism: the major job is divided into sub-jobs; to balance the jobs, the request and acknowledge times are calculated separately; and the schedule of each job is created by calculating the request and acknowledge time in the form of a shared job. As a result, the efficiency of the system is increased, and the real time of this algorithm improves in comparison with other algorithms. Finally, with the presented mechanism, the total processing time in cloud computing improves in comparison with other algorithms.
HOMOGENEOUS MULTISTAGE ARCHITECTURE FOR REAL-TIME IMAGE PROCESSING (cscpconf)
In this article, we present a new multistage architecture oriented to real-time complex processing applications. Given a set of rules, the proposed architecture allows the use of different communication links (point-to-point links, hardware routers, etc.) to connect an unlimited number of parallel computing elements (software processors), keeping pace with the increasing complexity of algorithms. In particular, this work presents a parallel implementation of a multihypothesis approach for a road recognition application on the proposed Multiprocessor System-on-Chip (MP-SoC) architecture. This algorithm is usually the main part of lane keeping applications. Experimental results using images of a real road scene are presented. Using a low-cost FPGA-based System-on-Chip, our hardware architecture is able to detect and recognize the roadsides within a time limit of 60 ms. Moreover, we demonstrate that our multistage architecture may be used to achieve good speed-up in automotive applications.
DEVELOPMENT AND PERFORMANCE EVALUATION OF A LAN-BASED EDGE-DETECTION TOOL (ijsc)
This paper presents a description and performance evaluation of an efficient and reliable edge-detection tool that utilizes the growing computational power of local area networks (LANs); it is therefore referred to as the LAN-based edge detection (LANED) tool. The processor-farm methodology is used to port the sequential edge-detection calculations to run efficiently on the LAN. In this methodology, each computer on the LAN executes the same program independently of the others, each operating on a different part of the total data. It requires no data communication other than that involved in forwarding input data/results between the LAN computers. LANED uses the Java Parallel Virtual Machine (JPVM) data communication library to exchange data between computers. For equivalent calculations, the computation times on a single computer and on LANs of various numbers of computers are estimated, and the resulting speedup and parallelization efficiency are computed. The estimated results demonstrate that the parallelization efficiencies achieved vary between 87% and 60% as the number of computers on the LAN varies between 2 and 5, connected through a 10/100 Mbps Ethernet switch.
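The speedup and efficiency figures quoted above follow the standard definitions (speedup = serial time / parallel time; efficiency = speedup / number of computers). The sketch below illustrates the calculation with hypothetical timings, not the paper's measured values.

```python
# Sketch of the speed-up and parallelisation-efficiency calculation.
# Timings below are illustrative, not measurements from the paper.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_computers):
    # Efficiency = speed-up divided by the number of computers.
    return speedup(t_serial, t_parallel) / n_computers

# Hypothetical timings: 100 s on one computer, 57.5 s on 2, 33.3 s on 5.
print(efficiency(100.0, 57.5, 2))  # about 0.87, matching the 87% figure
print(efficiency(100.0, 33.3, 5))  # about 0.60, matching the 60% figure
```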
LARGE-SCALE DATA PROCESSING USING MAPREDUCE IN CLOUD COMPUTING ENVIRONMENT (ijwscjournal)
The computer industry is being challenged to develop methods and techniques for affordable data processing on large datasets at optimum response times. The technical challenges in dealing with the increasing demand to handle vast quantities of data are daunting and on the rise. One of the recent processing models offering a more efficient and intuitive way to rapidly process large amounts of data in parallel is MapReduce. It is a framework defining a template approach to programming for large-scale data computation on clusters of machines in a cloud computing environment. MapReduce provides automatic parallelization and distribution of computation across several processors and hides the complexity of writing parallel and distributed code. This paper provides a comprehensive systematic review and analysis of large-scale dataset processing and the associated challenges and requirements in a cloud computing environment using the MapReduce framework and its open-source implementation, Hadoop. We define requirements for MapReduce systems to perform large-scale data processing, describe the MapReduce framework and one implementation of it on Amazon Web Services, and present an experiment running a MapReduce system in a cloud environment. The paper concludes that MapReduce is one of the best techniques for processing large datasets and can help developers perform parallel and distributed computation in a cloud environment.
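The map/shuffle/reduce template described above can be sketched with the classic word-count example; the function names and input records below are illustrative, not the paper's Hadoop or AWS code.

```python
# Minimal map/reduce word-count sketch of the programming template.
from collections import defaultdict

def map_phase(record):
    # Emit a (word, 1) pair for every word in one input record.
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # Group intermediate values by key, as the framework would.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Sum the counts for one word.
    return key, sum(values)

records = ["big data big clusters", "data in the cloud"]
intermediate = [p for r in records for p in map_phase(r)]
results = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(results["big"], results["data"])  # 2 2
```

In a real deployment the framework, not the programmer, parallelizes the map and reduce calls across the cluster, which is exactly the complexity-hiding the abstract describes.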
SPEED-UP IMPROVEMENT USING PARALLEL APPROACH IN IMAGE STEGANOGRAPHY (csandit)
This paper presents a parallel approach to the time complexity problem associated with sequential algorithms. An image steganography algorithm in the transform domain is considered for implementation. Image steganography is a technique for hiding a secret message in an image. With the parallel implementation, a large message can be hidden in a large image without much processing time. It is implemented on GPU systems, with parallel programming done using OpenCL on CUDA cores from NVIDIA. The speed-up improvement obtained is very good, with reasonably good output signal quality, when a large amount of data is processed.
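The paper embeds in the transform domain on GPUs; the sketch below is a much-simplified, sequential, spatial-domain LSB embed/extract shown only to illustrate the hide-and-recover idea, not the paper's algorithm. All pixel and message values are illustrative.

```python
# Simplified least-significant-bit (LSB) steganography sketch.
# The paper's method works in the transform domain; this spatial-domain
# version only illustrates hiding and recovering a bit string.

def embed(pixels, bits):
    # Replace the least significant bit of each pixel with a message bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n_bits):
    # Read the least significant bit of the first n_bits pixels.
    return [p & 1 for p in pixels[:n_bits]]

cover = [120, 121, 122, 123, 124, 125, 126, 127]   # hypothetical pixels
message = [1, 0, 1, 1, 0, 1, 0, 0]                 # hypothetical bits
stego = embed(cover, message)
assert extract(stego, len(message)) == message  # message recovered intact
```

Because each pixel is processed independently, the embedding loop is trivially data-parallel, which is what makes a GPU/OpenCL implementation such as the paper's attractive.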
NETWORK-AWARE DATA PREFETCHING OPTIMIZATION OF COMPUTATIONS IN A HETEROGENEOU... (IJCNCJournal)
The rapid development of diverse computer architectures and hardware accelerators means that designing parallel systems faces new problems resulting from their heterogeneity. Our implementation of a parallel system called KernelHive allows applications to run efficiently in a heterogeneous environment consisting of multiple collections of nodes with different types of computing devices. The execution engine of the system is open to optimizer implementations focusing on various criteria. In this paper, we propose a new optimizer for KernelHive that utilizes distributed databases and performs data prefetching to optimize the execution time of applications that process large input data. Employing a versatile data management scheme that allows combining various distributed data providers, we propose using NoSQL databases for this purpose. We support our solution with experimental results from real executions of our OpenCL implementation of a regular expression matching application in various hardware configurations. Additionally, we propose a network-aware scheduling scheme for selecting hardware for the proposed optimizer and present simulations that demonstrate its advantages.
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched... (IJECEIAES)
Cloud computing is the most powerful computing model of our time. While the major IT providers and consumers are competing to exploit the benefits of this computing model to grow their profits, most cloud computing platforms are still built on operating systems that use basic CPU (Central Processing Unit) scheduling algorithms lacking the intelligence needed for such an innovative computing model. Accordingly, this paper presents the benefits of applying artificial neural network algorithms to enhance CPU scheduling for the cloud computing model. Furthermore, a set of characteristics and theoretical metrics is proposed for comparing different artificial neural network algorithms and finding the most accurate algorithm for cloud computing CPU scheduling.
Implementation of P-PIC Algorithm in MapReduce to Handle Big Data (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING (cscpconf)
Parallel computing systems employ task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing units as computing resources, which leads to heavy inter-process communication. The main part of the proposed algorithm is the resource management unit, which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network, focusing on real-time, non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling, and communication. Some important models for task partitioning are also discussed in the paper. We target an algorithm for task partitioning that improves inter-process communication between tasks and uses the resources of the system efficiently. The proposed algorithm contributes to minimizing the inter-process communication cost amongst the executing processes.
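The abstract's stated goal, partitioning that minimizes inter-process communication cost, can be illustrated with a heavily simplified greedy sketch: each task is placed on the processor where its communication with already-placed tasks stays local, subject to a capacity limit. The task graph, costs, and capacities below are illustrative assumptions, not the paper's algorithm.

```python
# Greedy task-partitioning sketch: co-locate heavy communicators.
# Task names, communication costs, and capacities are hypothetical.

def partition(tasks, comm, n_procs, capacity):
    """Place each task where remote communication is minimised,
    while no processor holds more than `capacity` tasks."""
    assignment = {}
    load = [0] * n_procs

    def remote_cost(task, proc):
        # Communication cost to already-placed tasks on OTHER processors.
        total = 0
        for (a, b), c in comm.items():
            other = b if a == task else a if b == task else None
            if other is not None and assignment.get(other, proc) != proc:
                total += c
        return total

    for t in tasks:
        candidates = [p for p in range(n_procs) if load[p] < capacity]
        best = min(candidates, key=lambda p: remote_cost(t, p))
        assignment[t] = best
        load[best] += 1
    return assignment

# Hypothetical task graph: A-B and C-D communicate heavily.
comm = {("A", "B"): 10, ("B", "C"): 1, ("C", "D"): 10}
assignment = partition(["A", "B", "C", "D"], comm, n_procs=2, capacity=2)
assert assignment["A"] == assignment["B"]  # heavy communicators co-located
assert assignment["C"] == assignment["D"]
```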
In the era of big data, even with large infrastructure, stored data varies in size, format, variety, and volume across several platforms such as Hadoop and the cloud, so an application faces the problem of how to process data that varies in size and format. A workflow in which the application's data and available resources vary at run time is called a dynamic workflow. Using large infrastructure and huge amounts of resources for data analysis is time consuming and wasteful; it is better to use a scheduling algorithm to analyse the given data set so that it is executed efficiently without wasting time, and to evaluate which scheduling algorithm is best suited to the given data set. We evaluate different data sets to understand which algorithm is most suitable for analysing each one, executing it efficiently, and storing the data after analysis.
The influence of data size on a high-performance computing memetic algorithm ... (journalBEEI)
The fingerprint is one kind of biometric. This unique biometric data has to be processed well and securely, and the problem gets more complicated as the data grows. This work processes fingerprint image data with a memetic algorithm, a simple and reliable algorithm. To achieve the best result, we run the algorithm in a parallel environment by utilizing the multi-threading features of the processor. We propose a high-performance computing memetic algorithm (HPCMA) to process a 7200-image fingerprint dataset, which is divided into fifteen specimens based on image characteristics and specifications to capture the detail of each image. Combining specimens generates new data variations. The algorithm runs on two different operating systems, Windows 7 and Windows 10, and we measure the influence of data size on the processing time, speedup, and efficiency of HPCMA using simple linear regression. The results show that data size explains more than 90% of the variation in processing time, more than 30% of speedup, and more than 19% of efficiency.
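The influence percentages above come from simple linear regression; the sketch below shows how such an influence figure (the coefficient of determination, R²) can be computed from (data size, processing time) pairs. The sample values are illustrative, not the paper's measurements.

```python
# Simple linear regression sketch: how strongly does data size
# explain processing time? The (size, time) pairs are hypothetical.

def linregress(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    # Coefficient of determination R^2: fraction of variance explained.
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

sizes = [480, 960, 1920, 3840, 7200]     # images per run (hypothetical)
times = [11.0, 21.5, 44.0, 85.0, 160.0]  # seconds (hypothetical)
slope, intercept, r2 = linregress(sizes, times)
print(r2 > 0.9)  # data size explains most of the variance in time
```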
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Published papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
Review: Data Driven Traffic Flow Forecasting using MapReduce in Distributed M... (AM Publications)
Over the last decade, the use of communication and transportation technology in urban traffic management systems has increased. Forecasting techniques are used to predict outcomes correctly, and as more data are collected, traffic data grows. In short, a traffic flow forecasting system finds historical observations similar to the current conditions and uses them to estimate the future state of the system. In this paper we focus on a data-driven traffic flow forecasting system based on the MapReduce framework for distributed systems, with a Bayesian network approach. For the probability distribution of data between two adjacent nodes, i.e., the data used for forecasting (input node) and the data being forecasted (output node), a Gaussian mixture model (GMM) is used, whose parameters are updated using the Expectation-Maximization algorithm. Finally, we focus on model fusion, the main problem in distributed modelling for data storage and processing in traffic flow forecasting systems.
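The GMM-with-EM step described above can be illustrated with a much-simplified one-dimensional, two-component EM fit; the traffic-flow readings and initial parameters below are illustrative, not the paper's model over adjacent-node data.

```python
# Simplified 1-D, two-component Gaussian mixture fitted with EM,
# illustrating the E-step/M-step updates. Data are hypothetical.
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, iters=50):
    # Initial guesses: equal weights, extreme means, unit variances.
    w = [0.5, 0.5]
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * gauss(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return w, mu, var

# Hypothetical traffic-flow readings: low-flow and high-flow regimes.
flows = [10.0, 11.0, 9.5, 10.5, 40.0, 41.0, 39.5, 40.5]
w, mu, var = em_gmm(flows)
print(sorted(mu))  # component means settle near the two regimes
```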
Text Mining in Digital Libraries using OKAPI BM25 Model (Editor IJCATR)
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not perform relevance ranking on the retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. The Okapi BM25 model was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval. Relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. Therefore, this paper proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
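A minimal sketch of Okapi BM25 scoring as described above; the toy document collection and the parameter values k1 = 1.5 and b = 0.75 are illustrative assumptions (the paper does not state its parameter settings here).

```python
# Minimal Okapi BM25 scoring sketch. Documents and parameters are
# illustrative; k1 saturates term frequency, b normalises by length.
import math

def bm25_score(query, doc, docs, k1=1.5, b=0.75):
    avgdl = sum(len(d) for d in docs) / len(docs)
    score = 0.0
    for term in query:
        # Inverse document frequency of the term.
        n = sum(1 for d in docs if term in d)
        idf = math.log((len(docs) - n + 0.5) / (n + 0.5) + 1)
        tf = doc.count(term)
        # Term frequency saturated by k1 and length-normalised by b.
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["digital", "library", "search"],
        ["library", "catalogue"],
        ["weather", "report"]]
query = ["library", "search"]
scores = [bm25_score(query, d, docs) for d in docs]
assert scores[0] > scores[1] > scores[2]  # doc 0 matches both terms
```

Unlike a Boolean model, these scores give a graded ranking, which is the relevance-ranking improvement the paper argues for.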
Green Computing, eco trends, climate change, e-waste and eco-friendly (Editor IJCATR)
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in Nigeria (Editor IJCATR)
Computers today are an integral part of individuals' lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life, and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and a more environment-friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they herald for the country, and propose a series of action steps to develop these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce... (Editor IJCATR)
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between mobile units (vehicles) and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links. The cluster arrangement, according to its size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks involving more complex mobility patterns; because of this mobility the topology changes very frequently, raising a number of technical challenges, including the stability of the network. There is a need for cluster configurations leading to a more stable, realistic network. This paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their numbers are varied to find the most stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complication. There are several laboratory test that must be done to detect DM. The result of this laboratory test then converted into data training. Data training used in this study generated from UCI Pima Database with 6 attributes that were used to classify positive or negative diabetes. There are various classification methods that are commonly used, and in this study three of them were compared, which were fuzzy KNN, C4.5 algorithm and Naïve Bayes Classifier (NBC) with one identical case. The objective of this study was to create software to classify DM using tested methods and compared the three methods based on accuracy, precision, and recall. The results showed that the best method was Fuzzy KNN with average and maximum accuracy reached 96% and 98%, respectively. In second place, NBC method had respective average and maximum accuracy of 87.5% and 90%. Lastly, C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Study in the Competitive field of Intelligent, and studies in the field of Web Scraping, have a symbiotic relationship mutualism. In the information age today, the website serves as a main source. The research focus is on how to get data from websites and how to slow down the intensity of the download. The problem that arises is the website sources are autonomous so that vulnerable changes the structure of the content at any time. The next problem is the system intrusion detection snort installed on the server to detect bot crawler. So the researchers propose the use of the methods of Mining Data Records and the method of Exponential Smoothing so that adaptive to changes in the structure of the content and do a browse or fetch automatically follow the pattern of the occurrences of the news. The results of the tests, with the threshold 0.3 for MDR and similarity threshold score 0.65 for STM, using recall and precision values produce f-measure average 92.6%. While the results of the tests of the exponential estimation smoothing using ? = 0.5 produces MAE 18.2 datarecord duplicate. It slowed down to 3.6 datarecord from 21.8 datarecord results schedule download/fetch fix in an average time of occurrence news.
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using single ontology. The ontology-based semantic similarity techniques such as structure-based semantic similarity techniques (Path Length Measure, Wu and Palmer’s Measure, and Leacock and Chodorow’s measure), information content-based similarity techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SimDist)) were evaluated relative to human experts’ ratings, and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in single ontology, and demonstrate that SemDist semantic similarity techniques, compared with the existing techniques, gives the best overall results of correlation with experts’ ratings.
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
The techniques and tests are tools used to define how measure the goodness of ontology or its resources. The similarity between biomedical classes/concepts is an important task for the biomedical information extraction and knowledge discovery. However, most of the semantic similarity techniques can be adopted to be used in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we investigate to measure semantic similarity between two terms within single ontology or multiple ontologies in ICD-10 “V1.0” as primary source, and compare my results to human experts score by correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
This is an effective way to improve the storage access performance of small files in Openstack Swift by adding an aggregate storage module. Because Swift will lead to too much disk operation when querying metadata, the transfer performance of plenty of small files is low. In this paper, we propose an aggregated storage strategy (ASS), and implement it in Swift. ASS comprises two parts which include merge storage and index storage. At the first stage, ASS arranges the write request queue in chronological order, and then stores objects in volumes. These volumes are large files that are stored in Swift actually. During the short encounter time, the object-to-volume mapping information is stored in Key-Value store at the second stage. The experimental results show that the ASS can effectively improve Swift's small file transfer performance.
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of government's cash resources rely on government banking arrangements. Nigeria, like many low income countries, employed fragmented systems in handling government receipts and payments. Later in 2016, Nigeria implemented a unified structure as recommended by the IMF, where all government funds are collected in one account would reduce borrowing costs, extend credit and improve government's fiscal policy among other benefits to government. This situation motivated us to embark on this research to design and implement an integrated system for vehicle clearance and registration. This system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among five different level agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web based, Object Oriented Hypermedia Design Methodology (OOHDM) is used. Tools such as Php, JavaScript, css, html, AJAX and other web development technologies were used. The result is a web based system that gives proper information about a vehicle starting from the exact date of importation to registration and renewal of licensing. Vehicle owner information, custom duty information, plate number registration details, etc. will also be efficiently retrieved from the system by any of the agencies without contacting the other agency at any point in time. Also number plate will no longer be the only means of vehicle identification as it is presently the case in Nigeria, because the unified system will automatically generate and assigned a Unique Vehicle Identification Pin Number (UVIPN) on payment of duty in the system to the vehicle and the UVIPN will be linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of good and services. It includes both sales and purchase of items. The project Supermarket Management System is to be developed with the objective of making the system reliable, easier, fast, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in the Wireless Sensor Network (WSN)[1]. The system will not be able to run according to its function without the availability of adequate power units. One of the characteristics of wireless sensor network is Limitation energy[2]. A lot of research has been done to develop strategies to overcome this problem. One of them is clustering technique. The popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3]. In LEACH, clustering techniques are used to determine Cluster Head (CH), which will then be assigned to forward packets to Base Station (BS). In this research, we propose other clustering techniques, which utilize the Social Network Analysis approach theory of Betweeness Centrality (BC) which will then be implemented in the Setup phase. While in the Steady-State phase, one of the heuristic searching algorithms, Modified Bi-Directional A* (MBDA *) is implemented. The experiment was performed deploy 100 nodes statically in the 100x100 area, with one Base Station at coordinates (50,50). To find out the reliability of the system, the experiment to do in 5000 rounds. The performance of the designed routing protocol strategy will be tested based on network lifetime, throughput, and residual energy. The results show that BC-MBDA * is better than LEACH. This is influenced by the ways of working LEACH in determining the CH that is dynamic, which is always changing in every data transmission process. This will result in the use of energy, because they always doing any computation to determine CH in every transmission process. In contrast to BC-MBDA *, CH is statically determined, so it can decrease energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers has thrown up new challenges for legacy; existing networks; and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Network (SDN) which decouples the control plane from the data plane through network vitalizations aims to address these challenges. This paper has explored the SDN architecture and its implementation with the OpenFlow protocol. It has also assessed some of its benefits over traditional network architectures, security concerns and how it can be addressed in future research and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Acetabularia Information For Class 9 .docxvaibhavrinwa19
Acetabularia acetabulum is a single-celled green alga that in its vegetative state is morphologically differentiated into a basal rhizoid and an axially elongated stalk, which bears whorls of branching hairs. The single diploid nucleus resides in the rhizoid.
High Performance Computing for Satellite Image Processing and Analyzing – A Review
International Journal of Computer Applications Technology and Research
Volume 2, Issue 4, 424-430, 2013
www.ijcat.com
High Performance Computing for Satellite Image
Processing and Analyzing – A Review
Mamta Bhojne
Advanced Computing Training School (ACTS),
Centre for Development of Advanced Computing
(C-DAC), Pune, India.
Anshu Pallav
Advanced Computing Training School (ACTS),
Centre for Development of Advanced Computing
(C-DAC), Pune, India.
Abhishek Chakravarti
Advanced Computing Training School (ACTS),
Centre for Development of Advanced Computing
(C-DAC), Pune, India.
Sivakumar V
Geomatics Solutions Development Group,
Centre for Development of Advanced Computing
(C-DAC), Pune, India.
Abstract: High Performance Computing (HPC) is a recently developed technology in the field of computer science that evolved to meet increasing demands for processing speed and for analysing and processing very large data sets. HPC brings together several technologies, such as computer architecture, algorithms, programs and system software, under one canopy to solve and handle advanced, complex problems quickly and effectively. It is a crucial element today for gathering and processing large amounts of satellite (remote sensing) data, which is the need of the hour. In this paper, we review recent developments in HPC technology (parallel, distributed and cluster computing) for satellite data processing and analysis. We attempt to discuss the fundamentals of High Performance Computing (HPC) for satellite data processing and analysis in a way that is easy to understand without much prior background. We sketch the various HPC approaches, such as parallel, distributed and cluster computing, and the corresponding satellite data processing and analysis methods, such as geo-referencing, image mosaicking, image classification, image fusion and morphological/neural approaches for hyperspectral satellite data. Collectively, these works deliver a snapshot, with tables and algorithms, of the recent developments in those sectors and offer a thoughtful perspective on the potential and the promising challenges of satellite data processing and analysis using HPC paradigms.
Keywords: Satellite Image, Remote Sensing, High Performance Computing, Parallel Computing, Distributed Computing & Cluster
Computing
1. INTRODUCTION
High Performance Computing (HPC) is a recently developed technology in the field of computer science that evolved to meet increasing demands for processing speed. HPC brings together several technologies, such as computer architecture, algorithms, programs and system software, under one canopy to solve advanced, complex problems quickly and effectively. This technology focuses on developing and implementing methods such as parallel processing, cluster processing and distributed processing for solving problems.
Parallel processing is a computing approach that increases the rate at which a data set is processed by processing different parts of the data at the same time [1]. Unlike methods in which data is input into the memory system step by step, distributed processing uses parallel processing on multiple machines, where data is distributed to all parts of the memory system at once. Parallel computing may be seen as a particular, tightly coupled form of distributed computing [2], and distributed computing may be seen as a loosely coupled form of parallel computing [3]. In cluster computing, many CPUs are hooked up via high-speed network connections to a central server, which assigns each of them several tasks [1].
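The data-parallel idea described above can be sketched in a few lines of Python. This is an illustrative example of ours, not code from the paper: the standard multiprocessing module splits a toy "image" (a list of pixel rows) into row blocks and processes the blocks at the same time. The function names and the per-block contrast-stretch operation are our own choices.

```python
# Illustrative sketch: process different parts of the data at the same time
# by splitting an image into row blocks, one block per worker process.
from multiprocessing import Pool

def stretch_block(block):
    """Apply a simple contrast stretch to one block of pixel rows."""
    lo = min(min(r) for r in block)
    hi = max(max(r) for r in block)
    span = (hi - lo) or 1
    return [[(p - lo) * 255 // span for p in row] for row in block]

def parallel_stretch(image, workers=4):
    # Split the image into one block of rows per worker.
    size = max(1, len(image) // workers)
    blocks = [image[i:i + size] for i in range(0, len(image), size)]
    with Pool(workers) as pool:
        results = pool.map(stretch_block, blocks)   # blocks run in parallel
    return [row for block in results for row in block]

if __name__ == "__main__":
    img = [[10, 20], [30, 40], [50, 60], [70, 80]]
    print(parallel_stretch(img, workers=2))
```

Because each block is independent, the same pattern extends from one machine with several cores to several machines, which is the step from parallel to distributed processing described above.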
With the advancement of satellite remote sensing technology, we are getting high spatial, spectral and radiometric resolution images, making huge volumes of data available. Problems occur, however, when remote sensing image processing speed falls far behind, which means that the abundant data cannot be translated into useful information in time. Recently, the application of HPC technology has been gaining importance in remote sensing research, and the utilization of HPC systems in remote sensing applications has become more and more widespread in recent years [4]. HPC can improve computing speed to a great extent in massive data processing, which makes it an effective way to solve the problem of processing efficiency for remote sensing data. In this paper we present various techniques and methods of High Performance Computing for remotely sensed satellite image processing and analyzing. The following sections briefly describe High Performance Computing technology for remote sensing data processing and analyzing methods.
2. PARALLEL COMPUTING
Parallel processing is the simultaneous processing of the same
task on two or more microprocessors in order to obtain faster
results. The computer resources can include a single computer
with multiple processors, or a number of computers connected
by a network, or a combination of both. The processors access
data through shared memory. With the help of parallel
processing, a number of computations can be performed at
once, bringing down the time required to complete a project.
Parallel processing is particularly useful in projects that
require complex computations [5]. Han S.H. et al. (2009) explained that a parallel processing system denotes a multiple-processor computer system consisting of centralized multiprocessors or multi-computers [6]. Figure 1 shows a task-based parallel processing workflow of automatic geometric correction (step 0), image matching (step 1) and Digital Surface Model generation (step n) using various data blocks. In parallel computing, more than one processor is required to perform a task. There are two basic types of parallel computer systems, i.e., shared memory multi-computers (SMMC) and message passing multi-computers (MPMC) [7].
The difference between these two types lies in their memory storage unit. In SMMC, memory is shared among computers, which means the multi-computers share a uniformly addressed storage unit and data exchange is realized by addressing operations. MPMC, in contrast, uses a network to connect computers or processors, and each computer has its own storage unit which cannot be accessed by other computers [8].
Figure 1. Task based parallel processing workflow
(modified after, Hangye Liu et al., 2009).
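The SMMC/MPMC contrast above can be illustrated with a small Python sketch of ours (not from the paper): the first worker updates a shared storage unit directly, while the second receives and returns data only as explicit messages, mimicking the two memory models on a single machine.

```python
# Illustrative sketch: shared-memory style (SMMC-like) versus
# message-passing style (MPMC-like) interaction between processes.
from multiprocessing import Process, Queue, Value

def shared_worker(total, n):
    # SMMC style: every process addresses the same storage unit.
    with total.get_lock():
        total.value += n

def message_worker(inbox, outbox):
    # MPMC style: data arrives and leaves only as explicit messages.
    n = inbox.get()
    outbox.put(n * n)

if __name__ == "__main__":
    total = Value("i", 0)
    procs = [Process(target=shared_worker, args=(total, k)) for k in (1, 2, 3)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(total.value)            # all workers updated the shared value

    inbox, outbox = Queue(), Queue()
    p = Process(target=message_worker, args=(inbox, outbox))
    p.start()
    inbox.put(7)                  # send work as a message
    print(outbox.get())           # receive the result as a message
    p.join()
```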
2.1 Image fusion
Image fusion is the process of merging a multi-spectral (MS) image having high spectral resolution with a pan-chromatic (Pan) image having high spatial resolution (co-georeferenced). There are different algorithms and operations used for image fusion, such as Brovey, SFIM, HFM, Wavelet, IHS, Gram-Schmidt, PCA, etc. Yang J. H., et al. (2010) explained that the parallel processing framework can be applied to most image fusion algorithms, which fall into three categories: component substitution (CS), modulation-based fusion techniques and multi-resolution analysis (MRA) based fusion techniques [9]. Satellite image fusion algorithms can be examined in four main steps: i) co-register the MS and Pan images, ii) upscale (interpolate) the MS image, iii) gather spatial detail from the Pan image, and iv) merge the spatial details with the MS image [10]. They analyzed fourteen data fusion methods, executed both serial and parallel algorithms, and compared execution times and quality performances. After experimenting with the serial and parallel algorithms, they concluded that the parallel algorithms performed on average 4.4 times faster than the serial algorithms, with a minimum speed-up of 1.75 times. Yang Jinghui et al. (2012) proposed that the parallel processing mechanism can divide an entire image into different blocks which are dispatched to different processing units [11]. Thus the processing efficiency is improved.
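The block-dispatch mechanism just described can be sketched as follows. This is an illustrative example of ours, pairing the block split with a simple Brovey-style fusion of toy pixel lists; the helper names, block size and data layout are hypothetical.

```python
# Illustrative sketch: cut the image into blocks and fuse each block
# independently (Brovey transform: fused_i = ms_i * pan / (r + g + b)),
# so blocks can be dispatched to different processing units.
from multiprocessing import Pool

def brovey_block(args):
    """Fuse one block of MS pixels (r, g, b tuples) with its Pan pixels."""
    ms_block, pan_block = args
    fused = []
    for (r, g, b), p in zip(ms_block, pan_block):
        s = (r + g + b) or 1
        fused.append(tuple(round(c * p / s, 2) for c in (r, g, b)))
    return fused

def fuse(ms, pan, block_size=2, workers=2):
    blocks = [(ms[i:i + block_size], pan[i:i + block_size])
              for i in range(0, len(ms), block_size)]
    with Pool(workers) as pool:        # each block on its own worker
        parts = pool.map(brovey_block, blocks)
    return [px for part in parts for px in part]

if __name__ == "__main__":
    ms = [(30, 60, 90), (10, 20, 30)]
    pan = [180, 120]
    print(fuse(ms, pan, block_size=1))
```

The block size trades off dispatch overhead against parallelism, which mirrors the thread-count trade-off discussed in the text.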
Although splitting the computation across more processing threads shortens the execution time, it also increases the additional cost caused by inefficient memory usage as the number of threads increases. In order to check the efficiency of different fusion methods, Alper et al. (2013) applied indexes such as the Spectral Angle Mapper (SAM), Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) [10]. SAM is used to measure the angle between the spectral vector of the fused MS bands and that of the original MS bands to analyze their spectral similarity [12]. Smaller values express high similarity between the images and higher values express low similarity [13].
SAM is computed between the spectral vectors of the original and fused MS bands as

    SAM = cos^{-1} [ (v · v̂) / (||v|| ||v̂||) ]                      (1)

where v̂ = spectral vector of the fused image bands, v = spectral vector of the original MS bands, and i denotes a band.

RMSE defines the error between the reference image and the fused image for each band; a lower value of RMSE shows higher spectral quality:

    RMSE_i = sqrt[ (1/tP) Σ_{p=1}^{tP} (M_i(p) − v̂_i(p))^2 ]        (2)

where M = spectral vector of the reference image and tP = total number of pixels.

RASE expresses the average RMSE performance over the n spectral bands:

    RASE = (100/µ) sqrt[ (1/n) Σ_{i=1}^{n} RMSE_i^2 ]               (3)

where µ = mean radiance of the n spectral bands of the reference image.

Lower values of ERGAS likewise represent higher spectral quality:

    ERGAS = 100 (h/l) sqrt[ (1/n) Σ_{i=1}^{n} (RMSE_i / µ_i)^2 ]    (4)

where h = resolution of the Pan image, l = resolution of the MS image and µ_i = mean radiance of band i of the reference image.
They further explained that the performance variation of each test differs according to the characteristics of the methods and their algorithms, hardware limits, cache memory usage, hyper-threading, etc. They concluded that the best result is obtained with Gram-Schmidt, followed by the IHS-Wavelet hybrid method.
2.2 Image classification
Image classification is the most important part of digital
image processing. The intent of the classification process is to
categorize all pixels in a digital image into one of several land
cover classes or themes. This categorized data may then be
used to produce thematic maps of the land cover present in an
image [14]. There are two types of image classification: supervised and unsupervised. Supervised classification makes use of training samples, while unsupervised classification relies on the natural clustering or grouping of pixel values, i.e., the gray levels of the pixels. Smit M. et al.,
(2000) described that technology to rapidly process imagery
data into useful information products has not kept pace with
the rapidly growing volume and complexity of imagery data increasingly available from government and commercial sources. Significant processing speed improvements have been achieved by implementing classification methods on the highly parallel integrated virtual environment (HIVE), a Beowulf-class system using parallel virtual machine software [15].
International Journal of Computer Applications Technology and Research, Volume 2, Issue 4, 424-430, 2013. www.ijcat.com
Kato Z. et al., (1999) dealt with the problem of unsupervised
classification of images modeled by Markov Random Fields
(MRF). They worked on parameter estimation methods
related to monogrid and hierarchical MRF models using some
iterative unsupervised parallel segmentation algorithms. They
described algorithms which have been tested on image segmentation problems [16]. Comparative tests were also carried out on noisy synthetic data and on real satellite
images. The algorithms were implemented on a Connection
Machine CM200 [17, 18]. They compared the obtained
parameters and segmentation results to the supervised results
presented by Kato Z. et al., (1996) in the given Table 1 [19].
Table 1. Comparison of supervised and unsupervised classification (number of misclassified pixels)
The result shows that unsupervised algorithms provide results
comparable to those obtained by supervised segmentations,
but they require much more computing time due to hyperparameter estimation, and they are slightly more sensitive to noise. The main advantage is that unsupervised methods are completely data-driven; the only input parameter is the number of classes.
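The data-driven behavior described above can be sketched with a plain k-means clustering of pixel gray levels, where the class count k is the only input; the function and variable names, and the fixed iteration count, are illustrative assumptions rather than the algorithms tested by Kato et al.:

```c
#include <math.h>

/* Minimal unsupervised classification sketch: 1-D k-means on pixel
 * gray levels.  `centers` must hold k initial cluster centers; on
 * return it holds the refined centers and `label[p]` the class of
 * pixel p. */
void kmeans_gray(const double *pix, int n, int k,
                 double *centers, int *label, int iters)
{
    for (int it = 0; it < iters; it++) {
        /* Assignment step: each pixel joins its nearest center. */
        for (int p = 0; p < n; p++) {
            int best = 0;
            for (int c = 1; c < k; c++)
                if (fabs(pix[p] - centers[c]) < fabs(pix[p] - centers[best]))
                    best = c;
            label[p] = best;
        }
        /* Update step: each center becomes the mean of its pixels. */
        for (int c = 0; c < k; c++) {
            double sum = 0.0;
            int cnt = 0;
            for (int p = 0; p < n; p++)
                if (label[p] == c) { sum += pix[p]; cnt++; }
            if (cnt > 0) centers[c] = sum / cnt;
        }
    }
}
```

The assignment step over pixels is exactly the part that parallel implementations distribute across processors, since each pixel's label depends only on the current centers.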
2.3 Image mosaicking
Hongyu Wang (2005) has explained that image mosaicking is
the process of combining a set of small images into a larger
composite image [20]. However, it is very complex to mosaic
multiple small images because individual images must be
projected into a common coordinate space, overlap between
images has to be calculated, the images should be processed
so that the backgrounds match, and images composed while
using a variety of techniques to handle the presence of
multiple pixels in the same output space. To accomplish these
tasks, a suite of software tools called Montage has been
developed. The modules in this suite can be run on a single
processor computer using a simple shell script, and can
additionally be run using a combination of parallel
approaches. These include running MPI versions of some
modules, and using standard grid tools. In the latter case,
processing workflows are automatically generated, and
appropriate data sources are located and transferred to a
variety of parallel processing environments for execution. As
a result, it is now possible to generate large-scale mosaics on-
demand in timescales that support iterative, scientific
exploration [21]. Yan Ying Wang et al. (2010) described that in image mosaicking of large-scale RS images, the registration and blending steps are I/O-sensitive and time-consuming. They
proposed an Optimized Image Mosaic Algorithm with Parallel
I/O and Dynamic Grouped Parallel Strategy Based on
Minimal Spanning Tree to solve the problems associated with
image mosaicking. An effective parallel strategy of data
splitting is adopted in the time-consuming part, registration
and blending. In addition, a multi-threaded parallel I/O strategy, which overlaps I/O with computation, is adopted to improve the algorithm's efficiency. Its outstanding parallel efficiency and near-linear speedup are shown through experimental and comparative analysis [22].
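The blending step, in which overlapping pixels from adjacent images must be reconciled, can be illustrated with a toy one-dimensional sketch; the function and the linear feathering weights are assumptions for illustration, not the authors' algorithm:

```c
/* Blend two co-registered scanlines that overlap over `ov` pixels.
 * Inside the overlap, the left image is linearly faded out and the
 * right image faded in (simple feathering).  `out` must hold
 * nl + nr - ov values. */
void feather_blend(const double *left, int nl,
                   const double *right, int nr,
                   int ov, double *out)
{
    /* Left-only region. */
    for (int i = 0; i < nl - ov; i++)
        out[i] = left[i];
    /* Overlap: weight ramps from 0 toward 1 across the overlap. */
    for (int i = 0; i < ov; i++) {
        double w = (double)(i + 1) / (ov + 1);
        out[nl - ov + i] = (1.0 - w) * left[nl - ov + i] + w * right[i];
    }
    /* Right-only region. */
    for (int i = ov; i < nr; i++)
        out[nl - ov + i] = right[i];
}
```

In a parallel mosaicking pipeline, independent seams like this one are exactly the units that can be dispatched to different processors.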
2.4 Morphological/neural approach for hyperspectral satellite image processing
Valencia D. et al. (2007) demonstrated new parallel processing methodologies for hyperspectral image processing based on neural architectures and morphological concepts [23]. The computational performance of the proposed methods is demonstrated in real analysis scenarios based on the exploitation of AVIRIS data, using two parallel computer systems: an SGI Origin 2000 multicomputer located at the Barcelona Supercomputing Center (BSC) and the Thunderhead Beowulf cluster at NASA's Goddard Space Flight Center (NASA/GSFC). They developed a new parallel morphological/neural approach for hyperspectral image classification and specifically discussed implementation aspects using several commodity cluster-based architectures. They proposed methods for hyperspectral analysis which can be included in the categories of spectral unmixing and
classification approaches respectively [24]. Valencia D. et al. (2007) described the classification problem of spectral mixing, and then introduced morphological operations to solve it using SOM (Self-Organizing Map) and endmember extraction-based algorithms [23]. Based on the
morphological concept they proposed the Automated Morphological Endmember Extraction (AMEE) method, which allows soft classification of hyperspectral images in a fully automated fashion. In addition, they discussed parallelization strategies for the AMEE and SOM algorithms. The proposed parallel algorithm fully exploits the parallelism inherent in image processing methods and minimizes communication between processors [25]. Execution times (in seconds) of the AMEE algorithm on the SGI Origin 2000 multicomputer and on the Thunderhead Beowulf cluster, for several combinations of the number of iterations IMAX and the number of processors N, are given in Tables 2 and 3.
Table 2. AMEE algorithm (time in seconds) on the SGI Origin 2000 (from Valencia D. et al., 2007).
They concluded that parallel computing at the massive-parallelism level, supported by message passing, provides a unique framework to accomplish the above goals. For this purpose,
computing systems made up of arrays of commercial off-the-
shelf computing hardware are a cost-effective way of
exploiting this sort of parallelism in remote sensing
applications. Specifically, the proposed MPI-based parallel
implementation minimizes inter-processor communication
overhead and can be ported to any type of distributed memory
system.
Table 1 data (number of misclassified pixels):

Model         Image          Supervised     Unsupervised
Monogrid      Checkerboard   260 (1.59%)    213 (1.41%)
Monogrid      Triangle       112 (0.68%)    103 (0.63%)
Hierarchical  Checkerboard   115 (0.70%)    147 (0.90%)
Hierarchical  Triangle       104 (0.63%)    111 (0.68%)

Table 2 data (execution time in seconds):

N   IMAX=1   IMAX=3   IMAX=5   IMAX=7
1   372      1066     1809     2476
2   182      522      864      1178
4   89       252      429      569
8   264      143      338      293
Table 3. AMEE algorithm (time in seconds) on the Thunderhead Beowulf cluster (from Valencia D. et al., 2007).
3. CLUSTER COMPUTING
A computer cluster is a group of interconnected computers which are employed to process large datasets. The interconnection can be of many different types, including LAN, FTP server, Bluetooth network, Wi-Fi, etc. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks and software for high-performance distributed computing [1]. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability [26]. A cluster
computing system is a compromise between a massively
parallel processing system and a distributed system [27]. The
architecture of cluster computing is given in the Figure 2.
Figure 2. Architecture of remote sensing parallel
processing system based on cluster computing (Modified
after, Hangye Liu et al., 2009).
During recent years, cluster systems have played an increasingly important role in the architectural design of high-performance computing. Yuanli Shi et al. (2012) stated that the Satellite
Environment Center, Ministry of Environment Protection of
China has built a powerful cluster system which is designed to
process massive remote sensing data of HJ-1 satellites
automatically every day [28]. To verify the performance of the cluster system, image registration was chosen as a test case, using one scene from the HJ-1 CCD sensor. The registration experiments show that the system effectively improves the efficiency of data processing and can provide rapid responses in applications. Wang
Xuezhi et al., (2010) have developed a web based data
processing system based on Geospatial Data Abstraction
Library (GDAL), which made use of cluster computing and parallel computing [29]. The system achieved not only the
online processing of 14 vegetation indices like NDVI and
EVI, but also the online gap-fill algorithm for Landsat-7 SLC-
off datasets.
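As a small illustration of one such vegetation index, NDVI can be computed per pixel from the near-infrared and red bands as (NIR − RED)/(NIR + RED); the array names are illustrative, and this is not the system's actual code:

```c
/* Per-pixel NDVI over n pixels.  A zero-sum pixel (no signal in
 * either band) is mapped to 0 to avoid division by zero. */
void ndvi(const double *nir, const double *red, int n, double *out)
{
    for (int i = 0; i < n; i++) {
        double s = nir[i] + red[i];
        out[i] = (s != 0.0) ? (nir[i] - red[i]) / s : 0.0;
    }
}
```

Because every pixel is independent, this is a natural candidate for the block-wise parallel dispatch used throughout the systems surveyed here.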
Yang C.T. and Hung C.C. (2000) presented basic programming techniques using PVM (Parallel Virtual Machine) to implement a message-passing program that exploits the parallelism of a cluster of SMPs (Symmetric Multi-Processors) [27]. Matrix multiplication and parallel ray-tracing problems are used as illustrations, and the experiments are demonstrated on a Linux SMP cluster. The core loop for matrix multiplication is given below:
for (i = 0; i < N; i++) {            /* can be parallelized */
    for (j = 0; j < M; j++) {        /* can be parallelized */
        c[i][j] = 0;
        for (k = 0; k < P; k++)
            c[i][j] = c[i][j] + a[i][k] * b[k][j];
    }
}
The matrix multiplication algorithm is implemented in PVM
(Parallel Virtual Machine) using the master-slave paradigm.
The experimental results showed that the highest speedups were 10.89 for matrix multiplication and 13.67 for PVMPOV (an unofficial parallel version of POV-Ray), when the number of processors is 16 and 16 tasks are created on the SMP cluster. The results of this study make theoretical and technical contributions to the design of PVM programs on Linux SMP clusters for remote sensing data processing. They also show that a Linux/PVM cluster can achieve high speedups for such applications.
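The same row-wise decomposition that the PVM master-slave program uses can be sketched in shared memory with POSIX threads instead of message passing; the sizes, names and two-thread split below are illustrative assumptions, not the original PVM code:

```c
#include <pthread.h>

#define N  4                /* rows of A and C        */
#define P  4                /* cols of A, rows of B   */
#define M  4                /* cols of B and C        */
#define NT 2                /* number of worker threads */

static double A[N][P], B[P][M], C[N][M];

/* Each worker computes one contiguous band of rows of C = A * B,
 * mirroring the rows a PVM master would hand to each slave. */
static void *worker(void *arg)
{
    int t  = (int)(long)arg;
    int lo = t * N / NT, hi = (t + 1) * N / NT;
    for (int i = lo; i < hi; i++)
        for (int j = 0; j < M; j++) {
            double s = 0.0;
            for (int k = 0; k < P; k++)
                s += A[i][k] * B[k][j];
            C[i][j] = s;
        }
    return NULL;
}

static void parallel_matmul(void)
{
    pthread_t tid[NT];
    for (long t = 0; t < NT; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (int t = 0; t < NT; t++)
        pthread_join(tid[t], NULL);
}
```

No locking is needed because each thread writes a disjoint band of rows of C, which is the same property that lets the PVM version avoid slave-to-slave communication.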
4. DISTRIBUTED COMPUTING
Distributed computing is the process of aggregating the power
of several computing entities to collaboratively run a single
computational task in a transparent and coherent way, so that
they appear as a single centralized system. Connecting users
and resources in a transparent, open and scalable way is the
main goal of a distributed operating system [30]. Godfrey B. (2002) described that distributed computing works by splitting a larger task into smaller chunks which can be performed at the same time, independently of each other [31].
The two main entities in distributed computing are the server and the many clients. A central computer, the server, generates work packages which are passed on to worker clients. Each client performs the task detailed in a work package and, when it has finished, passes the completed work package back to the server. The working process of the semi-distributed scheduling policy is given in Figure 3.
Processing image data generated by new remote sensing
systems can severely tax the computational limits of the
classic single processor systems that are normally available to
the remote sensing practitioner. When operating on these large data sets with a single computer system, simplifying approximations are sometimes used that can limit the precision of the final result. Recent work at Pacific Northwest National Laboratory strongly suggests that a distributed network of inexpensive PCs can be designed that is well suited to such computationally intensive problems. As this new type of distributed computing removes computational constraints, image processing algorithms for remotely sensed images are now being considered [32].
Table 3 data (execution time in seconds):

N     IMAX=1   IMAX=3   IMAX=5   IMAX=7
1     311      947      1528     1925
4     124      321      557      685
16    45       95       144      156
36    26       46       61       71
64    19       29       41       43
100   12       20       26       29
144   9        15       20       23
196   6        11       17       20
256   4        10       14       18

Figure 3. Semi-distributed scheduling policy (modified after, Hangye Liu et al., 2009).

Geo-referencing is a basic function of remote sensing data processing. It is the process of assigning geographic information to an image. Knowing where an image is located in the world
allows information about features contained in that image to
be determined. This information includes location, size and
distance. However, geo-referencing is a very time-consuming and computationally intensive process. To improve processing efficiency, Yincui Hu et al. (2005) focused on parallelization of remote
sensing data on a grid platform [33]. As an important new
field in the distributed computing arena, Grid computing
focuses on intensive resource sharing, innovative applications,
and, in some cases, high performance orientation. They
performed their experiments on MODIS level 1B data. Two
strategies were followed by them for geo referencing process
viz. parallel rectification on grid and data partition strategy.
They explained three components to rectify image, i.e.
transformation model selection, coordinate transformation and
resampling to correct every part of the large image. The partition strategy influences the processing efficiency and determines the merge strategy. According to the features of the algorithm, they applied a backward decomposition technique which comprises four steps: i) partitioning the output array into equal-sized blocks, ii) computing the geographical range of every block, iii) finding the GCP triangulations contained within that geographical range, and iv) extracting the corresponding block from the original data in accordance with these triangulations. The extracted block is the data that will be distributed to processors. The experiment
shows that data-parallel geo-referencing is efficient, especially for large data sets: the large data are decomposed into small parts and distributed over the Grid.
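The coordinate-transformation component mentioned above can be sketched with the standard six-parameter affine geotransform (the parameter layout follows GDAL's convention; the function name and sample values are illustrative):

```c
/* Map a pixel/line position to map coordinates using a six-parameter
 * affine geotransform gt[]:
 *   gt[0], gt[3]  = map x/y of the top-left corner
 *   gt[1], gt[5]  = pixel width and (usually negative) pixel height
 *   gt[2], gt[4]  = rotation terms (0 for a north-up image)       */
void pixel_to_map(const double gt[6], double col, double row,
                  double *x, double *y)
{
    *x = gt[0] + col * gt[1] + row * gt[2];
    *y = gt[3] + col * gt[4] + row * gt[5];
}
```

Since each output block's geographic range follows directly from this mapping, the four-step partition strategy above can compute block extents without touching the pixel data.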
Shamim Akhter et al., (2005) described a parallel approach in
cluster computing and MPI (Message Passing Interface)
parallel programming and provide results of experiments on
studying the porting of remote sensing algorithms [34]. They used MPI as the programming tool, and all the codes were tested on a Beowulf cluster using the GNU C compiler with the MPICH implementation of MPI. They performed their experiments on a compressed ASTER image and an uncompressed MODIS image.
Tasks are allocated to slave processors by a master processor. After reading the data from the input file, the server puts them into a 2D array which is distributed to the different processors using either of two procedures: distributing each input pixel of the image to the corresponding processor, or distributing a row or column of the image at a time to the corresponding processor. Figure 4 shows the flow of task allocation by the master processor.
Figure 4. Client server paradigms
(modified after, Shamim et al., 2005).
From these two experiments, which applied simple processing to two images of different sizes, it is observed that all curves for a given image converge as the number of operations applied increases.
5. CONCLUSIONS & FUTURE SCOPE
The methods discussed above are a subset of the High Performance Computing (HPC) techniques being employed today to process large amounts of data. Different approaches can be employed for different
projects. Parallel processing is particularly useful in projects
that require complex computations. Parallel processing
framework can be applied to most image fusion algorithms and to hyperspectral image processing based on neural architectures and morphological models. Distributed
computing is particularly useful when large amounts of data
have to be processed within a given time period, keeping in
mind the economic restrictions. A distributed network of
inexpensive PCs can be designed that is optimal to deal with
the type of computationally intensive problems encountered in
processing remotely sensed images. Cluster computing is one of the most widely used HPC approaches for processes such as geo-referencing, image transformation, image mosaicking,
etc. Currently most of these approaches are limited to military and government organizations, but private enterprises are also adopting this technology at a rapid pace. However, more research work is required on satellite data processing and analysis over HPC platforms to obtain enhanced and faster output for various remote sensing applications.
6. ACKNOWLEDGMENTS
The authors are thankful to ACTS, C-DAC, Pune for
providing facilities and support to carry out this study.
Authors would like to extend their gratitude to Ms. Asima
Mishra, Joint Director–GSDG and Dr. Sunil Londhe GSDG,
C-DAC, Pune for their encouragement.
7. REFERENCES
[1] WIKI. (2013). http://wiki.answers.com/Q, Accessed.
June, 2013.
[2] Peleg D. (2000). Distributed Computing: A Locality-
Sensitive Approach, SIAM, ISBN 0-89871-464-8:pp. 10.
https://en.wikipedia.org/wiki/Distributed_computing.
Accessed, June, 2013.
[3] Ghosh and Sukumar (2007). Distributed Systems – An
Algorithmic Approach. Chapman & Hall/CRC.
https://en.wikipedia.org/wiki/Distributed_computing.
Accessed, June, 2013.
[4] Lee, C. A., Gasster, S. D., Plaza, A., Chang, C. I., &
Huang, B. (2011). Recent developments in high
performance computing for remote sensing- A review.
IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing. 4.3: 508-527.
[5] WISEGEEK. (2013). http://www.wisegeek.com/what-is-
parallel-processing.html. Accessed, June, 2013.
[6] Han S.H, Joon Heo, Hong Gyoo Sohn and Kiyun Yu.
(2009). Parallel Processing Method For Airborne
Laser Scanning Data Using a PC Cluster and a Virtual
Grid. Sensors, 2009, 9: pp.2555-2573; DOI:
10.3390/s90402555.
www.mdpi.com/1424-8220/9/4/2555/pdf. Accessed,
June, 2013.
[7] Wilkinson B. and Allen M. (2002). Parallel
Programming. China Machine Press.
www.cse.ucsc.edu/classes/cmpe113/Fall02/slides1.4.ps.
Accessed, June, 2013.
[8] Hangye Liu, Yonghong Fan, Xueqing Deng, Song Ji.
(2009). Parallel Processing Architecture of Remote
Sensed Image Processing System Based on Cluster.
IEEE Image and Signal Processing 09. CISP '09. 2nd
International Congress.
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber
=5300938. Accessed June, 2013.
[9] Yang J.H., J.X. Zhang, Li Haitao, Sun Yushan, Pu
Pengxian.(2010). Pixel Level Fusion Methods for
Remote Sensing Images: a Current Review,Technical
Commission VII Symposium, Vienna, Austria. pp.680.
http://www.isprs.org/proceedings/XXXVIII/part7/b/pdf/6
80_XXXVIII-part7B.pdf. Accessed, June, 2013.
[10] Alper A G., Adnan O, Meric Y., Sedef K P., Serdar B.,
and Mesut K. (2013). Remote Sensing Data Fusion
Algorithms With Parallel Computing.
http://academia.edu/3266480/Remote_Sensing_Data_Fus
ion_Algorithms_with_Parallel_Computing. Accessed
June, 2013.
[11] Yang Jinghui, Zhang Jixian (2012). A Parallel
Implementation Framework For Remotely Sensed Image
Fusion. ISPRS Annals of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, Volume I-7,
2012 XXII ISPRS Congress, 25 August – 01 September
2012, Melbourne, Australia.
http://www.isprs-ann-photogramm-remote-sens-spatial-
inf-sci.net/I-7/329/2012/isprsannals-I-7-329-2012.pdf.
Accessed, June, 2013.
[12] Yuhas R.H., Goetz A.F. And Boardman J.W. (1992).
Discrimination Among Semi Arid Landscape End
Members Using Spectral Angle Mapper (SAM)
Algorithm. Summaries of the Third Annual JPL Airborne Geosciences Workshop, Vol. 1, Pasadena, CA: JPL Publication. pp.147-149.
http://academia.edu/3266480/Remote_Sensing_Data_Fus
ion_Algorithms_with_Parallel_Computing. Accessed,
June, 2013.
[13] Chikr M E Mezouar, N.Taleb, K.Kpalma and J. Ronsin
(2011). An IHS-Based Fusion For Color Distortion Reduction And Vegetation Enhancement In IKONOS Imagery.
Geosciences And Remote Sensing, IEEE Transactions
on,Vol.49,No.5. pp. 1590-1602.
http://academia.edu/3266480/Remote_Sensing_Data_Fus
ion_Algorithms_with_Parallel_Computing. Accessed,
June, 2013.
[14] SC.(2013). Remote sensing University Lecture note.
http://www.sc.chula.ac.th/courseware/2309507/Lecture/r
emote18.html, Accessed, June, 2013.
[15] Smit M.,Garegnani J,Bechdol M and Chettri S.(2000).
Parallel Image classification on HIVE. Applied imagery
pattern recognition workshop,IEEE, 2000:39-46.
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=953
601&tag=1. Accessed, June, 2013.
[16] Kato Z., Zerubia J., Berthod M. (1999). Unsupervised
parallel image classification using Markovian models.
Pattern Recognition, 32 (1999): 591-604.
http://www.inf.u-szeged.hu/~kato/papers/pattrec99.pdf.
Accessed, June, 2013.
[17] Hillis W.D. (1985). The Connection Machine. MIT press
New York.
http://www.inf.u-szeged.hu/~kato/papers/pattrec99.pdf.
Accessed, June, 2013.
[18] TMC, Thinking Machines Corporation, Cambridge,
Massachusetts, Connection Machine Technical Summary
(1989). Version 5.1 ed.
http://www.inf.u-szeged.hu/~kato/papers/pattrec99.pdf.
Accessed, June, 2013.
[19] Kato Z., Berthod M., Zerubia J. (1996). A hierarchical Markov random field model and multi-temperature annealing for parallel image classification. Computer Vision, Graphics, and Image Processing: Graphical Models and Image Processing. 58:18-37.
http://www.inf.u-szeged.hu/~kato/papers/pattrec99.pdf.
Accessed, June, 2013.
[20] Hongyu Wang (2005). Parallel Algorithms For Image
And Video Mosaic Based Applications.p.1.
http://athenaeum.libs.uga.edu/bitstream/handle/10724/83
51/wang_hongyu_200508_ms.pdf?sequence=1.
Accessed, June, 2013.
[21] Katz D.S., Nathaniel Anagnostou, G. Bruce Berriman, Ewa Deelman, John Good, Joseph C. Jacob, Carl Kesselman, Anastasia Laity, Thomas A. Prince, Gurmeet Singh, Mei-Hui Su, Roy Williams (2006). Astronomical Image Mosaicking On a Grid: Initial Experiences. Engineering the Grid: Status and Perspective. Book - American Scientific Publishers. ISBN:1-58883-038-1.
http://montage.ipac.caltech.edu/publications/montage_E
TG.pdf. Accessed June, 2013.
[22] Yan Ying Wang, Yan Ma, Peng Liu, Dingsheng Liu and
Jibo Xie. (2010). An Optimized Image Mosaic
Algorithm with Parallel IO and Dynamic Grouped
Parallel Strategy Based on Minimal Spanning Tree.
Proceedings of GCC '10: the 2010 Ninth International Conference on Grid and Cloud Computing.
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05662698. Accessed, June, 2013.
[23] Valencia D., Plaza A., Martinez P., Plaza J. (2007).
Parallel Processing of High Dimensional Images Using
Cluster Computer Architectures. IJCA, Vol. 14, No. 1: pp.23-34.
http://www.umbc.edu/rssipl/people/aplaza/Papers/Journa
ls/2007.IJCA.Cluster.pdf. Accessed, June, 2013.
[24] Plaza A., Martínez P., Pérez R. and Plaza J. (2004). A
New Approach for Mixed Pixel Classification in
Hyperspectral Imagery Based on Extended
Morphological Profiles. Pattern Recognition, 37:1097-
1116.
http://www.umbc.edu/rssipl/people/aplaza/Papers/Journa
ls/2007.IJCA.Cluster.pdf
[25] Seinstra F.J., Koelma D., And Geusebroek J. M., (2002).
A Software Architecture For Transparent Parallel Image
Processing. Parallel Computing,28: pp.967-923.
http://www.umbc.edu/rssipl/people/aplaza/Papers/Journa
ls/2007.IJCA.Cluster.pdf. Accessed, June, 2013.
[26] Bader D., Pennington R. (2007). Cluster Computing
Applications. The international journal of high
performance computing. 15(2):181-185.
http://en.wikipedia.org/wiki/Computer_cluster.
Accessed, June, 2013.
[27] Yang C.T. and Hung C.C. (2000). Parallel Computing in
Remote Sensing Data Processing. GIS
DEVELOPMENT, AARS, ACRS, Image Processing.
1(1):1-6.
http://www.a-a-r-s.org/acrs/proceeding/ACRS2000/Paper
s /OMP00-4.htm. Accessed June, 2013.
[28] Yuanli Shi, Wenming Shen, Wencheng Xiong, Zhuo
Fu, Rulin Xiao (2012). High Performance Cluster
System Design for Remote Sensing Data Processing.
High-Performance Computing in Remote Sensing II.
Proceedings of the SPIE, Volume 8539, article id.
85390N.
[29] Wang Xuezhi, Lin Qinghui, Zhou Yuanchun (2010). A
Web-Based Data-Processing System For
Landsat Imagery.
http://www.codata.org/10Conf/abstracts. Accessed, June,
2013.
[30] WORDIQ (2013). Distributed_computing.
http://www.wordiq.com/definition/Distributed_computin
g. Accessed, June, 2013.
[31] Godfrey B. (2002). Document, Primer on Distributed
Computing.
http://www.bacchae.co.uk/docs/dist.html. Accessed June,
2013.
[32] Petrie G. M., Dippold C., Fann G., Jones D., Jurrus
E., Moon B. (2002). Distributed Computing Approach
For Remote Sensing Data. (Conference) Symposium On
Parallel And Distributed Computing, SPDP.
http://academic.research.microsoft.com/Paper/2563627.a
spx. Accessed, June, 2013.
[33] Yincui Hu, Yong Tang, Jiakui Tang, Shaobo Zhong and
Guoyin Cai. (2005). Data Parallel Method for
Georeferencing of MODIS Level 1B Data Using Grid
Computing. 5th International Conference, Atlanta, GA,
USA, May 22-25, 2005, Proceedings, Part III.
http://rsgisforum.irsa.ac.cn/download/05_SCI/Data-
parallel%20method%20for%20georeferencing%20of%2
0MODIS%20level%201B%20data%20using%20grid%2
0computing.pdf. Accessed, June, 2013.
[34] Shamim Akhter , Kiyoshi Honda , Yann Chemin , M.
Ashraful Amin (2005). Experiments On Distributed
Remote Sensing Data (Modis And Aster) Processing
Using Optima Cluster.
http://www.academia.edu/1760912/EXPERIMENTS_O
N_DISTRIBUTED_REMOTE_SENSING_DATA_MO
DIS_AND_ASTER_PROCESSING_USING_OPTIMA_
CLUSTER. Accessed, June, 2013.