The RoboCup Rescue simulation models an earthquake in an urban centre, presented in the form of a map. The goal of this project is to develop a machine learning technique able to predict the expected time of death (ETD) of civilians and to use it in the task planning of the ambulance team, in order to save the maximum number of civilians.
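As a hedged illustration of the ETD idea, a regression model over civilian state can be trained and then used to triage rescue targets. The feature names below (hp, damage, buriedness) mirror common RoboCup Rescue civilian attributes, but the data and the ground-truth formula are synthetic assumptions, not the project's actual pipeline.

```python
# Minimal sketch: regressing a civilian's expected time of death (ETD)
# from simulator-style features. The dataset here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
hp = rng.uniform(0, 10000, n)          # current health points
damage = rng.uniform(1, 200, n)        # damage per time step
buriedness = rng.integers(0, 80, n)    # how deeply the civilian is buried
X = np.column_stack([hp, damage, buriedness])
# Toy ground truth: time steps until hp reaches zero (plus noise).
etd = hp / damage + 0.5 * buriedness + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, etd, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
# Ambulance task planning can then triage civilians by predicted ETD:
priority = np.argsort(model.predict(X_te))  # soonest-to-die first
```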
Sensitivity analysis in a lidar camera calibration (csandit)
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR (Light Detection and Ranging) laser sensor. Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in both a local and a global way. The local approach analyses the output variance with respect to the input, varying only one parameter at a time. In the global approach, all parameters are varied simultaneously and sensitivity indexes are calculated over the total variation range of the input parameters. We quantify the uncertainty behaviour of the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes with two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are reported for each sensor, together with the sensitivity ratio in the laser-camera calibration data.
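A hedged sketch of the kind of global (Sobol) sensitivity analysis this abstract describes, assuming the SALib package; the parameter names and the stand-in error function are illustrative, not the paper's actual LiDAR-camera calibration model.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["focal_length", "lidar_noise", "cam_noise"],  # hypothetical inputs
    "bounds": [[900.0, 1100.0], [0.0, 0.05], [0.0, 2.0]],
}

def calibration_error(x):
    f, ln, cn = x
    # Toy error model standing in for the real reprojection error.
    return (f - 1000.0) ** 2 * 1e-4 + 50.0 * ln + 3.0 * cn + 10.0 * ln * cn

X = saltelli.sample(problem, 1024)            # Saltelli sampling for Sobol indexes
Y = np.apply_along_axis(calibration_error, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order sensitivity indexes
print(dict(zip(problem["names"], Si["ST"])))  # total-order indexes
```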
ANALYTICAL STUDY OF FEATURE EXTRACTION TECHNIQUES IN OPINION MINING (csandit)
Although opinion mining is at a nascent stage of development, the ground is set for dense growth of research in the field. One of the important activities of opinion mining is to extract people's opinions based on characteristics of the object under study. Feature extraction in opinion mining can be done in various ways, such as clustering and support vector machines. This paper is an attempt to appraise the various techniques of feature extraction: the first part discusses the techniques in general, and the second part makes a detailed appraisal of the major techniques used for feature extraction.
SYNTHETICAL ENLARGEMENT OF MFCC BASED TRAINING SETS FOR EMOTION RECOGNITION (csandit)
Emotional state recognition through speech is a very active research topic nowadays. Using subliminal information in speech, it is possible to recognize the emotional state of a person. One of the main problems in the design of automatic emotion recognition systems is the small number of available patterns, which makes the learning process more difficult due to the generalization problems that arise under these conditions.
In this work we propose a solution to this problem that consists of enlarging the training set through the creation of new virtual patterns. In the case of emotional speech, most of the emotional information is carried by speed and pitch variations, so a change in the average pitch that modifies neither the speed nor the pitch variations does not affect the expressed emotion. We use this prior information to create new patterns by applying a pitch-shift modification in the feature extraction process of the classification system. For this purpose, we propose a frequency-scaling modification of the Mel Frequency Cepstral Coefficients (MFCC) used to classify the emotion. This process allows us to synthetically increase the number of available patterns in the training set, thus increasing the generalization capability of the system and reducing the test error.
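A minimal augmentation sketch in the spirit of this abstract: virtual training patterns created by shifting the average pitch before MFCC extraction. The abstract actually proposes a frequency scaling inside the MFCC computation; librosa's time-domain pitch_shift is used here as a simpler stand-in, which is an assumption on my part.

```python
import librosa
import numpy as np

def mfcc_patterns(y, sr, n_mfcc=13, shifts=(-2, -1, 0, 1, 2)):
    """Return one MFCC matrix per pitch-shifted copy of the signal."""
    patterns = []
    for steps in shifts:  # shift in semitones; 0 keeps the original
        y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)
        mfcc = librosa.feature.mfcc(y=y_shifted, sr=sr, n_mfcc=n_mfcc)
        patterns.append(mfcc)
    return patterns

# Usage on a synthetic tone (any mono waveform works the same way):
sr = 22050
y = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)
augmented = mfcc_patterns(y, sr)
print(len(augmented), augmented[0].shape)  # 5 virtual patterns per utterance
```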
A FUZZY INTERACTIVE BI-OBJECTIVE MODEL FOR SVM TO IDENTIFY THE BEST COMPROMIS... (ijfls)
A support vector machine (SVM) learns the decision surface from two different classes of input points. In several applications, some of the input points are misclassified and are not fully allocated to either of the two groups. In this paper a bi-objective quadratic programming model with fuzzy parameters is utilized, and different feature quality measures are optimized simultaneously. An α-cut is defined to transform the fuzzy model into a family of classical bi-objective quadratic programming problems, and the weighting method is used to optimize each of these problems. A major contribution of the proposed fuzzy bi-objective quadratic programming model is that different effective support vectors are obtained as the weighting values change. The experimental results show the effectiveness of the α-cut with the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution among the generated efficient solutions. The main contribution of this paper includes constructing a utility function for measuring the degree of infection with coronavirus disease (COVID-19).
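A hedged sketch of the weighting method applied to a bi-objective SVM: the margin objective and the misclassification objective are combined with weights (w1, w2), which maps onto the usual C-SVM with C = w2/w1. Scanning the weights traces out different efficient support-vector sets, loosely mirroring the model above; the fuzzy α-cut layer is not shown.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, flip_y=0.1, random_state=0)

for w1, w2 in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    clf = SVC(kernel="linear", C=w2 / w1).fit(X, y)
    err = 1.0 - clf.score(X, y)
    print(f"w=({w1},{w2})  C={w2/w1:.2f}  "
          f"support vectors={len(clf.support_)}  training error={err:.3f}")
# An interactive procedure would present these efficient solutions to the
# decision maker and let them pick the best compromise.
```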
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ... (ijcseit)
Many studies have been conducted in the area of Wireless Sensor Networks (WSNs) in recent years. In this kind of network, some of the key objectives that need to be satisfied are area coverage, the number of active sensors, and the energy consumed by nodes. In this paper, we propose an NSGA-II based multi-objective algorithm for optimizing all of these objectives simultaneously. The efficiency of our algorithm is demonstrated in the simulation results: it finds the optimal balance point among the maximum coverage rate, the least energy consumption, and the minimum number of active nodes while maintaining the connectivity of the network.
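A toy sketch of the three objectives named in this abstract (coverage, active sensor count, energy), evaluated on random activation patterns and filtered to the non-dominated (Pareto) set. This illustrates the trade-off surface NSGA-II searches; it is not an NSGA-II implementation, and the sensing radius and energy model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(30, 2))      # fixed sensor positions
grid = np.stack(np.meshgrid(np.arange(100), np.arange(100)), -1).reshape(-1, 2)

def objectives(active):
    """Return (uncovered fraction, active count, energy) to be minimized."""
    if not active.any():
        return 1.0, 0, 0.0
    d = np.linalg.norm(grid[:, None] - sensors[active][None], axis=2)
    covered = (d <= 15.0).any(axis=1).mean()     # sensing radius 15 (assumed)
    energy = active.sum() * 1.0                  # unit energy per active node
    return 1.0 - covered, int(active.sum()), energy

pop = rng.random((200, len(sensors))) < 0.5      # random activation patterns
scores = np.array([objectives(a) for a in pop])
pareto = [i for i, s in enumerate(scores)
          if not ((scores <= s).all(1) & (scores < s).any(1)).any()]
print(f"{len(pareto)} non-dominated solutions out of {len(pop)}")
```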
Selection of optimal hyper-parameter values of support vector machine for sen... (journalBEEI)
Sentiment analysis and classification is used in recommender systems to analyze movie reviews, tweets, Facebook posts, online product reviews, blogs, discussion forums, and online comments in social networks. Usually, the classification is performed using supervised machine learning methods such as the support vector machine (SVM) classifier, which has many distinct parameters. The selection of values for these parameters can greatly influence the classification accuracy and can be addressed as an optimization problem. Here we analyze the use of three heuristic, nature-inspired optimization techniques, cuckoo search optimization (CSO), ant lion optimizer (ALO), and polar bear optimization (PBO), for parameter tuning of SVM models with various kernel functions. We validate our approach on the sentiment classification task of a Twitter dataset. The results are compared using the classification accuracy metric and the Nemenyi test.
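A hedged sketch of SVM hyper-parameter tuning framed as an optimization problem. Random search stands in for the nature-inspired optimizers (CSO, ALO, PBO) named above; only the search strategy differs, while the objective, cross-validated accuracy over C, gamma and the kernel, is the same idea.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
rng = np.random.default_rng(0)

best = (None, -np.inf)
for _ in range(30):
    params = {
        "C": 10 ** rng.uniform(-2, 3),          # log-uniform over [1e-2, 1e3]
        "gamma": 10 ** rng.uniform(-4, 1),
        "kernel": str(rng.choice(["rbf", "poly", "sigmoid"])),
    }
    acc = cross_val_score(SVC(**params), X, y, cv=5).mean()
    if acc > best[1]:
        best = (params, acc)

print("best params:", best[0], "cv accuracy:", round(best[1], 3))
```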
Comparison of Clustering Algorithms using Neural Network Classifier for Satel... (IJERA Editor)
This paper presents a hybrid clustering algorithm and a feed-forward neural network classifier for land-cover mapping of trees, shade, buildings and roads. It starts with a single-step preprocessing procedure to make the image suitable for segmentation. The pre-processed image is segmented using a hybrid genetic Artificial Bee Colony (ABC) algorithm, developed by hybridizing ABC and FCM to obtain effective segmentation of the satellite image, and is then classified using a neural network. The performance of the proposed hybrid algorithm is compared with algorithms such as k-means, Fuzzy C-Means (FCM), Moving K-means, the Artificial Bee Colony (ABC) algorithm, the ABC-GA algorithm, Moving KFCM and the KFCM algorithm.
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL... (IJITCA Journal)
In this paper, we apply a dynamic model for manoeuvring targets in the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In our approach, a color distribution model is used to detect changes in the target's model, and the deformation of the target's model is controlled: if the deformation is larger than a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name the proposed method Deformation Detection Particle Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real airshow videos. The comparison shows that the basic SIR-PF algorithm is not able to track the manoeuvring targets when rotation or scaling occurs in the target's model, whereas DDPF updates the target's model in those cases and is therefore able to track the manoeuvring targets more efficiently and accurately.
One-Sample Face Recognition Using HMM Model of Fiducial Areas (CSCJournals)
In most real-world applications, multiple image samples of individuals are not easy to collect for direct implementation of recognition or verification systems, so there is a need to perform these tasks even if only one training sample per person is available. This paper describes an effective algorithm for recognition and verification with one sample image per class. It uses the two-dimensional discrete wavelet transform (2D DWT) to extract features from images, and a hidden Markov model (HMM) is used for training, recognition and classification. Tested on a subset of the AT&T database, it achieved up to 90% correct classification (hit rate) with a false acceptance rate (FAR) of 0.02%.
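A hedged sketch of the pipeline this abstract describes: 2D DWT features per image region, one Gaussian HMM per person, classification by maximum log-likelihood. The data here is random, and slicing the DWT approximation band into a top-to-bottom observation sequence is my assumption about how to turn one image into an HMM training sequence.

```python
import numpy as np
import pywt
from hmmlearn import hmm

def dwt_sequence(img, rows_per_step=4):
    """Split the DWT approximation subband into a top-to-bottom sequence."""
    cA, _ = pywt.dwt2(img, "haar")               # keep the approximation subband
    steps = cA.shape[0] // rows_per_step
    return np.array([cA[i*rows_per_step:(i+1)*rows_per_step].ravel()
                     for i in range(steps)])

rng = np.random.default_rng(0)
train = {p: rng.random((64, 64)) for p in ["person_a", "person_b"]}  # one image each

models = {}
for person, img in train.items():
    obs = dwt_sequence(img)
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    models[person] = m.fit(obs)

probe = train["person_a"] + 0.05 * rng.random((64, 64))   # noisy copy of person_a
obs = dwt_sequence(probe)
print(max(models, key=lambda p: models[p].score(obs)))    # expected: person_a
```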
Research Inventy : International Journal of Engineering and Science (inventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, available online as well as in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days after acceptance, with a peer-review process taking only 7 days. All articles published in Research Inventy are peer-reviewed.
Color Image Watermarking Application for ERTU Cloud (CSCJournals)
Color images are part of the Egyptian Radio and Television Union (ERTU)'s content and should be protected from any abuse from outside or inside the organization alike. The application for protecting color images deploys watermarking techniques based on the Discrete Wavelet Transform (DWT). It is implemented in software suited to the ERTU's cloud, together with tests to verify the originality of a photo and to detect whether any changes have been applied to it. This supports the essential objectives of the cloud: overcoming the limitation of distance and providing reliable, trusted services to authorized groups.
Methodological study of opinion mining and sentiment analysis techniques (ijsc)
Decision making, at both the individual and the organizational level, is always accompanied by a search for others' opinions. The tremendous growth of opinion-rich resources such as reviews, forum discussions, blogs, micro-blogs and Twitter provides a rich anthology of sentiments. This user-generated content can serve as a boon to the market if its semantic orientations are deliberated. Opinion mining and sentiment analysis are the formalization for studying and construing opinions and sentiments, and the digital ecosystem has itself paved the way for use of the huge volume of opinionated data recorded. This paper is an attempt to review and evaluate the various techniques used for opinion and sentiment analysis.
Multi Resolution features of Content Based Image Retrieval (IDES Editor)
Many content-based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform and Complex Wavelet Transform techniques for efficient image retrieval from a huge database. The Color Histogram technique is based on exact matching of the histograms of the query image and the database images. The Discrete Wavelet Transform technique retrieves images based on computation of the wavelet coefficients of subbands. The Complex Wavelet Transform technique includes computation of the real and imaginary parts to extract details from texture. The proposed method is tested on the COREL1000 database, and retrieval results have demonstrated a significant improvement in precision and recall.
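A minimal sketch of the color-histogram branch of this abstract: per-channel histograms as image signatures, ranked by histogram intersection. The images here are random arrays standing in for a database such as COREL1000.

```python
import numpy as np

def color_histogram(img, bins=16):
    """Concatenate normalized per-channel histograms into one feature vector."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()   # 1.0 means identical histograms

rng = np.random.default_rng(0)
database = [rng.integers(0, 256, (64, 64, 3)) for _ in range(100)]
signatures = [color_histogram(im) for im in database]

query = database[42]                  # querying with a known image
q = color_histogram(query)
ranking = sorted(range(len(database)),
                 key=lambda i: intersection(q, signatures[i]), reverse=True)
print("top-5 matches:", ranking[:5])  # index 42 should rank first
```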
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
A Threshold Fuzzy Entropy Based Feature Selection: Comparative Study (IJMER)
Feature selection is one of the most common and critical tasks in database classification. It reduces the computational cost by removing insignificant and unwanted features, which in turn makes the diagnosis process more accurate and comprehensible. This paper presents a measurement of feature relevance based on fuzzy entropy, tested with a Radial Basis Function (RBF) network, Bagging (Bootstrap Aggregating), Boosting and stacking on datasets from various fields. Twenty benchmark datasets available in the UCI Machine Learning Repository and KDD have been used for this work. The accuracy obtained from these classification processes shows that the proposed method is capable of producing good and accurate results with fewer features than the original datasets.
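A hedged sketch of threshold-based fuzzy-entropy feature selection. The membership function and the threshold rule below are illustrative choices, not the paper's exact formulation: each feature is scaled to [0, 1], treated as a fuzzy membership degree, and scored with the De Luca-Termini fuzzy entropy.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer

def fuzzy_entropy(mu, eps=1e-12):
    """De Luca-Termini entropy: low when memberships sit near 0 or 1."""
    mu = np.clip(mu, eps, 1 - eps)
    return -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))

X, y = load_breast_cancer(return_X_y=True)
mins, maxs = X.min(axis=0), X.max(axis=0)
memberships = (X - mins) / (maxs - mins)          # per-feature fuzzy memberships

scores = np.array([fuzzy_entropy(memberships[:, j]) for j in range(X.shape[1])])
threshold = np.median(scores)                     # assumed threshold rule
selected = np.where(scores < threshold)[0]        # keep low-entropy (crisper) features
print(f"kept {len(selected)} of {X.shape[1]} features:", selected[:10], "...")
```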
We're Going Global - With or without you! by Melissa Powell, Founder & CEO, ... (Melissa Powell)
This presentation was successfully made to the South Florida Organization Development Network. The internal and external consultants, as well as leaders at all levels found this material very useful and walked away with key elements to help their organizations with this unique growth strategy.
With globalization on the rise, organizations have realized a spike in the multicultural diversity of their customer base. By creating multicultural diverse teams, businesses are able to provide better service and innovative products that are more relatable to all.
The key ideas gained were:
- The evolution of diversity and what it means for organizations
- Expanding talent development initiatives and innovation by exposing an organization to global diversity
- Handling the diversity strategy with care (Emphasis on proactive initiatives, team culture readiness, and verification)
- Proven organization benefits of exploring talent expansion initiatives outside national local zones
Hope you find this presentation useful!
Check us out at Pocm.com for more information.
About Pocmi, Inc.
Pocmi, Inc. is a platform that provides end-to-end solutions for employers and workers to connect, evaluate and facilitate the immigration of knowledge workers. Maneuvering through the process of human capital management and immigration can be overwhelming. However, the companies that remain agile are the ones that include international hiring as a proactive strategy towards innovation and leadership. At Pocmi, Inc. we start with your organization's growth strategy and end when your new international hire and their team are performing like superstars. This experience will fully equip and empower your organization to embrace multicultural diversity and lead in innovation. Visit us at www.pocmi.com to learn more about our solutions.
About Our Founder
Melissa Powell is an international speaker and advocate for professional migration and multicultural diversity. She is committed to lifting organizations and nations through access to global talent management. She has experience helping organizations in the Caribbean, North & South America and Europe grow through strategic thinking in organizational development as well as human capital management. She focuses on leading clients through specialized coaching and assessments to ensure efficient recruitment, on-boarding and cultural immersion of international talent, so that they can reap the benefits of a diversified team. Outside of Pocmi, Inc. (Pocmi.com) Melissa facilitates human capital development through an upcoming podcast Invest Human (InvestHuman.co).
The goal of this project is to build a classifier able to predict whether a song is happy or sad by analysing its lyrics. Most of the research on music classification is based on features obtained from audio signals. However, the exploration of lyrics alone as a source of information can be relevant in music classification; it is an interesting problem that has not been widely explored in the literature.
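A minimal sketch of a lyrics-only mood classifier: TF-IDF features over the words of each song plus a linear classifier. The two-class "dataset" below is invented for illustration; a real experiment needs labelled lyrics.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

lyrics = [
    "sunshine dancing smile joy together celebrate",   # toy happy lyrics
    "alone tears goodbye cold broken rain",            # toy sad lyrics
    "laugh love bright morning golden days",
    "lonely night grief sorrow fading away",
]
labels = ["happy", "sad", "happy", "sad"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(lyrics, labels)
print(model.predict(["rain and tears all alone tonight"]))  # likely ['sad']
```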
Art Everywhere: a project for a Google workshop. Development of pattern ... (Francesco Cucari)
Bachelor's thesis in Computer Engineering.
Art Everywhere is a social network where emerging artists can make themselves known and share their works of art. The thesis project is the result of group work carried out during the workshop "Google Technologies for Cloud and Web Development", in which each member developed one or more features.
The objective of my thesis project is to present the algorithms used for the efficient transfer (upload/download) of the artworks, i.e. of images, and for their compression, preventing a significant loss of quality and thereby obtaining behaviour similar to the image compression algorithm used by WhatsApp.
In addition, the system developed for pattern recognition is presented, aimed at finding similarities between the artworks in the database and with famous works of art, in order to identify forgeries and prevent possible theft of intellectual property.
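A hedged sketch of WhatsApp-style image compression as described in the summary: downscale to a bounded resolution, then re-encode as JPEG at reduced quality. The size bound and quality value are assumptions, not the thesis's actual parameters.

```python
from io import BytesIO
from PIL import Image

def compress(img: Image.Image, max_side=1600, quality=70) -> bytes:
    """Resize so the longest side is <= max_side, then JPEG-encode."""
    img = img.convert("RGB")
    img.thumbnail((max_side, max_side))          # preserves aspect ratio
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality, optimize=True)
    return buf.getvalue()

original = Image.new("RGB", (4000, 3000), color=(200, 150, 50))
data = compress(original)
print(f"compressed size: {len(data)} bytes")
```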
This article presents a new algorithm for tracking moving objects over the long term. We have tried to overcome some potential difficulties, first through a comparative study of methods for measuring the difference and the similarity between the template and the source image. In the second part, an improvement of the best method allows us to follow the target in a robust way. This method also allows us to effectively overcome the problems of geometric deformation, partial occlusion and recovery after the target leaves the field of vision. The originality of our algorithm is that it is based on a new model that does not depend on a probabilistic process and does not require detection learned from prior data. Experimental results on several difficult video sequences have shown performance advantages over many recent trackers. The developed algorithm can be employed in several applications such as video surveillance, active vision and industrial visual servoing.
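A minimal sketch of the template/source similarity measurement this article compares: OpenCV's normalized cross-correlation locates the patch of the frame most similar to the template. A full tracker would repeat this per frame and update the template on deformation; that logic is omitted here.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)   # stand-in video frame
template = frame[100:140, 150:200].copy()                  # patch to track

scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
print(f"best match at {max_loc} with similarity {max_val:.3f}")  # -> (150, 100)
```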
Extended Fuzzy C-Means with Random Sampling Techniques for Clustering Large Data (AM Publications)
Big data are any data that you cannot load into your computer's primary memory. Clustering is a primary task in pattern recognition and data mining, and we need algorithms that scale well with data size. The original implementation, literal Fuzzy C-Means (FCM), is serial: the algorithm attempts to partition a finite collection of n elements into a collection of c fuzzy clusters and, given a finite set of data, returns a list of c cluster centers. However, it does not scale well, slowing down as the size of the data increases, which makes it impractical and sometimes unusable. In this paper, we propose an extended version of the fuzzy c-means clustering algorithm based on various random sampling techniques, to study which method scales well for large or very large data.
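A hedged sketch of FCM extended with random sampling: run fuzzy c-means on a random subsample, then extend memberships to the full data from the learned centers. The update rules below are the standard FCM ones (fuzzifier m = 2); the sampling scheme is a plain uniform draw, one of several the abstract alludes to.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))            # membership from inverse distance
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

def memberships(X, centers, m=2.0):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
    u = 1.0 / (d ** (2 / (m - 1)))
    return u / u.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.5, (50_000, 2)) for loc in (0, 4, 8)])
sample = X[rng.choice(len(X), 2_000, replace=False)]  # cluster only the sample
centers, _ = fcm(sample, c=3)
u_full = memberships(X, centers)                      # extend to all points
print("centers:\n", centers.round(2))
```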
Performance Comparison of Machine Learning Algorithms (Dinusha Dilanka)
In this paper we compare the performance of two classification algorithms. It is useful to differentiate algorithms based on computational performance rather than classification accuracy alone: although the classification accuracy of the algorithms is similar, their computational performance can differ significantly and can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, K Nearest Neighbor classification and Logistic Regression. We consider a large dataset of 7981 data points and 112 features and examine the performance of the two algorithms on it. The processing time and accuracy of the different machine learning techniques are estimated on the collected dataset, using 60% of it for training and the remaining 40% for testing. The paper is organized as follows. Section I contains the introduction and background analysis of the research, and Section II the problem statement. Section III briefly describes our application and data analysis process, the testing environment, and the methodology of our analysis. Section IV comprises the results of the two algorithms. Finally, the paper concludes with a discussion of future research directions that would eliminate the problems in the current research methodology.
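A minimal sketch of the comparison this paper describes: accuracy and wall-clock time of k-NN versus logistic regression on a 60/40 train-test split. The synthetic data matches the stated shape (7981 points, 112 features) for illustration only.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=7981, n_features=112, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier()),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(f"{name}: accuracy={acc:.3f}, time={time.perf_counter() - t0:.2f}s")
```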
Advanced SOM & K Mean Method for Load Curve Clustering (IJECEIAES)
From the classification of load curves for one customer, the main features influencing electricity consumption, such as seasonal factors and weekday factors, may be extracted; in this way a utility can make tariff decisions by season or by day of the week. Popular clustering techniques are SOM & K-means and Fuzzy K-means. SOM & K-means is a prominent two-level approach to clustering: first the data set is clustered using the SOM, and in the second level the SOM is clustered by K-means. In the first level, two training algorithms were examined, sequential and batch training. In the second level, the K-means results depend strongly on the initial values of the centers. To overcome this, this paper uses the subtractive clustering approach proposed by Chiu in 1994 to determine the centers. Because the effective radius in Chiu's method influences the number of centers, the paper applies the PSO technique to find the optimum radius. To validate the proposed approach, tests on well-known data samples are carried out, and applications to the daily load curves of one Southern utility are presented.
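A hedged sketch of the two-level SOM & K-means approach from this abstract: level one trains a SOM on the load curves, level two runs K-means on the SOM codebook vectors. The MiniSom package is an assumption (any SOM implementation works); the subtractive-clustering and PSO refinements are not shown.

```python
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic daily load curves: 24 hourly readings per day, 300 days.
hours = np.arange(24)
curves = np.array([np.sin((hours - rng.integers(0, 6)) / 24 * 2 * np.pi)
                   + rng.normal(0, 0.1, 24) for _ in range(300)])

som = MiniSom(6, 6, input_len=24, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(curves, 2000)                       # level 1: SOM training

codebook = som.get_weights().reshape(-1, 24)         # 36 prototype curves
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codebook)
print("cluster of each SOM prototype:", labels)      # level 2: K-means on SOM
```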
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE... (gerogepatton)
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed model is obtaining different efficient support vectors as the weighting values change. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
07 18sep 7983 10108-1-ed an edge edit ari (IAESIJEECS)
Edge exposure, or edge detection, is an important and classical problem in the medical field and in computer vision. The Caliber Fuzzy C-Means (CFCM) clustering algorithm for edge detection depends on the selection of the initial cluster center value. It attempts to organize a collection of pixels into clusters, such that a pixel within a cluster is more similar to every other pixel in that cluster. Using the CFCM technique, the BSDS image is first clustered; the clustered image is then given as input to the basic Canny edge detection algorithm. The application of new parameters with fewer operations makes CFCM fruitful. According to the calculations, the result acquired by using the CFCM clustering function divides the image into four clusters in general. The proposed method is evidently a robust modification of the fuzzy c-means and Canny algorithms, and its convergence is very fast compared to other edge detection algorithms. Consequently, the proposed algorithm yields enhanced edge detection and better results than traditional image edge detection techniques.
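A hedged sketch of the cluster-then-Canny pipeline: pixel clustering (plain k-means standing in for the paper's CFCM variant, with the four clusters the abstract mentions) followed by OpenCV's Canny detector on the quantized image.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128), dtype=np.uint8)   # stand-in for a BSDS image

pixels = img.reshape(-1, 1).astype(np.float32)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
quantized = km.cluster_centers_[km.labels_].reshape(img.shape).astype(np.uint8)

edges = cv2.Canny(quantized, 100, 200)                   # Canny on the clustered image
print("edge pixels found:", int((edges > 0).sum()))
```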
A comparative study of three validities computation methods for multimodel ap... (IJECEIAES)
The multimodel approach offers very satisfactory results in the modelling, diagnosis and control of complex systems. In the modelling case, this approach consists of three steps: the determination of the model library, the computation of the validities, and the establishment of the final model. In this context, this paper elaborates a comparative study of three recent methods of validity computation, highlighting the method that offers the best performance in terms of precision. To achieve this goal, we apply the three methods to two simulation examples in order to compare their performance.
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images: images are partitioned into sub-images to detect local features, the sub-partitions are rearranged into vertical and horizontal matrices, and eigenvalues, eigenvectors and diagonal vectors are computed for these matrices. A global feature vector is then generated for face recognition. Experiments are performed on the benchmark YALE face database. The results indicate that the proposed method gives better recognition performance, in terms of average recognition rate and retrieval time, than the existing methods.
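A hedged sketch of the local-feature idea in this abstract: partition each face image into sub-images, take the eigenvalues and the diagonal of each block as local features, concatenate them into a global vector, and match by nearest neighbour. The block size, the Euclidean distance, and the exact feature mix are my assumptions.

```python
import numpy as np

def block_features(img, block=8):
    feats = []
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            b = img[i:i+block, j:j+block].astype(float)
            feats.extend(np.linalg.eigvals(b).real)   # eigenvalues of the block
            feats.extend(np.diag(b))                  # diagonal vector
    return np.array(feats)

rng = np.random.default_rng(0)
gallery = {p: rng.random((32, 32)) for p in ["s1", "s2", "s3"]}  # one face each
gallery_feats = {p: block_features(im) for p, im in gallery.items()}

probe = gallery["s2"] + 0.01 * rng.random((32, 32))  # noisy view of subject s2
pf = block_features(probe)
match = min(gallery_feats, key=lambda p: np.linalg.norm(pf - gallery_feats[p]))
print("identified as:", match)                        # expected: s2
```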
Abstract: Tracking and detecting people in crowds is a central problem of the current era, so we develop a method for crowded video scenes. Detection of many individual objects has improved over recent years, but detection and tracking remain challenging due to occlusions and variation in people's appearance. Facing these challenges, we suggest leveraging information on the global structure of the scene and resolving detection and tracking jointly. We explore constraints relating crowd density and detection, optimized through a joint energy function, and we show how optimizing such an energy function improves tracking and detection in flowing crowds. We validate our approach on a challenging video dataset of crowded scenes. Features relevant to tracking people, such as movement, size and height, are added to the observation models of the particle filters and followed by a clustering method; this minimizes the communication cost and makes data retrieval easy.
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION (VLSICS Design)
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations and theses. The conceptual details of the methods are explained, and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided, and the state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, hence their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
FIDUCIAL POINTS DETECTION USING SVM LINEAR CLASSIFIERS (csandit)
Currently, there is growing interest from the scientific and industrial communities in methods that offer solutions to the problem of fiducial point detection in human faces. Some methods use the SVM for classification, but we observed that some formulations of the optimization problems were not discussed. In this article, we investigate the performance of the C-SVC mathematical formulation when applied in a fiducial point detection system. Furthermore, we explore new parameters for training the proposed system. The performance of the proposed system is evaluated on a fiducial point detection problem, and the results demonstrate that the method is competitive.
Similar to Machine Learning techniques for the Task Planning of the Ambulance Rescue Team
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
An Enterprise Resource Planning system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which drives productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite (Google)
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... (Globus)
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... (Mind IT Systems)
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. This is where custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... (Globus)
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet's largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined, on-demand data workflows capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resulting analysis (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptx (rickgrimesss22)
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
SOCRadar Research Team: Latest Activities of IntelBroker (SOCRadar)
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled for you what has happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar's Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Large Language Models and the End of Programming (Matt Welsh)
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Machine Learning techniques for the Task Planning of the Ambulance Rescue Team
Francesco Cucari - fracu758
TDDD10 AI Programming - Individual Report
Linköping University
1 Introduction
The RoboCup Rescue project started after the earthquake that hit Kobe City in January 1995, causing an enormous number of victims and extensive damage. The aim of the project is to propose solutions for overcoming such disastrous scenarios with minimal loss. To this end, the RoboCup Rescue simulation models an earthquake in an urban centre presented in the form of a map. The simulation matches real-world limits and problems as accurately as possible: the simulated earthquake causes buildings to collapse, roads to be blocked, fires to ignite, and civilians to be trapped and buried inside collapsed buildings.
In the simulator, three teams are responsible for all rescue operations: the ambulance team, the fire brigade, and the police force. The main task of the ambulance team is to rescue civilians and carry them safely to a refuge; the fire brigade extinguishes burning buildings, and the police force clears blocked roads.
1.1 Motivation
This project focuses on the tasks of the ambulance team. Since the score function of the simulator depends heavily on the number of civilians left alive and on the percentage of health left for all civilians [1], the highest priority for obtaining a high score is to save as many civilians as possible.
This requires making the most of the time available to each agent: agents must be sure that no time is wasted on targets that will die either during or after rescuing. This problem arises, for example, when civilians are prioritized solely according to their distance from each agent.
1.2 Aim
The proposed solution is inspired by the GUC ArtSapience team's approach described in [6]. It is based on learning the Expected Time of Death (ETD) of each civilian: realistic ETD estimates lead to better performance of the ambulance team, because the agents' tasks can be prioritized accordingly. Since it is difficult to find a realistic closed-form formula for the ETD, a learning algorithm can be used to capture the factors that are hard to compute from these implicit parameters [5]. The goal of this project is to achieve better performance by introducing the ETD into the prioritization task, compared to the results of the shortest-distance approach.
1.3 Research questions
Given the described domain, the following questions arise; both this report and the project as a whole aim to answer them:
1. Is it possible to introduce the ETD into the prioritization task?
(a) If so, does this lead to better performance compared to the shortest-distance approach?
2 Methods
This work can be divided into two main steps: Learning and Planning.
2.1 Learning
In each time step of the simulation, the state of any given civilian changes. These values are collected and preprocessed into a dataset that is used in the learning phase.
2.1.1 Data collection
Data are collected over many runs on multiple maps, where the agents log the state of civilians at each step instead of rescuing them. Some further changes are made to the simulator: maps are run without blockades, with the fire simulator enabled, and with all civilians static. To collect more data, maps are also run under different scenarios in which more fires are added to the map. Table 1 shows an example of the collected dataset, which represents the history of each civilian.
ID civilian   Timestep   HP     Damage   Buriedness
406067950     177        3000   1000     0
406067950     178        2000   1300     0
406067950     179        0      1800     0
1769673037    79         1000   100      60
1769673037    80         0      200      60

Table 1: An example of collected data before preprocessing
2.1.2 Preprocessing and Labelling
The collected data need some preprocessing. For a supervised learning algorithm to be applied to a training dataset, each example in that set must have an output attribute, i.e. the target value of the prediction. In the proposed approach this attribute is the ETD of the civilian. The civilian ID and the time steps are used to label each example with its ETD value: the resulting dataset contains, for each example, a value for each input attribute and the time step at which the civilian dies. If a civilian does not die during the simulation, this value is set to the maximum time step, which is 300. Table 2 shows an extract of the final dataset, which is used as input to the learning classifier.
HP     Damage   Buriedness   ETD
3000   1000     0            179
2000   1300     0            179
0      1800     0            179
1000   100      60           80
0      200      60           80

Table 2: Final dataset after preprocessing and labelling
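To make the labelling step concrete, the sketch below shows one way to turn the raw log of Table 1 into the labelled examples of Table 2. This is an illustrative sketch, not the project's actual code: the class and record names are hypothetical, and it assumes a civilian is considered dead at the first time step where its HP reaches 0.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical labelling sketch: turns the raw log rows of Table 1 into
    // the labelled examples of Table 2.
    class EtdLabeller {
        static final int MAX_TIMESTEP = 300; // civilians alive at the end get this ETD

        record Observation(long civilianId, int timestep, int hp, int damage, int buriedness) {}
        record LabelledExample(int hp, int damage, int buriedness, int etd) {}

        static List<LabelledExample> label(List<Observation> log) {
            // First pass: the earliest time step with hp == 0 per civilian is its ETD.
            Map<Long, Integer> etdById = new HashMap<>();
            for (Observation o : log) {
                if (o.hp() == 0) {
                    etdById.merge(o.civilianId(), o.timestep(), Math::min);
                }
            }
            // Second pass: label every observation of a civilian with that ETD,
            // defaulting to MAX_TIMESTEP for civilians that never die.
            List<LabelledExample> labelled = new ArrayList<>();
            for (Observation o : log) {
                int etd = etdById.getOrDefault(o.civilianId(), MAX_TIMESTEP);
                labelled.add(new LabelledExample(o.hp(), o.damage(), o.buriedness(), etd));
            }
            return labelled;
        }
    }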
2.1.3 Classifier
Given the dataset, the goal is to learn the relation between the input tuples (HP, Damage, Buriedness) and the output (ETD), following the approach developed in [2]: the relation is obtained by first training a model on the dataset and then using the resulting model for future predictions. A single classifier is used for both steps: multiple linear regression.
Linear regression is an approach for predicting a quantitative response (a numeric value) from multiple features (also called "predictors" or "input variables") [3]. It takes the following form:
y = β0 + β1·x1 + β2·x2 + β3·x3    (1)
where β0 represents the intercept and each xi represents a different feature (HP, damage, and buriedness), with its own coefficient βi. Once these coefficients are learned, the output value y, i.e. the ETD of a civilian, can be predicted.
For model validation, 10-fold cross-validation was used. As defined in [4], cross-validation is a statistical method for evaluating and comparing learning algorithms by dividing the data into two segments: one used to learn or train a model and the other used to validate it. The division process is repeated k times, using different subsets of the data. The idea of cross-validation is thus to estimate how well a model trained on the current dataset can predict the output value for any given input instance.
Finally, the Weka tool (http://www.cs.waikato.ac.nz/ml/weka/) is used for all learning purposes. Weka is a comprehensive suite of Java class libraries that implement many state-of-the-art machine learning and data mining algorithms [7]. It is easy to use, since the learning algorithms can be called directly from Java code; a minimal sketch of this setup is shown below.
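The sketch wires together the training and 10-fold cross-validation described above using the standard Weka API. It assumes the labelled dataset of Table 2 has been exported to an ARFF file with the ETD as its last attribute; the file and class names are hypothetical.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.functions.LinearRegression;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    // Trains the multiple linear regression model of 2.1.3 and validates it
    // with 10-fold cross-validation, as described above.
    public class EtdModel {
        public static void main(String[] args) throws Exception {
            // Assumed export of the Table 2 dataset; the file name is hypothetical.
            Instances data = DataSource.read("etd-training.arff");
            data.setClassIndex(data.numAttributes() - 1); // ETD is the last attribute

            LinearRegression model = new LinearRegression();
            model.buildClassifier(data);
            System.out.println(model); // prints the learned coefficients (cf. Eq. 1)

            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(new LinearRegression(), data, 10, new Random(1));
            System.out.println(eval.toSummaryString());

            // Prediction for a newly notified civilian would then be:
            // double etd = model.classifyInstance(someInstance);
        }
    }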
2.2 Planning
Rescue agents need to plan their decisions in order to reach their goal of saving as many civilians as possible. An ambulance agent notifies the ambulance centre of the parameters of each civilian it finds. The centre uses these parameters and the output learning model (3.1) to predict the ETD, and then prioritizes the agents' tasks accordingly.
2.2.1 Exploring-Rescuing trade-off
The first decision the centre takes concerns the exploring-rescuing trade-off; that is, the ambulance centre must answer the question: when should it decide which civilian to rescue? The first option is to rescue as soon as a victim is notified. This is not efficient, because it can waste time: another civilian may need help more urgently, have more HP left, and so on. The chosen approach is instead to introduce a priority value based on the ETD: if a civilian's priority value exceeds a certain threshold, the ambulance centre orders an agent to rescue that civilian, as sketched below.
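The following sketch illustrates this dispatch rule; the class and record names and the threshold value are hypothetical, as the report does not specify them.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    // Hypothetical dispatch rule: an agent is sent only once some notified
    // civilian's priority exceeds the threshold; otherwise agents keep exploring.
    class DispatchRule {
        static final double THRESHOLD = 0.5; // assumed value, not given in the report

        record Target(long civilianId, double priority) {}

        static Optional<Target> pickRescueTarget(List<Target> notified) {
            return notified.stream()
                    .filter(t -> t.priority() > THRESHOLD)
                    .max(Comparator.comparingDouble(Target::priority));
        }
    }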
2.2.2 Task prioritization
The ambulance centre can prioritize the agents' tasks according to:
• shortest distance, where the closest targets have the highest priority;
• ETD, where targets with a low ETD have the highest priority;
• shortest distance + ETD, where close targets with a low ETD have the highest priority. This combined approach is inspired by A*: f(n) = g(n) + h(n), where g(n) is the normalized cost of the shortest path to the target and h(n) is the ETD of the target (a sketch follows the list).
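As a sketch of the combined strategy, the evaluation function could look as follows. Since the report does not state how the two terms are put on a common scale, normalizing the ETD by the maximum time step is an assumption made here; lower f(n) means the target is served first.

    // Sketch of the combined evaluation function f(n) = g(n) + h(n); lower
    // values are served first. Normalizing the ETD by the maximum time step
    // is an assumption of this sketch, not stated in the report.
    class CombinedPriority {
        static final double MAX_TIMESTEP = 300.0;

        static double f(double shortestPathCost, double maxPathCost, int etd) {
            double g = shortestPathCost / maxPathCost; // normalized path cost (report)
            double h = etd / MAX_TIMESTEP;             // low ETD => more urgent => lower f
            return g + h;
        }
    }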
3 Results
3.1 Learning
Using the Weka tool and applying the classifier described in 2.1.3 to the training dataset described in 2.1.2, the resulting model is shown in Fig. 1.

Figure 1: Resulting output learning model

This model is used by the ambulance centre to predict the ETD from the parameters of each civilian notified by the agents, and to prioritize the agents' tasks accordingly. A summary of the model validation, described in 2.1.3, is shown in Fig. 2.
Figure 2: Summary of the model validation
3.2 Planning
The number of rescued civilians was chosen as the evaluation measure for the proposed approaches, since the overall simulator score does not depend only on civilians. Fig. 3 compares the number of rescued civilians for the three approaches described in 2.2.2.

Figure 3: Comparison of the number of rescued civilians using the three approaches
4 Discussion
The proposed approach, based on introducing the ETD into the prioritization task, and in particular the combined distance+ETD strategy, leads to better performance than the shortest-distance strategy.
It is worth noting that some statistical measures of the learned model in this work are better than those reported in [6]: the correlation coefficient, which quantifies the statistical relationship between two or more observed data values, is higher by 7%, and the root relative squared error is lower by 5%.
Finally, with the proposed approach the number of rescued civilians increases by 7% compared with the shortest-distance strategy and by 15% compared with the purely ETD-based strategy.
5 Conclusion
The introduction of a learning model into the task planning of the ambulance rescue team helped to reach better results in the rescue operations. The model is the outcome of training on the collected dataset with a linear regression algorithm, and it is used to predict the ETD for task prioritization and planning. In particular, the ETD is used to optimize the search algorithm that constructs the paths agents follow from one location on the map to another: the traditional breadth-first search is replaced by a heuristic search that includes the ETD in the evaluation function for expanding nodes.
The proposed solution not only optimized the task planning of the agents and achieved better results; it also helped to overcome the obstacles caused by the inaccurate civilian values retrieved from the simulator, since having a training dataset makes it possible to learn the relations between the parameters of civilians.
In future work, the training dataset can be augmented with other parameters, for example the shortest distance of a civilian from the nearest agent. Furthermore, combining the proposed approach with a division of the map into clusters, and the assignment of one or more agents to specific clusters, could lead to better results, increasing the number of rescued civilians and decreasing the time wasted in rescue operations.
References
[1] Cyrille Berger. Developing a Team. 2015.
[2] Fadwa Sakr and Slim Abdennadher. Harnessing Supervised Learning Techniques for the Task Planning of Ambulance Rescue Agents. ICAART 2016, 2016.
[3] Gareth James et al. An Introduction to Statistical Learning. Vol. 6. New York: Springer, 2013.
[4] Payam Refaeilzadeh, Lei Tang, and Huan Liu. Cross-Validation. In: Encyclopedia of Database Systems. Springer US, 2009, pp. 532-538.
[5] Sameh Metias, Mahmoud Walid, et al. RoboCup 2015 Rescue Simulation League Team Description: GUC ArtSapience (Egypt). 2015.
[6] Sameh Metias, Mohammed Waheed, et al. RoboCup 2016 Rescue Simulation League Team Description: GUC ArtSapience (Egypt). 2014.
[7] Ian H. Witten, Eibe Frank, Leonard E. Trigg, Mark A. Hall, Geoffrey Holmes, and Sally Jo Cunningham. Weka: Practical Machine Learning Tools and Techniques with Java Implementations. 1999.