In this paper, a study was carried out to aid the adequate allocation of resources in the College of Natural Sciences, TYZ University (not its real name, for ethical reasons). Questionnaires were administered to the high-ranking officials of one of the Colleges, the College of Pure and Applied Sciences, to examine how resources were allocated over three consecutive sessions (2009/2010, 2010/2011 and 2011/2012). The data gathered were then analysed to generate contributory inputs for the three basic outputs (variables) formed for the purpose of the study. These variables are: x1, representing the quality of graduates produced; x2, standing for the research papers, seminars, journal articles, etc. published by faculty; and x3, denoting service delivery within the three sessions under study. The simplex method of linear programming was used to solve the formulated model.
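For concreteness, the following is a minimal sketch of the kind of three-output linear programme the paper describes, solved with SciPy. All coefficients are hypothetical placeholders, not the values derived from the questionnaire data.

```python
# Hypothetical LP sketch: maximise the weighted contribution of the three
# outputs (x1 quality of graduates, x2 research output, x3 service delivery)
# subject to resource constraints. linprog minimises, so negate the objective.
from scipy.optimize import linprog

c = [-3.0, -2.0, -1.5]            # hypothetical contribution weights

A_ub = [[2.0, 1.0, 1.0],          # hypothetical resource-usage rates
        [1.0, 3.0, 2.0]]          # one row per resource (funds, staff time, ...)
b_ub = [100.0, 120.0]             # hypothetical resource availabilities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3,
              method="highs")     # modern solver replacing the classic simplex
print(res.x, -res.fun)            # optimal (x1, x2, x3) and objective value
```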
A novel population-based local search for nurse rostering problem (IJECEIAES)
Population-based approaches are generally better than single-solution (local search) approaches at exploring the search space. However, the drawback of population-based approaches is in exploiting the search space. Several hybrid approaches have proven their efficiency across different domains of optimization problems by integrating the strengths of population-based and local search approaches, but hybrid methods have the drawback of increased parameter tuning. Recently, a population-based local search (PB-LS) with fewer parameters than existing approaches was proposed for the university course-timetabling problem and proved effective. The approach employs two operators to intensify and diversify the search: the first is applied to a single solution, while the second is applied to all solutions. This paper investigates the performance of population-based local search on the nurse rostering problem. The INRC2010 database, with a dataset composed of 69 instances, is used to test the performance of PB-LS, and a comparison is made against other existing approaches in the literature. Results show good performance of the proposed approach, with PB-LS providing the best results in 55 of the 69 instances used in the experiments.
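A rough skeleton of the two-operator structure the abstract describes is sketched below: one operator intensifies around a single solution, the other diversifies the whole population. The objective, moves, and rates are toy stand-ins, not the INRC2010 roster model.

```python
# Toy population-based local search (PB-LS) skeleton. The cost function and
# neighbourhood move are illustrative assumptions only.
import random

def cost(sol):                     # toy objective: minimise sum of squares
    return sum(v * v for v in sol)

def local_move(sol):               # operator 1: perturb one position slightly
    out = sol[:]
    i = random.randrange(len(out))
    out[i] += random.choice((-1, 1))
    return out

def pb_ls(pop_size=10, dim=5, iters=1000):
    pop = [[random.randint(-10, 10) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        # operator 1: intensify around the current best solution
        best = min(pop, key=cost)
        cand = local_move(best)
        if cost(cand) < cost(best):
            pop[pop.index(best)] = cand
        # operator 2: diversify by perturbing solutions across the population
        pop = [local_move(s) if random.random() < 0.2 else s for s in pop]
    return min(pop, key=cost)

print(pb_ls())
```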
Selecting the best stochastic systems for large-scale engineering problems (IJECEIAES)
Selecting a subset of the best solutions among large-scale problems is an important area of research. When the alternative solutions are stochastic in nature, the problem becomes even harder. The objective of this paper is to select a set that is likely to contain the actual best solutions with high probability. If the selected set contains all the best solutions, the selection is denoted a correct selection (CS). We are interested in maximizing the probability of this selection, P(CS). In many cases, the computation budget available for simulating the solution set in order to maximize P(CS) is limited. Therefore, instead of distributing the computational effort equally among the alternatives, the optimal computing budget allocation (OCBA) procedure puts more effort into the solutions that have more impact on the selected set. In this paper, we derive formulas for how to distribute the available budget asymptotically to approximate P(CS). We then present a procedure that uses OCBA with ordinal optimization (OO) to select the set of best solutions. The properties and performance of the proposed procedure are illustrated through a numerical example. Overall, the results indicate that the procedure is able to select a subset of the best systems with a high probability of correct selection using a small number of simulation samples under different parameter settings.
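As background, the classic OCBA allocation rule (after Chen et al.) that this line of work builds on can be sketched as follows; the exact asymptotic formulas derived in the paper are not reproduced here, and the example values are made up.

```python
# Sketch of the classic OCBA rule: designs that are close to the best
# (small delta) and noisy (large sigma) receive more simulation budget.
import math

def ocba_fractions(means, stds):
    b = min(range(len(means)), key=lambda i: means[i])   # best (minimisation)
    alloc = [0.0] * len(means)
    for i in range(len(means)):
        if i != b:
            delta = means[i] - means[b]
            alloc[i] = (stds[i] / delta) ** 2            # N_i ~ (sigma_i/delta_i)^2
    # N_b = sigma_b * sqrt( sum over i != b of (N_i / sigma_i)^2 )
    alloc[b] = stds[b] * math.sqrt(sum((alloc[i] / stds[i]) ** 2
                                       for i in range(len(means)) if i != b))
    total = sum(alloc)
    return [a / total for a in alloc]                    # budget fractions

print(ocba_fractions([1.0, 1.2, 2.0], [0.3, 0.3, 0.5]))
```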
A preliminary survey on optimized multiobjective metaheuristic methods for da... (ijcsit)
The present survey provides the state of the art of research devoted to Evolutionary Approaches (EAs) for clustering, exemplified with a diversity of evolutionary computations. The survey provides a nomenclature that highlights aspects that are very important in the context of evolutionary data clustering. The paper examines the clustering trade-offs branched out with wide-ranging Multi-Objective Evolutionary Approach (MOEA) methods. Finally, this study addresses the potential challenges of MOEA design and data clustering, along with conclusions and recommendations for novice and experienced researchers, positioning the most promising paths of future research.
The Optimization of choosing Investment in the capital markets using artifici... (inventionjournals)
Optimization is one of the crucial topics in the behavioural sciences, and these days the use of metaheuristics has grown considerably in all fields. In this study, we look for an optimal selection within a portfolio of investment opportunities, seeking a selection logic using a metaheuristic algorithm called artificial neural networks. The results showed that using the artificial neural network algorithm improved decision-making and the selection of investment opportunities. The research is applied in purpose and aims to develop knowledge in a particular field.
The final cost of public school building projects, like that of other construction projects, is unknown to the owner until the account is closed. Artificial Neural Networks (ANN) are used in an attempt to predict, before work starts, the final cost of two-story (12-class) school projects awarded under the lowest-bid system. A database of 65 school project records completed in 2007-2012 is used to develop and verify the ANN model. Based on expert opinions, nine out of eleven parameters are considered to have the most significant impact on the magnitude of the final cost; hence they are used as model inputs, while the output of the model is the final cost (FC). These parameters are: accepted bid price, average bid price, estimated cost, contractor rank, supervising engineer experience, project location, number of bidders, year of contracting, and contractual duration. It was found that the ANN is able to predict the final cost of school projects with a very good degree of accuracy, with a coefficient of correlation (R) of 91% and an average accuracy percentage of 99.98%.
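A hedged sketch of this kind of cost-prediction network follows: nine project parameters in, final cost out. The network shape and the synthetic data are assumptions, not the paper's fitted model.

```python
# ANN regression sketch for final-cost prediction. The 9 features stand in
# for the parameters listed in the abstract; the data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((65, 9))                           # 65 projects x 9 parameters
y = X @ rng.random(9) + rng.normal(0, 0.05, 65)   # stand-in for final cost (FC)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.score(X, y))                          # R^2 on the training data
```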
11. Software modules clustering: an effective approach for reusability (Alexander Decker)
This document summarizes previous work on using clustering techniques for software module classification and reusability. It discusses hierarchical clustering and non-hierarchical clustering methods. Previous studies have used these techniques for software component classification, identifying reusable software modules, course clustering based on industry needs, mobile phone clustering based on attributes, and customer clustering based on electricity load. The document provides background on clustering analysis and its uses in various domains including software testing, pattern recognition, and software restructuring.
IRJET- Multi-Document Summarization using Fuzzy and Hierarchical Approach (IRJET Journal)
This document discusses multi-document summarization using fuzzy and hierarchical approaches. It begins with an abstract describing multi-document summarization as extracting important information from multiple source documents to create a short summary. The introduction discusses the need for efficient multi-document summarization due to the large amount of online information. It then reviews related literature on multi-document summarization techniques including neuro-fuzzy approaches and modified K-nearest neighbor algorithms. Finally, it describes the proposed methodology which uses statistical approaches like similarity measures, page rank and expectation maximization to cluster sentences and extract a summary from the clustered sentences.
DEVELOPING A CASE-BASED RETRIEVAL SYSTEM FOR SUPPORTING IT CUSTOMERS (IJCSEA Journal)
This document describes the development of a case-based retrieval system to help IT customers solve problems. It discusses using a case-based reasoning approach where prior solutions and experiences are stored as cases. A conversational case-based reasoning system is developed that allows users to describe problems and receive potential solutions through a dialogue. The system was tested on sample problems and achieved a high success rate of 90% with an average of 7.7 steps to retrieve solutions.
Calculation of Reusability Matrices for Object Oriented applications (IJMERJOURNAL)
ABSTRACT: Reusability is one of the major concerns of object-oriented applications. The object-oriented paradigm has various advantages, including reusability. A lot of metrics are available for the quantitative measurement of the reusability of any application developed using the object-oriented paradigm. The goal of software metrics is to identify and control the essential parameters that affect software development. Various types of measurement are required in software development, including project size, the complexities involved, measurement of cohesion and coupling among modules, testability, reusability, and the effort and resources required. This paper presents a practical calculation of some of the reusability metrics which can be used for object-oriented applications.
A Preference Model on Adaptive Affinity Propagation (IJECEIAES)
In recent years, two new data clustering algorithms have been proposed. One of them is Affinity Propagation (AP), a data clustering technique that uses iterative message passing and considers all data points as potential exemplars. Two important inputs of AP are a similarity matrix (SM) of the data and the parameter "preference" p. Although the original AP algorithm has shown much success in data clustering, it still suffers from one limitation: it is not easy to determine the value of the parameter "preference" p that results in an optimal clustering solution. To resolve this limitation, we propose a new model of the parameter "preference" p, based on the similarity distribution. Given the SM and p, the Modified Adaptive AP (MAAP) procedure is run; MAAP means that we omit the adaptive p-scanning algorithm of the original Adaptive-AP (AAP) procedure. Experimental results on random non-partition and partition datasets show that (i) the proposed algorithm, MAAP-DDP, is slower than the original AP for the random non-partition dataset, and (ii) for the random 4-partition dataset and the real datasets, the proposed algorithm succeeds in identifying clusters matching the number of the datasets' true labels, with execution times comparable to those of the original AP. Besides that, the MAAP-DDP algorithm proves more feasible and effective than the original AAP procedure.
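The idea of deriving the preference from the similarity distribution can be sketched with scikit-learn's AffinityPropagation, whose default preference is the median of the input similarities. The quantile used below is an illustrative assumption, not the paper's exact model.

```python
# Sketch: set AP's "preference" p from the distribution of pairwise
# similarities (negative squared Euclidean distances on toy data).
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import pairwise_distances

X = np.random.default_rng(0).random((60, 2))
S = -pairwise_distances(X, metric="euclidean") ** 2   # similarity matrix (SM)
p = np.quantile(S[np.triu_indices_from(S, k=1)], 0.5) # p from similarity distribution

ap = AffinityPropagation(affinity="precomputed", preference=p, random_state=0)
labels = ap.fit_predict(S)
print(len(ap.cluster_centers_indices_), "clusters found")
```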
MINING DISCIPLINARY RECORDS OF STUDENT WELFARE AND FORMATION OFFICE: AN EXPLO... (IJITCA Journal)
Data mining is the process of analyzing large datasets, understanding their patterns and discovering useful information from a large amount of data. The decision tree, one of the common algorithms of data mining, is a tree structure consisting of internal and terminal nodes which process the data to eventually produce a classification. Classification is the process of dividing a dataset into groups such that the members of each group are as close as possible to one another, and different groups are as far as possible from one another, where distance is measured with respect to the specific variable(s) one is trying to predict. Data Envelopment Analysis (DEA) is a technique wherein the productivity of a unit is evaluated by comparing the volume/amount of output(s) to the volume/amount of input(s) used; the performance of a unit, dubbed a Decision-Making Unit (DMU), is calculated by comparing its efficiency with the best-perceived performance in the dataset. In this study, a model for measuring the efficiency of Decision-Making Units is presented, along with related methods of implementation and interpretation. There are many classification techniques and algorithms, but this study used a decision tree with the CHAID algorithm. The CHAID classification decision tree, as a data mining technique, identifies the relationship between the demographic profile of the students and the category of offenses. Cross tabulation, a tool for analyzing categorical data, is a table in matrix format that shows the multivariate frequency distribution of the variables and gives a basic picture of the interrelation between two variables. Both the CHAID algorithm and cross tabulation obtained the same results, implying that a higher percentage of students commit minor offenses regardless of college, gender, year level, month and course. The CHAID algorithm, used in the software application Student Offenses Remediation System (STORES), serves as a remediation plan for the university. Further studies should be conducted to identify the effectiveness of the remediation plan by empirically investigating the rule set and/or implementing another algorithm to determine the program's efficiency.
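The cross-tabulation step the study pairs with CHAID can be illustrated in a few lines; the records below are made up for illustration.

```python
# Cross tabulation: offence category against a demographic variable.
import pandas as pd

records = pd.DataFrame({
    "college": ["Engineering", "Business", "Engineering", "Arts", "Business"],
    "offence": ["minor", "minor", "major", "minor", "minor"],
})
# Rows: demographic profile; columns: offence category; cells: counts.
print(pd.crosstab(records["college"], records["offence"], margins=True))
```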
ADABOOST ENSEMBLE WITH SIMPLE GENETIC ALGORITHM FOR STUDENT PREDICTION MODEL (ijcsit)
Predicting student performance is a great concern of higher-education managements. This prediction helps to identify and improve students' performance, and several factors may improve it. In the present study, we employ data mining processes, particularly classification, to enhance the quality of the higher educational system. Recently, a new direction has been taken for improving classification accuracy by combining classifiers. In this paper, we design and evaluate a fast learning algorithm using an AdaBoost ensemble with a simple genetic algorithm, called "Ada-GA", where the genetic algorithm is demonstrated to successfully improve the accuracy of the combined classifier's performance. The Ada-GA algorithm proved to be of considerable usefulness in identifying at-risk students early, especially in very large classes; this early prediction allows the instructor to provide appropriate advising to those students. The Ada-GA algorithm was implemented and tested on the ASSISTments dataset, and the results showed that the algorithm successfully improved detection accuracy while reducing computational complexity.
Enhancement of student performance prediction using modified K-nearest neighbor (TELKOMNIKA JOURNAL)
The traditional K-nearest neighbor (KNN) algorithm uses an exhaustive search of the complete training set to predict a single test sample, a procedure that slows the system down considerably for huge datasets. The selection of classes for a new sample depends on a simple majority voting system that does not reflect the varying significance of different samples (i.e., it ignores the similarities among samples), and it can also lead to misclassification when a double majority class occurs. To address these issues, this work adopts a combination of a moment descriptor and KNN to optimize sample selection, based on the fact that classifying the training samples before the search takes place can speed up and improve the predictive performance of the nearest-neighbor method. The proposed method is called fast KNN (FKNN). The experimental results show that FKNN reduces the original KNN's running time by 75.4% to 90.25% and improves classification accuracy by 20% to 36.3%, using three types of student datasets to predict automatically whether a student will pass or fail an exam.
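The core FKNN idea, organising the training set into groups so each query searches only a relevant subset, can be sketched as below. Grouping by k-means is an illustrative choice here; the paper itself pre-classifies training samples with a moment descriptor.

```python
# Sketch: group the training set, then run a small KNN per group so a
# query is matched against its group only, not the whole training set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = rng.random((500, 4)), rng.integers(0, 2, 500)   # stand-in student data

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
knn_per_group = {}
for g in range(5):                                     # one small KNN per group
    mask = km.labels_ == g
    knn_per_group[g] = KNeighborsClassifier(n_neighbors=3).fit(X[mask], y[mask])

q = rng.random((1, 4))
g = km.predict(q)[0]                                   # route the query to its group
print(knn_per_group[g].predict(q))                     # search only that subset
```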
Feature selection using modified particle swarm optimisation for face recogni... (eSAT Journals)
Abstract
One of the major factors influencing classification accuracy is the selection of the right features. Not all features play a vital role in classification; many features in a dataset may be redundant or irrelevant, increasing the computational cost and possibly reducing the classification rate. In this paper, we use DCT (discrete cosine transform) coefficients as features for a face recognition application. The coefficients are optimally selected by a modified PSO algorithm, in which the choice of coefficients incorporates the average of the mean normalized standard deviations of the various classes and gives more weight to the lower-indexed DCT coefficients. The algorithm is tested on the ORL database. A recognition rate of 97% is obtained; the average number of features selected is about 40 percent for a 10 × 10 input, and the modified PSO took about 50 iterations to converge. These performance figures are better than some of the results reported in the literature.
Keywords: particle swarm optimization, discrete cosine transform, feature extraction, feature selection, face recognition, classification rate.
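The DCT feature-extraction step can be sketched as follows; the PSO-based selection itself is omitted, and the image and the cut-off for "low-index" coefficients are illustrative assumptions.

```python
# Sketch: 2-D DCT of a stand-in 10 x 10 image, keeping low-index
# (low-frequency, top-left) coefficients as features.
import numpy as np
from scipy.fft import dctn

img = np.random.default_rng(0).random((10, 10))  # stand-in 10 x 10 input
coeffs = dctn(img, norm="ortho")                 # 2-D discrete cosine transform

# Keep coefficients with small row+column index, roughly matching the ~40%
# average feature count the abstract reports.
idx = np.add.outer(np.arange(10), np.arange(10))
features = coeffs[idx <= 7]                      # low-frequency coefficients
print(features.size, "features kept out of", coeffs.size)
```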
Multi Label Spatial Semi Supervised Classification using Spatial Associative ... (cscpconf)
Multi-label spatial classification based on association rules with multi-objective genetic algorithms (MOGA), enriched by semi-supervised learning, is proposed in this paper to deal with the multiple-class-label problem. We adapt problem transformation for the multi-label classification and use a hybrid evolutionary algorithm to optimize the generation of spatial association rules, which addresses single labels. MOGA is used to combine the single labels into multi-labels under the conflicting objectives of predictive accuracy and comprehensibility. Semi-supervised learning is done through the process of rule-cover clustering. Finally, an associative classifier is built with a sorting mechanism. The algorithm is simulated and the results are compared with the existing MOGA-based associative classifier, which the proposed method outperforms.
Data warehouses are structures with large amounts of data collected from heterogeneous sources to be used in a decision support system. Data warehouse analysis identifies hidden, initially unexpected patterns, but this analysis requires great memory and computational cost, so data reduction methods have been proposed to make it easier. In this paper, we present a hybrid approach based on Genetic Algorithms (GA), as evolutionary algorithms, and Multiple Correspondence Analysis (MCA), as a factor analysis method, to conduct this reduction. Our approach identifies a reduced subset of dimensions p' from the initial set p, where p' < p, in which it finds the fact profile closest to a reference. The GAs identify the possible subsets, and the chi-squared (Khi²) formula of the MCA evaluates the quality of each subset. The study is based on a distance measurement between the reference and the n fact profiles extracted from the warehouse.
Tap changer optimisation using embedded differential evolutionary programming... (journalBEEI)
Over-compensation and under-compensation are two undesirable outcomes of power system compensation; neither is a good option in power system planning and operation. Non-optimal values of the compensating parameters applied to a power system contribute to these phenomena, so a reliable optimization technique is needed to alleviate the issue. This paper presents a stochastic optimization technique for controlling power loss in a high-demand power system, where load increase causes voltage decay, leading in turn to current increase and growing system losses. A new optimization technique termed embedded differential evolutionary programming (EDEP) is proposed, integrating traditional differential evolution (DE) and evolutionary programming (EP). EDEP was applied to solve the power-system optimization problem through a tap-changer optimization scheme. Results obtained in this study, with an implementation on the IEEE 30-bus reliability test system (RTS) for the loss minimization scheme, are significantly superior to those of traditional EP.
Technical Efficiency of Management wise Schools in Secondary School Examinati... (IOSRJM)
In this paper we measure Board of Secondary Education data with the CCR model for the state of Andhra Pradesh for the academic years 2012-2013 and 2013-2014, to examine the pattern of CCR technical efficiency of management-wise school results prior to the division of the state into two separate states. The performance of the management-wise schools is presented along with the performance of the peer management schools for the state as a whole.
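For reference, the standard CCR multiplier model used to compute such efficiencies can be sketched as a per-DMU linear programme; the school input/output data below are made up.

```python
# CCR (input-oriented, multiplier form) for one DMU k: maximise u.y_k
# subject to v.x_k = 1 and u.y_j - v.x_j <= 0 for all DMUs j, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5.0, 3.0], [8.0, 1.0], [7.0, 4.0]])   # inputs: 3 schools x 2
Y = np.array([[10.0], [12.0], [11.0]])               # outputs: 3 schools x 1

def ccr_efficiency(k):
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[k], np.zeros(n_in)])      # maximise u . y_k (negated)
    A_ub = np.hstack([Y, -X])                        # u . y_j - v . x_j <= 0
    A_eq = np.concatenate([np.zeros(n_out), X[k]]).reshape(1, -1)  # v . x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_out + n_in), method="highs")
    return -res.fun                                  # efficiency score in (0, 1]

print([round(ccr_efficiency(k), 3) for k in range(len(X))])
```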
- The document proposes a new model called Performance Factors Analysis (PFA) as an alternative to the commonly used Knowledge Tracing (KT) and Learning Factors Analysis (LFA) models for adaptive instruction.
- PFA modifies LFA to make it sensitive to student performance (correct vs incorrect responses) in order to enable individualized modeling of student learning needed for adaptive tutoring, while retaining LFA's advantages for educational data mining.
- Comparison of PFA to LFA and a non-adaptive version of LFA on four datasets showed PFA performed comparably to LFA, demonstrating it can effectively capture individual student differences for adaptive instruction.
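The PFA logistic form (after Pavlik, Cen and Koedinger) can be written as a short worked function: the logit is a sum, over the knowledge components (KCs) of an item, of a difficulty term plus weighted counts of the student's prior successes and failures. The parameter values below are illustrative, not fitted.

```python
# PFA sketch: P(correct) = sigmoid( sum over KCs of beta_k + gamma_k*s_k + rho_k*f_k )
import math

def pfa_probability(kcs, beta, gamma, rho, successes, failures):
    m = sum(beta[k] + gamma[k] * successes[k] + rho[k] * failures[k] for k in kcs)
    return 1.0 / (1.0 + math.exp(-m))

# One item touching two KCs, with 3 prior successes / 1 failure on KC "a".
print(pfa_probability(["a", "b"],
                      beta={"a": -0.5, "b": 0.2},   # KC easiness
                      gamma={"a": 0.3, "b": 0.1},   # credit per prior success
                      rho={"a": 0.1, "b": 0.05},    # credit per prior failure
                      successes={"a": 3, "b": 0},
                      failures={"a": 1, "b": 0}))
```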
Proposing an Appropriate Pattern for Car Detection by Using Intelligent Algor... (Editor IJCATR)
Nowadays, the automotive industry has attracted the attention of consumers, and product quality is considered an essential element in today's competitive markets. Security and comfort are the main criteria and parameters in selecting a car. Therefore, the standard CAR dataset, comprising six features and 1,728 instances, has been used. In this paper, we try to select a car with the best characteristics by using intelligent algorithms (Random Forest, J48, SVM, Naive Bayes) and by combining these algorithms with aggregated classifiers such as Bagging and AdaBoostM1. In this study, the speed and accuracy of the intelligent algorithms in identifying the best car are taken into account.
Oversampling technique in student performance classification from engineering... (IJECEIAES)
This document discusses various oversampling techniques for dealing with imbalanced data in student performance classification. It compares SMOTE, Borderline-SMOTE, SVMSMOTE, and ADASYN oversampling combined with MLP, gradient boosting, AdaBoost, and random forest classifiers. The results show that Borderline-SMOTE gave the best performance for predicting the minority (low performance) class according to several evaluation metrics. SVMSMOTE also performed well overall, particularly for recall, F1-measure, and AUC. Gradient boosting provided high and consistent precision, recall, F1-measure, and AUC across the different oversampling methods.
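A minimal sketch of this kind of oversampling comparison, using imbalanced-learn on a synthetic stand-in for the imbalanced student data, might look like this:

```python
# Rebalance the minority (low-performance) class before training a classifier.
from imblearn.over_sampling import SMOTE, BorderlineSMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

for sampler in (SMOTE(random_state=0), BorderlineSMOTE(random_state=0)):
    X_res, y_res = sampler.fit_resample(X, y)        # synthesise minority samples
    clf = GradientBoostingClassifier(random_state=0).fit(X_res, y_res)
    print(type(sampler).__name__, clf.score(X, y))
```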
A Software Measurement Using Artificial Neural Network and Support Vector Mac... (ijseajournal)
Today, software measurement is based on various techniques such as neural networks, genetic algorithms, fuzzy logic, etc. This study examines the efficiency of applying a support vector machine with a Gaussian radial basis kernel function to the software measurement problem in order to increase performance and accuracy. Support vector machines (SVMs) are an innovative approach to constructing learning machines that minimize the generalization error. There is a close relationship between SVMs and Radial Basis Function (RBF) classifiers; both have found numerous applications such as optical character recognition, object detection, face verification, and text categorization. The results demonstrate that the accuracy and generalization performance of the SVM with the Gaussian radial basis kernel function are better than those of the RBF network (RBFN). We also examine and summarize several points on which the SVM is superior to the RBFN.
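An SVM with a Gaussian (RBF) kernel of the kind compared here is a one-liner in scikit-learn; the dataset below is a stand-in for the software-measurement data.

```python
# SVM with a Gaussian radial basis kernel, evaluated by cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
svm = SVC(kernel="rbf", C=1.0, gamma="scale")   # Gaussian RBF kernel
print(cross_val_score(svm, X, y, cv=5).mean())
```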
A Survey of Modern Data Classification Techniques (ijsrd.com)
This document provides an overview of modern data classification techniques. It describes decision tree learning algorithms, which use tree structures to classify observations by mapping them to target class labels based on their features. The document discusses common decision tree algorithms like ID3 and C4.5 and their use of recursive partitioning to split data into subsets. It also reviews related work on decision tree algorithms and their applications in domains like medicine, manufacturing, and molecular biology. The conclusion states that current and improved classification algorithms efficiently predict target attributes but require significant time and complex extracted rules.
IRJET- Using Data Mining to Predict Students Performance (IRJET Journal)
This document describes a study that used logistic regression to predict student performance based on educational data. The researchers collected student data including exam scores, attendance, study hours, family income, etc. from a large dataset. Logistic regression achieved the best prediction accuracy of 82.03% compared to other models like naive bayes, K-nearest neighbor, and multi-layer perceptron. The results indicate that around 230 students would perform poorly, 600 would perform fairly, and 200 would perform well based on the predictive model. This analysis can help identify students needing extra support and help universities improve academic outcomes.
This document proposes a model for automatically clustering Thai students' online homework assignments before teachers grade them. The model uses five parts: 1) Thai word segmentation, 2) stop-word removal, 3) term weighting, 4) document clustering using k-means, and 5) performance evaluation. The model was tested on 1,000 student assignments and achieved high accuracy, purity, and F-measure scores similar to human grading, allowing teachers to grade assignments more efficiently.
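The last three parts of that pipeline (term weighting, k-means clustering, evaluation) can be sketched on English stand-in text; Thai word segmentation and stop-word removal are language-specific preprocessing steps omitted here.

```python
# TF-IDF term weighting followed by k-means clustering of short answers.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

answers = ["sorting algorithms compare elements",
           "bubble sort compares adjacent elements",
           "photosynthesis converts light to energy",
           "plants use light energy in photosynthesis"]

tfidf = TfidfVectorizer().fit_transform(answers)     # term weighting
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
print(labels)                                        # similar answers share a cluster
```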
This document summarizes an article that proposes improvements to an existing algorithm for resource scheduling in cloud computing environments. The existing algorithm uses a hybrid of ant colony optimization and particle swarm optimization. The proposed improvements add an initial phase that uses an enhanced fish swarm search algorithm to help find more global optimal solutions. This global optimal solution found by fish swarm search is then used to guide the existing ant colony optimization and particle swarm optimization hybrid to find more locally optimal solutions. The document provides background on resource scheduling, metaheuristic algorithms, and describes the specific implementations of the improved fish swarm search algorithm and the overall proposed methodology.
SCCAI- A Student Career Counselling Artificial Intelligence (vivatechijri)
As education grows day by day, competition has prompted a need for students to understand more about the educational field. A counselor is often unavailable, and sometimes lacks proper knowledge of a particular educational field, which creates misconceptions about that field. This makes it hard for a student to decide on a proper educational trajectory, and the guidance received is not always useful. The proposed paper overcomes these problems using machine learning algorithms: various algorithms were considered, and those best suited to our project are used here. Three major problems arise along the way, and they are solved using Random Forest, linear regression, and a search algorithm built on the Google API. First, the search algorithm solves the problem of location by segregating colleges location-wise; then Random Forest provides the list of colleges from the stream and percentage range; and finally, linear regression predicts the current cutoff using previous years' data. In addition, the proposed system provides information on all fields of education, helping students understand and know their fields of interest better. The idea is a fresh one, with no existing projects of a similar kind, and the project will help guide students throughout.
In the present paper, the applicability and capability of AI techniques for effort-estimation prediction have been investigated. Neuro-fuzzy models are seen to be very robust, characterized by fast computation and capable of handling distorted data; given the non-linearity present in the data, they are an efficient quantitative tool for predicting effort estimation. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment. The initial parameters of OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership function are optimally determined using the hybrid learning algorithm. The analysis shows that the effort-estimation prediction model developed with the OHLANFIS technique performs better than the standard ANFIS model.
A Survey and Comparative Study of Filter and Wrapper Feature Selection Techni...theijes
Feature selection is considered a problem of global combinatorial optimization in machine learning: it reduces the number of features and removes irrelevant, noisy and redundant data. However, identification of useful features from hundreds or even thousands of related features is not an easy task. Selecting relevant genes from microarray data becomes even more challenging owing to the high dimensionality of features, the multiclass categories involved, and the usually small sample size. In order to improve prediction accuracy and to avoid incomprehensibility due to the number of features, different feature selection techniques can be implemented. This survey classifies and analyzes different approaches, aiming not only to provide a comprehensive presentation but also to discuss challenges and various performance parameters. The techniques are generally classified into three: filter, wrapper and hybrid.
Assistantship Assignment Optimization Using Hungarian Algorithm A Case StudyNat Rice
This document describes a study that uses the Hungarian algorithm to optimize the assignment of teaching assistants (TAs) to courses. The goal is to assign TAs from a large pool of candidates to a smaller number of available courses, while balancing preferences, fairness, and other constraints. A mathematical model is formulated to represent the assignment problem as a multi-objective function considering instructor preferences, student preferences, and income equality among candidates. The Hungarian algorithm is then used to solve the model and generate an optimized assignment. When tested on past TA assignment data, the optimized results matched 60-70% of the previous manual assignments on average. The study aims to automate and improve the fairness of the TA assignment process.
This document presents a framework for reusing existing software agents through ontological engineering. The framework includes components like a user interface agent, query processor, mapping agent, transfer agent, wrapper agent, and remote agents containing ontologies. The query processor reformulates the user's query, the mapping agent identifies relevant ontologies, and the transfer agent sends the query to remote agents. The remote agents provide ontologies as output, which are then integrated/merged and presented back to the user interface agent. The goal is to enable reuse of heterogeneous agents across different development environments through a standardized ontology representation.
Developing of decision support system for budget allocation of an r&d organiz...eSAT Publishing House
1) The document describes developing a decision support system for budget allocation of an R&D organization using a performance-based goal programming model.
2) It analyzes nine years of budget data from the organization and finds a wide gap between allocated funds and funds utilized.
3) The proposed model assesses R&D programs based on priority and risk factors using fuzzy set theory, and aims to allocate budgets in a more realistic and accurate manner than the previous approach.
An application of genetic algorithms to time cost-quality trade-off in constr...Alexander Decker
This document summarizes a research paper that develops an optimization model using genetic algorithms to solve the time-cost-quality trade-off problem in construction projects. The model aims to find the minimum cost for a construction project to meet certain quality levels within a given time limit. It does this by considering different activity execution modes and using genetic algorithms to efficiently explore the large solution space. The document provides background on optimization problems and techniques, an overview of the time-cost-quality trade-off problem and prior related research, and describes the objectives and approach of the developed genetic algorithms model.
Optimization of resource allocation in computational gridsijgca
The resource allocation in Grid computing system needs to be scalable, reliable and smart. It should also be adaptable to change its allocation mechanism depending upon the environment and user’s requirements. Therefore, a scalable and optimized approach for resource allocation where the system can adapt itself to the changing environment and the fluctuating resources is essentially needed. In this paper, a Teaching Learning based optimization approach for resource allocation in Computational Grids is proposed. The proposed algorithm is found to outperform the existing ones in terms of execution time and cost. The algorithm is simulated using GRIDSIM and the simulation results are presented.
The document discusses optimization of resource allocation in computational grids. It proposes using a Teaching-Learning Based Optimization (TLBO) approach for resource allocation. The TLBO algorithm is found to outperform existing algorithms like Ant Colony Optimization, Genetic Algorithm, and Particle Swarm Optimization in terms of execution time and cost. The algorithm is simulated using GRIDSIM and results are presented. Existing resource allocation strategies in computational grids are also reviewed, including static and dynamic approaches as well as auction/market-based models.
With the growth of urbanization, industrialization and population, there has been a huge increase in traffic. With this growth in traffic have come a host of problems, including congested roads, accidents, and traffic-rule violations at heavy traffic signals. This in turn adversely affects the economy of the country, in addition to the loss of lives. Speed control is therefore the need of the hour, given the increased rate of accidents reported in everyday life. Traffic offences have increased owing to heavy traffic on the roads, and a major cause is excessive vehicle speed; driving beyond the prescribed speed limit is called a speed violation. In this paper the various issues faced are set out in the problem formulation, and it is discussed how these problems may be addressed in future using reinforcement learning and optimization, where a modified neural network is studied with NN algorithms (forward chaining and backpropagation). Omesh Goyal | Chamkour Singh "A Review on Traffic Signal Identification" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23557.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/23557/a-review-on-traffic-signal-identification/omesh-goyal
This document discusses using machine learning clustering algorithms to analyze stock market data. It compares the K-means, COBWEB, DBSCAN, EM and OPTICS clustering algorithms in the WEKA tool on a stock market dataset containing 420 instances and 6 attributes. The K-means algorithm had the best performance with the lowest error and fastest runtime. It clustered the data into 4 groups in 0.16 seconds. The COBWEB algorithm clustered the data into 107 groups in 27.88 seconds. The DBSCAN algorithm found 21 clusters in 3.97 seconds. The paper concludes that K-means is best suited for stock market data mining applications due to its simplicity and speed compared to other algorithms.
Performance Evaluation of Query Processing Techniques in Information Retrievalidescitation
The first element of the search process is the query.
The user query, being on average restricted to two or three
keywords, is ambiguous to the search engine.
Given the user query, the goal of an Information Retrieval
[IR] system is to retrieve information which might be useful
or relevant to the information need of the user. Hence, the
query processing plays an important role in IR system.
The query processing can be divided into four categories
i.e. query expansion, query optimization, query classification and
query parsing. In this paper an attempt is made to evaluate the
performance of query processing algorithms in each of the
category. The evaluation was based on dataset as specified by
Forum for Information Retrieval [FIRE15]. The criteria used
for evaluation are precision and relative recall. The analysis is
based on the importance of each step in query processing. The
experimental results show the significance of each step
in query processing, as well as the relevance of web semantics
and spelling correction in the user query.
CORRELATION BASED FEATURE SELECTION (CFS) TECHNIQUE TO PREDICT STUDENT PERFRO...IJCNCJournal
Education data mining is an emerging stream which helps in mining academic data for solving various types of problems. One of these problems is the selection of a proper academic track. The admission of a student into an engineering college depends on many factors. In this paper we have tried to implement a classification technique to assist students in predicting their success in admission into an engineering stream. We have analyzed a data set containing information about students' academic as well as socio-demographic variables, with attributes such as family pressure, interest, gender, XII marks, CET rank in entrance examinations, and historical data of a previous batch of students. Feature selection is a process for removing irrelevant and redundant features, which helps improve the predictive accuracy of classifiers. In this paper we first used the feature selection algorithms Chi-square, InfoGain, and GainRatio to identify the relevant features. Then we applied a fast correlation-based filter to the given features. Later, classification is done using NBTree, MultilayerPerceptron, NaiveBayes, and instance-based K-nearest neighbor. Results showed a reduction in computational cost and time and an increase in predictive accuracy for the student model.
CORRELATION BASED FEATURE SELECTION (CFS) TECHNIQUE TO PREDICT STUDENT PERFRO...IJCNCJournal
This document discusses using feature selection and classification techniques to predict student performance and recommend an engineering stream for students. It first describes feature selection algorithms like chi-square and correlation-based feature selection to identify relevant attributes from a student data set. It then applies classifiers like NBTree, Naive Bayes, k-nearest neighbor, and multilayer perceptron on the selected features and evaluates their performance. The results show that correlation-based feature selection reduces computation time and improves predictive accuracy for recommending an engineering stream for students.
This document discusses using genetic algorithms for job scheduling in cloud computing environments. It begins with an introduction to cloud computing and genetic algorithms. It then discusses the challenges of genetic scheduling, including reducing makespan time, uniform load balancing, and minimizing user cost. It reviews various genetic algorithm approaches that have been proposed to address these challenges, such as approaches aimed at reducing makespan time alone, reducing cost alone, or reducing both cost and makespan time simultaneously. The document concludes that no single algorithm solves all problems, and that combining algorithms can better satisfy complex constraints in job scheduling.
An efficient information retrieval ontology system based indexing for contexteSAT Journals
Abstract Many of the research or development projects are constructed and vast type of artifacts are released such as article, patent, report of research, conference papers, journal papers, experimental data and so on. The searching of the particular context through the keywords from the repository is not an easy task because the earliest system the problem of huge recalls with low precision. This paper challenges to construct a search algorithm based on the ontology to retrieve the relevant contexts. Ontology's are great knowledge of retrieving the context. In this paper, we utilize the WordNet ontology to retrieve the relevant contexts from the document repository. It is very difficult to retrieve the relevant context in its original format since we use the pre-processing step, which helps to retrieve context. The pre-processing step includes two major steps first one is stop word removal and the second one is stemming process. The outcome of the pre-processing step is indexing consist of important keywords and their corresponding keywords. When the user enter the keyword to the system, the ontology makes the several steps to make the refine keywords. Finally, the refine keywords are matched with index and relevant contexts are retrieved. The experimentation process is carried out with the help of different set of contexts to achieve the results and the performance analysis of the proposed approach is estimated by the evaluation metrics like precision, recall and F-measure. Keywords— Ontologies; WordNet; contexts; stemming; indexing.
An Empirical Study Of Requirements Model UnderstandingKate Campbell
The document describes a study that compares two requirements modeling methods - Use Cases, a scenario-based approach, and Tropos, a goal-oriented approach. The study aims to evaluate how well novice requirements analysts can understand models created with each method. It involved 19 students performing tasks like determining consistency between models and system descriptions, understanding models, and modifying models. Preliminary results found that Tropos models seemed more comprehensible but took more time to understand compared to Use Case models. The full study aims to provide insights into which modeling method better supports requirements analysis tasks.
Similar to An Iterative Model as a Tool in Optimal Allocation of Resources in University Systems (20)
Total Ionization Cross Sections due to Electron Impact of Ammonia from Thresh...Dr. Amarjeet Singh
In the present paper, we have employed modified Khare-BEB method [Atoms, (2019)] to evaluate total ionization cross sections by the electron impact for ammonia in energy range from the ionization threshold to 10 MeV. The theoretical ionization cross sections have been compared to the available previous theoretical and experimental results. The collision parameters dipole matrix squared M_j^2 and CRP also have been calculated. The present calculations were found in remarkable agreement with the available experimental results.
A Case Study on Small Town Big Player – Enjay IT Solutions Ltd., BhiladDr. Amarjeet Singh
Adequately trained Manpower is a problem that affects the IT industry as a whole, but it is particularly acute for Enjay IT Solution. Enjay's location in a semi-urban or rural area makes it even more difficult to find a talented employee with the right skills. As the competition for skilled workers grows, it becomes more difficult to attract and keep those workers who have the requisite training and experience.
Effect of Biopesticide from the Stems of Gossypium Arboreum on Pink Bollworm ...Dr. Amarjeet Singh
Pink bollworm and other Lepidoptera multiply quickly in numbers; a typical species produces around 100 young within a few days or weeks. This attack affects crops extensively in tropical and sub-tropical temperature regions. Hence, to maintain crop yield, the pests ought to be kept away by using pesticides. However, excessive use of pesticides affects the soil and land as well as human well-being, and contaminates the environment. Thus, an ozone-friendly biopesticide is extracted from the stems of Gossypium arboreum. The extraction of biopesticide from the stems of Gossypium arboreum demonstrated that the number of pink bollworm and Lepidoptera diminished step by step after spraying the preparation on the affected region of the plant, owing to the presence of gossypol.
Artificial Intelligence Techniques in E-Commerce: The Possibility of Exploiti...Dr. Amarjeet Singh
This document discusses the potential applications of artificial intelligence techniques in e-commerce in Saudi Arabia. It begins with an introduction to e-commerce and AI, and how AI is being used increasingly in e-commerce applications worldwide. It then reviews literature on how AI can be integrated into e-commerce systems and the various applications of AI in e-commerce. Some key applications discussed include AI assistants, personalized recommendations, demand forecasting, supply chain management, fraud detection and more. The document concludes that Saudi Arabia is well positioned to benefit from using AI to boost its growing e-commerce sector.
Factors Influencing Ownership Pattern and its Impact on Corporate Performance...Dr. Amarjeet Singh
This document summarizes a research study that analyzed the factors influencing ownership patterns of selected Indian companies and the impact of ownership patterns on corporate performance. The study used data from 5 industries over 5 years from 2017 to 2021. Multiple regression, ANOVA, and correlation analyses were conducted. The results found that the percentage of independent directors on the board and the size of the company had a significant impact on Indian promoter holdings. Additionally, non-institutional ownership was found to have a significant impact on corporate performance measures like asset utilization ratio. The study concluded that ownership patterns can influence corporate performance and companies should work to optimize factors like debt-equity ratio and board independence to improve financial outcomes.
An Analytical Study on Ratios Influencing Profitability of Selected Indian Au...Dr. Amarjeet Singh
Every country with a well-developed transportation network has a well-developed economy. The automobile industry is a critical engine of the nation's economic development. The automobile industry has significant backward and forward links with every area of the economy, as well as a strong and progressive multiplier impact. The automotive industry and the auto component industry are both included in the vehicle industry. It includes passenger waggons, light, medium, and heavy commercial vehicles, as well as multi-utility vehicles such as jeeps, three-wheelers, military vehicles, motorcycles, tractors, and auto-components such as engine parts, batteries, drive transmission parts, electrical, suspension and chassis parts, and body and other parts. In the last several years, India's automobile sector has seen incredible growth in sales, production, innovation, and exports. India's car industry has emerged as one of the best in the world, and the auto-ancillary sector is poised to assist the vehicle sector's expansion. Vehicle manufacturers and auto-parts manufacturers account for a significant component of global motorised manufacturing. Vehicle manufacturers from across the world are keeping a close eye on the Indian auto sector in order to assess future demand and establish India as a global manufacturing base. The current research focuses on three automotive behemoths: TATA Motors, MRF, and Mahindra & Mahindra.
A Study on Factors Influencing the Financial Performance Analysis Selected Pr...Dr. Amarjeet Singh
The growth of a country's banking sector has a significant impact on its economic development. The banking sector plays a critical role in determining a country's economic future. A well-planned, structured, efficient, and viable banking system is an essential component of an economy's economic and social infrastructure. In modern society, a strong banking system is required because it meets the financial needs of the modern society. In a country's economy, the banking system plays a crucial role. Because it connects surplus and deficit economic agents, the bank is the most important financial intermediary in the economy. The banking system is regarded as the economy's lifeline. It meets the financial needs of commerce, industry, and agriculture. As a result, the country's development and the banking system are intertwined. They are critical in the mobilisation of savings and the distribution of credit to various sectors of the economy. India's private sector banks play a critical role in the country's economic development. So The financial performance of private sector banks must be evaluated carefully.
An Empirical Analysis of Financial Performance of Selected Oil Exploration an...Dr. Amarjeet Singh
After the United States, China, and Japan, India was the world's fourth biggest consumer of oil and petroleum products. The nation is significantly reliant on crude oil imports, the majority of which come from the Middle East. The Indian oil and gas business is one of the country's six main sectors, with important forward links to the rest of the economy. More than two-thirds of the country's overall primary energy demands are met by the oil and gas industry. The industry has played a key role in placing India on the global map. India is now the world's sixth biggest crude oil user and ninth largest crude oil importer. In addition, the country's portion of the worldwide refining market is growing. India's refining industry is now the world's sixth biggest. With plans for Reliance Petroleum Limited to commission another refinery with a capacity of 29 MTPA next to its 33 MTPA refinery in Jamnagar, Gujarat, this position is projected to be enhanced. As a consequence, the Reliance refinery would be the biggest single-site refinery in the world. Based on secondary data gathered from CMIE, the current research examines the ratios influencing the profitability of selected oil exploration and production businesses in India over a 10-year period.
Since 1991, thanks to economic policy liberalization, the Indian economy has entered an era in which Indian businesses can no longer disregard global markets. Prior to the 1990s, the prices of a variety of commodities, metals, and other assets were carefully regulated. Others, which were not controlled, were primarily dependent on regulated input costs. As a result, there was no uncertainty and, consequently, no price fluctuation. However, in 1991, when the process of deregulation began, the prices of most items were deregulated. It also resulted in the exchange rate being partially deregulated, easing trade restrictions, lowering interest rates, and making significant advancements in foreign institutional investors' access to the capital markets, as well as establishing market-based government securities pricing, among other things. Furthermore, portfolio and securities price volatility and instability were influenced by market-determined exchange rates and interest rates. As a result, hedging strategies employing a variety of derivatives were developed against a variety of risks. The Indian capital market will be examined in this study, with a focus on derivatives.
Theoretical Estimation of CO2 Compression and Transport Costs for an hypothet...Dr. Amarjeet Singh
This document discusses theoretical estimates for the costs of compressing and transporting CO2 from a hypothetical carbon capture and storage project at the Saline Joniche Power Plant in Italy. It first provides background on the power plant project from 2008 that proposed converting the site to coal power. It then details the methodology used to size the compression system, estimating power needs for multi-stage compression up to pipeline pressures. Costs are considered for constructing, operating, and maintaining both the compression plant and pipeline to a potential offshore storage site. The aim is to evaluate retrofitting the existing plant with carbon capture and storage as a way to enable continued coal power production consistent with climate goals.
Analytical Mechanics of Magnetic Particles Suspended in Magnetorheological FluidDr. Amarjeet Singh
In this paper, the behavior of MR particles has been systematically investigated within the scope of analytical mechanics. . A magnetorheological fluid belongs to a class of smart materials. In magnetorheological fluids, the motion of magnetic particles is controlled by the action of internal and external forces. This paper presents analytical mechanics for the interaction of system of particles in MR fluid. In this paper, basic principles of Analytical Mechanics are utilized for the construction of equations.
Techno-Economic Aspects of Solid Food Wastes into Bio-ManureDr. Amarjeet Singh
Solid waste is a health hazard and causes damage to the environment when improperly handled. Solid waste comprises Industrial Waste (IW), Hazardous Waste (HW), Municipal Solid Waste (MSW), Electronic Waste (E-waste), and Bio-Medical Waste (BMW), depending on its source and characteristics. Composting of food waste or bio-waste, and its role in sustainable development, is explained here: food waste is a growing area of concern with many costs to our community in terms of waste collection, disposal and greenhouse gases. When rotting food ends up in landfill it turns into methane, a greenhouse gas that is particularly damaging to the environment. Composting is a biochemical process in which organic materials are biologically degraded, resulting in the production of organic by-products and energy in the form of heat. Heat is trapped within the composting mass, leading to the phenomenon of self-heating. This overall process provides us with bio-manure.
Crypto-Currencies: Can Investors Rely on them as Investment Avenue?Dr. Amarjeet Singh
The purpose of this study is to examine investors’ perceptions about investing in crypto-currencies. We think that investors’ trust in crypto-currencies is largely driven by crypto-currency comprehension, trust in government, and transaction speed. This is the first study to examine crypto-currencies from the investor’s perspective. Following that, we discover important antecedents of crypto-currency confidence. Second, we look at the government's role in crypto-currencies. The importance of this study is: first, crypto-currencies have the potential to disrupt the current economic system, as the debate is all about the impact of decentralization of transactions, so further research into how this affects investors’ trust is essential; and second, access to crypto-currencies. Finally, Fin-Tech companies or banks that want to enter the crypto-currency industry may not need to incur huge advertising and marketing costs to soothe clients' concerns about investing in various digital currencies. The research sheds light on investor indecisiveness in the context of the marketing aspects adopted, by demonstrating that investors are aware of crypto.
Awareness of Disaster Risk Reduction (DRR) among Student of the Catanduanes S...Dr. Amarjeet Singh
The Island Province of Catanduanes is prone to all types of natural hazards, including torrential and heavy rains, strong winds and surges, flooding, and landslides or slope failures, as a result of its geographical location and topography. RA 10121 mandates local DRRM bodies to “encourage community, specifically the youth, participation in disaster risk reduction and management activities, such as organizing quick response groups, particularly in identified disaster-prone areas, as well as the inclusion of disaster risk reduction and management programs as part of youth programs and projects.” The study aims to determine the disaster awareness of the students of the Catanduanes State University. A disaster-based questionnaire was prepared and distributed among 636 students selected randomly from different Colleges and Laboratory Schools in the University.
The Catanduanes State University students understood some disaster-related concepts and ideas, but were uncertain on issues of preparedness, adaptation, and awareness of the risks posed by these natural hazards. Low perception of disaster risks is evidently observed among students. The responses of the students could be based on the efficiency and impact of the integration of DRR education in the senior high school curriculum. Specifically, the integration of concepts about hazards, hazard maps, disaster preparedness, awareness, mitigation, prevention, adaptation, and resiliency in the science curriculum possibly affects the knowledge and understanding of students on DRR.
The study further recommends that teachers and instructors must also be capacitated in handling disasters, as they are the prime movers in the implementation of DRRM in education. Preparedness drills and other forms of capacity building must be done to improve student awareness of DRRM. Core subjects in Earth Sciences must be reinforced with geologic hazards, and learning competencies must be focused on hazard identification and mapping, and on coping with different geologic disasters.
The 1857 war was a watershed moment in the history of the Indian subcontinent. The battle has sparked academic debate among historians and sociologists all around the world. Despite the fact that it has been more than 150 years, this battle continues to pique the interest of historians. The war's causes and events that occurred throughout the conflict, persons who backed the British and anti-British fighters, and the results and ramifications, are all aspects of this conflict. In terms of outcomes, many academics believe that the war was a failure for those who started it. It is often assumed that the Indians who battled the British in this conflict were unable to achieve their goals. Many gains accrued to Indians as a result of the conflict, but these achievements are overshadowed by the dispute over the war's failure. This research effort focuses on the war's achievements for India, and the significance of those achievements.
Haryana's Honour Killings: A Social and Legal Point of ViewDr. Amarjeet Singh
Life is unpredictable. Nobody knows what will happen in the next minute of their lives. In this circumstance, every human being has the right and desire to conduct their lives according to their own wishes. No one should be forced to live a life solely for the benefit and reputation of others. Honour killing is defined as the assassination of a person, whether male or female, who refuses to accept the family's arranged marriage or decides to lead her or his marital life according to her or his own wishes, solely because it jeopardizes the family's honour. The family's supreme authority looks after the family's name but neglects to consider the love and affection shared among family members. I have discussed honour killing in India in my research work. This sort of murder occurs as a result of particular triggers, which are also examined in relation to the role of the law in honour killing. No one can be set free if they break the law, and in this case it is a felony that violates various regulations designed to safeguard citizens. This crime is similar to many others, but it is distinct enough to be differentiated in the report. When the husband is of low social standing, it lowers the position and caste of the female's family, prompting the male family members to murder the girl. But they forget that the girl is their child, and that while rank may be attained, a girl's life can never be replaced; caste is less valuable than the girl's life and the love spent with them.
Optimization of Digital-Based MSME E-Commerce: Challenges and Opportunities i...Dr. Amarjeet Singh
This document summarizes a research article about optimizing digital-based MSME e-commerce during the COVID-19 pandemic. The article discusses how the pandemic severely impacted MSMEs, with many going out of business. However, digitalization and e-commerce provide opportunities for MSMEs to transform their business models. The article reviews literature showing how technologies like websites, social media, and mobile applications can help MSMEs reach more customers online. Case studies of MSMEs in different countries found that those utilizing digital tools through e-commerce were more successful compared to those relying only on offline sales. The article concludes that digitalization is both a challenge and an opportunity for MSMEs to adapt their traditional business models and survive or grow.
Modal Space Controller for Hydraulically Driven Six Degree of Freedom Paralle...Dr. Amarjeet Singh
This paper presents modal-space decoupled control for a hydraulically driven six-degree-of-freedom parallel mechanism. The approach is based on singular value decomposition of the joint-space inverse mass matrix, and on mapping the control and feedback variables from the joint space to the decoupled modal space. The method transforms the highly coupled six-input six-output dynamics into six independent single-input single-output (SISO) 1-DOF hydraulically driven mechanical systems. The novelty of this method is that the signals, including control errors, control outputs and pressure feedbacks, are transformed into the decoupled modal space, and the proportional gains and dynamic pressure feedback are likewise tuned in modal space. The results indicate that the conventional controller can only attenuate the resonance peaks of the lower eigenfrequencies of the six rigid modes properly, while the peaks at the relatively higher eigenfrequencies are over-damped. Further results show that it is very effective to design and tune the system in modal space: the bandwidth increased substantially except for the surge (x) and sway (y) motions, each degree of freedom can be tuned almost independently, and their bandwidths can be increased close to the undamped eigenfrequencies.
It is a known fact that a large number of steel industry expansion projects in India have been delayed due to regulatory clearances, environmental issues and problems pertaining to land acquisition. There are also challenges in the tendering phase that affect the viability of projects, thus delaying implementation; the construction phase is beset with over-runs and disputes; and, last but not least, provider skills are weak all across the value chain. Given the critical role of the steel sector in ensuring a sustained growth trajectory for India, it is imperative that we identify the core issues affecting the completion of infrastructure projects in India and chalk out initiatives that need to be acted upon in the short term as well as the long term.
A blockchain is a decentralised database that is shared across computer network nodes. A blockchain acts as a database, storing information in a digital format. The study primarily aims to explore how in the future, block chain technology will alter several areas of the Indian economy. The current study aims to obtain a deeper understanding of blockchain technology's idea and implementation in India, as well as the technology's potential as a disruptive financial technological innovation.
Secondary sources such as reports, journals, papers, and websites were used to compile all the data. Current and relevant information were utilised to help understand the research goals. All the information is rationally organised to fulfil the objectives. The current research focuses on recommendations for enhancing India's Blockchain ecosystem so that it may become one of the best in the world at utilising this new technology.
Microbial interaction
Microorganisms interact with each other and can be physically associated with other organisms in a variety of ways.
One organism can be located on the surface of another organism as an ectobiont, or within another organism as an endobiont.
Microbial interactions may be positive, such as mutualism, proto-cooperation and commensalism, or negative, such as parasitism, predation and competition.
Types of microbial interaction
Positive interaction: mutualism, proto-cooperation, commensalism
Negative interaction: Ammensalism (antagonism), parasitism, predation, competition
I. Mutualism:
Mutualism is defined as a relationship in which each organism in the interaction benefits from the association. It is an obligatory relationship in which the mutualist and the host are metabolically dependent on each other.
A mutualistic relationship is very specific: one member of the association cannot be replaced by another species.
Mutualism requires close physical contact between the interacting organisms.
The relationship of mutualism allows organisms to exist in habitats that could not be occupied by either species alone.
A mutualistic relationship between organisms allows them to act as a single organism.
Examples of mutualism:
i. Lichens:
Lichens are an excellent example of mutualism.
They are the association of specific fungi with certain genera of algae. In a lichen, the fungal partner is called the mycobiont and the algal partner is called the phycobiont.
II. Syntrophism:
Syntrophism is an association in which the growth of one organism either depends on, or is improved by, a substrate provided by another organism.
In syntrophism, both organisms in the association benefit.
[Schematic: compound A is utilized by population 1, forming compound B; compound B is utilized by population 2, forming compound C; compound C is utilized by both populations 1 and 2, yielding the end products.]
In this theoretical example of syntrophism, population 1 is able to utilize and metabolize compound A, forming compound B, but cannot metabolize beyond compound B without the co-operation of population 2. Population 2 is unable to utilize compound A, but it can metabolize compound B, forming compound C. Both populations 1 and 2 are then able to carry out metabolic reactions leading to the formation of an end product that neither population could produce alone.
Examples of syntrophism:
i. Methanogenic ecosystem in sludge digester
Methane produced by methanogenic bacteria depends upon interspecies hydrogen transfer by other fermentative bacteria.
Anaerobic fermentative bacteria generate CO2 and H2 utilizing carbohydrates which is then utilized by methanogenic bacteria (Methanobacter) to produce methane.
ii. Lactobacillus arabinosus and Enterococcus faecalis:
In minimal media, Lactobacillus arabinosus and Enterococcus faecalis are able to grow together but not alone.
The synergistic relationship between E. faecalis and L. arabinosus arises because E. faecalis requires folic acid, which is produced by L. arabinosus, while L. arabinosus requires phenylalanine, which is produced by E. faecalis.
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
Discovery of An Apparent Red, High-Velocity Type Ia Supernova at z = 2.9 wi...Sérgio Sacani
We present the JWST discovery of SN 2023adsy, a transient object located in a host galaxy JADES-GS+53.13485−27.82088 with a host spectroscopic redshift of 2.903 ± 0.007. The transient was identified in deep James Webb Space Telescope (JWST)/NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) program. Photometric and spectroscopic followup with NIRCam and NIRSpec, respectively, confirm the redshift and yield UV-NIR light-curve, NIR color, and spectroscopic information all consistent with a Type Ia classification. Despite its classification as a likely SN Ia, SN 2023adsy is both fairly red (E(B−V) ∼ 0.9) despite a host galaxy with low extinction, and has a high Ca II velocity (19,000 ± 2,000 km/s) compared to the general population of SNe Ia. While these characteristics are consistent with some Ca-rich SNe Ia, particularly SN 2016hnk, SN 2023adsy is intrinsically brighter than the low-z Ca-rich population. Although such an object is too red for any low-z cosmological sample, we apply a fiducial standardization approach to SN 2023adsy and find that the SN 2023adsy luminosity distance measurement is in excellent agreement (≲ 1σ) with ΛCDM. Therefore, unlike low-z Ca-rich SNe Ia, SN 2023adsy is standardizable and gives no indication that SN Ia standardized luminosities change significantly with redshift. A larger sample of distant SNe Ia is required to determine if SN Ia population characteristics at high-z truly diverge from their low-z counterparts, and to confirm that standardized luminosities nevertheless remain constant with redshift.
Anti-Universe And Emergent Gravity and the Dark UniverseSérgio Sacani
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
Quality assurance B.pharm 6th semester BP606T UNIT 5
An Iterative Model as a Tool in Optimal Allocation of Resources in University Systems
International Journal of Engineering and Management Research e-ISSN: 2250-0758 | p-ISSN: 2394-6962
Volume-9, Issue-1, (February 2019)
www.ijemr.net https://doi.org/10.31033/ijemr.9.1.10
This work is licensed under Creative Commons Attribution 4.0 International License.
An Iterative Model as a Tool in Optimal Allocation of Resources in
University Systems
Onanaye, Adeniyi S.
Senior Lecturer, Department of Mathematical Sciences, Industrial Mathematics Programme, Redeemer’s University, Osun
State, NIGERIA
Correspondence Author: onanayea@run.edu.ng
ABSTRACT
In this paper, a study was carried out to aid in adequate allocation of resources in the College of Natural Sciences, TYZ University (not the real name, for ethical reasons). Questionnaires were administered to the high-ranking officials of one of the Colleges, the College of Pure and Applied Sciences, to examine how resources were allocated for three consecutive sessions (2009/2010, 2010/2011 and 2011/2012); the data gathered and analysed were then used to generate contributory inputs for the three basic outputs (variables) formed for the purpose of the study. These variables are: \(x_1\), representing the quality of graduates produced; \(x_2\), standing for research papers, seminars, journal articles, etc. published by faculties; and \(x_3\), denoting service delivery within the three sessions under study. The Simplex Method of Linear Programming was used to solve the model formulated.
Keywords-- Optimal, Mathematical Model, Linear
Programming, Resources, Allocation, Management,
Redeemer’s University.
Subject Classification Codes: 2010: 90C90
I. INTRODUCTION
Linear Programming provides a basis with which we can manipulate and control various activities in order to achieve an optimal outcome for a problem. It deals with the optimization (maximization or minimization) of a function of variables known as the objective function [1]. Optimization problems consist of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function [2]; this includes finding the best available values of some objective function over a well-defined domain. When the objective and the constraints are linear, the optimization problem is referred to as a linear mathematical programming problem, and many real-world and theoretical problems can be modelled as linear mathematical programs.
In applications of optimization such as the allocation of resources, optimizers or solvers are tools that help users find the best way to allocate those resources [3]. According to Huankai et al. (2013), resource allocation optimization is a typical cloud project scheduling problem: a problem that limits a cloud system’s ability to execute and deliver a project as originally planned. In their own view, Connor and Shah (2014) argued that to schedule a project effectively, project planners must select appropriate costing and resourcing options [4]. This selection will determine the duration of the project. In most cases, projects have multiple costing and resourcing options, which lead to multiple due dates [5].
These resources may be raw materials, machine
time or people time, money or anything that is in limited
supply. The best or optimal solution may mean profit
maximization, cost minimization or achieving the best
possible quality. Resource allocation may be decided by
using computer programs applied to a specific domain to
automatically and dynamically distribute resources to
applicants. It may be considered as a specialized case of
automatic scheduling and this is especially common in
electronic devices dedicated to routing and
communication. For example, channel allocation in
wireless communication may be decided by a base
transceiver station using an appropriate algorithm.
The College of Natural Sciences is one of the
colleges in the Redeemer’s University. It is made up of
four departments which are: Mathematical Sciences,
Biological Sciences, Chemical Sciences and Physical
Sciences. If the resources given to the College of Natural
Sciences are well allocated, the learning process in the college would be more efficient and the college would achieve better outputs. A wide range of successful applications of optimization have been
successful applications of optimization have been
developed by businesses, governments, universities,
industries and any other groups. Many large companies
have reported saving billions of (Naira) Dollars using
optimization.
For an allocation of resources to be optimal, some conditions that must be met include the following:
It must be an efficient allocation;
the distribution of the allocation must be equitable (i.e., fair); and
it must be simple, not complex.
In using an optimizer (iterative software tool), the user must build a model that specifies:
the resources to be used (the decision variables);
the limits of resource usage (the constraints); and
the measure to optimize (the objective).
The optimizer finds values for the decision
variables that satisfy the constraints while optimizing
(maximizing or minimizing) the objective [3].
Iteration is a procedure that involves repetitive steps in order to achieve a desired outcome; an iteration is often referred to as a loop. In constructing an iterative model as an approach for solving optimization in the allocation of resources, a good iterative model must possess the following characteristics (see the sketch after this list):
i. it should be communicable;
ii. it should be simple, not too complex to understand; and
iii. it should be able to give feedback as a measure of its progress [3].
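To make the notion of an iterative loop with feedback concrete, the following is a minimal Python sketch; the objective function and step rule are invented for illustration and are not the paper's model. It repeatedly nudges a single decision value, keeps any move that improves the objective, and prints its progress at every step.

```python
# Minimal illustrative iterative-improvement loop ("iteration as a loop").
# The objective is an arbitrary concave example, not the paper's model.

def iterate(objective, x0, step=0.1, max_iters=100, tol=1e-9):
    """Hill climbing: try a step up or down from x, keep whichever move
    improves the objective, and stop when neither move helps."""
    x, best = x0, objective(x0)
    for i in range(max_iters):
        candidates = [x + step, x - step]
        values = [objective(c) for c in candidates]
        improved = max(values)
        if improved <= best + tol:   # no further progress: stop iterating
            break
        x, best = candidates[values.index(improved)], improved
        print(f"iteration {i + 1}: x = {x:.3f}, objective = {best:.3f}")  # feedback
    return x, best

# Example run: maximize a simple concave function whose peak is at x = 2.
x_opt, z_opt = iterate(lambda x: -(x - 2.0) ** 2 + 5.0, x0=0.0)
```

The value printed on each pass is exactly the "feedback as a measure of progress" that characteristic (iii) above asks of a good iterative model.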
Joiner in 2009 developed a mathematical model to determine the optimal structure (dollars, space) for allocating resource packages when recruiting new faculty, based on the expected financial returns from those faculty, using the University of Arizona College of Medicine as an illustrative case study (the model was applied there from 2005 to 2008). According to her, the model is a simple and flexible approach that can be adopted by other medical schools irrespective of the magnitude of the resources allocated [6]. Tarek in 1999 proposed improvements to resource allocation and levelling heuristics, using Genetic Algorithms (GA) to search for a near-optimum solution considering both aspects simultaneously. In the improved heuristics, random priorities were introduced into selected tasks and their impact on the schedule was monitored [7].
Zhu and Cipriano (2002), in their work on a mathematical optimization approach for resource allocation in large-scale data centres, used the Hewlett Packard laboratory, Palo Alto, as the case study centre. They addressed the resource allocation problem (RAP) for large-scale data centres using mathematical optimization techniques: given a physical topology of resources in a large data centre, and an application with a certain architecture and requirements, determine which resources in the physical topology should be assigned to the application architecture such that the application requirements and the bandwidth constraints in the network are satisfied, while the communication delay between assigned servers is also minimized [8]. Okonta and Chikwendu in 2008 used an iterative model for optimum allocation of government resources to the less privileged in the Ethiope West Local Government Area of Delta State, Nigeria. In their methodology, four principal projects, namely Education, Electricity, Water Supply and Health Care, were given key consideration. The budgeted amounts and the actual expenditure between 2001 and 2006 were the key parameters they used, applying the Simplex Method of linear programming to generate their iterative model [1].
Gupta and Hira in 1985 defined operations research (OR) as a study that encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queuing theory and other stochastic-process models, Markov decision processes, econometric methods, neural networks, expert systems, decision analysis, and the analytic hierarchy process [8]. Operations research gives executives the power to make more effective decisions and build more productive systems based on more complete data, consideration of all available options, careful predictions of outcomes, estimates of risk, and the latest decision tools and techniques. Gupta and Hira also described linear programming (LP, or linear optimization) as a mathematical method for determining a way to achieve the best outcome (such as maximum profit or minimum cost) in a given mathematical model for some list of requirements represented as linear relationships. It is the process of taking various linear inequalities relating to some situation and finding the "best" value obtainable under those conditions. More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints [8]. According to Robert (2007), a model is a miniature representation of something, a pattern of something to be made, an example for imitation or emulation, or a description or analogy used to help visualize something [9]. Mathematically, a model is a description of a system using mathematical concepts and language, and the process of developing a mathematical model is called mathematical modelling. Robert (2007) also defined the Simplex method as an iterative procedure for solving Linear Programming Problems (LPP) in a finite number of steps [9]. This method provides an algorithm which consists of moving from one vertex of the region of feasible solutions to another in such a manner that the value of the objective function at the succeeding vertex is less or more, as the case may be, than at the previous vertex. The procedure is repeated and, since the number of vertices is finite, the method leads to an optimal vertex in a finite number of steps or indicates the existence of an unbounded solution.
Okonta and Chikwendu (2008) stated that sensitivity analysis deals with finding out the amount by which we can change the input data while the output of our linear programming model remains comparatively unchanged [1]. This helps us determine how sensitive the model is to the data we supply for the problem. If a small change in the input produces a large change in the optimal solution for one model, while a corresponding small change in the input for some other model does not affect its optimal solution as much, we can conclude that the second problem is less sensitive to changes in the input data; the sketch below illustrates such a check.
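As an illustrative sketch of this check (all coefficients are invented for illustration, not taken from the paper), the following Python fragment solves a small linear programme twice, once with the original right-hand-side resources b and once with b slightly perturbed, and compares the optima. SciPy's linprog minimizes, so the maximization objective is negated.

```python
# Hedged sketch: sensitivity of a toy LP to its right-hand-side data.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])             # objective: maximize 3*X1 + 2*X2
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])           # resource-usage coefficients
b = np.array([40.0, 60.0])           # available resources

base = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
bumped = linprog(-c, A_ub=A, b_ub=b + np.array([1.0, 0.0]),
                 bounds=[(0, None)] * 2, method="highs")

print("optimal Z with b:          ", -base.fun)     # 100.0
print("optimal Z with b1 + 1:     ", -bumped.fun)   # 101.0
print("change in Z per unit of b1:", -(bumped.fun - base.fun))
```

The per-unit change in the optimum recovered this way is the shadow price of the first resource; a model whose optimum swings sharply under such small perturbations is the sensitive case described above.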
A typical example of an LP model can be expressed as follows:
Maximize \( Z = \sum_{j=1}^{n} C_j X_j \)

subject to \( \sum_{j=1}^{n} a_{ij} X_j \le b_i, \qquad X_j \ge 0, \qquad i = 1, 2, \dots, m \)    (1)

where: \(X_j\) are the output variables from the system being modelled; \(a_{ij}\) are the input coefficients of the \(X_j\) as contributions to the objective function \(Z\); \(b_i\) are the quantities of expectations in each of the processes; and \(C_j\) are the marginal values of the resources (inputs) available.

Now, in the case of minimization models, the inequalities in (1) above change to greater than or equal to (≥).
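For concreteness, here is a small hypothetical instance of (1), with n = 2 outputs, m = 2 resource constraints, and invented coefficients (the same toy numbers used in the sensitivity sketch earlier):

```latex
\begin{aligned}
\text{Maximize } Z &= 3X_1 + 2X_2 \\
\text{subject to}\quad X_1 + X_2 &\le 40 \\
2X_1 + X_2 &\le 60 \\
X_1,\, X_2 &\ge 0
\end{aligned}
```

Its optimum lies at the vertex \(X_1 = 20\), \(X_2 = 20\) with \(Z = 100\); a simplex iteration reaches it by moving between vertices of the feasible region, as described above.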
II. PROBLEM STATEMENT
Allocation of resources (such as capital, time, land, personnel, facilities, etc.) in an organization is no small job, and the importance of allocating such resources in such a way that every department/unit is sufficiently satisfied cannot be overemphasized. We therefore want to make room for the best allocation of resources to the College of Natural Sciences in Redeemer's University, Nigeria, so that the college can carry out her duties more efficiently. In order to do this, we developed an iterative model for the optimal allocation of those resources to the different departments in the College of Natural Sciences.
2.1 Aim and Objectives
The aim of this paper is to aid in adequate and correct allocation of resources in the College of Natural Sciences in Redeemer's University and, by extension, in other colleges/departments/units of the University and in other organisations, both public and private.
To achieve the above aim, we carried out the following objectives:
Based on an existing model of resource allocation, a new model was designed to improve on the existing one in resource allocation.
We recommend areas that should receive more input of resources so that the College would achieve better outputs.
III. METHODOLOGY
Data generated by questionnaire were used to formulate
the iterative model used for the study. The mainstream
resources allocated and available at the College of Natural
Sciences of the University include academic and
non-academic staff strength; library facilities and
journals; lecture halls; laboratories; transportation;
utilities; furniture; office and residential accommodation;
and internet and intercom services. The questionnaires
were administered to high-ranking officials, namely the
Dean of the College, the Heads of Departments (HODs) and
the College Officer, over a three-session academic
calendar (the 2009/2010, 2010/2011 and 2011/2012 sessions).
The information generated from the questionnaires was
treated as the primary data, while journal articles,
personal observations and interviews served as the
secondary data for the study. The model generated from
the available information was solved using an iterative
tool, the Simplex Method (SM) of linear programming.
3.1 Formulated Mathematical Model
A linear programming problem (LPP) for the simplex
method involves the optimization of a linear function,
called the objective function, subject to linear
constraints, which may be equalities or inequalities in
the unknowns. The objective function is of the form:

Maximize Z = Σ_{j=1}^{n} C_j X_j   (2)

Subject to: Σ_{j=1}^{n} a_{ij} X_j ≤ b_i,  i = 1, 2, ..., m
            X_j ≥ 0

where X_j is the output based on the iteration model
derived from the three academic sessions of the University
calendar; a_{ij} is the input of allocated resources based
on the information from the questionnaire; b_i is the
quantity of the resources allocated over the sessions
2009/2010 – 2011/2012;
C_j is the marginal value of the resources available,
derived by ranking them in order of need. The a_{ij} and
b_i were obtained as the objective and subjective allocated
resources respectively.
Thus, the linear function to be maximized is:

Max Z = C_1 X_1 + C_2 X_2 + C_3 X_3

subject to the constraints:

A_{11} X_1 + A_{12} X_2 + A_{13} X_3 ≤ b_1
A_{21} X_1 + A_{22} X_2 + A_{23} X_3 ≤ b_2
A_{31} X_1 + A_{32} X_2 + A_{33} X_3 ≤ b_3   (3)
where:
X_1 = quality of graduates produced by the College;
X_2 = research papers, journal articles, seminars, etc.;
X_3 = service delivery;
X_j = output.
For the quality of graduates produced by the College, the
inputs considered were: the staff strength based on
qualifications, productivity and years of experience
(including non-academic support staff); access to databases
of high-quality journals across disciplines and access to
the internet; laboratories, equipment and consumables; and
hostel accommodation.
For the quality of research done, journal articles
published and seminars presented, the inputs considered
were: access to databases of high-quality journals across
disciplines, access to the internet, laboratories,
equipment and consumables, research funds, and conducive
office accommodation provided within each session.
For service delivery, the inputs considered were:
transportation, residential accommodation, stationery,
computer systems, internet facilities, and other utilities
provided within each session.
C_j = marginal value of resources, derived from the ranking
of the resources allocated;
b_i = final points calculated from the input resources in
the primary data.
3.2 Theorems
Theorem 1
The set of all feasible solutions to a linear programming
problem (LPP) is a convex set.
(Source: Okonta and Chikwendu, 2008)
Theorem 2
If for any basic feasible solution X_0 = (x_{10}, x_{20},
..., x_{m0}) the conditions Z_j − C_j ≥ 0 (i.e.,
C_j − Z_j ≤ 0) hold for j = 1, 2, ..., n, then a maximum
feasible solution has been obtained.
(Source: modified from Okonta and Chikwendu, 2008)
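For quick reference, the optimality criterion of Theorem 2 can be written compactly as follows (a restatement of the theorem, nothing added):

```latex
% Optimality criterion for a maximization LPP (Theorem 2): a basic feasible
% solution X_0 is maximal when every reduced cost is non-positive.
Z_j - C_j \ge 0 \quad \text{(equivalently } C_j - Z_j \le 0\text{)}
\qquad \text{for all } j = 1, 2, \dots, n.
```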
IV. SOLUTION TO THE PROBLEM
FORMULATED
After analysing the inputs for the proposed outputs, we
formulated the objective function to be solved as follows:

Max Z = 95.5x_1 + 75x_2 + 88x_3

subject to the constraints:

19x_1 + 14x_2 + 14x_3 ≤ 78
18x_1 + 15x_2 + 19x_3 ≤ 84.2
18x_1 + 15x_2 + 17x_3 ≤ 83.1   (4)
Hints: From the above optimization model, it should be noted that:
b_i = final points calculated from the input resources in
the primary data (i.e., the questionnaire);
95.5, 75 and 88 are the marginal values (i.e., C_1, C_2 and
C_3), obtained by ranking the totals of the inputs that
contribute to the outputs x_1, x_2 and x_3 respectively;
the individual constraints above were based on the highest
ranking from the respective catchment areas (Departments,
College).
In order to remove the inequality signs in (4) above, we
introduce slack (dummy) variables S_1, S_2 and S_3, which
allow us to rewrite (4) as follows:
Max Z = 95.5X_1 + 75X_2 + 88X_3 + 0S_1 + 0S_2 + 0S_3

Subject to the constraints:

19X_1 + 14X_2 + 14X_3 + S_1 + 0S_2 + 0S_3 = 78
18X_1 + 15X_2 + 19X_3 + 0S_1 + S_2 + 0S_3 = 84.2
18X_1 + 15X_2 + 17X_3 + 0S_1 + 0S_2 + S_3 = 83.1   (5)

The non-negativity condition requires that
X_1, X_2, X_3 ≥ 0.
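As a cross-check on the hand iterations that follow, model (4)–(5) can also be solved mechanically. The sketch below is ours and assumes SciPy; note that Table 1 and the subsequent iterations carry 13 as the X2 coefficient of the second constraint where the printed model shows 15, and 13 is the value consistent with the reported optimum, so it is used here:

```python
# Re-solve model (4) with a library solver and compare against the
# hand-computed optimum reported in the final tableau.
from scipy.optimize import linprog

c = [-95.5, -75.0, -88.0]        # negated: linprog minimizes
A = [[19.0, 14.0, 14.0],
     [18.0, 13.0, 19.0],         # 13 as in Table 1 (printed model shows 15)
     [18.0, 15.0, 17.0]]
b = [78.0, 84.2, 83.1]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print("X1, X2, X3 =", res.x)     # expected ~ (1.775, 1.30625, 1.85625)
print("Z          =", -res.fun)  # expected ~ 430.83125
```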
We now construct the initial tableau for the simplex method as follows:
Table 1: Initial Simplex Tableau

                 cj        95.5     75     88     0     0     0
 Row   Ci   Solution        X1      X2     X3    S1    S2    S3      P0
  1     0      S1           19      14     14     1     0     0      78
  2     0      S2           18      13     19     0     1     0     84.2
  3     0      S3           18      15     17     0     0     1     83.1
               zj            0       0      0     0     0     0       0
            cj − zj        95.5     75     88     0     0     0
Since the c_j − z_j row in the table above contains
positive entries (for a tableau to be optimal, no entry in
the c_j − z_j row may be positive, i.e., c_j − z_j ≤ 0 for
all j), the table does not yet give the optimal solution.
We therefore introduce x_1 into the solution column,
because it has the largest coefficient in the objective
function in (4) above. We determine the slack to be removed
in favour of x_1 by dividing each entry in P0 by the
corresponding entry in the x_1 column:

78 / 19 = 4.105   (S_1)
84.2 / 18 = 4.678   (S_2)
83.1 / 18 = 4.617   (S_3)

We remove the slack S_1 from the solution column because it
has the lowest ratio.
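The entering/leaving-variable step just described can be sketched in a few lines of Python (ours, not the authors' code). It reproduces the three ratios above and, after elimination, yields Table 2 up to the row scaling noted below that table:

```python
# One simplex pivot on the initial tableau of Table 1.
import numpy as np

c = np.array([95.5, 75.0, 88.0, 0.0, 0.0, 0.0])
# Columns: X1, X2, X3, S1, S2, S3 | P0
T = np.array([[19.0, 14.0, 14.0, 1.0, 0.0, 0.0, 78.0],
              [18.0, 13.0, 19.0, 0.0, 1.0, 0.0, 84.2],
              [18.0, 15.0, 17.0, 0.0, 0.0, 1.0, 83.1]])

reduced = c.copy()                 # c_j - z_j; z_j = 0 in the initial tableau
col = int(np.argmax(reduced))      # entering variable: X1 (largest, 95.5)
ratios = T[:, -1] / T[:, col]      # valid here since the column is positive
row = int(np.argmin(ratios))       # leaving variable: S1 (smallest ratio)
print("ratios:", ratios.round(3))  # [4.105 4.678 4.617]

T[row] /= T[row, col]              # normalize the pivot row
for r in range(T.shape[0]):        # eliminate X1 from the other rows
    if r != row:
        T[r] -= T[r, col] * T[row]
print(T.round(4))
```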
Based on this, we then reconstruct the simplex tableau as follows:
Table 2: Simplex Tableau after the First Iteration

                 cj        95.5      75       88          0         0     0
 Row   Ci   Solution        X1       X2       X3         S1        S2    S3         P0
  1   95.5     X1            1     0.737    0.737    0.052631579    0     0     4.1052632
  2     0      S2            0      −5       109        −18        19     0       195.8
  3     0      S3            0      33        71        −18         0    19       174.9
               zj          95.5    70.37    70.37    5.026315789    0     0     392.05263
            cj − zj          0     4.632    17.63   −5.02631579     0     0

(Rows 2 and 3 are shown scaled by the pivot element, 19.)
Since the c_j − z_j row in the above table still contains
positive entries, the tableau is not yet optimal. We
repeated the same procedure as in the first iteration until
the optimal solution was reached after the fifth iteration;
the final simplex tableau is shown below:
Table 3: Final Simplex Tableau

                 cj        95.5    75    88        0             0             0
 Row   Ci   Solution        X1     X2    X3       S1            S2            S3           P0
  1   95.5     X1            1      0     0    0.307692309   0.134615384  −0.40384615    1.775
  2    88      X3            0      0     1   −0.17307692    0.158653846   0.024038462   1.85625
  3    75      X2            0      1     0   −0.17307692   −0.34134615    0.524038462   1.30625
               zj          95.5    75    88    1.173077232   1.216346371   2.850961562  430.83125
            cj − zj          0      0     0   −1.17307723   −1.21634637   −2.85096156
Since all entries in the c_j − z_j row are ≤ 0, the
iterations in the table have produced the optimal solution.
Interpretation of Results
Considering the final (optimal) tableau above, the optimal
values of the decision variables are x_1 = 1.775,
x_2 = 1.30625 and x_3 = 1.85625, with the value of the
objective function Z = 430.83125. From this analysis, x_3
has the highest output value, which connotes that service
delivery received the greatest input of resources, followed
by x_1 and then x_2 in that order. For the best (optimal)
allocation of resources, the values of x_1, x_2 and x_3
should be at or near equilibrium. This implies that the
management of the College had reasonable resources that
were fairly evenly distributed among the four Departments
and the College Office. However, we strongly recommend
that the University, through the office of the Dean of the
College of Natural Sciences, deploy more resources for
better outputs in the future, and that this study be
extended to the entire University to test the effectiveness
of its allocation of resources vis-à-vis its outputs.
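As a quick arithmetic check (ours), substituting the optimal values back into the objective function reproduces the reported optimum:

```latex
Z = 95.5(1.775) + 75(1.30625) + 88(1.85625)
  = 169.5125 + 97.96875 + 163.35
  = 430.83125
```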
V. CONCLUSION
In conclusion, we analysed the existing model, identified
a near-optimal allocation of the available resources by the
management of the College, and recommended that more
resources be deployed in the College by the University
management for better outputs in the future, and that
further research on the subject be extended to cover the
entire University.
REFERENCES
[1] Okonta, S. D. & Chikwendu, C. R. (2008). An iterative
model for optimal allocation of government resources to the
less privileged. Publication of the ICMCS, 4, 89-100.
[2] Kozłowski, J. (1992). Optimal allocation of resources
to growth and reproduction. TREE, 7(1), 15-19.
[3] Mason, R. & Swanson, B. (1979). Measurement for
management decision. Available at:
https://journals.sagepub.com/doi/abs/10.2307/41165309.
[4] Chen, H., Wang, F., & Helian, N. (2013). A
cost-efficient and reliable resource allocation model based
on cellular automaton entropy for cloud project scheduling.
International Journal of Advanced Computer Science and
Applications, 4(4), 7-14.
[5] Connor, A. M. & Shah, A. (2014). Resource allocation
using metaheuristic search. Available at:
https://airccj.org/CSCP/vol4/csit41930.pdf.
[6] Joiner, K. (2009). A mathematical model to determine
the optimal structure for allocating resource packages: A
case study. Academic Medicine, 84, 13-25.
[7] Hegazy, T. (1999). Optimization of resource allocation
and leveling using genetic algorithms. Journal of
Construction Engineering and Management, 125(3), 167-175.
http://dx.doi.org/10.1061/(ASCE)0733-9364(1999)125:3(167)
[8] Connor, A. M. & Tilley, D. G. (1999). A tabu search
method for the optimisation of fluid power circuits.
IMechE Journal of Systems and Control, 212(5), 373-381.
[9] Elbeltagi, E., Hegazy, T., & Grierson, D. (2005).
Comparison among five evolutionary-based optimization
algorithms. Advanced Engineering Informatics, 19(1),
43-53.
[10] Feng, C.-W., Liu, L., & Burns, S. A. (1997). Using
genetic algorithms to solve construction time cost trade-
off problems. Journal of Computing in Civil Engineering,
11(3), 184-189.
APPENDIX
1) Proof of Theorems
Proof of Theorem One: In general, let x_1, x_2, ..., x_k be
a family of feasible solutions and let a_i ∈ [0, 1] for all
i = 1, 2, ..., k be such that a_1 + a_2 + a_3 + ... + a_k
= 1. Where x = Σ_{i=1}^{k} a_i x_i, it follows that
Ax = Σ_i a_i A x_i ≤ Σ_i a_i b = b, with x ≥ 0. Hence x is
also a feasible solution, and the set of feasible solutions
is convex.
Proof of Theorem Two: Let

P_0 = Σ_{i=1}^{n} y_i P_i   (i)

and

Z = Σ_{i=1}^{n} y_i C_i   (ii)

where Z is the corresponding value of the objective
function. Therefore, by hypothesis, if Z_j − C_j ≥ 0 (i.e.,
C_j − Z_j ≤ 0) for all j, then Z_0 = Σ_j y_j Z_j ≥ Z.
Using equations (i) and (ii), we obtain:

y_1 Σ_{i=1}^{n} x_{i1} P_i + y_2 Σ_{i=1}^{n} x_{i2} P_i +
... + y_n Σ_{i=1}^{n} x_{in} P_i = P_0   (iii)

Given that P_0 = x_{10} P_1 + x_{20} P_2 + ... + x_{m0} P_m,
and since P_1, P_2, ..., P_m are linearly independent, we
can equate the coefficients in equation (iii), which gives:

x_{10} c_1 + x_{20} c_2 + ... + x_{m0} c_m = Z_0.
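As a small numerical illustration of Theorem 1 (ours, not part of the original proof), the snippet below checks that convex combinations of two feasible solutions of model (4) remain feasible; the second-constraint coefficient follows Table 1 (13 rather than the printed 15), as noted earlier:

```python
# Convexity check: a * x1 + (1 - a) * x2 stays feasible for a in [0, 1].
import numpy as np

A = np.array([[19.0, 14.0, 14.0],
              [18.0, 13.0, 19.0],
              [18.0, 15.0, 17.0]])
b = np.array([78.0, 84.2, 83.1])

def feasible(x):
    return bool(np.all(x >= 0) and np.all(A @ x <= b + 1e-9))

x1 = np.array([1.775, 1.30625, 1.85625])   # the reported optimum
x2 = np.zeros(3)                           # the origin, trivially feasible

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = a * x1 + (1 - a) * x2
    print(f"a = {a}: feasible = {feasible(x)}")   # True for every a
```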