Decision makers in the educational field are always seeking new technologies and tools that provide solid, fast answers to support the decision-making process. They need a platform that utilizes students' academic data and turns it into knowledge to make the right strategic decisions. In this paper, a roadmap for implementing a data-driven decision support system (DSS) based on an educational data mart is presented. The independent data mart is implemented on students' degrees (marks) in 8 subjects in a private school (AlIskandaria Primary School in Basrah province, Iraq). The DSS implementation roadmap starts from pre-processing the paper-based data source and ends with providing three categories of online analytical processing (OLAP) queries (multidimensional OLAP, desktop OLAP and web OLAP). A key performance indicator (KPI) is implemented as an essential part of the educational DSS to measure school performance. The static evaluation method shows that the proposed DSS satisfies the privacy, security and performance aspects with no errors after inspecting the DSS knowledge base. The evaluation shows that a data-driven DSS based on an independent data mart with KPI and OLAP is one of the best platforms to support short- to long-term academic decisions.
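As an illustration of the kind of OLAP-style KPI query such a data mart supports, the minimal sketch below aggregates hypothetical student marks from a fact table; the table name, columns and pass threshold are assumptions for illustration, not the schema used in the paper.

# Minimal sketch: a KPI-style roll-up over a hypothetical student-marks fact table.
# Table and column names are illustrative assumptions, not the paper's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fact_marks (
        student_id INTEGER,
        subject    TEXT,
        term       TEXT,
        score      REAL
    )
""")
rows = [
    (1, "Math",    "2021-1", 78), (1, "Science", "2021-1", 64),
    (2, "Math",    "2021-1", 55), (2, "Science", "2021-1", 81),
    (3, "Math",    "2021-1", 92), (3, "Science", "2021-1", 47),
]
conn.executemany("INSERT INTO fact_marks VALUES (?, ?, ?, ?)", rows)

PASS_THRESHOLD = 50  # assumed pass mark for the KPI

# Roll up by subject: average score and pass rate, the kind of KPI a school
# dashboard might track per term.
query = """
    SELECT subject,
           ROUND(AVG(score), 1)                                    AS avg_score,
           ROUND(AVG(CASE WHEN score >= ? THEN 1.0 ELSE 0 END), 2) AS pass_rate
    FROM fact_marks
    WHERE term = '2021-1'
    GROUP BY subject
"""
for subject, avg_score, pass_rate in conn.execute(query, (PASS_THRESHOLD,)):
    print(subject, avg_score, pass_rate)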
An overview of information extraction techniques for legal document analysis ... (IJECEIAES)
In the Indian legal system, different courts publish their legal proceedings every month for future reference by legal experts and common people. Extensive manual labor and time are required to analyze and process the information stored in these lengthy, complex legal documents. Automatic legal document processing is the solution to overcome the drawbacks of manual processing and will be very helpful to the common man for a better understanding of the legal domain. In this paper, we explore recent advances in the field of legal text processing and provide a comparative analysis of the approaches used for it. We divide the approaches into three classes: NLP-based, deep learning-based, and KBP-based approaches. We put special emphasis on the KBP approach, as we strongly believe that this approach can handle the complexities of the legal domain well. We finally discuss some possible future research directions for legal document analysis and processing.
Distributed reflection denial of service attack: A critical review (IJECEIAES)
As the world becomes increasingly connected, the number of users grows exponentially, and “things” go online, the prospect of cyberspace becoming a significant target for cybercriminals is a reality. Any host or device that is exposed on the internet is a prime target for cyberattacks. Denial-of-service (DoS) attacks account for the majority of these cyberattacks. Although various solutions have been proposed by researchers to mitigate this issue, cybercriminals always adapt their attack approach to circumvent countermeasures. One of the modified DoS attacks is known as the distributed reflection denial-of-service (DRDoS) attack. This type of attack is considered a more severe variant of the DoS attack and can be conducted over the transmission control protocol (TCP) and the user datagram protocol (UDP). However, the attack is not effective over TCP because the three-way handshake prevents it from passing from the network layer to the upper layers of the network stack. UDP, on the other hand, is a connectionless protocol, so most DRDoS attacks pass through UDP. This study aims to examine and identify the differences between TCP-based and UDP-based DRDoS attacks.
There are essential security considerations in the systems used by semiconductor companies like TI. Along with other semiconductor companies, TI has recognized that IT security is highly crucial during web application developers' system development life cycle (SDLC). The challenges faced by TI web developers were consolidated via questionnaires, starting with how risk management and secure coding can be reinforced in the SDLC, and how IT security, project management (PM) and SDLC initiatives can be achieved by developing a prototype, which was then evaluated against those goals. This study aimed to apply NIST strategies by integrating risk management checkpoints into the SDLC; to enforce secure coding using a static code analysis tool by developing a prototype application mapped to IT security, project management and SDLC initiatives; and to evaluate the impact of the proposed solution. This paper discusses how SecureTI was able to satisfy IT security requirements in the SDLC and PM phases.
From IT service management to IT service governance: An ontological approach ... (IJECEIAES)
Some companies have achieved better performance as a result of their IT investments, while others have not, and organizations are therefore interested in calculating the value added by their IT. A wide range of literature agrees that the best practices used by organizations promote continuous improvement in service delivery. Nevertheless, overuse of these practices can have undesirable effects and lead to unquantified investments. This paper proposes a practical tool formally developed according to the design science research (DSR) approach. It addresses a domain relevant to both practitioners and academics by providing an IT service governance (ITSG) domain model ontology, concerned with maximizing the clarity and veracity of the concepts within it. The results reveal that the proposed ontology resolves key barriers to ITSG process adoption in organizations, and that combining COBIT and ITIL practices would help organizations better manage their IT services and achieve better business-IT alignment.
The huge volume of text documents available on the internet has made it difficult to find valuable information for specific users, so efficient applications for extracting knowledge of interest from textual documents are vitally important. This paper addresses the problem of responding to user queries by fetching the most relevant documents from a clustered set of documents. For this purpose, a cluster-based information retrieval framework is proposed in order to design and develop a system for analysing and extracting useful patterns from text documents. In this approach, a pre-processing step is first performed to find frequent and high-utility patterns in the data set, and a vector space model (VSM) is then used to represent the dataset. The system was implemented through two main phases. In phase 1, a clustering analysis process groups documents into several clusters; in phase 2, an information retrieval process ranks the clusters against the user query and retrieves the relevant documents from the specific clusters deemed relevant to the query. The results are evaluated using recall and precision (P@5, P@10) of the retrieved results: P@5 was 0.660 and P@10 was 0.655.
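A minimal sketch of the cluster-then-retrieve idea described above, using scikit-learn: documents are represented with TF-IDF (the VSM step), clustered with k-means, and then ranked inside the cluster closest to the query. The toy corpus, number of clusters and cosine-similarity scoring are illustrative assumptions, not the paper's exact pipeline.

# Sketch of cluster-based retrieval: TF-IDF vector space model, k-means clustering,
# then ranking documents inside the cluster closest to the query.
# Corpus, k, and scoring choices are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "data warehouse design for decision support",
    "olap queries over a student data mart",
    "deep learning for legal document analysis",
    "information extraction from court judgments",
]
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)                                 # VSM representation

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)    # phase 1: clustering

query = "olap decision support"
q = vectorizer.transform([query])
best_cluster = cosine_similarity(q, kmeans.cluster_centers_).argmax()  # phase 2: pick cluster

# Rank only the documents in the selected cluster by similarity to the query.
in_cluster = [i for i, c in enumerate(kmeans.labels_) if c == best_cluster]
scores = cosine_similarity(q, X[in_cluster]).ravel()
for idx, score in sorted(zip(in_cluster, scores), key=lambda t: -t[1]):
    print(round(score, 3), docs[idx])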
An Integrated Framework for IT Infrastructure Management by Work Flow Mana... (idescitation)
Information Technology (IT) is one of the fastest-growing fields in today's internet world. IT can be defined in various ways, but is broadly considered to encompass the use of computers and telecommunications equipment to store, retrieve, transmit and manipulate data. Infrastructure is the base for everything, and IT has an infrastructure that must be managed and maintained properly. For an organization's information technology, infrastructure management (IM) is the management of essential operational components, such as policies, processes, equipment, data, human resources and external contacts, for overall effectiveness. In this paper, we propose a methodology to manage IT infrastructure in a better way. Our methodology uses a tree-structure-based architecture to manage the infrastructure with less manual effort. The paper discusses the management process with an efficient methodology, the necessary steps and an algorithm, and also describes workflow management for IT infrastructure management.
A study secure multi authentication based data classification model in cloud ... (IJAAS Team)
Abstract: Cloud computing is one of the most popular terms among enterprises and in the news. The concept has become reality because of fast internet bandwidth and advanced cooperation technology. Resources on the cloud can be accessed through the internet without self-built infrastructure, and cloud computing must effectively manage security in cloud applications. Data classification is a machine learning technique used to predict the class of unclassified data. Data mining uses different tools to discover unknown, valid patterns and relationships in the dataset; these tools are mathematical algorithms, statistical models and machine learning (ML) algorithms. In this paper, the authors use an improved Bayesian technique to classify the data and encrypt the sensitive data using hybrid steganography. The encrypted and non-encrypted sensitive data are sent to the cloud environment, and the parameters are evaluated with different encryption algorithms.
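The abstract does not detail the "improved Bayesian technique", so the sketch below only shows a plain naive Bayes classifier flagging records as sensitive or non-sensitive before any encryption step would run; the features, labels and the scikit-learn model are illustrative assumptions.

# Sketch: a plain (not the paper's improved) naive Bayes classifier that flags
# records as sensitive or non-sensitive before they would be encrypted and
# uploaded to the cloud. Data and features are made up for illustration.
from sklearn.naive_bayes import MultinomialNB

# Toy feature vectors, e.g. counts of [id_numbers, medical_terms, generic_words]
X_train = [
    [3, 2, 1],   # sensitive
    [2, 4, 0],   # sensitive
    [0, 0, 9],   # non-sensitive
    [0, 1, 7],   # non-sensitive
]
y_train = ["sensitive", "sensitive", "non-sensitive", "non-sensitive"]

clf = MultinomialNB().fit(X_train, y_train)

record = [1, 3, 2]
label = clf.predict([record])[0]
print(label)

# Only records predicted as sensitive would go through the encryption /
# steganography path described in the paper; the rest are stored as-is.
if label == "sensitive":
    print("route to encryption / steganography step")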
Application of Savings and Loan Cooperative Services (Yogesh, IJTSRD)
Information technology knowledge has become a necessity. Most of a person's daily activities involve the assistance of information technology, whether in teaching and learning, working in institutions or entrepreneurship. In addition, knowledge of information technology is a person's main asset for competing in the digital era. A cooperative is an institution that runs on the principle of kinship, so in carrying out its activities it prioritizes the welfare of its members and aims to increase the economic growth of the community. The goal of this community service activity is to produce an Android-based savings and loan cooperative service system that facilitates the delivery of information from management to members in real time and provides solutions for speed and accuracy. This research and development work uses the waterfall model, a process or series of steps for developing a new product or improving an existing one. The system analysis was carried out in the initial stages of the research method; the communication stage was carried out through interviews and observation. The observation process involved direct observation to get an overview, and the interview process involved questions and answers to verify the data and information. After the design stage, the next step was to build the Android-based savings and loan cooperative service system. Dimas Indra Laksmana | Maranatha Wijayaningtyas | Kiswandono "Application of Savings and Loan Cooperative Services" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-3, April 2021, URL: https://www.ijtsrd.com/papers/ijtsrd40065.pdf Paper URL: https://www.ijtsrd.com/management/mis-and-retail-management/40065/application-of-savings-and-loan-cooperative-services/dimas-indra-laksmana
Automated hierarchical classification of scanned documents using convolutiona... (IJECEIAES)
This research proposes automated hierarchical classification of scanned documents whose content contains unstructured text and special patterns (specific, short strings), using a convolutional neural network (CNN) and a regular expression method (REM). The research data are digital correspondence documents in PDF image format from Pusat Data Teknologi dan Informasi (Technology and Information Data Center). The document hierarchy covers type of letter, type of manuscript letter, origin of letter and subject of letter. The research method consists of preprocessing, classification, and storage to a database. Preprocessing covers text extraction using Tesseract optical character recognition (OCR) and formation of word document vectors with Word2Vec. Hierarchical classification uses a CNN to classify 5 types of letters and regular expressions to classify 4 types of manuscript letter, 15 origins of letter and 25 subjects of letter. The classified documents are stored in a Hive database in a Hadoop big data architecture. The amount of data used is 5,200 documents: 4,000 for training, 1,000 for testing and 200 for classification prediction. In the trial on 200 new documents, 188 documents were correctly classified and 12 were incorrectly classified, giving an automated hierarchical classification accuracy of 94%. Content-based search of the classified scanned documents can be developed next.
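As a small illustration of the regular-expression stage only (the CNN and Word2Vec stages are omitted), the sketch below classifies the origin and subject of an already-OCR'd letter using short patterns; the patterns, labels and sample text are assumptions, not the ones used in the study.

# Sketch of the regular-expression stage: after OCR (e.g. Tesseract) produces raw
# text, short specific patterns classify attributes of the letter. The patterns
# and the sample text below are illustrative assumptions only.
import re

ocr_text = """
Nomor: 123/PDTI/2021
Perihal: Undangan rapat koordinasi
"""

origin_patterns = {
    "PDTI": r"\bPDTI\b",          # hypothetical code for the issuing unit
    "SETJEN": r"\bSETJEN\b",
}
subject_patterns = {
    "invitation": r"undangan",
    "report": r"laporan",
}

def classify(text, patterns):
    # Return the first label whose pattern matches, or None.
    for label, pattern in patterns.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            return label
    return None

print("origin :", classify(ocr_text, origin_patterns))
print("subject:", classify(ocr_text, subject_patterns))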
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCE (ijesajournal)
Automation is a powerful concept that applies everywhere; without automation, applications do not get developed. In the semiconductor industry, artificial intelligence has played a vital role in implementing chip-based design through automation. The main advantage of applying machine learning and deep learning techniques is to improve the implementation rate. The main objective of the proposed system is to apply deep learning using a data-driven approach for controlling the system, leading to improvements in design, delay, speed of operation and cost. Through this system, the huge volume of data generated by the system can also be controlled.
BIG DATA IN CLOUD COMPUTING REVIEW AND OPPORTUNITIES (ijcsit)
Big data is used in the decision-making process to gain useful insights hidden in the data for business and engineering. At the same time, it presents challenges in processing; cloud computing has helped advance big data by providing computational, networking and storage capacity. This paper presents a review of the opportunities and challenges of transforming big data using cloud computing resources.
ADMINISTRATION SECURITY ISSUES IN CLOUD COMPUTING (ijitcs)
This paper examines the main administration security issues in cloud computing in terms of trustworthiness and gives the reader a broad view of the concept of the service level agreement (SLA) in cloud computing and some of its security issues. It seeks a model that largely guarantees that data are kept secure with respect to factors such as data location, the duration for which data are kept in the cloud environment, trust between customer and provider, and the procedure for formulating the SLA.
Design and implementation of the web (extract, transform, load) process in da... (IAESIJAI)
Owing to the recently increased size and complexity of data, in addition to management issues, data storage requires extensive attention so that it can be employed in realistic applications. Hence, designing and implementing a data warehouse system has become an urgent necessity. Data extraction, transformation and loading (ETL) is a vital part of the data warehouse architecture. This study designs a data warehouse application that contains a web ETL process dividing the work between the input device and the server, proposed to solve the lack of work partitioning between them. The designed system can be used in many branches and disciplines because of its high performance in adding data, analyzing data using a web server and building queries. Analysis of the results proves that the designed system is fast in cleaning and transferring data between the remote parts of the system connected to the internet: the ETL process completes without missing any data in 0.00582 seconds.
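A compact sketch of the extract-transform-load flow described above: rows are extracted from a CSV standing in for the input-device side, cleaned, and loaded into a SQLite table standing in for the warehouse server. File contents, column names and cleaning rules are illustrative assumptions.

# Minimal ETL sketch: extract from CSV, transform (trim / drop incomplete rows),
# load into a warehouse table. Names and cleaning rules are assumptions.
import csv, io, sqlite3

raw_csv = io.StringIO(
    "student_id,subject,score\n"
    "1, Math ,78\n"
    "2,Science,\n"          # incomplete row, dropped during transform
    "3,Math,92\n"
)

# Extract
rows = list(csv.DictReader(raw_csv))

# Transform: strip whitespace, drop rows with missing scores, cast types
clean = [
    (int(r["student_id"]), r["subject"].strip(), float(r["score"]))
    for r in rows if r["score"].strip()
]

# Load
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_scores (student_id INTEGER, subject TEXT, score REAL)")
conn.executemany("INSERT INTO fact_scores VALUES (?, ?, ?)", clean)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM fact_scores").fetchone()[0], "rows loaded")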
Applying Classification Technique using DID3 Algorithm to improve Decision Su... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
Successfully supporting managerial decision-making is critically dep.pdf (anushasarees)
Successfully supporting managerial decision-making is critically dependent upon the availability
of integrated, high quality information organized and presented in a timely and easily understood
manner. Data warehouses have emerged to meet this need. They serve as an integrated repository
for internal and external data—intelligence critical to understanding and evaluating the business
within its environmental context. With the addition of models, analytic tools, and user interfaces,
they have the potential to provide actionable information resources—business intelligence that
supports effective problem and opportunity identification, critical decision-making, and strategy
formulation, implementation, and evaluation. Four themes frame our analysis: integration,
implementation, intelligence, and innovation.
1: The four major categories of business environment factors are integration, implementation, intelligence and innovation.
Organizations use data warehousing to support strategic and mission-critical applications. Data
deposited into the data warehouse must be transformed into information and knowledge and
appropriately disseminated to decision-makers within the organization and to critical partners in
various capacities within the organizational value chain. Crucial problems that must be addressed
in this area are: the modes of dissemination of information to the end user; the development,
selection, and implementation of appropriate models, analytic tools, and data mining tools; the
privacy and security of data; system performance; and adequate levels of training and support.
The human–computer interface is of paramount importance in the data warehouse environment
and the primary determinant of success from the end-user perspective. In order to support
analysis and reporting tasks, the data warehouse must have high quality data and make these data
accessible through intuitive interface technologies. Data warehouse browsing tools provide star-
schema query-like access through a flexible menu-based interface, with pull-down menus
representing important dimensions. These types of tools are easy to use and support some ad-hoc
exploration, but are usually controlled through an administrative layer that determines the data
available to end-users. In developing a flexible interface, there is a tradeoff between the ability to
express ad-hoc queries and the ease-of-use that results from pre-defined constructs implemented
by data warehouse designers and administrators. Of course, SQL can provide an ad-hoc query
facility, but its use requires some care in the data warehouse environment where the combination
of very large tables and ill-formed user queries can produce some truly awful performance and
potentially erroneous results. Casual users may not have sufficient understanding of SQL or of
the database schema to effectively use such an interface. Typically, only trained power users
(e.g., DBAs, application developers) are permitted to write SQL queries.
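To make the ad-hoc SQL discussion concrete, the sketch below runs a small star-schema-style query joining a fact table to a dimension, the kind of statement a browsing tool would generate from pull-down dimension menus. The tables and columns are hypothetical; in a real warehouse the same query pattern would run against very large tables, which is why predefined constructs or trained users are usually preferred.

# Sketch of an ad-hoc star-schema query: fact table joined to a dimension and
# rolled up by a dimension attribute. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'hardware'), (2, 'software');
    INSERT INTO fact_sales  VALUES (1, 100), (1, 250), (2, 400);
""")

query = """
    SELECT d.category, SUM(f.amount) AS total_sales
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
"""
for category, total in conn.execute(query):
    print(category, total)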
A Generic Model for Student Data Analytic Web Service (SDAWS) (Editor IJCATR)
Any university management system accumulates a cartload of data, and analytics can be applied to it to gather useful information to aid the academic decision-making process. This paper is a novel attempt to demonstrate the significance of a data analytic web service in the education domain. The service can easily be integrated with the university management system or any other application of the university. Analytics as a web service offers many benefits over traditional analysis methods: the web service can be hosted on a web server and accessed over the internet or on the private cloud of the campus, and data from various courses in different departments can be uploaded and analyzed easily. In this paper we design a web service framework for educational data mining that provides analysis as a service.
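A minimal sketch of "analytics as a web service": a single endpoint that accepts course marks and returns summary statistics. The route name, payload shape and use of Flask are assumptions for illustration rather than the SDAWS design itself.

# Minimal sketch of an analytics web service endpoint: accepts a JSON list of
# marks and returns summary statistics. Route and payload shape are assumptions.
from statistics import mean, median
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analytics/summary", methods=["POST"])
def summary():
    marks = request.get_json().get("marks", [])
    if not marks:
        return jsonify({"error": "no marks supplied"}), 400
    return jsonify({
        "count": len(marks),
        "mean": mean(marks),
        "median": median(marks),
        "max": max(marks),
        "min": min(marks),
    })

if __name__ == "__main__":
    # e.g. curl -X POST -H "Content-Type: application/json" \
    #      -d '{"marks": [55, 78, 92]}' http://localhost:5000/analytics/summary
    app.run(port=5000)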
Data warehousing is a technique for collecting and managing data from multiple internal and
external sources to provide meaningful business insights. Data warehouses are designed to give a long-range
view of data over time and provide a decision support system environment. They are a vital component of
business intelligence, which is designed for data analysis and reporting. They are used to provide greater
insight into the performance of a business. This paper provides a brief introduction to data warehousing.
Higher education institutions nowadays are operating in an increasingly complex and
competitive environment. The application of innovation is a must for sustaining its competitive advantage.
Institution leaders are using data management and analytics to question the status quo and develop effective
solutions. Achieving these insights and information requires not a single report from a single system, but
rather the ability to access, share, and explore institution-wide data that can be transformed into meaningful
insights at every level of the institution. Consequently, institutions are facing problems in providing necessary
information technology support for fulfilling excellence in performance. More specifically, the best practices
of big data management and analytics need to be considered within higher education institutions. Therefore,
the study aimed at investigating big data and analytics, in terms of: (1) definition; (2) its most important
principles; (3) models; and (4) benefits of its use to fulfill performance excellence in higher education
institutions. This involves shedding light on big data and analytics models and the possibility of its use in
higher education institutions, and exploring the effect of using big data and analytics in achieving performance
excellence. To reach these objectives, the researcher employed a qualitative research methodology for
collecting and analyzing data. The study's most important conclusion is that there is a significant relationship between big data and analytics and performance excellence, as big data management and analytics mainly aims at accomplishing tasks quickly with the least effort and cost. These positive results support
the use of big data and analytics in institutions and improving knowledge in this field and providing a practical
guide adaptable to the institution structure. This paper also identifies the role of big data and analytics in
institutions of higher education worldwide and outlines the implementation challenges and opportunities in the
education industry.
The project enables students to ask college-related queries and get responses through a chatbot, an artificial conversational entity. The system is a web application that provides answers to students' queries. Students simply chat with the bot and do not have to follow any specific format. This system helps students stay updated about college activities.
Designing a Framework to Standardize Data Warehouse Development Process for E... (ijdms)
Data warehousing solutions work as an information base for large organizations to support their decision-making tasks. With the proven need for such solutions in current times, it is crucial to effectively design,
implement and utilize these solutions. Data warehouse (DW) implementation has been a challenge for the
organizations and the success rate of its implementation has been very low. To address these problems, we
have proposed a framework for developing effective data warehousing solutions. The framework is
primarily based on procedural aspect of data warehouse development and aims to standardize its process.
We first identified its components and then worked on them in depth to come up with the framework for
effective implementation of data warehousing projects. To verify effectiveness of the designed framework,
we worked on the National Rural Health Mission (NRHM) project of the Indian government and designed a data warehousing solution using the proposed framework.
KEYWORDS
Data warehousing, Framework Design, Dimensional
Standard Safeguarding Dataset - overview for CSCDUG.pptx (RocioMendez59)
13 July, 2023 - CSCDUG Online Event
Presenting the Sector-led Standard Safeguarding Dataset
Colleagues from Data to Insight, the LA-led service for children’s safeguarding data professionals, are delivering a DfE-funded project in partnership with LAs to define a new “standard safeguarding dataset” which all LAs will be able to produce from their safeguarding information systems.
At this session, they shared what they’ve learned so far from user research with LA colleagues and discussed their early thinking about what a better standard dataset might look like. Participants shared their own thoughts about how to improve these systems and processes.
Presenters
Alistair Herbert
Alistair is the lead officer for Data to Insight, the LA-led service for children’s safeguarding data professionals. With a career focused on local authority children’s services data work, he knows about safeguarding data, information systems, and cross-organisation collaboration.
John Foster
John is a Data Manager for Data to Insight. He has supported a range of children’s services data work, most recently at Shropshire Council. He led Data to Insight’s project to introduce the first national benchmarking dataset for Early Help, and is the user research lead for Data to Insight’s Standard Safeguarding Dataset project.
Rob Harrison and Joe Cornford-Hutchings
Rob and Joe are new Data Managers joining Data to Insight from the private and public sector respectively. They bring between them a wealth of experience and technical expertise, and will be working together to support design and implementation of the new Standard Safeguarding Dataset through 2023-24.
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
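For reference, the conventional P-f / Q-V droop relations that the abstract's voltage and frequency control builds on are typically written as below; the symbols follow the usual droop-control convention and are not taken from the paper itself.

    f = f_0 - k_p (P - P_0)
    V = V_0 - k_q (Q - Q_0)

where f_0 and V_0 are the nominal frequency and voltage, P_0 and Q_0 the reference active and reactive powers, P and Q the measured powers, and k_p and k_q the droop coefficients that share load changes among the distributed generators.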
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
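A small sketch of the NARX idea described above: the next output sample is regressed on lagged outputs (the autoregressive part) and lagged exogenous inputs. The synthetic data, lag orders and the scikit-learn regressor stand in for the paper's actual identification procedure.

# Sketch of NARX-style identification: predict y[k] from past outputs y[k-1..k-na]
# and past exogenous inputs u[k-1..k-nb]. Data and lag orders are synthetic/assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 400
u = rng.uniform(-1, 1, n)                  # exogenous input (e.g. current)
y = np.zeros(n)                            # output (e.g. terminal voltage)
for k in range(2, n):
    y[k] = 0.6 * y[k-1] - 0.2 * y[k-2] + 0.5 * np.tanh(u[k-1]) + 0.01 * rng.standard_normal()

na, nb = 2, 2                              # autoregressive and exogenous lag orders
X, target = [], []
for k in range(max(na, nb), n):
    X.append(np.r_[y[k-na:k], u[k-nb:k]])  # regressor: lagged outputs + lagged inputs
    target.append(y[k])
X, target = np.array(X), np.array(target)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:300], target[:300])
print("test R^2:", round(model.score(X[300:], target[300:]), 3))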
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
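The sketch below shows the core AHP step of turning a pairwise comparison matrix into priority weights via the principal eigenvector, with a consistency check; the 3x3 example matrix is made up and much smaller than the multi-criteria model described in the abstract.

# Sketch of the AHP weighting step: priority weights are the normalised principal
# eigenvector of a pairwise comparison matrix. The 3x3 matrix here is illustrative.
import numpy as np

# pairwise comparisons of three hypothetical criteria (cost, detection zone, response time)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()
print("weights:", np.round(weights, 3))

# Consistency ratio check (RI = 0.58 is Saaty's random index for a 3x3 matrix)
lambda_max = eigvals.real.max()
ci = (lambda_max - len(A)) / (len(A) - 1)
print("consistency ratio:", round(ci / 0.58, 3))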
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
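As a reference point for the conventional method being improved, the sketch below implements one update of the classic fixed-step perturb-and-observe rule; the step size and sign convention are generic textbook choices, not the modified P&O or fuzzy algorithm proposed in the paper.

# Sketch of the classic fixed-step perturb-and-observe (P&O) MPPT rule.
# Step size and sign convention are generic; the paper modifies this baseline.
def perturb_and_observe(v, p, v_prev, p_prev, step=0.01):
    """Return the next reference voltage given current and previous (V, P) samples."""
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v                      # at (or oscillating around) the MPP
    if (dp > 0 and dv > 0) or (dp < 0 and dv < 0):
        return v + step               # keep perturbing in the same direction
    return v - step                   # power dropped: reverse the perturbation

# toy usage: one iteration
v_ref = perturb_and_observe(v=17.8, p=62.0, v_prev=17.7, p_prev=61.2)
print(v_ref)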
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The proposed algorithm ensures that the tracking and approximation errors are uniformly ultimately bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
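A minimal sketch of how a tabular AutoML run of the kind described above could look with AutoGluon; the CSV file names, the label column ("feature_envy") and the metric features are assumptions, and the exact dataset and configuration used in the paper are not given in the abstract.

# Sketch of an AutoML run for code-smell classification with AutoGluon.
# File names, label column, and features are assumptions for illustration.
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("code_metrics_train.csv")   # hypothetical: one row per method,
test = TabularDataset("code_metrics_test.csv")     # columns = code metrics + label

predictor = TabularPredictor(label="feature_envy").fit(train)

print(predictor.evaluate(test))                     # accuracy etc. on held-out data
print(predictor.feature_importance(test).head(20))  # top influential metrics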
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapid, remote monitoring of solar cell system status parameters (solar irradiance, temperature, and humidity) is a critical issue in enhancing their efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions in Egypt (Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live by Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly support Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell system station.
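A minimal sketch of the "send readings over HTTP" side of such a prototype, as it might look on a host that relays sensor samples to a dashboard; the endpoint URL, token header and payload keys are placeholders, not the actual Ubidots configuration used in the study.

# Sketch: relay one set of sensor readings to a dashboard over HTTP.
# URL, token header, and payload keys are placeholders, not the study's setup.
import requests

ENDPOINT = "https://example.com/api/devices/solar-node"   # placeholder URL
TOKEN = "YOUR-TOKEN"                                       # placeholder credential

reading = {
    "irradiance_w_m2": 842.0,
    "temperature_c": 31.5,
    "humidity_pct": 48.0,
}

# In the real prototype the microcontroller would send this periodically.
resp = requests.post(ENDPOINT, json=reading, headers={"X-Auth-Token": TOKEN}, timeout=10)
print(resp.status_code)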
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Therefore, IoT security systems must monitor and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled data for training. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well on a benchmark dataset, achieving an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research develops an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. The data are then sent in real time to the internet of things (IoT) broker Eclipse Mosquitto using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data are stored in a database for analysis using the long short-term memory (LSTM) network method and displayed in a web application using an application programming interface (API) service. The experiments produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and the other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted values on the test data. The best results are obtained using a two-layer LSTM configuration, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, with a root mean square error (RMSE) of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R² value was also evaluated for each attribute used as input, with values between 0.590 and 0.845.
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey with a unique flavor profile. A key problem identified is that many stingless bee colonies collapse due to weather, temperature, and environmental conditions. It is critical to understand the relationship between stingless bee honey production and environmental conditions to improve honey production. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous studies have been analyzed and compared to identify the research gaps. A framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the three most efficient approaches are a CNN at 98.67%, densely connected convolutional networks with YOLO v3 at 97.7%, and DenseNet201 at 99.81%. This study is significant in assisting researchers in developing a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using a compact contiki/cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The potential of these characteristics is explored by examining how well time-domain signals work in the detection of epileptic signals using the intracranial Freiburg Hospital (FH) database, the scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) database, and the Temple University Hospital (TUH) EEG database. To test the viability of this approach, two types of experiments were carried out: binary classification (preictal, ictal, and postictal, each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using the CHB-MIT database was 84.4%, the Freiburg database's time-domain signals reached an accuracy of 79.7%, and the highest accuracy of 94.02% was obtained in the TUH EEG database when comparing the interictal stage to the preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but increasing acceleration and maneuverability lead to a dangerous driving style for some drivers. Under these conditions, developing a method that tracks driver behavior is relevant. The article provides an overview of existing methods and models for assessing vehicle operation and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To this end, a set of input data was formed, including 20 descriptive features about the environment, the driver's behavior, and the operating characteristics of the car, collected using OBD-II. The generated data set is fed to a Kohonen network, where clustering is performed according to driving style and degree of danger. Assigning the driving characteristics to a particular cluster makes it possible to move to the individual indicators of a specific driver and to consider individual driving characteristics. Applying the method makes it possible to identify potentially dangerous driving styles, which can help prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution over large areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been adopted. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve the inherent semantic features of objects. This study addresses these research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining the objects' intrinsic qualities. Lastly, a soft-margin kernel is proposed for a multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, and customers increasingly expect to see a business online and have instant access to its products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical water scarcity and economic water scarcity.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Implementing data-driven decision support system based on independent educational data mart
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 11, No. 6, December 2021, pp. 5301~5314
ISSN: 2088-8708, DOI: 10.11591/ijece.v11i6.pp5301-5314
Journal homepage: http://ijece.iaescore.com
Implementing data-driven decision support system based on
independent educational data mart
Alaa Khalaf Hamoud, Marwah Kamil Hussein, Zahraa Alhilfi, Rabab Hassan Sabr
Computer Information Systems Department, University of Basrah, Iraq
Article Info ABSTRACT
Article history:
Received Nov 28, 2020
Revised Apr 8, 2021
Accepted Apr 26, 2021
Decision makers in the educational field always seek new technologies and
tools, which provide solid, fast answers that can support decision-making
process. They need a platform that utilize the students’ academic data and
turn them into knowledge to make the right strategic decisions. In this paper,
a roadmap for implementing a data driven decision support system (DSS) is
presented based on an educational data mart. The independent data mart is
implemented on the students’ degrees in 8 subjects in a private school (Al-
Iskandaria Primary School in Basrah province, Iraq). The DSS
implementation roadmap is started from pre-processing paper-based data
source and ended with providing three categories of online analytical
processing (OLAP) queries (multidimensional OLAP, desktop OLAP and
web OLAP). Key performance indicator (KPI) is implemented as an essential
part of educational DSS to measure school performance. The static
evaluation method shows that the proposed DSS follows the privacy, security
and performance aspects with no errors after inspecting the DSS knowledge
base. The evaluation shows that the data driven DSS based on independent
data mart with KPI, OLAP is one of the best platforms to support short-to-
long term academic decisions.
Keywords:
Decision support system
Educational data mart
ETL
Independent data mart
KPI
OLAP
This is an open access article under the CC BY-SA license.
Corresponding Author:
Alaa Khalaf Hamoud
Computer Information Systems Department
University of Basrah
61003 Karmat Ali Camp, Basrah Iraq
Email: alaa.hamoud@uobasrah.edu.iq
1. INTRODUCTION
The importance of data repositories has emerged with the existence of large institutions.
Departments manage their own databases (marketing, financial, and administrative), which organise massive
common data. Finding data related to a specific subject and organising such data into a single database called
‘the data store’ are required; another requirement is maintaining the special rules for stores with no
modification or changing the rules on the basis of topics by using special software, especially in each subject,
via a process called schema integration. This process also specifies how to transfer the merged data [1]. Data
warehouse (DW) can be defined as a subject-oriented, time-variant, integrated and non-volatile data used to
support strategic decision making [2]. DW holds a collection of permanent historical data that assists in
administrative decision making to help in accessing data for the purposes of time analysis, knowledge
discovery and decision making [3]. It is specifically designed to extract, process and represent data in a
suitable format for this purpose. The data extracted from different sources, rules, systems and places are
defined as a kind of database that contains a huge amount of existing data to help in making decisions within
an organization [4]. Users ultimately require indicators, which are systems needed for
studying, analysing and presenting enterprise data in a manner that enables senior management to make
decisions [5], [6]. The type of DW that an organization should adopt depends on the way that the
organization works and the type of decision support system (DSS) it needs. One of the simplest types of data
repositories is the operational data store (ODS), a production database that has been replicated after errors
have been processed. ODS is primarily used to complete standard process reports and provide details of
business transactions for summary analysis [7], [8]. Another type of DW is called data mart, which is a
limited use or single-use system that can be used to analyse specific information in a specific area or for a
specific production line. Data stores usually contain only summary data but can be linked to ODS for diving
into transaction details if needed. Data stores are sometimes managed by information technology (IT)
departments in companies but can also be managed by users in a particular department or group [9]. Many
types and applications use DWs and data marts to support strategic decisions, such as clinical path [10]-[12],
cancer DW [13], educational health DW [14], HR DW [15], compliant DW [16], human resource DW [16]
and security DW [17].
Educational staff members always need new technologies that support their decisions; these
members seek different tools and applications to support their decisions on the basis of different algorithms
and techniques, such as machine learning and data mining algorithms [18]-[25], mobile learning [26], DSSs
[27] and web-based tools [28]. The decisions made on the basis of previous studies and solid analytical
results are more sound and reasonable than before. Educational DW is the best solution that provides
online analytical results on the basis of a multidimensional view of OLAP queries for all educational
stakeholders (professors, lecturers, managers and decision makers). Educational DW provides a large view of
the performance of all students and allows them to detect the obstacles in their progress [29]. Educational
DW can also be used for the future applications of data mining to implement all techniques and algorithms
that can be implemented on huge educational historical records instead of a small dataset. The use of
educational DW can effectively reduce educational errors that affect academic decision making [30], [31].
DSS is an essential platform for making right strategic decisions. The DSS type can be determined
on the basis of the purpose of the DSS, the type of knowledge and target users. DSS can be categorised into
five types: knowledge-, model-, data-, document- and communication-driven DSS. Model-driven DSS allows
decision makers to choose among options on the basis of limited data and options. Knowledge-driven DSS
covers many kinds of systems within organisations; it provides an advice management process or service and
product selection. Data-driven DSS concentrates on a decision and manipulates the data to fit the decision
where the data can be in different formats. Target users are managers and stakeholders. Document-driven
DSS targets a specific group of users on the basis of a document search on the web; a mechanism is required
for retrieving related documents to support decisions. Communication-driven DSS provides a shared
dashboard for making decisions for more than one person in a single shared task [32].
Data mart is used in this model as the base of the data-driven educational DSS through the
implementation of specific school data and analysis of executive information systems. This paper presents
the stages of implementing an educational independent data mart, starting from selecting the proper approach
of design to implementing OLAP queries and KPI. The implementation starts with converting the paper-
based educational data of student records into electronic educational records. The data mart staging area is
prepared to extract electronic students’ records and implement all extraction, transformation and loading
(ETL) processes. Star schema is chosen as an architecture of data mart for its ease of implementation, ease of
load and fast OLAP query response. The educational cube is built to implement OLAP queries. Three types
of result viewing are provided: the structured query language (SQL) server analysis services (SSAS) cube view, online reports and offline Excel pivot reports. SQL server management studio (SSMS) 2014 is used to store
and implement educational data mart objects (data mart schema and staging area). ETL is implemented and
designed by SSIS 2014, where SQL server analysis services (SSAS) 2014 is used to design and construct the
educational cube. Likewise, SSRS 2014 is used for designing reports. All the implemented projects are
deployed by SQL server data tools (SSDT), as this tool provides the ability to create and deploy (SSIS, SSAS
and SQL server reporting services (SSRS)) projects.
The rest of this paper is organised as follows: Section 2 analyses and discusses the related works
and presents the strong points in these works. Section 3 explains the model implementation roadmap and
details each step of the model implementation. Section 4 concludes the points derived from the model result
and presents the future works that can be implemented on educational data mart.
2. RELATED WORKS
Smyrnaki [33] proposed a model based on a DW to implement a DSS in order to support decisions.
The proposed system integrated heterogeneous sources used for different purposes, such as educational,
financial and managerial decisions. The results showed that the integration of multiple heterogeneous sources
of data can be a useful platform for both the educational leadership and the quality assurance unit. Next, Suman
et al. [34] proposed a DW to solve many challenges in a higher education center. These challenges face
the designer throughout the design process, such as the overall system design; the main processes of
extraction, transformation, and loading; and performing the analysis processes using multidimensional
OLAP-based queries. The tools used for developing the DW were Mondrian and the Pentaho business intelligence tool.
However, the proposed systems did not implement KPI as an essential component in the decision-making
process.
Mihai et al. [35] proposed using enterprise data warehouse (EDW) to support academic decisions in
educational institutions. The design and implementation process proceeded via two steps; the first one is
implementing EDW to find the performance measurements of the management of the entire academic staff,
whereas the second step is finding the result evaluation of the correlation between financial allocation and
educational performance. The result obtained from implementing enterprise document management (EDM) is
finding the indicator that measures the required financial efforts for education and helping decision makers
estimate future efforts to enhance education. However, the paper presented a general overview of EDW
implementation and neither went through design methodology nor explained how to select the proper
approach of implementing EDM. Another model implemented by Abdullah and Obaid [36] combined
educational records from two simulated databases of 10 years of data from the Department of Computer
Science at Basrah University and four years of data from Al-Iraq University, Iraq. They unified the data
under a single schema of EDW and then used OLAP to conduct a descriptive analysis and find student
achievements through these years. The decision makers in the proposed model are lecturers and department
heads. However, the approach of EDW design, the access method to EDW and the multidimensional cube
were not clearly explained.
Kurniawan and Erwin [37] showed the advantages of using DW and DM in the prediction of
students’ performance. The model implementation also passed through two stages, analysis and DW
implementation; they used the Kimball approach to design the DW with a star schema as its structure.
However, the type of DW in the model was neither determined nor distinguished from the data mart.
Mohammed et al. [38] designed an architectural approach on the basis of DW to combine databases from
different Iraqi universities for increasing information sharing among all universities, colleges and
departments. They used the beneficial characteristic of DW application to maximise information sharing
among universities. However, the approach failed to show how to solve conflicts among databases from
different universities. The main difficulty in this approach is the huge volume of data in a single university, because many
colleges exist, each with many departments and units. The paper also failed to explain how to
deal with different standards and business rules among databases of a single college and how to handle this
obstacle.
3. DSS IMPLEMENTATION
The data mart is the base of the proposed DSS. The flow chart of the implemented data mart is
demonstrated in Figure 1. Six basic important steps are taken to implement the educational data mart; these
steps are data pre-processing; data profiling; data mart area, which holds the staging area and the data mart
schema; ETL; educational cube building; and OLAP, KPI and reports. The evaluation process is performed
after completing the knowledge base of the DSS.
Figure 1. Flow chart of educational data mart implementation approach
The DSS architecture consists of four major areas, which are data preparation area; data mart area;
KPI, OLAP and reports area; and decision-making area, as shown in Figure 2. The data preparation area
involves all data source transformation and selection processes performed on the paper-based data source and
their conversion into an electronic data source. The data mart area involves a data staging area where all the
extracted data are stored and transformed to be loaded into data mart schema tables (fact tables and
dimensions). The OLAP and query area provides many access methods to data mart analysis results from
online to offline and multidimensional OLAP (MOLAP). KPI is an essential tool in the DSS that helps
stakeholders in measuring progress. The last area is the decision-making area where all stakeholders
(analysts, school managers, teachers and senior decision makers) reach the reports and make their decisions.
Figure 2. Model architecture
3.1. Data preprocessing
The base element of the proposed DSS is the collected data. The data source of the educational data
mart is sourced from the Al-Iscanderia Private Elementary School; this school is in the Bahadria District,
Basrah Province, Iraq. The data are paper-based data sheets, which hold all the required documentations for
all students’ degrees. The school follows the paper-based procedure to calculate averages and success rates
on different subjects and in different stages; thus, storing them in an electronic base takes a long time. The
main table of student records contains the details listed in Table 1 after they are stored in the electronic base.
In this table, the main attributes of the data source are explained in detail; the content type, data type, and description
of each attribute are presented.
Table 1. Details of students’ records
Field Content Type Data Type Details
Student_ID Continuous Long Represents the number of students currently attending the school.
Student_Number Continuous Long The number of the paper record containing all the student's data, which may be referred to when more extensive data on the student are needed (e.g. medical records).
Full Name Discrete Text First three parts of the student's name as written in Arabic order (student name + father name + grandfather name).
Birth Continuous Date The student's birthdate.
Enrollment Discrete Text The year in which the data were taken.
Address Discrete Text Name of the district in which the student currently resides.
Class Continuous Long Number of the school year the student is in at the time of recording, stored as an integer.
Groups Discrete Text Owing to the large number of enrolled students, students of the same year are divided into groups (e.g. 5-A, 5-B, 5-C).
Subject Discrete Text Name of the subject taught in each class; the number of subjects differs by class.
Grade Discrete Text The grades registered for each month (October, November, December, Mid-term, March, April, Final).
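For illustration, the electronic student records described above could be held in a relational table along the following lines. This is only a sketch; the table name, column names, sizes, and types are illustrative assumptions and not the authors' exact definitions.

-- Sketch: a possible relational layout for the electronic student records in
-- Table 1 (table and column names, sizes, and types are illustrative only).
CREATE TABLE dbo.StudentRecords (
    Student_ID      BIGINT        NOT NULL,
    Student_Number  BIGINT        NOT NULL,   -- paper record number
    Full_Name       NVARCHAR(100) NOT NULL,   -- student + father + grandfather name
    Birth           DATE          NULL,
    Enrollment      NVARCHAR(20)  NULL,       -- academic year of the record
    Address         NVARCHAR(50)  NULL,       -- district of residence
    Class           INT           NOT NULL,   -- school year (1-6)
    [Groups]        NVARCHAR(10)  NULL,       -- e.g. 5-A, 5-B, 5-C
    Subject         NVARCHAR(50)  NOT NULL,
    [Month]         NVARCHAR(20)  NOT NULL,   -- October, ..., Mid-term, ..., Final
    Grade           NVARCHAR(20)  NOT NULL
);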
The second table, Table 2, presents the subjects taught in each class. Eight subjects are taught
to students from classes 1 to 6. The subjects differ for each
class; some subjects are taught in the first class, and different subjects are taught from the fourth class to the
sixth class. For example, Islamic studies, Arabic language, mathematics, science, arts and physical education
are taught from Classes 1 to 6, whereas social studies are taught from classes 4 to 6. Finally, English
language is taught in classes 5 and 6.
Table 2. Subjects taken by classes
Subject Number Subject Class
1 Islamic Studies 1-6
2 Arabic Language 1-6
3 Mathematics 1-6
4 Science 1-6
5 Arts 1-6
6 PE 1-6
7 Social Studies 4-6
8 English Language 5-6
3.2. Data profiling
Data profiling, sometimes called data analysis, is the assessment and examination of data
consistency, integrity and quality of data source. Using data profiling is a fundamental process to examine the
data quality of DW data sources. Data profiling provides results that can be depended on when making a
decision related to DW implementation. Data profiling concentrates on the individual attributes of the data
source. It gives a complete summary that describes the length, data type, variance, uniqueness, null
ratio, and domain range. It shows the full view of data quality related to all data source attributes [39], [40].
Data profiling is an important step in the building process of DW and data mart. Building data mart or DW
does not actually succeed if this step is not performed. The results of data profiling can help in determining
the dimensions and fact tables, the proposed primary keys, the null ratio in each column, the mean and standard deviation
in each column, the maximum and minimum values in each column and the domain of each column. The
result of the data profiling of student records is shown in Table 3. SSDT provides a data profiling tool that
presents the results graphically for ease of understanding and use. Data profiling results are converted into
table readings, as shown in Table 3.
Table 3. Data profiling results
Seq Field Name Minimum Value Maximum Value Number of Distinct Values Null Ratio
1 Address - - 11 0
2 Birth 9-7-2003 27-2-2012 528 104
3 Class 1 6 6 0
4 Groups 5 0
5 Mark 613 0
6 Month 613 0
7 Number_of_Students 525 525 599 0
8 School_Address - - 1 0
9 School_Name - - 1 0
10 Student_Name - - 599 0
11 Student_Number 3 613 599 0
12 Subject - - 12 0
13 Year 2018 2018 1 0
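The profiling figures above were produced with the SSDT data profiling task; a minimal T-SQL sketch that computes similar per-column statistics (minimum, maximum, distinct count, and null ratio) is shown below, assuming the illustrative dbo.StudentRecords table introduced earlier.

-- Sketch only: per-column profile similar to Table 3, assuming the illustrative
-- dbo.StudentRecords table (column names are assumptions, not taken from the paper).
SELECT
    MIN(Birth)                                   AS MinBirth,
    MAX(Birth)                                   AS MaxBirth,
    COUNT(DISTINCT Birth)                        AS DistinctBirth,
    CAST(SUM(CASE WHEN Birth IS NULL THEN 1 ELSE 0 END) AS FLOAT)
        / COUNT(*)                               AS BirthNullRatio,
    COUNT(DISTINCT Class)                        AS DistinctClass,
    COUNT(DISTINCT Address)                      AS DistinctAddress
FROM dbo.StudentRecords;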
3.3. Data mart area
The data mart consists of two areas: Staging and data mart schema tables. The staging area consists
primarily of a staging table where the data are first extracted from the data source and loaded into it. The
staging table faces all transformation processes and holds the final version of the table before loading it into
dimensions and fact tables. Data mart schema is a star schema where five-dimension tables are connected to
the fact table whilst their data are taken from the staging table. The staging table is an intermediate area used
for storage between source systems and the DW; a temporary storage area is where data are successfully
deleted after being uploaded to the repository. This area is used in many major processes, such as archiving
and preparing data source, data extraction, cleaning, unifying, mirror, conversion, loading and indexing,
quality assurance and updating [41], [42]. These processes are usually referred to as ETL. The staging area
should be prepared and performed whilst all the intended OLAP queries are asked. The staging area also
holds the table with all columns and dimension tables. The staging table becomes the place where all data are
manipulated. Data manipulation involves all data integrity processes, such as cleaning, transformation,
enrichment and deletion. The data values of all columns must be the same as the data values of all columns in
the dimension tables.
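A minimal sketch of the initial extraction into the staging area is shown below. In the paper this step is performed by an SSIS data flow; the table and column names here (dbo.StudentRecords, dbo.StagingStudent) are illustrative assumptions.

-- Sketch: initial extraction into the staging table (names are illustrative;
-- in the paper this step is performed by an SSIS data flow, not raw SQL).
TRUNCATE TABLE dbo.StagingStudent;

INSERT INTO dbo.StagingStudent
    (Student_ID, Student_Number, Full_Name, Birth, Enrollment,
     Address, Class, [Groups], Subject, [Month], Grade)
SELECT Student_ID, Student_Number, Full_Name, Birth, Enrollment,
       Address, Class, [Groups], Subject, [Month], Grade
FROM dbo.StudentRecords;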
Many approaches exist for designing the data mart schema tables, such as top-down,
bottom-up, inside-out, and mixed approaches. Selecting an approach, depending on the overall size and
implementation duration of the DW, can result in either an enterprise DW or a small data mart [43]. The top-
down approach is used for long-term designing models and takes further analysis and redesigning to fit all
enterprise goals. The bottom-up approach is used for short-term designing models where the results can be
observed [44]. Given that the intended DW is an independent educational data mart, the best approach to
build the data mart is the bottom-up approach. Based on the previously listed reasons, using this approach in
implementing a data mart provides many facilities to build a solid solution that can provide answers to
educational stakeholders.
Three famous schemas are used to implement the DW schema (i.e. star, snowflake and fact
constellation). Each schema has its own advantages and disadvantages, which make designers prefer one
schema over the other. Star schema is the famous one for its simplicity and wide usage; it consists of a
central table called a fact table and many other tables called dimensions that surround the fact table. The fact
table consists of many concatenated keys to the dimensions and many other keys called measurements, which
represent the facts or functions that can be calculated along other dimension columns [45]. Figure 3
represents the proposed star schema of an educational data mart.
Figure 3. Star schema of educational data mart
The data mart schema is built using SSMS 2014. The schema consists of five dimensions (address,
information, enrolment, subject, and degree) and the fact table. The fact table holds five keys to concatenate
the dimension tables and one key measurement (count), which is the count function for finding the number of
students represented by all the OLAP query answers. Using dimensions is one of the key factors that hasten
the OLAP responses. Concept hierarchy is normally used with dimensions such as date and address
dimensions to permit OLAP operations, such as roll-up, drill down, slice, dice and pivot. The star schema can
provide fast response OLAP queries, handle changes with time, have multiple hierarchies in dimensions, can
build a simple DW schema and can easily be loaded with data [46], [47].
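As a rough illustration of the schema in Figure 3, the dimensions and fact table could be declared as follows. This is a hedged sketch: the key, column, and table names are assumptions for illustration and are not the authors' exact definitions.

-- Sketch of the star schema (all names are illustrative).
CREATE TABLE Dim_Address   (Address_Key INT IDENTITY PRIMARY KEY, District NVARCHAR(50));
CREATE TABLE Dim_Info      (Info_Key    INT IDENTITY PRIMARY KEY, Student_Number BIGINT,
                            Full_Name NVARCHAR(100), Birth DATE);
CREATE TABLE Dim_Enrolment (Enrol_Key   INT IDENTITY PRIMARY KEY, Enrollment NVARCHAR(20),
                            Class INT, [Groups] NVARCHAR(10));
CREATE TABLE Dim_Subject   (Subject_Key INT IDENTITY PRIMARY KEY, Subject NVARCHAR(50));
CREATE TABLE Dim_Degree    (Degree_Key  INT IDENTITY PRIMARY KEY, [Month] NVARCHAR(20),
                            Grade NVARCHAR(20));

CREATE TABLE Fact_Degrees (
    Address_Key   INT REFERENCES Dim_Address(Address_Key),
    Info_Key      INT REFERENCES Dim_Info(Info_Key),
    Enrol_Key     INT REFERENCES Dim_Enrolment(Enrol_Key),
    Subject_Key   INT REFERENCES Dim_Subject(Subject_Key),
    Degree_Key    INT REFERENCES Dim_Degree(Degree_Key),
    Student_Count INT                 -- count measurement aggregated by OLAP queries
);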
The next stage is performing ETL tasks, which take approximately 70% of the DW development
time and cost spent on implementing the overall DW model. ETL involves many tasks used to manipulate
data to obtain their final cleaned and integrated version. The main ETL tasks include (but are not limited to) the following; a brief sketch of some of these steps is given after the list:
Data extraction: The first step in the process of transferring data to the DW. It means reading the data and
understanding them from different sources and then copying the necessary parts to the data submission
area to continue the work later. The extraction step represents the greatest effort in the DW.
Data cleaning is a task of detecting errors in the data and correcting them if possible; it involves tasks
such as dealing with missing elements and reducing noise by defining extreme values and correcting data
conflicts [48].
Data transformation: When data are extracted from the source system, a series of actions is applied to
convert the data into valid and meaningful formulas [49].
Load: Load services need support before and after loading, such as the regeneration of indexes and
physical sections of the table. The specificity and structure of each loading target are also
considered [50].
Refresh: The last step of ETL where updates over time are transferred from data sources to
repositories [51].
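To make the cleaning and transformation tasks concrete, the following T-SQL sketch shows two simple transformations on the illustrative staging table (standardising text fields and filling missing values); in the paper these steps are SSIS transforms rather than SQL statements.

-- Sketch: example transformation steps on the illustrative staging table.
-- Standardise text fields extracted from the paper-based records.
UPDATE dbo.StagingStudent
SET    Address = LTRIM(RTRIM(Address)),
       Subject = LTRIM(RTRIM(Subject));

-- Fill missing districts with a default value instead of leaving NULLs.
UPDATE dbo.StagingStudent
SET    Address = N'Unknown'
WHERE  Address IS NULL;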
3.4. ETL
ETL is the stage where designers implicitly prepare for fast-answering OLAP queries. In ETL,
three important factors are prepared to make OLAP fast, namely, measurements, loading dimensions, and
concept hierarchies. The first two stages (extract and transform) are implemented on the previously created
staging table. The extraction process not only involves selecting the required data but also testing if the data
are fulfilling the intended goals.
The two parts of extract and transform (ETL) are implemented on the staging table by using the
SSIS package. The loading data mart strategy is divided into two stages, namely, loading dimensions and
loading fact table. Figure 4 shows both stages of loading data mart schema tables. The first stage is loading
the five-dimension tables with data. The multicast tool is used to duplicate the staging table data flow so that the
dimensions can be loaded using a slowly changing dimension (SCD) transform. Address and info dimensions are loaded using
SCD with changing attributes. The next three dimensions (subject, enrolment and degree) are loaded with
fixed attributes. The major difference between fixed and changing attributes is that fixed attributes do not
detect changes in the staging table after loading data into dimensions, whereas changing attributes detect
changes and reflect such changes in dimensions. Fixed attributes can be used for all dimensions’ attributes.
Figure 4. Loading dimensions and fact tables strategy
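The two loading stages of Figure 4 can be approximated in plain T-SQL as follows. The MERGE statement mimics a type-1 (changing attribute) SCD load of the info dimension, and the INSERT performs the fact-table load by looking up dimension keys; all names are illustrative, and the paper itself uses SSIS transforms for these steps.

-- Sketch: changing-attribute (type 1) load of the info dimension, roughly what
-- the SSIS Slowly Changing Dimension transform does (names are illustrative).
MERGE Dim_Info AS tgt
USING (SELECT DISTINCT Student_Number, Full_Name, Birth
       FROM dbo.StagingStudent) AS src
   ON tgt.Student_Number = src.Student_Number
WHEN MATCHED AND (tgt.Full_Name <> src.Full_Name OR tgt.Birth <> src.Birth) THEN
     UPDATE SET Full_Name = src.Full_Name, Birth = src.Birth  -- changing attributes overwrite in place
WHEN NOT MATCHED BY TARGET THEN
     INSERT (Student_Number, Full_Name, Birth)
     VALUES (src.Student_Number, src.Full_Name, src.Birth);

-- Second loading stage: fill the fact table by looking up the dimension keys.
INSERT INTO Fact_Degrees (Address_Key, Info_Key, Enrol_Key, Subject_Key, Degree_Key, Student_Count)
SELECT a.Address_Key, i.Info_Key, e.Enrol_Key, s.Subject_Key, d.Degree_Key, COUNT(*)
FROM   dbo.StagingStudent st
JOIN   Dim_Address   a ON a.District       = st.Address
JOIN   Dim_Info      i ON i.Student_Number = st.Student_Number
JOIN   Dim_Enrolment e ON e.Enrollment = st.Enrollment AND e.Class = st.Class AND e.[Groups] = st.[Groups]
JOIN   Dim_Subject   s ON s.Subject        = st.Subject
JOIN   Dim_Degree    d ON d.[Month]        = st.[Month] AND d.Grade = st.Grade
GROUP BY a.Address_Key, i.Info_Key, e.Enrol_Key, s.Subject_Key, d.Degree_Key;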
The second stage is building a cube to provide an analytical platform to perform OLAP queries. The
cube is constructed to provide analysts with a platform where they can ask questions and find their answers
as charts or tables. The educational multidimensional cube is implemented using SSAS 2014, which consists
of the required dimensions and hierarchies. The cube consists of dimensions that can be used to answer
OLAP queries on the basis of the measurement (count). Multidimensional cube is selected due to its
advantages, such as fast complex query response and excellent performance [52], [53].
3.5. OLAP and KPI
The process of implementing OLAP and constructing reports is performed. OLAP is a technology
used on DW architecture to obtain fast and accurate results as answers to complex queries [4]. The OLAP
cube is primarily implemented to support complex queries on highly dimensional data structures. OLAP
simultaneously processes dimensions and fact tables with the possibility to roll back if an error occurs. The
OLAP cube is a popular important component of DW; the OLAP cube server stores security settings and
complex calculations that can be integrated into data mining tools and algorithms [54]. The OLAP system is
built on top of a relational database; OLAP has different categories, such as multidimensional OLAP
(MOLAP), relational OLAP (ROLAP), hybrid OLAP (HOLAP), desktop OLAP (DOLAP), database OLAP
and web OLAP. The MOLAP server is implemented by a multidimensional database, and all cube indexes are
stored and retrieved. By contrast, ROLAP server sends query parameters and receives query answers to/from
the relational database. HOLAP is a combination of the strengths of MOLAP and ROLAP features. DOLAP
can be considered a variation of ROLAP. In DOLAP, users have the portability to perform OLAP complex
queries by using existing DOLAP software on a pre-created multidimensional dataset on user desktop. In
database OLAP, OLAP calculation can be performed on a relational database management system that
supports the OLAP structure. Web OLAP allows OLAP calculations from web browsers. Three ways can be
used to view OLAP results, namely, SSAS 2014 cube view, online reports using SSRS 2014 and offline
reports using Microsoft Excel 2013 pivot table [16], [55].
The first access method is the SSAS cube view. After implementing a multidimensional educational
cube by using SSAS 2014, the resulting cube is stored with separated locations, which can be accessed from
the SSMS login screen locally or remotely. The SSMS login screen provides remote access to the database,
ETL, cube and report. Cubes can be easily viewed by dragging and dropping dimension columns and the
measurement. The educational cube is implemented on the basis of the multidimensional category because of
its performance and flexibility in implementing complex queries. Figure 5 shows the grade counts in the mid-
term examinations of all the classes of students who reside in Bahadria City.
The first class has the greatest number of failed students with 228, followed by Classes 2 and 4 each
with 180 students. In Class 3, 150 students failed, whereas Classes 5 and 6 hold 30 and 20 failed students,
respectively. The figure also illustrates the other grades with the number of students according to class and
grade. The figure represents the slice OLAP operation, where a slice of data is selected on the basis of
Bahadria City as a dimension with other dimensions (class and mid-grade).
Figure 5. Grades of students of all classes in Bahadria district in mid-term
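In the paper this slice is obtained by browsing the SSAS cube; an equivalent relational (ROLAP-style) query over the illustrative star schema sketched earlier might look as follows. The district and month labels used in the filter are assumptions for illustration.

-- Sketch: relational equivalent of the slice in Figure 5 - mid-term grade counts
-- per class for students living in the Bahadria district (names illustrative).
SELECT e.Class, d.Grade, SUM(f.Student_Count) AS Students
FROM   Fact_Degrees f
JOIN   Dim_Address   a ON a.Address_Key = f.Address_Key
JOIN   Dim_Enrolment e ON e.Enrol_Key   = f.Enrol_Key
JOIN   Dim_Degree    d ON d.Degree_Key  = f.Degree_Key
WHERE  a.District = N'Bahadria'
  AND  d.[Month]  = N'Mid-term'
GROUP BY e.Class, d.Grade
ORDER BY e.Class, d.Grade;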
The second method used to view reports is SSRS 2014 for constructing web OLAP. The reports are
designed using SSRS 2014 to provide pre-defined web OLAP answers and remote access for analysts to view
statistical and complex multidimensional query answers. Reports are stored in reporting servers that can be
accessed remotely or locally within the intranet network. Figure 6 presents the results as a report using SSRS
where these results correspond to those shown previously in Figure 5.
Figure 6. Grades of students in the mid-term exam
The last method of viewing reports is using the offline Microsoft Excel 2013 pivot table to
implement desktop OLAP. This method allows offline access for instant access to the cube. In this method,
the final staging table is imported to an excel file after the transformation processes, and the pivot table report
chart is used to view the results. The pivot table provides the flexibility to select the required chart to view
the results on the basis of the selected columns and a measurement. Excel pivot provides fast calculation
functions of a selected table. After importing the staging table from SSMS 2014 and selecting the imported
table, a pivot table can easily provide a chart to view the selected columns with a count aggregate function.
Figure 7 illustrates the examples of excel pivot tables. The figure lists the number of students
according to their ages (10–15) as classified on the basis of the three selected grades (Excellent (E), good
(G), and very good (V)). Two ages (11 and 12) have the highest grades among all ages. Students aged 11
have the highest count of grade E among all students (with >300 cases), followed by those
aged 12 (with >200 cases); these results can help analysts in finding the factors that improve student grades
in these two age groups, including the factors that influence the grades in other ages.
Figure 7. Number of students according to age and grade (E, G, and V)
Key performance indicator (KPI) is one of the most powerful tools in any DSS. KPI helps in
measuring performance according to specific criteria. SSAS provides a powerful environment to build and
implement KPIs that help in constructing DSSs [10]. Figure 8 shows the KPI of the success rate that helps in
measuring the performance according to the number of successful students. The value cursor represents the
number of successful students in the current year, and the goal is to increase the success rate by 10%. The
status indicator can be represented by three types of cursors where the cursor in the figure represents the
current status. The status indicator colour changes according to the status. The trend indicator represents the
performance measure according to time. When the KPI value changes over time, the trend indicator changes.
The figure represents the implemented report based on the success-rate KPI. SSAS also allows users to
drag and drop dimensions to measure the KPI value according to the selected dimensions.
Figure 8. KPI of success rate
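SSAS KPIs are defined with value, goal, status, and trend expressions in MDX; the following T-SQL sketch only mirrors the logic of the success-rate KPI (current success rate as the value, a 10% improvement over a previous-year baseline as the goal). The baseline figure, grade label, and table names are illustrative assumptions, not values from the paper.

-- Sketch: success-rate KPI logic. The KPI value is the current success rate and the
-- goal is a 10% improvement over a previous-year baseline (baseline is a placeholder).
DECLARE @baseline DECIMAL(5,2) = 70.0;            -- hypothetical previous-year success rate
DECLARE @goal     DECIMAL(5,2) = @baseline * 1.10;
DECLARE @value    DECIMAL(5,2);

SELECT @value = 100.0 * SUM(CASE WHEN Grade <> N'Fail' THEN 1 ELSE 0 END) / COUNT(*)
FROM   dbo.StagingStudent
WHERE  [Month] = N'Final';                         -- 'Fail' and 'Final' are assumed labels

SELECT @value AS KPI_Value, @goal AS KPI_Goal,
       CASE WHEN @value >= @goal        THEN 'Met'
            WHEN @value >= 0.9 * @goal  THEN 'Close'
            ELSE 'Below target' END     AS KPI_Status;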
3.6. Data-driven educational DSS evaluation
In this stage, the proposed DSS is evaluated to find the framework feasibility. The proposed system
is implemented on a platform with 8 GB of random-access memory (RAM), an Intel Core i3 2.4 GHz
processor, an Intel(R) 82579LM Gigabit network adapter, and 140 GB of free hard disk space. Access to the platform
is restricted, and online access through the network is limited to permitted users only. These restrictions
to access DSS satisfy the security aspects of the system. The privacy of the DSS is ensured by hiding all the
personal information of the students in the system. Educational DSS holds KPI and three kinds of OLAP
reports (Web OLAP, MOLAP, and desktop OLAP). Each OLAP report holds many strength points and
serves a specific kind of user. In the proposed educational DSS, the user (manager, analyst, decision maker)
has the flexibility to use and navigate any type of reports according to needs. Analysts can use MOLAP
reports. MOLAP permits selecting more than one dimension and performing multidimensional expressions on the
selected dimensions. Managers can use web OLAP reports to view the analysts' results on the predefined
OLAP reports through the network. Desktop OLAP reports can be used by decision makers and any other
users to perform all OLAP operations instantly and show the analytical results according to the selected
dimensions. OLAP reports allow users to navigate the educational cube on the basis of the OLAP operations.
KPI is a powerful tool in any DSS that helps in giving status and trend indicators and presenting the current
value and how near it is to the goal.
The static evaluation method is performed to check the DSS errors by inspecting knowledge base
without using DSS. The proposed DSS confirms the security aspects where the dashboards are accessed by
permitted users only. These restrictions also confirm the privacy aspects where students’ personal
information is hidden from analysts. The evaluation process of the DSS is a static method in which the knowledge
base is reviewed to check for errors and mistakes without using the DSS. The
presentation is checked to confirm the static evaluation requirements. The DSS usability, cost effectiveness
and effectiveness are checked on the basis of the system usage [56]-[61]. Subsequently, the dynamic
evaluation is performed over test cases after applying the DSS in the school. In this step, the experts check
the obtained results from the test cases. All the usability factors are examined, such as learnability,
understandability, error prevention, accuracy, operability, efficiency, attractiveness, and effectiveness [62]-
[67].
4. CONCLUSION AND FUTURE WORKS
In this study, a roadmap for implementing an independent educational data mart is explained. The
independent educational data mart is the base of the resulting DSS. Many difficulties are dealt with in
implementing an educational data mart, such as handling the paper-based data source, implementing the ETL
package, applying OLAP, KPI, and finally designing and deploying reports. The data source pre-processing
stage involves converting paper-based student information and degrees into electronic data sources by using
SSMS. The ETL package is implemented using SSIS, which involves many ETL tasks, such as deriving
columns, filling missing values, constructing surrogate keys and constructing concept hierarchies. Four
approaches can be used to design and implement models (i.e. top-down, bottom-up, inside out and mixed).
The best approach to design the data mart is the bottom-up approach because the result is required in a short
period to support decisions, in contrast to constructing an enterprise DW. The educational data mart provides many
benefits, such as the provision of a fast-implemented platform to support educational decisions related to the academic performance of students; it also provides academic decision makers, teachers and school managers with insight into the factors that influence students' success on the basis of their age, class, address and subject. In addition, the educational data mart provides educational stakeholders with a comprehensive overview of all data to detect outliers.
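To make the transformation tasks concrete, the following Python/pandas sketch illustrates the same operations on a hypothetical staging table; the actual package is built in SSIS, and the column names and rules shown here are assumptions for illustration only.

import pandas as pd

# Hypothetical staging table produced from the electronic data source.
staging = pd.DataFrame({
    "student_name": ["Ali", "Sara", "Omar"],
    "birth_year":   [2011, None, 2012],
    "degree":       [78, 64, None],
})

# 1) Fill missing values (here with the column median / mean).
staging["birth_year"] = staging["birth_year"].fillna(staging["birth_year"].median())
staging["degree"] = staging["degree"].fillna(staging["degree"].mean())

# 2) Derive a new column (age at an assumed reference year).
staging["age"] = 2021 - staging["birth_year"].astype(int)

# 3) Construct a concept hierarchy on degrees (degree -> grade band).
staging["grade_band"] = pd.cut(staging["degree"],
                               bins=[0, 50, 70, 90, 100],
                               labels=["fail", "pass", "good", "excellent"])

# 4) Construct a surrogate key for the student dimension.
staging["student_key"] = range(1, len(staging) + 1)

print(staging)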
Three well-known schemas exist for implementing a DW, namely, the star schema, the snowflake schema and the fact constellation. Implementing the schema is a complementary step to the OLAP cube implementation because the OLAP cube depends completely on the previously constructed fact tables and dimensions. The star schema is proposed for the educational data mart; it is selected for several reasons, such as the fast responses to OLAP queries, the ease of applying hierarchies on dimensions and the ease of adapting to changes in the schema. The educational data mart can be considered a solid platform for data mining applications that need information collected from the entire school. The use of a data mart at the school level is preferred for this type of analysis and aggregation of large data, because the DW of an institution, given the complexity and breadth of its scope of work, is usually managed by central information technology (IT) departments. ETL tasks take approximately 70% of the time and effort during data mart implementation. Given that the data source of student degrees and information is converted into an electronic data source, the extraction task is performed smoothly.
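The star schema idea can be sketched as follows in Python; the dimension and fact tables and their columns are hypothetical examples, not the deployed SQL Server schema. An OLAP-style query then reduces to joining the fact table with its dimensions and aggregating, which is one reason the star schema answers quickly.

import pandas as pd

# Hypothetical dimension tables of the star schema.
dim_student = pd.DataFrame({"student_key": [1, 2],
                            "class": ["3A", "3B"],
                            "age": [9, 10]})
dim_subject = pd.DataFrame({"subject_key": [10, 11],
                            "subject": ["Math", "Science"]})

# Fact table: only surrogate keys from the dimensions plus measurements.
fact_degrees = pd.DataFrame({"student_key": [1, 1, 2, 2],
                             "subject_key": [10, 11, 10, 11],
                             "degree": [78, 45, 90, 62],
                             "pass_count": [1, 0, 1, 1]})

# A roll-up by class and subject: join the fact table with its dimensions,
# then aggregate the count measurement.
cube_view = (fact_degrees
             .merge(dim_student, on="student_key")
             .merge(dim_subject, on="subject_key"))
print(cube_view.groupby(["class", "subject"])["pass_count"].sum())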
The transformation task consists of many processes, such as deriving columns, converting data, filling missing values and constructing surrogate keys. The loading task primarily consists of two stages: loading dimensions and loading fact tables. SCD has three types that can be used for loading dimensions, namely, historical, fixed and changing attributes; however, only two types (fixed and changing attributes) are used here. A fixed attribute loads a dimension once and does not allow changes, whereas a changing attribute detects changes in the staging tables and reflects these changes on the dimensions (a minimal sketch is given after this paragraph). Loading fact tables involves looking up the keys in the dimensions, calculating the count measurements and loading these looked-up keys and calculated measurements into the fact tables. Many OLAP categories can be implemented through this model, such as MOLAP, desktop OLAP and web OLAP. MOLAP answers OLAP queries especially fast because of its use of dimensions, concept hierarchies and measurements; these three factors, in addition to storing the cube in indexed form, make OLAP responses fast. KPI is an essential tool in any DSS; here, a KPI on the proposed success rate is implemented to measure the progress of increasing the success rate in the school. Based on the proposed educational data mart, data marts can be considered reliable platforms for running machine learning and data mining algorithms, such as decision trees, neural networks, clustering and association rules, to detect and find hidden patterns that affect student performance in the knowledge base of the DSS.
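The changing-attribute behaviour can be sketched as follows in Python; the dimension, the staging extract and the changed column are hypothetical, and the real loading is done by the SCD handling in SSIS.

import pandas as pd

# Hypothetical student dimension already loaded, and a new staging extract.
dim_student = pd.DataFrame({"student_key": [1, 2],
                            "student_name": ["Ali", "Sara"],
                            "address": ["Basrah", "Zubair"]})
staging = pd.DataFrame({"student_name": ["Ali", "Sara"],
                        "address": ["Basrah", "Qurna"]})   # Sara's address changed

# Changing attribute: detect rows whose address differs from the dimension
# and overwrite the dimension value with the staging value.
merged = dim_student.merge(staging, on="student_name", suffixes=("", "_new"))
changed = merged["address"] != merged["address_new"]
dim_student.loc[changed.values, "address"] = merged.loc[changed, "address_new"].values

# Fixed attribute: student_name was loaded once and is never updated.
print(dim_student)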
As described above, the static evaluation method checks the DSS for errors by inspecting the knowledge base without using the DSS. The proposed DSS confirms the security aspects, because the dashboards are accessed by permitted users only, and the privacy aspects, because the personal information of students is hidden from analysts. The presentation is checked to confirm the static evaluation requirements, and the DSS usability, cost effectiveness and effectiveness are checked on the basis of system usage. Subsequently, a dynamic evaluation is performed over test cases after applying the DSS in the school; in this step, experts check the results obtained from the test cases, and all usability factors are tested, such as learnability, understandability, error prevention, accuracy, operability, efficiency, attractiveness and effectiveness. The proposed system can serve as an analysis platform on which a mobile application can be built to transmit online analytical results to all stakeholders; these results can be used to measure progress toward the school goals set earlier. The system can also be used to run data mining algorithms in order to find hidden patterns, relationships among attributes and the causes of student failure; applying these algorithms can enhance students' performance and thus improve the academic performance of the school. In addition, knowledge mining will be implemented on the results of the data mining algorithms to allow users to find the relationships among features in the data.
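As an example of this future direction, the sketch below trains a decision tree on data that could be drawn from the data mart; the features, their encoding and the library choice (scikit-learn) are illustrative assumptions, not results reported in this paper.

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training set extracted from the data mart.
data = pd.DataFrame({
    "age":      [9, 10, 9, 11, 10, 9],
    "class":    [1, 1, 2, 2, 1, 2],        # class/section encoded as integers
    "absences": [2, 10, 1, 15, 3, 8],
    "passed":   [1, 0, 1, 0, 1, 0],        # target: 1 = passed, 0 = failed
})

X, y = data[["age", "class", "absences"]], data["passed"]
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Flag a new student profile that may be at risk of failing.
new_student = pd.DataFrame([{"age": 10, "class": 2, "absences": 12}])
print("predicted to pass:", bool(model.predict(new_student)[0]))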
REFERENCES
[1] J. Han, M. Kamber, and J. Pei, "Data mining: Concepts and techniques, third edition," The Morgan
Kaufmann Series in Data Management Systems, vol. 5, no. 4, pp. 83-124, 2011.
[2] S. Bouaziz, A. Nabli, and F. Gargouri, "Design a data warehouse schema from document-oriented database,"
Procedia Computer Science, vol. 159, pp. 221-230, 2019, doi: 10.1016/j.procs.2019.09.177.
[3] S. Sundari, M. N. Fadli, D. Hartama, A. P. Windarto, and A. Wanto, "Decision Support System on Selection of
Lecturer Research Grant Proposals using Preferences Selection Index," in Journal of Physics: Conference Series,
vol. 1255, no. 1, Art. No. 012006, 2019, doi: 10.1088/1742-6596.
[4] I. A. Najm, A. K. Hamoud, J. Lloret, and I. Bosch, "Machine Learning Prediction Approach to Enhance Congestion
Control in 5G IoT Environment," Electronics, vol. 8, no. 6, Art. No. 607, 2019, doi: 10.3390/electronics8060607.
[5] W. H. Inmon, "Building the data warehouse," John Wiley & Sons, 2005.
[6] T. H. Davenport, "Putting the enterprise into the enterprise system," Harvard Business Review, vol. 76, no. 4,
pp. 121-131, 1998.
[7] A. Baghal, M. Zozus, A. Baghal, S. Al-Shukri, and F. Prior, "Factors Associated with Increased Adoption of a
Research Data Warehouse," in ITCH, vol. 257, pp. 31-35, 2019, doi: 10.3233/978-1-61499-951-5-31.
[8] R. Kimball and M. Ross, "The data warehouse toolkit: the complete guide to dimensional modelling," John Wiley
& Sons, 2011.
[9] T. Z. Ali, T. M. Abdelaziz, A. M. Maatuk, and S. M. Elakeili, "A Framework for Improving Data Quality in Data
Warehouse: A Case Study," 2020 21st International Arab Conference on Information Technology (ACIT), 2020,
pp. 1-8, doi: 10.1109/ACIT50332.2020.9300119.
[10] A. K. Hamoud and T. A. S. Obaid, "Design and implementation data warehouse to support clinical decisions using
OLAP and KPI," Department of computer science, University of Basrah, 2013.
[11] A. Hamoud and T. Obaid, "Building data warehouse for diseases registry: first step for clinical data warehouse,"
International Journal of Scientific and Engineering Research, vol. 4, no. 7, pp. 636-640, 2013,
doi: 10.2139/ssrn.3061599.
[12] A. Hamoud, A. S. Hashim, and W. A. Awadh, "Clinical data warehouse: a review," Iraqi Journal for Computers
and Informatics, vol. 44, no. 2, 2018.
[13] A. Hamoud, H. Adday, T. Obaid, and R. Hameed, "Design and implementing cancer data warehouse to support
clinical decisions," International Journal of Scientific and Engineering Research, vol. 7, no. 2, pp. 1271-1285,
2016, doi: 10.2139/ssrn.3061594.
[14] M. M. Triola and M. V. Pusic, "The education data warehouse: a transformative tool for health education research,"
Journal of graduate medical education, vol. 4, no. 1, pp. 113-115, 2012, doi: 10.4300/JGME-D-11-00312.1.
[15] A. K. Hamoud, M. A. Ulkareem, H. N. Hussain, Z. A. Mohammed, and G. M. Salih, "Improve HR Decision-
Making Based on Data Mart and OLAP," In Journal of Physics: Conference Series, IOP Publishing, vol. 1530,
no. 1, 2020, Art. no. 012058, doi: 10.1088/1742-6596/1530/1/012058.
[16] A. K. Hamoud, H. N. Hussien, A. A. Fadhil and Z. R. Ekal, "Improving service quality using consumers’
complaints data mart which effect on financial customer satisfaction," in Journal of Physics Conference Series,
vol. 1530, no. 1, 2020, Art. no. 012060, doi: 10.1088/1742-6596/1530/1/012060.
[17] R. Chowdhury, P. Chatterjee, P. Mitra, and O. Roy, "Design and implementation of security mechanism for data
warehouse performance enhancement using two tier user authentication techniques," International Journal of
Innovative Research in Science, Engineering and Technology, vol. 3, no. 2, pp. 165-172, Feb. 2014.
[18] A. S. Hashim, W. A. Awadh, and A. K. Hamoud, "Student Performance Prediction Model based on Supervised
Machine Learning Algorithms," IOP Conference Series: Materials Science and Engineering, IOP Publishing,
vol. 928, no. 3, Art. no. 032019, 2020, doi: 10.1088/1757-899X/928/3/032019.
[19] A. Hamoud, "Applying association rules and decision tree algorithms with tumor diagnosis data," International
Research Journal of Engineering and Technology, vol. 3, no. 8, pp. 27-31, 2017, doi: 10.2139/ssrn.3028893.
[20] A. Hamoud and A. Humadi, "Student’s Success Prediction Model Based on Artificial Neural Networks (ANN) and
A Combination of Feature Selection Methods," Journal of Southwest Jiaotong University, vol. 54, no. 3, 2019.
[21] A. Hamoud, A. Humadi, W. A. Awadh, and A. S. Hashim, "Students’ success prediction based on bayes
algorithms," International Journal of Computer Applications, vol. 178, no. 7, pp. 6-12, Nov. 2017,
doi: 10.2139/ssrn.3080633.
[22] A. Hamoud, "Selection of best decision tree algorithm for prediction and classification of students’ action,"
American International Journal of Research in Science, Technology, Engineering & Mathematics, vol. 16, no. 1,
pp. 26-32, 2016.
[23] A. S. Hashima, A. K. Hamoud, and W. A. Awadh, "Analyzing students’ answers using association rule mining
based on feature selection," Journal of Southwest Jiaotong University, vol. 53, no. 5, pp. 1-15, Oct. 2018,
doi: 10. 3969/j.
[24] A. Hamoud, A. S. Hashim, and W. A. Awadh, "Predicting student performance in higher education institutions
using decision tree analysis," International Journal of Interactive Multimedia and Artificial Intelligence, vol. 5,
no. 2, pp. 26-31, 2018.
[25] A. K. Hamoud, "Classifying students' answers using clustering algorithms based on principle component analysis,"
Journal of Theoretical & Applied Information Technology, vol. 96, no. 7, 2018.
[26] G. Fulantelli, D. Taibi, and M. Arrigo, "A framework to support educational decision making in mobile learning,"
Computers in human behavior, vol. 47, pp. 50-59, Jun. 2015, doi: 10.1016/j.chb.2014.05.045.
[27] T. Pradhan and S. Pal, "A multi-level fusion based decision support system for academic collaborator
recommendation," Knowledge-Based Systems, vol. 197, no. 7, Art. No. 105784, Jun. 2020,
doi: 10.1016/j.knosys.2020.105784.
[28] T. Feghali, I. Zbib, and S. Hallal, "A web-based decision support tool for academic advising," Journal of
Educational Technology and Society, vol. 14, no. 1, pp. 82-94, 2011.
[29] F. Di Tria, E. Lefons, and F. Tangorra, "Research Data Mart in an Academic System," 2012 Spring Congress on
Engineering and Technology, 2012, pp. 1-5, doi: 10.1109/SCET.2012.6341952.
[30] I. Mekterovic, L. Brkic, and M. Baranovic, "Improving the ETL process and maintenance of higher education
information system data warehouse," WSEAS transactions on computers, vol. 8, no. 10, pp. 1681-1690, 2009.
[31] M. E. Zorrilla, "Data warehouse technology for e-learning," in Methods and Supporting Technologies for Data
Analysis, ed: Springer, vol. 225, pp. 1-20, 2009, doi: 10.1007/978-3-642-02196-1_1.
[32] S. Belciug and F. Gorunescu, "A brief history of intelligent decision support systems," Intelligent Decision Support
Systems-A Journey to Smarter Healthcare. Springer, Cham, 2020, pp. 57-70.
[33] O. Smyrnaki, "Data warehousing in higher education. A case study of the Hellenic Mediterranean University,"
Thesis, Hellenic Mediterranean University, 2020.
[34] S. Suman, P. Khajuria, and S. Urolagin, "Star Schema-Based Data Warehouse Model for Education System Using
Mondrian and Pentaho," in International conference on Modelling, Simulation and Intelligent Computing, vol. 659,
pp. 30-39, 2020, doi: 10.1007/978-981-15-4775-1_4.
[35] M. Paunica, M. L. Matac, A. L. Manole, and C. Motofei, "Measuring the performance of educational entities with a
data warehouse," Annales Universitatis Apulensis: Series Oeconomica, vol. 12, no. 1, pp. 176-184, 2010.
[36] Z. A. Abdullah and T. A. Obaid, "Design and implementation of educational data warehouse using OLAP,"
International Journal of Computer Science and Network-IJCSN, vol. 5, no. 5, 2016.
[37] Y. Kurniawan and E. Halim, "Use data warehouse and data mining to predict student academic performance in
schools: A case study (perspective application and benefits)," Proceedings of 2013 IEEE International Conference
on Teaching, Assessment and Learning for Engineering (TALE), 2013, pp. 98-103,
doi: 10.1109/TALE.2013.6654408.
[38] M. A. Mohammed, A. R. Hasson, A. R. Shawkat, and N. J. Al-khafaji, "E-government architecture uses data
warehouse techniques to increase information sharing in Iraqi universities," 2012 IEEE Symposium on E-Learning,
E-Management and E-Services, 2012, pp. 1-5, doi: 10.1109/IS3e.2012.6414947.
[39] I. F. Ilyas and X. Chu, "Data cleaning," ACM, 2019.
[40] R. Zhang, M. Indulska, and S. Sadiq, "Discovering data quality problems," Business & Information Systems
Engineering, vol. 61, no. 5, pp. 575-593, 2019, doi: 10.1007/s12599-019-00608-0.
[41] T. Rujirayanyong and J. J. Shi, "A project-oriented data warehouse for construction," Automation in Construction,
vol. 15, no. 6, pp. 800-807, November 2006, doi: 10.1016/j.autcon.2005.11.001.
[42] A. Sen and A. P. Sinha, "A comparison of data warehousing methodologies," Communications of the ACM, vol. 48,
no. 3, pp. 79-84, 2005.
[43] P. A. Sabatier, "Top-down and bottom-up approaches to implementation research: a critical analysis and suggested
synthesis," Journal of public policy, vol. 6, no. 1, pp. 21-48, 1986.
[44] C. Ghezzi, "Designing data marts for data warehouses," ACM Transactions on Software Engineering and
Methodology (TOSEM), vol. 10, no. 4, pp. 452-483, 2001, doi: 10.1145/384189.384190.
[45] P. Ponniah, "Data warehousing fundamentals for IT professionals," John Wiley & Sons, 2011.
[46] N. Tryfona, F. Busborg, and J. G. B. Christiansen, "StarER: a conceptual model for data warehouse design," in
Proceedings of the 2nd ACM international workshop on Data warehousing and OLAP, November 1999, pp. 3-8,
doi: 10.1145/319757.319776.
[47] S. Chaudhuri and U. Dayal, "An overview of data warehousing and OLAP technology," ACM Sigmod record,
vol. 26, no. 1, pp. 65-74, 1997, doi: 10.1145/248603.248616.
[48] J. Caserta and R. Kimball, "The data warehouse toolkit: Practical techniques for extracting, cleaning, conforming,
and delivering data," Wiley, 2013.
[49] A. Sellami, A. Nabli, and F. Gargouri, "Graph NoSQL Data Warehouse Creation," Proceedings of the 22nd
International Conference on Information Integration and Web-based Applications & Services, Nov. 2020,
pp. 34-38, doi: 10.1145/3428757.3429141.
[50] S. Thulasiram and N. Ramaiah, "Real Time Data Warehouse Updates Through Extraction-Transformation-Loading
Process Using Change Data Capture Method," in International Conference on Computer Networks and Inventive
Communication Technologies, vol. 44, pp. 552-560, 2019, doi: 10.1007/978-3-030-37051-0_62.
[51] S. Chakraborty and J. Doshi, "Materialized Queries with Incremental Updates," in Information and Communication
Technology for Intelligent Systems, ed: Springer, vol. 106, pp. 31-40, 2019, doi: 10.1007/978-981-13-1742-2_4.
[52] A. Hamoud and T. Obaid, "Using OLAP with diseases registry warehouse for clinical decision support,"
International Journal of Computer Science and Mobile Computing, vol. 3, no. 4, pp. 39-49, 2014,
doi: 10.2139/ssrn.3061597.
[53] R. T. Rimi and K. A. Hasan, "Efficient Query Processing for Multidimensional Data Cubes," in International
Conference on Cyber Security and Computer Science, Springer, Cham, vol. 325, pp. 647-658, 2020,
doi: 10.1007/978-3-030-52856-0_51.
[54] M. C. Trivedi, V. K. Yadav, and A. K. Gupta, "A Proposed DDS Enabled Model for Data Warehouses with Real
Time Updates," International Journal of Informatics and Communication Technology (IJ-ICT), vol. 7, no. 1,
pp. 31-38, 2018, doi: 10.11591/ijict.v7i1.pp31-38.
[55] E. Pourabbas, "Providing accurate answers to OLAP queries based on standardized moments of data cubes,"
Information Systems, vol. 94, 2020, Art. no. 101588, doi: 10.1016/j.is.2020.101588.
[56] A. Krtalić and M. Bajić, "Development of the TIRAMISU advanced intelligence decision support system,"
European Journal of Remote Sensing, vol. 52, no. 1, pp. 40-55, 2019, doi: 10.1080/22797254.2018.1550351.
[57] M. el Yaakoubi, P. Ravesteijn, A. Prinsen, H. Hooimeijer, and M. van der Ven, "Data Driven Decision Support:
The Role of the Controller in Decision-Making Processes," in 16th European Conference on Management,
Leadership and Governance, 2020, pp. 73-80, doi: 10.34190/ELG.20.026.
[58] G. Jayashree and C. Priya, "Comprehensive Guide to Implementation of Data Warehouse in Education," in
Intelligent Computing and Innovation on Data Science, ed: Springer, vol. 118, pp. 1-8, 2020, doi: 10.1007/978-
981-15-3284-9_1.
[59] G. Roccasalva, "Towards a DSS: A Toolkit for Processes of Co-designing," in Project and Design Literacy as
Cornerstones of Smart Education, ed: Springer, vol. 158, pp. 49-52, 2020, doi: 10.1007/978-981-13-9652-6_4.
[60] H. M. Iqbal, R. Parra-Saldivar, R. Zavala-Yoe, and R. A. Ramirez-Mendoza, "Smart educational tools and learning
management systems: supportive framework," International Journal on Interactive Design and Manufacturing
(IJIDeM), vol. 14, pp. 1179-1193, 2020, doi: 10.1007/s12008-020-00695-4.
[61] A. Häggman-Laitila, P. Salokekkilä, and S. Karki, "Integrative review of the evaluation of additional support
programs for care leavers making the transition to adulthood," Journal of pediatric nursing, vol. 54, pp. 63-77,
September-October 2020, doi: 10.1016/j.pedn.2020.05.009.
[62] K. Shafinah, M. Selamat, R. Abdullah, A. Muhamad, and A. Noor, "System evaluation for a decision support
system," Information Technology Journal, vol. 9, no. 5, pp. 889-898, 2010.
[63] A. B. Deraman and F. A. Salman, "Managing usability evaluation practices in agile development environments,"
International Journal of Electrical and Computer Engineering, vol. 9, no. 2, pp. 1288-1297, April 2019,
doi: 10.11591/ijece.v9i2.pp1288-1297.
[64] T. Wahyuningrum and K. Mustofa, "A systematic mapping review of software quality measurement: Research
trends, model, and method," International Journal of Electrical and Computer Engineering, vol. 7, no. 5,
pp. 2847-2854, Oct. 2017, doi: 10.11591/ijece.v7i5.pp2847-2854.
[65] H. V. Gamido and M. V. Gamido, "Comparative review of the features of automated software testing tools,"
International Journal of Electrical and Computer Engineering, vol. 9, no. 5, pp. 4473-4478, Oct. 2019,
doi: 10.11591/ijece.v9i5.pp4473-4478.
[66] D. Delen, K. Topuz, and E. Eryarsoy, "Development of a Bayesian Belief Network-based DSS for predicting and
understanding freshmen student attrition," European journal of operational research, vol. 281, no. 3, pp. 575-587,
March 2020, doi: 10.1016/j.ejor.2019.03.037.
[67] K. Fahd, S. J. Miah, K. Ahmed, S. Venkatraman, and Y. Miao, "Integrating design science research and design
based research frameworks for developing education support systems," Education and Information Technologies,
pp. 1-22, 2021, doi: 10.1007/s10639-021-10442-1.