Big data technology (BDT) is being actively adopted by world-leading organizations because of its expected benefits. Most organizations in Thailand, however, are still at the decision or planning stage of BDT adoption, and many challenges remain in encouraging BDT diffusion in businesses. This study therefore develops a research model that investigates the determinants of BDT adoption in the Thai context, based on the technology–organization–environment (TOE) framework and the diffusion of innovation (DOI) theory. Data were collected through an online questionnaire from a sample of 300 IT employees in different organizations in Thailand. Structural equation modeling (SEM) was used to test the hypotheses. The results indicated that the research model fit the empirical data (Normed Chi-Square = 1.651, GFI = 0.895, AGFI = 0.863, NFI = 0.930, TLI = 0.964, CFI = 0.971, SRMR = 0.0392, RMSEA = 0.046) and explained 52% of the variance in the decision to adopt BDT. Relative advantage, top management support, competitive pressure, and trading partner pressure showed significant positive relationships with BDT adoption, while security negatively influenced BDT adoption.
Machine learning for data management – findings and implications:
Machine learning has significant potential to improve data quality, but it will at the same time disrupt data management processes and practices.
Data management processes will be redesigned:
- Highly repetitive and simple cases will be automated by the machine, but humans need to intervene in more difficult and complex cases
--> The machine takes over prediction
--> The human judges the output and confirms it
There are some important prerequisites:
- Machine learning techniques depend on high-quality data --> (garbage in, garbage out)
- New roles and skills are required to explore and productize machine learning
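The division of labor sketched above (machine predicts routine cases, a human judges the difficult ones) can be illustrated with a minimal routing loop. This is a sketch under stated assumptions: the record fields, the toy `classify` rule, and the 0.9 confidence threshold are all illustrative, not part of the original findings.

```python
def classify(record):
    """Toy stand-in for a trained model: returns (label, confidence).
    A real system would call an actual classifier here (assumption)."""
    if record["amount"] < 100:
        return "approve", 0.97          # simple, repetitive case
    return "review", 0.55               # difficult case, low confidence

def route(records, threshold=0.9):
    """Send high-confidence predictions straight through; queue the rest
    for human judgment and confirmation."""
    auto, needs_human = [], []
    for r in records:
        label, conf = classify(r)
        if conf >= threshold:
            auto.append((r, label))      # machine takes over prediction
        else:
            needs_human.append(r)        # human judges output and confirms
    return auto, needs_human

auto, manual = route([{"amount": 50}, {"amount": 5000}])
print(len(auto), len(manual))  # 1 1
```

The threshold is the key design knob: raising it shifts more cases to the human queue, trading throughput for safety.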
Big Data has triggered an influx of research and perspectives on concepts and processes previously pertaining to the Data Warehouse field. Some conclude that the Data Warehouse as such will disappear; others present Big Data as the natural evolution of the Data Warehouse (perhaps without identifying a clear division between the two); and finally, some others pose a future of convergence, partially exploring the possible integration of both. In this paper, we revise the underlying technological features of Big Data and the Data Warehouse, highlighting their differences and areas of convergence. Even though some differences exist, both technologies could (and should) be integrated, because they aim at the same purpose: data exploration and decision-making support. We explore some convergence strategies based on the common elements of the two technologies. We present a revision of the state of the art in integration proposals from the point of view of purpose, methodology, architecture, and underlying technology, highlighting the common elements that may serve as a starting point for full integration, and we propose an integration of the two technologies.
Developing a PhD Research Topic for Your Research – PhD Assistance UK
According to a UGC report, approximately 77,000 scholars enrol in PhD research programs in India every year, but only about one third of them successfully complete the doctorate (Marg, 2015). This clearly indicates the difficulty and the standard of evaluation of the program, so it is necessary that scholars thoroughly analyze and select a good PhD research topic before plunging into the research work. Many methodologies and techniques have been introduced over time for effective selection and decision making in various fields. Data-Driven Decision Making (DDDM), also referred to as Data-Based Decision Making, is a modern method that has been found effective and efficient for strategic and systematic development and decision making. This article therefore summarizes the framework, effectiveness, and benefits of DDDM in the selection of research topics for PhD scholars.
Hadoop and Big Data Readiness in Africa: A Case of Tanzania (ijsrd.com)
Big data has been referred to as a forefront pillar of any modern analytics application. Together with Hadoop, an open-source software framework, it has emerged as a solution for processing the massive amounts of data being generated, both structured and unstructured. With governments and private institutions around the world pursuing strategies and initiatives for the deployment and support of big data analytics and Hadoop, Africa cannot be left isolated. In this paper, we assess the readiness of Africa, with a case study of Tanzania, to harness the power of big data analytics and Hadoop as tools for drawing insights that might help organizations make crucial decisions. We collected data through a questionnaire-based survey. The results reveal that the majority of companies are either not aware of the technologies or still at an infancy stage in using big data analytics and Hadoop; most are in either the awakening or the advancing stage of the big data continuum. This is attributed to challenges such as a lack of IT skills to manage big data projects, the cost of technology infrastructure, deciding which data are relevant, a lack of skills to analyze the data, a lack of business support, and deciding which technology is best compared to others. It was also found that most of the companies' IT officers are not familiar with the concepts and techniques of big data analytics and Hadoop.
A case study of using the hybrid model of scrum and six sigma in software development (IJECE)
The world of software engineering is constantly discovering new ways to increase team performance in the production of software products while, at the same time, further improving customer satisfaction. With the advent of agile methodologies in software development, these objectives have been taken more seriously by software teams and companies. By their very nature, agile methodologies have the potential to be integrated with other methodologies or with specific managerial approaches defined in line with agility objectives. One such case is Six Sigma, which organizations use to focus on organizational change and process improvement. The present study presents a hybrid software development approach combining Scrum, the most common agile methodology, with Six Sigma. The approach was applied in a case study, and the results were analyzed. The evaluation showed that this hybrid method can lead to increased team performance and customer satisfaction. However, alongside these two achievements, an increase in the amount of rework, the number of defects discovered, and the duration of project implementation was also observed. These effects are in line with the main objectives of Scrum and Six Sigma and are justifiable and acceptable given the achievement of those objectives.
IMPORTANCE OF PROCESS MINING FOR BIG DATA REQUIREMENTS ENGINEERING (IJCSIT)
Requirements engineering (RE), as part of the project development life cycle, has increasingly been recognized as key to ensuring on-time, on-budget, and goal-based delivery of software projects. RE for big data projects is even more crucial because of the rapid growth of big data applications over the past few years. Data processing, as part of big data RE, is essential to driving the big data RE process successfully. A business can be overwhelmed by data and underwhelmed by information, so data processing is critical in big data projects. Traditional data processing techniques fail to uncover useful information because of the main characteristics of big data: high volume, velocity, and variety. Data processing can benefit from process mining, which in turn helps to increase the productivity of big data projects. In this paper, the capability of process mining in big data RE to discover valuable insights and business value from event logs and system processes is highlighted. In addition, the proposed big data requirements engineering framework, named REBD, helps software requirements engineers to overcome many challenges of big data RE.
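The core mechanism the abstract attributes to process mining, extracting insight from event logs, can be sketched as a directly-follows count: how often one activity immediately follows another across recorded cases. This is a minimal illustration; the log format (case id mapped to an ordered activity list) and all activity names are assumptions, and the REBD framework itself is not reproduced here.

```python
from collections import Counter

# Illustrative event log: each case is an ordered trace of activities.
event_log = {
    "case1": ["receive", "validate", "approve", "archive"],
    "case2": ["receive", "validate", "reject"],
    "case3": ["receive", "validate", "approve", "archive"],
}

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b
    across all traces -- the basic building block of process discovery."""
    df = Counter()
    for trace in log.values():
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

df = directly_follows(event_log)
print(df[("receive", "validate")])  # 3
print(df[("validate", "approve")])  # 2
```

From these counts a discovery algorithm can reconstruct the dominant process paths and flag rare deviations, which is the kind of insight the paper proposes feeding back into big data RE.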
An overview of information extraction techniques for legal document analysis (IJECE)
In the Indian legal system, courts publish their legal proceedings every month for the future reference of legal experts and the general public. Extensive manual labor and time are required to analyze and process the information stored in these lengthy, complex legal documents. Automatic legal document processing overcomes the drawbacks of manual processing and can be very helpful to the common person in better understanding the legal domain. In this paper, we explore recent advances in the field of legal text processing and provide a comparative analysis of the approaches used. We divide the approaches into three classes: NLP-based, deep learning-based, and KBP-based approaches. We place special emphasis on the KBP approach, as we strongly believe that it can handle the complexities of the legal domain well. Finally, we discuss some possible future research directions for legal document analysis and processing.
Data warehousing is a technique for collecting and managing data from multiple internal and external sources to provide meaningful business insights. Data warehouses are designed to give a long-range view of data over time and to provide a decision support system environment. They are a vital component of business intelligence, which is designed for data analysis and reporting, and are used to provide greater insight into the performance of a business. This paper provides a brief introduction to data warehousing.
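The decision-support idea above is typically realized as a fact table joined to dimension tables and rolled up for analysis. The following is a minimal sketch using an in-memory SQLite database; the star-schema layout, table names, and sample figures are illustrative assumptions, not taken from the paper.

```python
import sqlite3

# Tiny star schema: one dimension (stores) and one fact table (sales).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_store (store_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_sales (store_id INTEGER, amount REAL);
    INSERT INTO dim_store VALUES (1, 'North'), (2, 'South');
    INSERT INTO fact_sales VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# Roll sales up by region -- the kind of aggregated, long-range view a
# warehouse serves to business-intelligence tools.
rows = con.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f JOIN dim_store d ON f.store_id = d.store_id
    GROUP BY d.region ORDER BY d.region
""").fetchall()
print(rows)  # [('North', 150.0), ('South', 75.0)]
```

A production warehouse adds time dimensions, slowly changing dimensions, and ETL pipelines feeding the fact tables, but the query pattern is the same.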
Big Data Analytics in Higher Education: A Review (The IJES)
Big Data provides an opportunity for educational institutions to use their information technology resources strategically to improve educational quality, guide students to higher rates of completion, and improve student persistence and outcomes. This paper explores the attributes of big data that are relevant to educational institutions, investigates the factors influencing the adoption of big data and analytics in learning institutions, and seeks to establish the limiting factors hindering the use of big data in institutions of higher learning. The study was conducted through desk research; the sources of literature reviewed included scientific research articles and journals, conference reports, and theses. Online journals were also examined, with the search broadened through Google Scholar using the keywords “big data”, “developing countries”, “education systems”, and “clustering”. The paper concludes that Big Data is important because it offers universities the opportunity to use their information technology resources strategically to improve educational quality and guide students, and that colleges and universities see value in analytics. It therefore recommends that these institutions invest in analytics programs and in people with the relevant data science skills. Big Data can afford educational institutions the opportunity to shape a modern, dynamic education system from which every individual student can gain the maximum benefit, and can contribute greatly to improving the quality of education.
Big Data must be processed with advanced collection and analysis tools, based on predetermined algorithms, in order to obtain relevant information. Algorithms must also take into account aspects invisible to direct perception. The Big Data problem is multi-layered. A distributed parallel architecture distributes data across multiple servers (parallel execution environments), thus dramatically improving data processing speeds. Big Data provides an infrastructure that allows for highlighting the uncertainties, performance, and availability of components.
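The distributed-parallel idea above (partition the data, process each partition independently, merge the partial results) can be sketched in a few lines. This is only an illustration: the worker count, the stride-based chunking, and the `sum` workload are assumptions, and real deployments shard data across machines rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Work performed independently on each partition ("server").
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Partition the data across workers, then merge the partial results.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))  # 499500
```

The speedup comes from each partition being processed at the same time; frameworks such as Hadoop and Spark apply the same split/process/merge pattern across clusters of servers.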
DOI: 10.13140/RG.2.2.12784.00004
From IT service management to IT service governance: An ontological approach (IJECE)
Some companies have achieved better performance as a result of their IT investments, while others have not, so organizations are interested in calculating the value added by their IT. A wide range of literature agrees that the best practices used by organizations promote continuous improvement in service delivery. Nevertheless, overuse of these practices can have undesirable effects and lead to unquantified investments. This paper proposes a practical tool formally developed according to the design science research (DSR) approach. It addresses a domain relevant to both practitioners and academics by providing an IT service governance (ITSG) domain model ontology, concerned with maximizing the clarity and veracity of the concepts within it. The results revealed that the proposed ontology resolved key barriers to ITSG process adoption in organizations, and that combining COBIT and ITIL practices would help organizations better manage their IT services and achieve better business-IT alignment.
Higher education institutions nowadays operate in an increasingly complex and competitive environment, and the application of innovation is a must for sustaining their competitive advantage. Institution leaders are using data management and analytics to question the status quo and develop effective solutions. Achieving these insights requires not a single report from a single system, but rather the ability to access, share, and explore institution-wide data that can be transformed into meaningful insights at every level of the institution. Consequently, institutions face problems in providing the information technology support necessary for fulfilling excellence in performance. More specifically, the best practices of big data management and analytics need to be considered within higher education institutions. The study therefore aimed at investigating big data and analytics in terms of: (1) definition; (2) its most important principles; (3) models; and (4) the benefits of its use in fulfilling performance excellence in higher education institutions. This involves shedding light on big data and analytics models and the possibility of their use in higher education institutions, and exploring the effect of using big data and analytics on achieving performance excellence. To reach these objectives, the researcher employed a qualitative research methodology for collecting and analyzing data. The study's most important conclusion is that there is a significant relationship between big data and analytics and performance excellence, as big data management and analytics mainly aim at achieving tasks quickly with the least effort and cost. These positive results support the use of big data and analytics in institutions, improving knowledge in this field, and providing a practical guide adaptable to the institutional structure. The paper also identifies the role of big data and analytics in institutions of higher education worldwide and outlines the implementation challenges and opportunities in the education industry.
EMC Isilon: A Scalable Storage Platform for Big Data (EMC)
This white paper provides insights into EMC Isilon's shared storage approach, covering a wide range of desired characteristics including increased efficiency and reduced total cost.
CHALLENGES FOR MANAGING COMPLEX APPLICATION PORTFOLIOS: A CASE STUDY OF SOUTH AUSTRALIA POLICE (IJMIT Journal)
This research explores the challenges in management and the root causes of complex application portfolios in the public sector. It examines Australian public sector organisations through the case of South Australia Police (SAPOL), one of the most significant and mission-critical state government agencies. The exploratory research surfaces some of the key challenges, using interviews as the primary data collection source, along with archival records, documentation, and direct observation as secondary sources. This paper reports on the information analysed, surfacing eight key issues. It highlights that the organic growth of the technology portfolios, combined with mission criticality, has resulted in many quick fixes that are not aligned with long-term enterprise architectural stability. The integration of mismatched technologies, along with pressure from the business to always keep the lights on, does not provide the opportunity for the portfolios to be rationalised on an ongoing basis. Other issues and areas for further study are explored at the end.
LEVERAGING CLOUD BASED BIG DATA ANALYTICS IN KNOWLEDGE MANAGEMENT FOR ENHANCED DECISION MAKING (IJDPS Journal)
In the recent past, big data opportunities have gained much momentum for enhancing knowledge management in organizations. However, because of its properties of high volume, variety, and velocity, big data can no longer be effectively stored and analyzed with traditional data management techniques to generate value for knowledge development. Hence, new technologies and architectures are required to store and analyze this big data through advanced data analytics and, in turn, generate vital real-time knowledge for effective decision making by organizations. More specifically, a single infrastructure is needed that provides the common functionality of knowledge management and is flexible enough to handle different types of big data and big data analysis tasks. Cloud computing infrastructures capable of storing and processing large volumes of data can be used for efficient big data processing because they minimize the initial cost of the large-scale computing infrastructure demanded by big data analytics. This paper aims to explore the impact of big data analytics on knowledge management and proposes a cloud-based conceptual framework that can analyze big data in real time to facilitate enhanced decision making intended for competitive advantage. Thus, this framework will pave the way for organizations to explore the relationship between big data analytics and knowledge management, which are mostly deemed two distinct entities.
A case study of using the hybrid model of scrum and six sigma in software dev...IJECEIAES
The world of software engineering is constantly discovering new ways that lead to an increase in team performance in the production of software products and, at the same time, brings the customer's further satisfaction. With the advent of agile methodologies in software development, these objectives have been considered more seriously by software teams and companies. Due to their very nature, agile methodologies have the potential to be integrated with other methodologies or specific managerial approaches defined in line with agility objectives. One of the cases is Six Sigma, which is used in organizations by focusing on organizational change and process improvement. In the present study, attempts were made to present the hybrid software development approach, including Scrum, as the most common agile and Six Sigma methodology. This approach was practically used in a case study, and the obtained results were analyzed. The results of this evaluation showed that this hybrid method could lead to the increased team performance and customer satisfaction. However, besides these two achievements, an increase in the number of re-works, number of defects discovered, and the duration of the project implementation were also observed. These cases are in line with the main objectives of Scrum and Six Sigma and are justifiable and acceptable due to achieving those objectives.
IMPORTANCE OF PROCESS MINING FOR BIG DATA REQUIREMENTS ENGINEERINGijcsit
Requirements engineering (RE), as a part of the project development life cycle, has increasingly been recognized as the key to ensure on-time, on-budget, and goal-based delivery of software projects. RE of big data projects is even more crucial because of the rapid growth of big data applications over the past few years. Data processing, being a part of big data RE, is an essential job in driving big data RE process successfully. Business can be overwhelmed by data and underwhelmed by the information so, data processing is very critical in big data projects. Employing traditional data processing techniques lacks the invention of useful information because of the main characteristics of big data, including high volume, velocity, and variety. Data processing can be benefited by process mining, and in turn, helps to increase the productivity of the big data projects. In this paper, the capability of process mining in big data RE to discover valuable insights and business values from event logs and processes of the systems has been highlighted. Also, the proposed big data requirements engineering framework, named REBD, helps software requirements engineers to eradicate many challenges of big data RE.
Requirements engineering (RE), as a part of the project development life cycle, has increasingly been recognized as the key to ensure on-time, on-budget, and goal-based delivery of software projects. RE of big data projects is even more crucial because of the rapid growth of big data applications over the past few years. Data processing, being a part of big data RE, is an essential job in driving big data RE process successfully. Business can be overwhelmed by data and underwhelmed by the information so, data processing is very critical in big data projects. Employing traditional data processing techniques lacks the invention of useful information because of the main characteristics of big data, including high volume, velocity, and variety. Data processing can be benefited by process mining, and in turn, helps to increase the productivity of the big data projects. In this paper, the capability of process mining in big data RE to discover valuable insights and business values from event logs and processes of the systems has been highlighted. Also, the proposed big data requirements engineering framework, named REBD, helps software requirements engineers to eradicate many challenges of big data RE.
An overview of information extraction techniques for legal document analysis ...IJECEIAES
In an Indian law system, different courts publish their legal proceedings every month for future reference of legal experts and common people. Extensive manual labor and time are required to analyze and process the information stored in these lengthy complex legal documents. Automatic legal document processing is the solution to overcome drawbacks of manual processing and will be very helpful to the common man for a better understanding of a legal domain. In this paper, we are exploring the recent advances in the field of legal text processing and provide a comparative analysis of approaches used for it. In this work, we have divided the approaches into three classes NLP based, deep learning-based and, KBP based approaches. We have put special emphasis on the KBP approach as we strongly believe that this approach can handle the complexities of the legal domain well. We finally discuss some of the possible future research directions for legal document analysis and processing.
Data warehousing is a technique for collecting and managing data from multiple internal and
external sources to provide meaningful business insights. Data warehouses are designed to give a long-range
view of data over time and provide a decision support system environment. They are a vital component of
business intelligence, which is designed for data analysis and reporting. They are used to provide greater
insight into the performance of a business. This paper provides a brief introduction to data warehousing.
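The decision-support role described above typically rests on a star schema: fact rows keyed to descriptive dimensions and rolled up for reporting. A minimal sketch, with a hypothetical product dimension and sales fact table:

```python
from collections import defaultdict

# Hypothetical star schema: a sales fact table keyed to a product dimension.
product_dim = {1: {"name": "laptop", "category": "electronics"},
               2: {"name": "desk",   "category": "furniture"}}
sales_fact = [  # (product_id, amount)
    (1, 1200.0), (1, 950.0), (2, 300.0),
]

def revenue_by_category(facts, dim):
    """Roll the fact table up along the product dimension, the kind of
    long-range aggregate a warehouse serves to decision makers."""
    totals = defaultdict(float)
    for product_id, amount in facts:
        totals[dim[product_id]["category"]] += amount
    return dict(totals)

print(revenue_by_category(sales_fact, product_dim))
# {'electronics': 2150.0, 'furniture': 300.0}
```

In a real warehouse the same roll-up would be a SQL GROUP BY over the joined fact and dimension tables; the Python version only shows the shape of the computation.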
Big Data Analytics in Higher Education: A Review (theijes)
Big Data provides an opportunity for educational institutions to use their information technology resources strategically to improve educational quality, guide students to higher rates of completion, and improve student persistence and outcomes. This paper explores the attributes of big data that are relevant to educational institutions, investigates the factors influencing the adoption of big data and analytics in learning institutions, and seeks to establish the limiting factors hindering the use of big data in institutions of higher learning. The study was conducted through desk research; the sources of literature reviewed included scientific research articles and journals, conference reports, and theses. Online journals were also examined, with the search broadened through Google Scholar using the keywords “big data”, “developing countries”, “education systems”, and “clustering”. The paper concludes that Big Data is important because it offers universities opportunities to use their information technology resources strategically to improve educational quality and guide students, and that colleges and universities see value in analytics; it therefore recommends that these institutions invest in analytics programs and in people with relevant data science skills. Big Data can afford educational institutions the opportunity to shape a modern and dynamic education system from which every individual student can derive maximum benefit, and can contribute greatly towards improving the quality of education.
Big Data must be processed with advanced collection and analysis tools, based on predetermined algorithms, in order to obtain relevant information. Algorithms must also take into account aspects invisible to direct perception. The Big Data problem is multi-layered. A distributed parallel architecture distributes data across multiple servers (parallel execution environments), dramatically improving data processing speeds. Big Data provides an infrastructure that allows for highlighting the uncertainty, performance, and availability of components.
DOI: 10.13140/RG.2.2.12784.00004
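The partition/process/combine pattern behind such distributed parallel architectures can be illustrated on a single machine; real deployments spread the partitions across servers, and the thread pool below is only a stand-in for that:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Per-partition work: each worker handles its own slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, partitions=4):
    """Split the data into partitions, process them concurrently, and
    combine the partial results; distributed engines apply the same
    partition/process/combine pattern across many servers."""
    size = max(1, len(data) // partitions)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_of_squares(list(range(10))))  # 285
```

The speedup claimed for distributed architectures comes from running the per-partition step on separate machines, so the slowest partition, not the total data size, bounds the elapsed time.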
From IT service management to IT service governance: An ontological approach ...IJECEIAES
Some companies have achieved better performance as a result of their IT investments, while others have not, and organizations are therefore interested in calculating the value added by their IT. A wide range of literature agrees that the best practices used by organizations promote continuous improvement in service delivery. Nevertheless, overuse of these practices can have undesirable effects and lead to unquantified investments. This paper proposes a practical tool formally developed according to the DSR design science approach; it addresses a domain relevant to both practitioners and academics by providing an IT service governance (ITSG) domain model ontology, concerned with maximizing the clarity and veracity of the concepts within it. The results revealed that the proposed ontology resolved key barriers to ITSG process adoption in organizations, and that combining COBIT and ITIL practices would help organizations better manage their IT services and achieve better business-IT alignment.
Higher education institutions nowadays are operating in an increasingly complex and
competitive environment. The application of innovation is a must for sustaining their competitive advantage.
Institution leaders are using data management and analytics to question the status quo and develop effective
solutions. Achieving these insights and information requires not a single report from a single system, but
rather the ability to access, share, and explore institution-wide data that can be transformed into meaningful
insights at every level of the institution. Consequently, institutions are facing problems in providing necessary
information technology support for fulfilling excellence in performance. More specifically, the best practices
of big data management and analytics need to be considered within higher education institutions. Therefore,
the study aimed at investigating big data and analytics, in terms of: (1) definition; (2) its most important
principles; (3) models; and (4) benefits of its use to fulfill performance excellence in higher education
institutions. This involves shedding light on big data and analytics models and the possibility of its use in
higher education institutions, and exploring the effect of using big data and analytics in achieving performance
excellence. To reach these objectives, the researcher employed a qualitative research methodology for
collecting and analyzing data. The study's most important conclusion is that there is a significant
relationship between big data and analytics and performance excellence, as big data management and
analytics mainly aim at achieving tasks quickly with the least effort and cost. These positive results support
the use of big data and analytics in institutions, improving knowledge in this field, and providing a practical
guide adaptable to the institution's structure. This paper also identifies the role of big data and analytics in
institutions of higher education worldwide and outlines the implementation challenges and opportunities in the
education industry.
EMC Isilon: A Scalable Storage Platform for Big Data (EMC)
This white paper provides insights into EMC Isilon's shared storage approach, covering a wide range of desired characteristics including increased efficiency and reduced total cost.
CHALLENGES FOR MANAGING COMPLEX APPLICATION PORTFOLIOS: A CASE STUDY OF SOUTH...IJMIT JOURNAL
This research explores the challenges in management and the root cause for complex application portfolios
in the public sector. It examines Australian public sector organisations through the case of South Australia Police
(SAPOL), one of the state's significant and mission-critical government agencies. The
exploratory research surfaces some of the key challenges using interviews as the primary data collection
source, along with archival records, documentation, and direct observation as secondary sources. This
paper reports on the information analysed, surfacing eight key issues. It highlights that the organic growth
of the technology portfolios, together with their mission criticality, has resulted in many quick fixes that are not aligned
with long-term enterprise architectural stability. Integration of different mismatched technologies, along
with the pressure from the business to always keep the lights on, does not provide the opportunity for the
portfolios to be rationalised in an ongoing way. Other issues and the areas for further study are explored
at the end.
LEVERAGING CLOUD BASED BIG DATA ANALYTICS IN KNOWLEDGE MANAGEMENT FOR ENHANCE...ijdpsjournal
In the recent past, big data opportunities have gained much momentum to enhance knowledge management in
organizations. However, big data, due to properties such as high volume, variety, and velocity, can
no longer be effectively stored and analyzed with traditional data management techniques to generate
values for knowledge development. Hence, new technologies and architectures are required to store and
analyze this big data through advanced data analytics and in turn generate vital real-time knowledge for
effective decision making by organizations. More specifically, it is necessary to have a single infrastructure
that provides the common functionality of knowledge management and is flexible enough to handle different
types of big data and big data analysis tasks. Cloud computing infrastructures capable of storing and
processing large volume of data can be used for efficient big data processing because it minimizes the
initial cost for the large-scale computing infrastructure demanded by big data analytics. This paper aims to
explore the impact of big data analytics on knowledge management and proposes a cloud-based conceptual
framework that can analyze big data in real time to facilitate enhanced decision making intended for
competitive advantage. Thus, this framework will pave the way for organizations to explore the relationship
between big data analytics and knowledge management which are mostly deemed as two distinct entities.
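A real-time analytics layer of the kind the framework proposes often reduces to windowed aggregation over a stream. A minimal sketch; the class and window size here are illustrative, not part of the proposed framework:

```python
from collections import deque

class SlidingWindowMean:
    """Keep a fixed-size window over a stream and expose its running mean,
    a minimal stand-in for a real-time analytics operator."""
    def __init__(self, size):
        self.window = deque(maxlen=size)
        self.total = 0.0

    def push(self, value):
        # Evict the oldest value from the running total once the window is full.
        if len(self.window) == self.window.maxlen:
            self.total -= self.window[0]
        self.window.append(value)
        self.total += value
        return self.total / len(self.window)

w = SlidingWindowMean(3)
for v in [10, 20, 30, 40]:
    print(w.push(v))
# 10.0, 15.0, 20.0, then 30.0 once 10 falls out of the window
```

Cloud streaming engines provide the same windowing as a managed primitive; the point of a cloud deployment is that the window state and the incoming stream scale without upfront infrastructure cost.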
Artificial intelligence has been a buzzword that is impacting every industry in the world. With the rise of
such advanced technology, there will always be a question regarding its impact on our social life,
environment, and economy, and thus on all efforts towards sustainable development. In the
information era, enormous amounts of data have become available to decision makers. Big data
refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to
handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be
studied and provided in order to handle these datasets and extract value and knowledge from them for different
industries and business operations. Numerous use cases have shown that AI can ensure an effective supply
of information to citizens, users, and customers in times of crisis. This paper aims to analyse some of the
different methods and scenarios that can be applied to AI and big data, as well as the opportunities
provided by their application in various business operations and crisis management domains.
Encroachment in Data Processing using Big Data Technology (MangaiK4)
Abstract—The nature of big data is growing, and information is present all around us in different forms. Big data plays a crucial role: it provides business value for firms and benefits their sectors by accumulating knowledge. This growth of big data poses a challenge for data processing techniques because it involves a variety of data in enormous volumes. Tools built on classic data mining algorithms provide efficient data processing mechanisms but do not handle heterogeneous data, so emerging computing tools such as Hadoop MapReduce, Pig, SPARK, Cloudera Impala and Enterprise RTQ, IBM Netezza, and Apache Giraph, together with storage tools such as HBase, Hive, Neo4j, and Apache Cassandra, are useful for classifying, clustering, and discovering knowledge. This study focuses on a comparative study of different data processing tools in big data analytics, and their benefits are tabulated.
A well-defined software development process is essential for any computing project, and so is the evaluation of each individual project. These practice guidelines are for those who manage big-data and big-data analytics projects, or who are responsible for the use of data analytics solutions. They are also intended for business leaders and program leaders who are responsible for developing agency capability in the area of big data and big data analytics.
For those agencies currently not using big data or big data analytics, this document may assist strategic planners, business teams and data analysts to consider the value of big data to the current and future programs.
This document is also of relevance to those in industry, research and academia who can work as partners with government on big data analytics projects.
Technical APS personnel who manage big data and/or do big data analytics are invited to join the Data Analytics Centre of Excellence Community of Practice to share information on technical aspects of big data and big data analytics, including achieving best practice with modeling and related requirements. To join the community, send an email to the Data Analytics Centre of Excellence.
THE CRITICAL SUCCESS FACTORS FOR BIG DATA ADOPTION IN GOVERNMENT (IAEME Publication)
Over the past decade, governments around the world have been trying to take
advantage of Big Data technology to improve public services for citizens. The
adoption of Big Data has increased in most countries, but at the same time, the rate of
successful adoption and management varies from one country to another. A systematic
literature review (SLR) was carried out to identify the critical success factors
(CSF) for the adoption of big data in government over the last 10 years. It presents
the general trends across 183 journals and numerous works related to
government operations, the provision of public services, citizen participation, decision
making and policy, and governance reform. We selected 90 journals and found 11
classification factors that relate to the success of Big Data adoption in
government.
Due to the arrival of new technologies, devices, and communication means, the amount of data produced by mankind is growing rapidly every year. This gives rise to the era of big data. The term big data brings new challenges in inputting, processing, and outputting data. The paper focuses on the limitations of the traditional approach to managing data and on the components that are useful in handling big data. One approach used in processing big data is the Hadoop framework; the paper presents the major components of the framework and the working process within it.
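The working process of Hadoop MapReduce, map, shuffle, and reduce, can be imitated in plain Python on a toy word count; the real framework runs each phase distributed across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Emit one (key, value) pair per word, as a Hadoop mapper would.
    return [(word, 1) for word in line.split()]

def shuffle_phase(pairs):
    # Group values by key, mirroring the shuffle between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's grouped values into a final count.
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big value", "data lake"]
counts = reduce_phase(shuffle_phase(chain.from_iterable(map(map_phase, lines))))
print(counts)  # {'big': 2, 'data': 2, 'value': 1, 'lake': 1}
```

In Hadoop the input lines come from HDFS blocks and each phase is a separate distributed stage, but the dataflow is exactly this pipeline.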
Application of Big Data in Enterprise Management (ijtsrd)
With the continuous development of information technology, enterprises have gradually entered the era of big data. How to analyze complex data and find the useful information that promotes the development of enterprises is becoming more and more important in the modernization of science and technology. This paper expounds the importance of, and existing problems with, big data application in enterprise management, and briefly analyzes and discusses its application in enterprises and its future development direction and trend. With the rapid development of the Internet of things, cloud computing, and other information technology, the world has ushered in the era of big data. It has become a trend to promote the deep integration of the Internet, big data, artificial intelligence, and the real economy. Due to the rapid development of the economy, the amount of data generated in the process of consumption and production is very large. Under the traditional management mode, enterprises cannot meet the needs of current social and economic development. However, the application of big data technology in enterprises can achieve better analysis of and research on this data, so as to provide a reliable data basis for enterprises to make various business management decisions. Feng GUO | Hui-lin QIN "Application of Big Data in Enterprise Management" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4, June 2021, URL: https://www.ijtsrd.com/papers/ijtsrd43622.pdf Paper URL: https://www.ijtsrd.com/management/business-economics/43622/application-of-big-data-in-enterprise-management/feng-guo
sustainability Case Report: Integrated Understanding of B....docx (deanmtaylor1545)
sustainability
Case Report
Integrated Understanding of Big Data, Big Data
Analysis, and Business Intelligence: A Case Study
of Logistics
Dong-Hui Jin and Hyun-Jung Kim *
Seoul Business School, aSSIST, 46 Ewhayeodae 2-gil, Seodaemun-gu, Seoul 03767, Korea; [email protected]
* Correspondence: [email protected]; Tel.: +82-70-7012-2722
Received: 5 October 2018; Accepted: 17 October 2018; Published: 19 October 2018
Abstract: Efficient decision making based on business intelligence (BI) is essential to ensure
competitiveness for sustainable growth. The rapid development of information and communication
technology has made collection and analysis of big data essential, resulting in a considerable increase
in academic studies on big data and big data analysis (BDA). However, many of these studies are
not linked to BI, as companies do not understand and utilize the concepts in an integrated way.
Therefore, the purpose of this study is twofold. First, we review the literature on BI, big data,
and BDA to show that they are not separate methods but an integrated decision support system.
Second, we explore how businesses use big data and BDA practically in conjunction with BI through
a case study of sorting and logistics processing of a typical courier enterprise. We focus on the
company’s cost efficiency as regards data collection, data analysis/simulation, and the results from
actual application. Our findings may enable companies to achieve management efficiency by utilizing
big data through efficient BI without investing in additional infrastructure. It could also give them
indirect experience, thereby reducing trial and error in order to maintain or increase competitiveness.
Keywords: business application; big data; big data analysis; business intelligence; logistics;
courier service
1. Introduction
A growing number of corporations depend on various and continuously evolving methods of
extracting valuable information through big data and big data analysis (BDA) for business intelligence
(BI) to make better decisions. The term “big data” refers to large amounts of information or data at
a certain point in time and within a particular scope. However, big data have a short lifecycle with
rapidly decreasing effective value, which makes it difficult for academic research to keep up with their
fast pace. In addition, big data have no limits regarding their type, form, or scale, and their scope is
too vast to narrow them down to a specific area of study.
Big data can also simply refer to a huge amount of complex data, but their type, characteristics,
scale, quality, and depth vary depending on the capabilities and purpose of each company.
The same holds for the reliability and usability of the results gathered from analysis of the data.
Previous studies generally agree on three main properties that define big data, namely, volume,
velocity, and variety, or the “3Vs” [1–4], which have recently been expanded to “5Vs” with the ad.
In this article, we have concentrated on a range of strategies, methodologies, and distinct fields of research, all of which are useful and relevant in the field of data mining technologies. As we all know, numerous multinational and major corporations operate in various parts of the world. Each business location may create significant amounts of data. Corporate decision-makers need access to all of these data sources in order to make strategic decisions.
Similar to The effect of technology-organization-environment on adoption decision of big data technology in Thailand (20)
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption has increased quickly, contributing to climate change,
which is evident in unusual flooding, droughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing climate change. The analysis's
findings discussed their relevance to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency, and offer recommendations on how
to increase the role of women in addressing climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
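The droop control mentioned in this abstract is conventionally expressed as a linear trade-off between active power and frequency, and between reactive power and voltage; this is the standard textbook form, not a formula taken from the paper:

```latex
f = f_0 - k_p \,(P - P_0), \qquad V = V_0 - k_q \,(Q - Q_0)
```

where \(f_0\) and \(V_0\) are the nominal frequency and voltage, \(P_0\) and \(Q_0\) are the power set-points, and \(k_p\), \(k_q\) are the droop gains chosen to fix the power sharing among the DGs in islanded operation.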
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
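The lagged-regressor idea behind a NARX model can be shown with its simplest linear special case, y[t] = a*y[t-1] + b*u[t-1], fitted by least squares; the coefficients and input signal below are synthetic stand-ins, not the article's battery data:

```python
def fit_linear_narx(u, y):
    """Fit y[t] = a*y[t-1] + b*u[t-1] by ordinary least squares,
    solving the 2x2 normal equations directly. A full NARX model adds
    nonlinear terms and more lags; this shows only the lagged-regressor idea."""
    syy = syu = suu = sy_t = su_t = 0.0
    for t in range(1, len(y)):
        y1, u1 = y[t - 1], u[t - 1]
        syy += y1 * y1
        syu += y1 * u1
        suu += u1 * u1
        sy_t += y1 * y[t]
        su_t += u1 * y[t]
    det = syy * suu - syu * syu
    a = (sy_t * suu - su_t * syu) / det
    b = (su_t * syy - sy_t * syu) / det
    return a, b

# Synthetic data generated from known coefficients (illustrative only).
u = [((t * 7) % 5) - 2 for t in range(50)]   # a varying input signal
y = [0.0]
for t in range(1, 50):
    y.append(0.8 * y[t - 1] + 0.5 * u[t - 1])

a, b = fit_linear_narx(u, y)
print(round(a, 3), round(b, 3))  # recovers 0.8 and 0.5 on this noiseless data
```

Identifying a real battery would replace the synthetic input with measured current and voltage, add nonlinear regressors (e.g. polynomial or neural terms), and validate on held-out operating scenarios, as the article describes.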
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' major innovations in electrical energy. They offer significant advantages over the traditional grid and have attracted considerable interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be detected rapidly and reliably to prevent any
negative consequences for the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). When all criteria are compared together, the multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, it can be seen from the total weights
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
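The AHP weighting applied here can be sketched as follows; the pairwise comparison matrix below is illustrative, not the authors' actual judgments:

```python
import numpy as np

# AHP priority-weight sketch. The pairwise comparison matrix below is
# illustrative (Saaty's 1-9 scale), not the paper's actual judgments.
A = np.array([
    [1.0, 3.0, 5.0],   # criterion 1 vs criteria 1, 2, 3
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Weights are the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                    # priority weights, summing to 1

# Consistency ratio CR = CI / RI, with random index RI = 0.58 for n = 3;
# CR < 0.1 is the usual acceptability threshold.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58
```

Tools such as Expert Choice automate exactly this eigenvector-and-consistency calculation over the full criteria hierarchy.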
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
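THD, the figure of merit quoted above, has a standard definition that can be computed from an FFT of exactly one current period. The waveform below is synthetic (a 10% third harmonic), not the paper's inverter current:

```python
import numpy as np

# Standard THD computation from an FFT of exactly one current period.
# The waveform is synthetic (10% third harmonic), not the paper's data.
fs, f0, n = 5000, 50, 100                  # sample rate, fundamental, samples
t = np.arange(n) / fs                      # exactly one 50 Hz period
i_t = np.sin(2*np.pi*f0*t) + 0.1*np.sin(2*np.pi*3*f0*t)

spec = np.abs(np.fft.rfft(i_t)) / (n / 2)  # amplitude spectrum
fund = spec[1]                             # bin 1 = 50 Hz for this window
thd = np.sqrt(np.sum(spec[2:] ** 2)) / fund
print(round(thd * 100, 1))                 # 10.0 (percent of fundamental)
```

Sampling an integer number of periods keeps the harmonics on exact FFT bins, so no windowing is needed in this toy case.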
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
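For reference, the textbook fixed-step P&O logic that such work modifies can be sketched as follows; the PV power curve is a toy stand-in, not a real panel model:

```python
# Textbook fixed-step perturb-and-observe (P&O) MPPT logic -- the
# baseline that modified variants improve on. The PV power curve below
# is a toy stand-in with its maximum power point at 17 V.
def po_step(v, p, v_prev, p_prev, dv=0.5):
    """Return the next reference voltage for one P&O iteration."""
    if p - p_prev >= 0:                # power rose: keep perturbing the same way
        return v + dv if v - v_prev >= 0 else v - dv
    return v - dv if v - v_prev >= 0 else v + dv   # power fell: reverse

def pv_power(v):
    return max(0.0, 100.0 - (v - 17.0) ** 2)       # toy P-V curve, MPP at 17 V

v_prev, v = 10.0, 10.5
p_prev = pv_power(v_prev)
for _ in range(60):                    # climb toward the MPP, then oscillate
    p = pv_power(v)
    v, v_prev, p_prev = po_step(v, p, v_prev, p_prev), v, p
```

The fixed step `dv` is the source of the steady-state oscillation around the MPP that adaptive step sizes and fuzzy controllers aim to suppress.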
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the joint positions track the desired trajectory, synchronizes
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed, the unknown components of the model are estimated online by a neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting the comprehensive debugging tools needed to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback offered by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples during experiments by adaptively compressing the signal prior to transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
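The idea of compressing slowly varying debug signals before transmission can be illustrated with a simple run-length scheme. This is a hypothetical stand-in, not the paper's adaptive compressor:

```python
# Run-length sketch of why slowly varying debug signals compress well.
# This is a hypothetical stand-in, not the paper's adaptive compressor.
def rle(samples):
    """Encode a sequence as [value, count] pairs."""
    out = []
    for s in samples:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return out

signal = [0]*40 + [1]*20 + [0]*40          # a bursty internal FPGA signal
encoded = rle(signal)                      # [[0, 40], [1, 20], [0, 40]]
ratio = len(signal) / (2 * len(encoded))   # each pair stores two numbers
```

The compression ratio reported in the paper is the analogous raw-samples-to-transmitted-data quotient, averaged over benchmark signals.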
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapidly and remotely monitoring the status parameters of solar cell systems (solar irradiance, temperature, and humidity) is critical to enhancing their efficiency. Hence, in the present article an improved smart internet of things (IoT) prototype based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three different regions in Egypt (Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1,000, 245-958, and 187-692 W/m², respectively, during the solar day. The accuracy and speed of the monitoring results obtained with the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. Moreover, the solar power radiation results for the three regions strongly favor Luxor and Cairo, rather than El-Beheira, as suitable locations for building solar cell system stations.
An efficient security framework for intrusion detection and prevention in int... (IJECEIAES)
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies and malicious intrusions open several security loopholes, leading to performance degradation and threats to data security in IoT operations. Therefore, IoT security systems must monitor and restrict unwanted events in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention use supervised learning, which requires a large amount of labeled training data; such labeled datasets are difficult to source in a large network like the IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm combines supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, achieving an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ... (IJECEIAES)
This research develops an incubator system that integrates the internet of things (IoT) and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. The data is then sent in real time to the IoT broker Eclipse Mosquitto using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory (LSTM) network method and displayed in a web application via an application programming interface (API) service. The experiment produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and the other attributes ranges from 0.23 to 0.48. Several experiments were then conducted to evaluate the model's predicted values on the test data. The best results were obtained using a two-layer LSTM configuration, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, with a root mean square error (RMSE) of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R² value was also evaluated for each attribute used as input, with values between 0.590 and 0.845.
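The evaluation metrics reported above follow their standard definitions, sketched here with toy values rather than the incubator data:

```python
import math

# Standard definitions of the reported metrics (R^2, RMSE, MAE).
# The numbers below are toy values, not the incubator dataset.
def r2(y, y_hat):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def rmse(y, y_hat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def mae(y, y_hat):
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

y_true = [36.5, 36.7, 36.9, 37.0, 36.8]   # e.g. incubator temperature, deg C
y_pred = [36.6, 36.7, 36.8, 37.1, 36.8]
```

R² compares residual error against the variance of the target, so a value of 0.934 means the model explains most of the variation the sensors record.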
A review on internet of things-based stingless bee's honey production with im... (IJECEIAES)
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees produce small amounts of honey with a unique flavor profile. A key problem identified is that many stingless bee colonies collapse due to weather, temperature, and environmental conditions. It is therefore critical to understand the relationship between stingless bee honey production and environmental conditions in order to improve yields. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous studies were analyzed and compared to identify research gaps, and a framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the three most efficient approaches are DenseNet201 convolutional networks at 99.81%, CNN at 98.67%, and densely connected convolutional networks with YOLO v3 at 97.7%. This study is significant in assisting researchers in developing a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero... (IJECEIAES)
The internet of things (IoT) is a revolutionary innovation in many aspects of our society, including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using the compact Contiki/Cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on the detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJ at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbers (IJECEIAES)
In real-world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers provide a powerful tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can differ from the non-membership value, in which case we can model the data using intuitionistic fuzzy numbers, as they provide flexibility by defining both a membership and a non-membership function. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which generalize the polygonal fuzzy numbers previously found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by intuitionistic polygonal fuzzy numbers with n edges. The method is given in a simple tableau formulation and then applied to numerical examples for clarity.
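As background for the notation (the standard Atanassov definition, not the paper's polygonal construction), an intuitionistic fuzzy set pairs a membership function with a non-membership function:

```latex
A = \{\, (x,\ \mu_A(x),\ \nu_A(x)) \mid x \in X \,\}, \qquad
0 \le \mu_A(x) + \nu_A(x) \le 1 ,
```

with the hesitation margin \(\pi_A(x) = 1 - \mu_A(x) - \nu_A(x)\) capturing the indeterminacy that an ordinary fuzzy set, where \(\nu_A = 1 - \mu_A\) forces \(\pi_A = 0\), cannot express.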
The performance of artificial intelligence in prostate magnetic resonance im... (IJECEIAES)
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networks (IJECEIAES)
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. How well time-domain signals work for detecting epileptic activity is examined using the intracranial Freiburg Hospital (FH) database, the scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) database, and the Temple University Hospital (TUH) EEG corpus. To test the viability of this approach, two types of experiments were carried out: binary classification (preictal, ictal, and postictal, each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection was 84.4% on the CHB-MIT database and 79.7% on the Freiburg database's time-domain signals, with the highest accuracy of 94.02% achieved on the TUH EEG database when classifying the interictal stage against the preictal stage.
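The core operation such a CNN applies to raw EEG is discrete convolution; a minimal one-dimensional example (illustrative, not the paper's network):

```python
import numpy as np

# Minimal 1-D convolution, the core CNN building block applied to raw
# EEG (ML convention: cross-correlation, no kernel flip). Illustrative
# only -- not the paper's network architecture.
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (no padding, stride 1)."""
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i+k] @ kernel for i in range(n - k + 1)])

eeg = np.array([0., 0., 1., 1., 0., 0., 1., 0.])  # toy EEG window
edge = np.array([1., -1.])                        # edge-detecting filter
out = conv1d(eeg, edge)                           # transitions light up
```

A trained CNN stacks many such learned filters with nonlinearities and pooling, so that seizure-stage patterns in the raw signal become separable.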
Analysis of driving style using self-organizing maps to analyze driver behavior (IJECEIAES)
Modern life is strongly associated with the use of cars, but increasing acceleration and maneuverability lead some drivers to adopt a dangerous driving style. Under these conditions, developing a method to track driver behavior is relevant. The article provides an overview of existing methods and models for assessing vehicle operation and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed comprising 20 descriptive features about the environment, the driver's behavior, and the operating characteristics of the car, collected via OBD-II. The generated dataset is fed to a Kohonen network, where clustering is performed by driving style and degree of danger. Assigning a driver's characteristics to a particular cluster makes it possible to move to the individual indicators of that driver and to account for individual driving characteristics. The method makes it possible to identify potentially dangerous driving styles and thereby help prevent accidents.
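A minimal Kohonen-map sketch of the clustering idea, using two synthetic features and four nodes rather than the article's 20 OBD-II descriptors:

```python
import numpy as np

# Minimal Kohonen self-organizing map sketch: a 1-D map of 4 nodes over
# two synthetic "driving style" features. Illustrative only -- the
# article clusters 20 OBD-II descriptors by style and degree of danger.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),   # calm driving samples
                  rng.normal(0.8, 0.05, (50, 2))])  # aggressive samples
weights = np.linspace(0.1, 0.9, 4)[:, None].repeat(2, axis=1)  # ordered init

lr = 0.5
for epoch in range(30):
    for x in rng.permutation(data):
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        for j in range(len(weights)):
            # Winner moves fully; immediate neighbors move a little.
            h = 1.0 if j == bmu else (0.3 if abs(j - bmu) == 1 else 0.0)
            weights[j] += lr * h * (x - weights[j])
    lr *= 0.9                                       # decay the learning rate

# Some node should now sit near each driving-style centroid.
centroids = np.array([[0.2, 0.2], [0.8, 0.8]])
dists = [float(np.min(np.linalg.norm(weights - c, axis=1))) for c in centroids]
```

After training, a new driver's feature vector is assigned to its best-matching node, which is how cluster membership (and hence danger level) is read off the map.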
Hyperspectral object classification using hybrid spectral-spatial fusion and ... (IJECEIAES)
Because of its spectral-spatial detail and its temporal resolution over large areas, hyperspectral imaging (HSI) has found widespread application in object classification. HSI is typically used to accurately determine an object's physical characteristics and to locate related objects with matching spectral fingerprints. As a result, HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, hyperspectral objects require a lot of time to classify; for this reason, spectral and spatial feature fusion is performed. The existing classification strategy leads to increased misclassification, and the existing feature fusion method is unable to preserve the semantic inherent features of objects. This study addresses these difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining intrinsic object qualities. Lastly, a soft-margin kernel is proposed for a multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
About
Indigenized remote control interface card suitable for MAFI-system CCR equipment. Compatible with IDM8000 CCR. Backplane-mounted serial and TCP/Ethernet communication module for CCR remote access. Provides IDM8000 CCR remote control over serial and TCP protocols.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy configuration using DIP switches.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx (R&R Consult)
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks to see how the industry has measured up in the interim on the adoption of digital transformation in the water industry.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical water scarcity and economic water scarcity.
Student information management system project report ii.pdf (Kamal Acharya)
Our project explains student management. It mainly covers the various actions related to student details, making it easy to add, edit, and delete student records. It also provides a less time-consuming process for viewing, adding, editing, and deleting students' marks.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf (fxintegritypublishin)
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Cosmetic shop management system project report.pdf (Kamal Acharya)
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it is hard to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The system includes various function programs to perform the tasks mentioned above.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration processes of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the list of cosmetic products that have reached the minimum quantity and keep track of the expiry date of each cosmetic product. It should also help the employees find the rack number in which a product is placed, making the whole process faster and more efficient.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Int J Elec & Comp Eng ISSN: 2088-8708
The effect of technology-organization-environment on… (Wanida Saetang)
As for the situation of big data in Thailand, BDT has played a major role in the government's drive to develop the country to a more advanced level, Thailand 4.0. The big data market was valued at 6 billion baht in 2017 and was expected to grow to 13.2 billion baht in 2020 [6]. This growth in market value shows that most organizations in Thailand invest in BDT, strongly reflecting the interest in and growth of big data in Thailand. Also, research by IPG Mediabrands that explored BDT in Thailand in 2018 found that 19% of business sectors had used BDT successfully, 40% were still in the experimental stage, and the remaining 40% were likely in the decision or planning stage [6]. This exploration indicates that organizations in Thailand are enthusiastic about adopting this technology.
However, there are various factors that can affect an organization's decision to adopt BDT.
Therefore, this paper focuses on 1) studying the factors influencing BDT adoption in organizations, and
2) developing a BDT adoption model that fits the context of Thailand. The results of this study deliver
knowledge that can be applied in exploration and preparation for the adoption of BDT in
organizations. They can also be applied to policy regulation in the government as well as in the private sector
to support BDT adoption. We hope this study may promote the acceptance of BDT, enhancing Thailand’s
competitiveness in the world.
This paper consists of six sections. The second section covers the theoretical background,
hypotheses, and the study model. The third section describes the research methodology and data collection.
In the fourth section, we present the research results and hypothesis testing. In section five, we conclude this
research. In the last section, we describe the implications for research and practice.
2. THEORETICAL BACKGROUND
In this section, the first part includes the definition and adoption of big data. The following part
provides the theoretical foundations of the technology-organization-environment (TOE) framework. Lastly,
hypotheses are proposed to test the conceptual model.
2.1. Big data
Big data is a field that deals with the storage and processing of large collections of data that are
often drawn from diverse data sources. The official definition: “Big data is high-volume, high-velocity, and
high-variety information assets that demand cost-effective, innovative forms of information processing for
enhanced insight and decision making” [7]. In this definition, big data has three main properties, the three V’s:
volume, velocity, and variety. Volume refers to the amount of data generated continuously from various
sources such as mobile devices and online applications. Velocity refers to the speed at which data are being
generated and the speed of data processing to extract insights and useful information. Variety refers to
the many types of data (structured, semi-structured, and unstructured) that come from heterogeneous sources [8].
Big data involves larger, more complex data sets that are difficult to process using traditional database
management tools and traditional data processing applications [9]. Big data projects around
the world have produced many solutions and new technologies [1]. In fact, the newly developed
technologies are completely different from traditional technologies in two domains.
Firstly, big data storage is completely different from relational database technology that works
together with traditional information systems. Data collected from external sources are often not in a format
or structure that can be directly processed, so “data wrangling” [10] is necessary. Generally, data wrangling
begins with the extraction of the raw data from data sources, followed by wrangling the data either using
algorithms or parsing into predefined data structures. Eventually, the prepared data must be stored again in
the database for future use. Due to the required storage for multiple copies, innovative storages were created
to provide cost-effective and highly scalable storage solutions. Examples of big data storage concepts include
clusters, file systems and distributed file systems, and NoSQL [10].
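The wrangling sequence described above (extract raw records, parse them into a predefined structure, then store the prepared data for future use) can be sketched in a few lines of Python. The field names and raw record format here are hypothetical, chosen only to illustrate the steps; a CSV file stands in for the target database.

```python
import csv
import json

# Hypothetical raw records pulled from an external source (extraction step).
raw_events = [
    '{"user": "a01", "ts": "2019-04-02T10:15:00", "amount": "12.50"}',
    '{"user": "a02", "ts": "2019-04-02T10:16:30", "amount": "7.00"}',
]

def wrangle(record: str) -> dict:
    """Parse one raw JSON string into a predefined structure with typed fields."""
    obj = json.loads(record)
    return {"user": obj["user"], "ts": obj["ts"], "amount": float(obj["amount"])}

# Wrangling step: parse every raw record into the predefined structure.
rows = [wrangle(r) for r in raw_events]

# Storage step: persist the prepared copy for future use
# (a CSV file standing in for the database).
with open("prepared_events.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["user", "ts", "amount"])
    writer.writeheader()
    writer.writerows(rows)
```

Because the prepared data are stored again alongside the raw data, multiple copies accumulate, which is exactly why the cost-effective, scalable storage concepts mentioned above were created.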
Secondly, big data processing is different from centralized processing in a traditional relational
database. Big data is often processed in parallel through distributed processing at the location where data are
stored. Moreover, big data has the velocity characteristic, so there are many ways for big data processing,
such as distributed data processing, parallel data processing, and cluster processing [10].
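The contrast with centralized processing can be illustrated with a minimal map-and-merge word count in Python's standard library. The partitions below are hypothetical stand-ins for data blocks stored on different nodes, and a thread pool stands in for distributed workers: each partition is processed where it "lives", and only the small partial results are merged.

```python
from collections import Counter
from multiprocessing.pool import ThreadPool

def count_words(partition):
    """Process one partition locally and return a partial (per-partition) count."""
    c = Counter()
    for line in partition:
        c.update(line.split())
    return c

# Hypothetical partitions, standing in for data blocks on different nodes.
partitions = [
    ["big data needs parallel processing", "data velocity is high"],
    ["cluster processing scales out", "big data is distributed"],
]

with ThreadPool(processes=2) as pool:
    partials = pool.map(count_words, partitions)  # parallel "map" step
total = sum(partials, Counter())                  # merge the partial results

print(total["data"])  # → 3
```

The same map-then-merge shape underlies distributed, parallel, and cluster processing frameworks; only the scheduling and data placement differ.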
Data processed by big data solutions can be used directly by enterprise applications or can be fed
into a data warehouse to enhance existing data. The results of big data processing can lead to many insights
and benefits such as improved decision-making, increased productivity, reduced costs, accurate predictions,
and fraud detection. Although organizations can exploit the benefits of BDT, they must also be aware of
the risks. In addition, investing in BDT requires constant change in the organization to fully realize
the benefits [11]. Therefore, business analysts and executives should be aware of this change.
Considering all these reasons, the following sections present the research hypotheses that have been used to
investigate the adoption of BDT in organizations.
Int J Elec & Comp Eng, Vol. 10, No. 6, December 2020: 6412-6422
2.2. Determinants of BDT adoption following the TOE perspective
The TOE framework was developed by Tornatzky and Fleischer [12]. They presented a framework
determining the factors of IT adoption that can be categorized into three dimensions, namely, technology,
organization, and environment. The theory also explains that there are specific factors that influence
the adoption of technological innovation in an organization. The technological dimension describes both
internal and external technologies that the organization can use [13]. This dimension focuses on how
the specific characteristics of the innovation being studied can influence its adoption decision [14]. The factors in this
dimension are often related to the characteristics of innovation in the Diffusion of Innovations theory (DOI),
which are relative advantage, compatibility, complexity, trialability, and observability.
The organizational dimension represents the intra-organizational environment and the organization's
resources that facilitate the adoption of innovations. The organizational dimension comprises several features of
an organization such as the number of employees, IT expertise, top management support and leaders’
attitude, hierarchical structure, and slack resources [13]. The environmental dimension represents the external
or inter-organizational environment in which the organization operates. The literature often defines this
dimension in terms of external pressures (e.g., competitive pressure), external support (e.g., government
support) and the regulatory environment [15].
Many empirical studies have confirmed the applications of TOE framework in the prediction of
adopting technologies such as CRM [14], e-SCM [16], IoT [17], cloud computing [18] or business
analytics [19]. In the specific case of BDT, TOE has been accepted to be a useful perspective to understand
its diffusion process. For example, Nguyen and Petersen studied in Norway [13] and Agrawal studied in two
big emerging economies of Asia-China and India [20]. Their conceptual models are also grounded on
the TOE framework. The results of these studies showed that relative advantage, complexity,
compatibility, and security in the technological dimension play significant roles, while top management support, IT expertise,
and organizational size in the organizational dimension were significant drivers for BDT adoption.
Additionally, external pressures (competitive pressure and trading partner pressure) and support (government
support and regulatory support) within the environmental dimension were confirmed as significant
antecedents prior to BDT adoption. However, the study results of each country may be applicable only to that
country. Therefore, this research focuses on studying factors affecting the decision to adopt BDT and to
obtain a model for technology adoption in Thailand.
2.3. Research model and hypotheses
This study aims to provide a research model by using the theoretical background of the TOE
framework based on a literature review. In the technological dimension, DOI theory is applied: attributes of DOI
such as relative advantage, compatibility, and complexity are used as antecedent factors. Moreover, to cover
security aspects, a security factor is added into consideration. In the organizational dimension, the perspective of top
managers (top management support) and the capability perspective of the organization (IT expertise) are
applied. In the environmental dimension, the perspective of external pressure (competitive pressure and trading partner
pressure) is applied. The research model is shown in Figure 1.
Figure 1. Research model
2.3.1. Relative advantage (RA)
“Relative advantage is the degree to which an innovation is perceived to be better than the idea it
supersedes” [21]. Relative advantage has been widely studied and found to be a factor that influences
the prediction of innovation adoption and diffusion. For example, Lin [16] investigated the determinants of
electronic supply chain management system (e-SCM) adoption in large Taiwanese firms and found that
relative advantage positively affected firm decisions to adopt e-SCM.
In the BDT context, relative advantage refers to the degree to which BDT is perceived as providing
greater organizational benefit in comparison with traditional databases or data warehouses. Based on
the advantage of the technology, BDT can help companies in several aspects such as a large volume data
collection, high velocity of data streaming and flexibility in the usage of a variety of data processing
including unstructured as well as semi-structured data. BDT can improve efficiency and reduce costs when
integrated with other backend systems. Therefore, the first hypothesis is proposed.
H1: Relative advantage has a positive effect on adoption decision of big data technology
2.3.2. Compatibility (CP)
“Compatibility is the degree to which an innovation is perceived as consistent with the existing
values, past experiences, and needs of potential adopters” [21]. From the above definition, compatibility can
be considered in two dimensions. The first dimension is normative or cognitive compatibility such as what
adopters feel or think about innovation. The other dimension is compatibility with operational or practical
aspects such as compatibility with what the adopters do [13]. This study emphasizes on the latter aspect.
The more users feel compatible with a new technology, the higher possibility users adopt it and
facilitate to replace existing technology. Implementing BDT combined with existing process innovations
could often lead to better results. At the same time, resisting change can be a key issue in implementing BDT.
However, many studies have confirmed that compatibility has a positive effect on the adoption rate.
Agrawal [20] investigated the determinants of big data analytics adoption in Asian emerging economies and
found that compatibility has a significantly positive effect on organizational decisions to assimilate big data
analytics. Thus, according to the literature above, this research proposes the second hypothesis.
H2: Compatibility has a positive effect on adoption decision of big data technology
2.3.3. Complexity (CPX)
“Complexity is the degree to which an innovation is perceived as difficult to understand and use”.
The rate of adoption and complexity of an innovation are negatively related to each other [21]. Researchers
have regularly found that complexity is an obstacle to accepting innovation. Complexity was found to have
a negative influence on firm decisions to adopt RFID [22]. Wong et al. [23] also found that complexity was
negatively related to the decision to adopt blockchain in operations and SCM.
In the context of BDT, complexity is perceived as the difficulty of understanding the ecosystem of
the big data solution and of integrating BDT with existing information systems and business processes. The BDT
ecosystem has various layers, which increase the complexity of understanding the connections
among software components and their functions [1]. BDT should be customized to suit the specific working
environment. Moreover, a big data solution requires at least 18-20 days of training to understand those
layers and to comprehend distributed systems in all dimensions. Organizations that perceive the complexity
of a technology tend to be cautious in adopting it. Thus, the following hypothesis is expressed.
H3: Complexity has a negative effect on adoption decision of big data technology
2.3.4. Security (SE)
“Security is defined as the extent to which a person believes that using a particular application is
risk-free” [24]. Security is one of the most frequently examined factors in studies of technology adoption in companies [11].
In the context of BDT, new solutions have been developed to ensure more privacy and security.
Security features of Hadoop consist of authentication, service level authorization, authentication for web
consoles and data confidentiality. Therefore, security is a factor that may be a positive effect on deciding to
implement big data in an organization. Thus, the following hypothesis is expressed.
H4: Security has a positive effect on adoption decision of big data technology
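As a concrete illustration of the Hadoop security features listed above, the excerpt below shows how authentication, service-level authorization, and data confidentiality are enabled in Hadoop's core-site.xml. This is a minimal sketch, not a complete secure-cluster configuration; a real deployment also requires Kerberos principals, keytabs, and related settings.

```xml
<!-- core-site.xml (excerpt) -->
<configuration>
  <!-- Kerberos-based authentication instead of the default "simple" mode -->
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <!-- Enable service-level authorization checks -->
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
  <!-- RPC data confidentiality: authenticate, check integrity, and encrypt -->
  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>
</configuration>
```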
2.3.5. IT expertise (ITX)
IT expertise refers to the knowledge and skill for effectively managing information
technology at the organizational level [25]. The literature presents IT expertise as a major factor in
the adoption of new technological innovation in an organization [26]. An organization with existing
knowledge of new technological innovations finds adoption easier. In a big data project, each member of
the team must have the expertise and knowledge to work across sections. Implementing big data requires
new IT skills such as data analysis, statistics, math, substantive expertise, presentation, and visualization.
We can expect that organizations with a higher level of IT expertise are in a better position to adopt BDT.
This study, therefore, hypothesizes that:
H5: IT Expertise has a positive effect on adoption decision of big data technology
2.3.6. Top management support (TMS)
Top management support is a critical factor in technology innovation adoption.
Top management support, which includes providing the technical and financial resources of an organization, is the degree to
which top executives understand the importance and benefits of innovation adoption. Most decisions
regularly need support from top management. The study of Oliveira et al. [18] indicated that top management
that understands the benefits of cloud computing would readily facilitate the necessary resources and also
influence the implementation of change in the organization.
In the context of BDT, the potential advantages of big data to an organization are broad. If top
management perceives that the use of BDT is likely to enhance the effectiveness and productivity of an
organization’s operations, it would influence the decision to adopt the innovation. In addition, big data
involves changes in the whole business processes and top management support is essential to guarantee
the allocation of sufficient technical, administrative and financial resources for big data projects [13].
Therefore, this study hypothesizes that:
H6: Top management support has a positive effect on adoption decision of big data technology
2.3.7. Competitive pressure (CPS)
Competitive pressure refers to the level of competitive intensity in the industrial environment where
the organization operates. Competitive pressure encourages organizations to find competitive advantages by
adopting new technologies. These new technologies may lead to an uncertainty in the industrial environment
and may increase the competition within the industry. Both cases are believed to increase the demand to
adopt and the rate of acceptance of innovation. Previous studies have shown that competitive pressure is an
important factor in adopting technology. For instance, the study of Lin and Lin [27] investigated
the determinants of e-business diffusion in large Taiwanese firms and found that competitive pressure
influenced internal integration and external diffusion of e-business positively. Lin [16] also investigated
the determinants of e-SCM adoption in large Taiwanese firms and found that competitive pressure was a key
factor of the likelihood of e-SCM adoption.
Recent studies that examined the determinants of organizational adoption of BDT in medium and
large companies in Norway also showed that competitive pressure had a positive relationship with
the decision to adopt BDT [13]. This suggests that organizations adopt innovation to avoid the risk of
competitive pressure. It is therefore hypothesized that organizations tend to adopt BDT when they believe
that non-adoption will lead to a competitive disadvantage. Therefore, this study hypothesizes that:
H7: Competitive pressure has a positive effect on adoption decision of big data technology
2.3.8. Trading partner pressure (TP)
The pressure imposed by trading partners consists of the potential power of a trading partner to
encourage an adoption decision and its strategy to exercise that potential power through recommendations,
promises and threats [28]. A recommendation from a powerful partner, e.g., one that accounts for a larger
portion of the firm's profits or sales, is expected to be more influential than a similar recommendation from
a less powerful partner. In the manufacturing industry in Taiwan,
it was found that trading partner pressure had a positive relationship with RFID adoption because
many powerful companies or organizations had recently exerted strong pressure on their suppliers to
adopt RFID [22].
In the context of BDT, many large purchasers and retailers have come to be aware of the potential
benefits of BDT to exploit new opportunities and draw meaningful insights from big data so that partners
may be asked to use BDT. Therefore, it is expected that organizations facing pressure from partners will use
BDT more than organizations that do not face pressure. Hence, this study hypothesizes that:
H8: Trading partner pressure has a positive effect on adoption decision of big data technology
3. RESEARCH METHOD
This study employed a quantitative approach based mainly on a survey conducted with convenience
sampling. The details of the methodology, e.g., the instruments and data collection method, are explained as follows.
3.1. Research instrument
This research uses a questionnaire as an instrument. The questionnaire consists of three sections.
The first section includes demographic characteristics, the second section asks about the respondents' organizations,
and the last part contains the questions for measuring the constructs in the research model. The questionnaire
items are extracted from various research studies and modified to suit the BDT context. All constructs are
assessed using 7-point Likert-type scales, ranging from 1 for strongly disagree to 7 for strongly agree as
anchors for each construct. The reliability of the questionnaire is ensured by a review of the literature and
observing composite reliability (CR) values. A pilot study session is conducted to evaluate the reliability of
the instrument with 27 samples. The CR of the pilot-test is above 0.7 for all constructs.
3.2. Data collection method
The target population in this study were practitioners who were familiar with BDT. Data were
collected through online survey from April to June 2019. A total of 331 responses were obtained. Removing
missing data resulted in 310 valid cases for analysis (approximately 93.65% of all received cases).
The participant profiles are presented in Table 1.
We obtained a total of 310 respondents from diverse sectors; the largest group belongs to
government agencies (20.65%), followed by technology and communication (17.42%) and education or
research (14.52%). Based on the number of employees, 34.52% of the respondents are from
organizations with more than 2,000 employees, 21.61% from organizations having 101 to 500
employees, and 13.87% from organizations with 50 or fewer employees.
Table 1. Participants’ profile
N=310 Percentage (%)
Gender Male 148 47.74
Female 162 52.26
Industry Group Agro & Food Industry 7 2.26
Consumer Products 13 4.19
Financial and banking 43 13.87
Industrials 8 2.58
Property & Construction 7 2.26
Resources 3 0.97
Services 23 7.42
Technology and Communication 54 17.42
Education /Research 45 14.52
Government agency 64 20.65
State Enterprise Agency 20 6.45
Other 23 7.42
Number of employees ≤50 43 13.87
50-100 31 10.00
101-500 67 21.61
501-1,000 39 12.58
1,001-2,000 23 7.42
≥2,000 107 34.52
4. RESULTS AND DISCUSSION
Structural Equation Modeling (SEM) is conducted for evaluating construct validity and testing
the hypotheses. We proceed with a two-step procedure. We first examine the measurement model by
assessing reliability and validity. We then investigate the strength and the direction of the relationships
between the endogenous and exogenous variables of the structure model.
4.1. The measurement model
AMOS is used to carry out a confirmation factor analysis (CFA) to evaluate the measurement
model. CFA is a confirmation test for the relationship between latent and observed variables. The criteria for
the overall model fit testing include the norm chi-square (CMIN/DF), goodness of fit index (GFI), adjusted
goodness of fit index (AGFI), the normed fit index (NFI), the comparative fit index (CFI), the Tucker Lewis
index (TLI), standardized root mean squared residuals (SRMR), and root mean square error of approximation
(RMSEA). The results and recommended values are summarized in Table 2.
Reliability is assessed by using composite reliability (CR). Each construct should be consistent in
what it is intended to measure. All constructs have CR above 0.70 (ranging from 0.830 to 0.964) and
demonstrate internal consistency see in Table 3. Convergent validity indicates the degree to which the items
supposed to measure a construct illustrate high correlation with each other. The measuring scale is assessed
by using two criteria: (1) factor loading of all item should be 0.5 or higher and (2) Average variance
extracted (AVE) value of 0.5 or above is recommended for sufficient convergence. Table 4 shows sufficient
degrees of convergent validities, as the values of factor loading for all of the items are higher than 0.5
(ranging from 0.689 to 0.974). All AVEs exceed the recommended values of 0.5 (ranging from 0.551 to
0.900). Discriminant validity examines that the items are truly distinct from other constructs. The measuring
scale, assessed based on the square root of AVE, is greater than the inter-construct correlations estimates.
As indicated in Table 4, the square root of AVE of each construct, shown in diagonal elements, is higher than
the inter-construct correlations.
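The CR and AVE criteria above follow the standard formulas CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = (Σλ²) / n over the standardized loadings λ. A short Python sketch using the relative-advantage loadings reported in Table 3 reproduces values close to the reported CR of 0.830 and AVE of 0.551; the small differences stem from rounding of the published loadings.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings) ** 2
    err = sum(1 - l ** 2 for l in loadings)
    return s / (s + err)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Relative advantage loadings reported in Table 3.
ra = [0.836, 0.689, 0.723, 0.758]
cr = composite_reliability(ra)        # ~0.84, near the reported 0.830
ave = average_variance_extracted(ra)  # ~0.57, near the reported 0.551
print(round(cr, 3), round(ave, 3))
```

The same functions applied to each construct's loadings also yield the square root of AVE used on the diagonal of Table 4 for the discriminant validity check.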
Table 2. Goodness-of-fit indices
Model fit indices Result value Recommended value
Normed chi-square (CMIN/DF) 1.651 CMIN/DF < 3.0
Goodness of fit index (GFI) 0.895 GFI > 0.90
Adjusted goodness of fit index (AGFI) 0.863 AGFI > 0.80
Normed fit index (NFI) 0.930 NFI > 0.90
Tucker-Lewis index (TLI) 0.964 TLI > 0.90
Comparative fit index (CFI) 0.971 CFI > 0.90
Standardized root mean square residual (SRMR) 0.0392 SRMR < 0.05
Root mean square error of approximation (RMSEA) 0.046 RMSEA < 0.08
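The indices in Table 2 are internally consistent. For instance, RMSEA can be recomputed from the normed chi-square and the sample size using the usual formula RMSEA = sqrt((χ²/df − 1)/(N − 1)); with CMIN/DF = 1.651 and the study's N = 310 valid cases, this recovers the reported 0.046. A small Python check:

```python
import math

def rmsea_from_normed_chi2(normed_chi2, n):
    """RMSEA = sqrt((chi2/df - 1) / (N - 1)), floored at 0 for over-fitted models."""
    return math.sqrt(max(normed_chi2 - 1.0, 0.0) / (n - 1))

# Values reported in the paper: CMIN/DF = 1.651, N = 310 valid cases.
print(round(rmsea_from_normed_chi2(1.651, 310), 3))  # → 0.046
```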
Table 3. Factor loading, reliability, and AVE
Constructs Items Loadings CR AVE
Relative advantage RA1 0.836 0.830 0.551
RA2 0.689
RA3 0.723
RA4 0.758
Compatibility CP1 0.771 0.907 0.710
CP2 0.830
CP3 0.908
CP4 0.855
Complexity CPX1 0.825 0.865 0.680
CPX2 0.842
CPX3 0.807
Security SE1 8.5.5 0.939 0.836
SE2 8.9.5
SE3 8.9.0
IT expertise ITX1 8.500 0.900 0.750
ITX2 8.980
ITX3 8.5.8
Top management support TMS1 8.5.9 0.872 0.696
TMS2 8.9.8
TMS3 0.708
Competitive pressure CPS2 8.585 0.832 0.713
CPS3 8.5.9
Trading partner pressure TP1 8.5.0 0.933 0.822
TP2 8.9.8
TP3 8.988
Behavioral intention BI1 8.9.. 0.964 0.900
BI2 8.9..
BI3 8.988
Table 4. Descriptive statistics and correlation
Mean S.D. RA CP CPX SE ITX TMS CPS TP BI
RA 5.713 1.048 0.743
CP 5.183 1.308 0.430 0.843
CPX 3.942 1.348 -0.389 -0.240 0.825
SE 4.519 1.395 0.217 0.386 -0.435 0.914
ITX 4.999 1.330 0.664 0.270 -0.274 0.118 0.866
TMS 5.614 1.356 0.737 0.459 -0.192 0.126 0.603 0.834
CPS 5.623 1.084 0.412 0.626 -0.158 0.203 0.228 0.339 0.844
TP 5.018 1.369 0.226 0.276 -0.288 0.207 0.151 0.192 0.381 0.907
BI 5.640 1.197 0.491 0.532 -0.195 0.088 0.328 0.585 0.546 0.329 0.949
Note: The square root of AVE is shown on the diagonal.
4.2. Structure model for hypotheses testing
The results, shown in Figure 2, indicate that five exogenous variables, namely relative advantage,
security, top management support, competitive pressure, and trading partner pressure, are significantly
related to the adoption decision. Moreover, the squared multiple correlation (R²) has a value of 0.52. Thus, the study
suggests that the exogenous variables in this study can explain 52% of the adoption decision.
The results are summarized in Table 5. Within the technological dimension, we found that relative
advantage has a positive relationship with adoption (p < 0.05). Thus, H1 is supported. Security has a significant
relationship with adoption (p < 0.05) but in the direction opposite to the hypothesis. Thus, H4 is not supported. The paths from
compatibility and complexity to adoption are insignificant. Thus, H2 and H3 are not supported.
Within the organizational dimension, IT expertise has an insignificant path to adoption. Thus, H5 is not
supported. Top management support has a strong positive relationship with adoption (p < 0.001). Thus, H6 is
supported. Within the environmental dimension, competitive pressure has a strong positive relationship
with adoption (p < 0.001). Thus, H7 is supported. We also found that trading partner pressure has a positive
relationship with adoption (p < 0.05). Thus, H8 is supported.
Figure 2. Research model results
Table 5. Hypothesis testing results
Dimension Hypothesis Parameter β p Result
Technology H1 RA -> BI 0.185 0.015 Supported
H2 CP -> BI 0.015 0.871 Not Supported
H3 CPX -> BI -0.069 0.241 Not Supported
H4 SE -> BI -0.139 0.010 Not Supported
Organization H5 ITX -> BI -0.065 0.416 Not Supported
H6 TMS -> BI 0.419 *** Supported
Environment H7 CPS -> BI 0.272 *** Supported
H8 TP -> BI 0.111 8.8.0 Supported
4.3. Discussion
Many interesting discoveries arise from the analyses of data. These findings will be discussed based
on TOE dimension as follows. In the technology dimension, H1 is supported and is related to relative
advantage. This observation is consistent with the findings in [4]. The possible explanation for this finding is
that the features of BDT are different from other information systems such as RDBMS and data warehouse.
Therefore, the more relative advantage of the new technology offers, the higher the level of adoption.
Compatibility does not have a significant effect on adoption, which answers the second hypothesis (H2).
These findings suggest that the changes introduced by BDT do not affect the decision-making stage. H3 also
contradicts our predictions regarding the effects of complexity on decision adoption. A possible explanation
for this finding is that it may require the employment of a consultant or an outsourced team to avoid learning
and installation difficulties. This finding confirmed a previous study [29], which found that complexity is not
significant in adopting Internet and e-business technologies. Security (H4) has a significant negative effect
on the decision to adopt BDT, which is consistent with the previous research [13]. Possible explanations for
this could be that BDT is different from the traditional data environment, including tools and methods for
protecting data that may cause new forms of security threats. Therefore, security is a factor that inhibits
the decision to adopt BDT in the organization.
Within the dimension of the organization, we found that IT expertise had no significant effect on
behavioral intention to adopt BDT (H5). This suggests that IT expertise does not play a role in the decision
stage in the context of BDT. This finding confirmed the work of Nguyen and Petersen [13], which shows that
organizations not only in the initiation stage but also the adoption-decision stage are less aware of
the importance of their IT expertise than the companies in the implementation stage. Top management support
influences the behavioral intention to adopt BDT and the results have confirmed this relationship (H6). A possible
explanation for this finding is that the adoption of BDT could lead to changes in business procedures, and
organizational structures [4]. Top management support creates a positive attitude towards
innovation and an enabling environment for the acceptance of innovation. Moreover, gaining support from
top management ensures that the project has enough resources for the successful adoption of innovation
technology. This finding is in line with previous studies [13].
As for the environmental dimension, competitive pressure has influenced the decision to adopt BDT
(H7). Today's businesses are driven by data. If an organization does not adopt BDT, it would give an
advantage to competitors. Therefore, when organizations are faced with intense competitive pressure,
they are more open to the adoption of BDT. This finding is consistent with those from other studies [13, 20].
Trading partner pressure was found to be a significant facilitator of BDT adoption. This is because
trading partners have an influence on the operation of the business. Their recommendations or promises can
influence the decision to adopt BDT. However, this finding is inconsistent with [22].
5. CONCLUSION
This study aims to examine the adoption decision of BDT in organizations in Thailand from
a technology-organization-environment (TOE) framework perspective. These findings support the suitability
of using TOE to understand the determinants that influence the use of BDT in Thai organizations.
The variance of the dependent variable explained by the independent variables resulted in
an R² value of 0.52. Relative advantage, top management support, competitive pressure, and trading partner
pressure show significant positive relation to BDT adoption. Security negatively influences BDT adoption.
The top management support is the major determinant followed by competitive pressure.
6. IMPLICATIONS
6.1. Implications for research
This study presents a model for studying the decision to adopt BDT in organizations in Thailand.
The results of this study have the following implications. Firstly, the contribution of this study is an
empirical test of existing information systems theory in a new context, namely the adoption of BDT in
organizations in Thailand. The situation in Thailand differs from that of other countries because big data
technologies have only recently become a trend in Thailand. Therefore, this study
helps to confirm the existing information system theory in a new context. As a result, the proposed model is
able to explain 52% of the technology acceptance of BDT and can thus be applied to other domains.
Secondly, the study presents a model built on the TOE framework that combines multiple perspectives.
On the technology side, the research integrates perceived attributes of innovation, including relative
advantage, compatibility, and complexity, that may affect BDT adoption. In addition, the security
determinant is added to extend the prediction of BDT adoption. Organizational factors are
created from the perspective of top management and from the capability perspective of the organization.
This study proposes that top management support and IT expertise may influence an organization's adoption
of BDT. Lastly, competitive pressure and trading partner pressure, viewed as external pressures, serve as
the environmental features that may influence the adoption of BDT. This type of TOE model is novel in that
it integrates various combinations of theoretically grounded variables.
6.2. Implications for practice
The findings of this study have important practical implications for organizations' decisions to adopt
BDT. Firstly, both relative advantage and security in the technological dimension have a significant effect
on the adoption of BDT. To facilitate organizational adoption, those responsible for implementing BDT need
to be fully aware of these factors during the decision-making stage. Because security has a negative impact,
managers should require operators to comply strictly with security regulations while improving the IT
infrastructure. In addition, training and recruiting IT employees with security expertise is very important.
Secondly, top management support in the organizational dimension has a significant effect on
the adoption of BDT. It is therefore advisable to engage top management from the beginning.
Top management should specify BDT in their business strategies, collaborate with specialists to clearly
define big data and its advantages, and provide the resources needed for the project.
Lastly, in the environmental dimension, both competitive pressure and trading partner pressure have
a significant positive effect on the adoption of BDT. Managers should be aware that higher external pressure
accelerates the BDT decision process. Hence, managers should monitor the industry environment and
the potential influence of trading partners when deciding to adopt BDT.
ACKNOWLEDGEMENTS
This research is supported by Graduate Development Scholarship 2020, National Research Council
of Thailand.
REFERENCES
[1] A. Oussous, F.-Z. Benjelloun, A. Ait Lahcen, and S. Belfkih, “Big Data technologies: A survey,” Journal of King
Saud University - Computer and Information Sciences, vol. 30, no. 4, pp. 431-448, 2018.
[2] S. Ghemawat, H. Gobioff, and S.-T. Leung, "The Google File System," Proceedings of the Nineteenth ACM
Symposium on Operating Systems Principles, pp. 29-43, 2003.
[3] K. Shvachko, H. Kuang, S. Radia, and R. Chansler, "The Hadoop Distributed File System," IEEE 26th symposium
on mass storage systems and technologies (MSST), pp. 1-10, 2010.
[4] S. Verma, S. S. Bhattacharyya, and S. Kumar, “An extension of the technology acceptance model in the big data
analytics system implementation environment,” Information Processing & Management, vol. 54, no. 5,
pp. 791-806, 2018.
[5] T. Oliveira and M. F. Martins, "Literature review of information technology adoption models at firm
level," Electronic Journal of Information Systems Evaluation, vol. 14, no. 1, pp. 110-121, 2011.
[6] Brandbuffet, "Penetrate the big data situation in Thailand, the 'treasure' giant of the business world,"
2018. [Online]. Available: https://www.brandbuffet.in.th.
[7] I. Gartner, "Gartner Glossary," 2019. [Online]. Available: https://www.gartner.com/en/information-
technology/glossary/big-data.
[8] A. Gandomi, and M. Haider, “Beyond the hype: Big data concepts, methods, and analytics,” International Journal
of Information Management, vol. 35, no. 2, pp. 137-144, 2015.
[9] B. Furht, and F. Villanustre, "Introduction to Big Data," Big Data Technologies and Applications Cham: Springer
International Publishing, pp. 3-11, 2016.
[10] T. Erl, W. Khattak, and P. Buhler, "Big Data Fundamentals Concepts, Drivers & Techniques," 1st ed., United
States: Prentice Hall, 2015.
[11] E. Raguseo, “Big data technologies: An empirical investigation on their adoption, benefits and risks for
companies,” International Journal of Information Management, vol. 38, no. 1, pp. 187-195, 2018.
[12] L. G. Tornatzky, and M. Fleischer, "Processes of Technological Innovation," Lexington, MA: Lexington Books, 1990.
[13] T. Nguyen, and T. E. Petersen, “Technology Adoption in Norway: Organizational Assimilation of Big Data,”
Master Thesis, Economics and Business Administration, Norwegian School of Economics, 2017.
[14] F. Cruz-Jesus, A. Pinheiro, and T. Oliveira, “Understanding CRM adoption stages: empirical analysis building on
the TOE framework,” Computers in Industry, vol. 109, pp. 1-13, 2019.
[15] Q. Jia, Y. Guo, and S. J. Barnes, “Enterprise 2.0 post-adoption: Extending the information system continuance
model based on the technology-Organization-environment framework,” Computers in Human Behavior, vol. 67,
pp. 95-105, 2017.
Int J Elec & Comp Eng, Vol. 10, No. 6, December 2020: 6412-6422, ISSN: 2088-8708
[16] H.-F. Lin, “Understanding the determinants of electronic supply chain management system adoption: Using
the technology–organization–environment framework,” Technological Forecasting and Social Change, vol. 86,
pp. 80-92, 2014.
[17] D. Lin, C. K. M. Lee, and K. Lin, "Research on effect factors evaluation of internet of things (IOT) adoption in
Chinese agricultural supply chain," International Conference on Industrial Engineering and Engineering
Management (IEEM), pp. 612-615, 2016.
[18] T. Oliveira, M. Thomas, and M. Espadanal, “Assessing the determinants of cloud computing adoption: An analysis
of the manufacturing and services sectors,” Information & Management, vol. 51, no. 5, pp. 497-510, 2014.
[19] D. Nam, J. Lee, and H. Lee, “Business analytics adoption process: An innovation diffusion perspective,”
International Journal of Information Management, vol. 49, pp. 411-423, 2019.
[20] K. P. Agrawal, "Investigating the Determinants of Big Data Analytics (BDA) Adoption in Asian Emerging
Economies," In Academy of Management Annual Meeting Proceedings, vol. 2015, no. 1, pp. 11290-11290, 2015.
[21] E. M. Rogers, "Diffusion of Innovations," 3rd ed., New York: The Free Press, 1983.
[22] Y.-M. Wang, Y.-S. Wang, and Y.-F. Yang, “Understanding the determinants of RFID adoption in the
manufacturing industry,” Technological Forecasting and Social Change, vol. 77, no. 5, pp. 803-815, 2010.
[23] L.-W. Wong, L.-Y. Leong, J.-J. Hew, G. W.-H. Tan, and K.-B. Ooi, “Time to seize the digital evolution: Adoption
of blockchain in operations and supply chain management among Malaysian SMEs,” International Journal of
Information Management, 2019.
[24] D.-H. Shin, “User centric cloud service model in public sectors: Policy implications of cloud services,”
Government Information Quarterly, vol. 30, no. 2, pp. 194-203, 2013.
[25] M. I. Baig, L. Shuib, and E. Yadegaridehkordi, “Big data adoption: State of the art and research challenges,”
Information Processing & Management, vol. 56, no. 6, 2019.
[26] M. A. Hameed, S. Counsell, and S. Swift, “A meta-analysis of relationships between organizational characteristics
and IT innovation adoption in organizations,” Information & Management, vol. 49, no. 5, pp. 218-232, 2012.
[27] H.-F. Lin, and S.-M. Lin, “Determinants of e-business diffusion: A test of the technology diffusion perspective,”
Technovation, vol. 28, no. 3, pp. 135-145, 2008.
[28] C. L. Iacovou, I. Benbasat, and A. S. Dexter, “Electronic data interchange and small organizations: adoption and
impact of technology,” MIS quarterly, vol. 19, no. 4, pp. 465-485, 1995.
[29] P. Ifinedo, “An Empirical Analysis of Factors Influencing Internet/E-Business Technologies Adoption by SMEs in
Canada,” International Journal of Information Technology and Decision Making, vol. 10, pp. 731-766, 2011.
BIOGRAPHIES OF AUTHORS
Wanida Saetang is currently a Ph.D. student in the Department of Information Technology at King
Mongkut’s University of Technology North Bangkok. She graduated with a Bachelor of Computer Science
from Srinakharinwirot University, where she worked on projects such as image retrieval. She earned her
Master of Information Technology Management from King Mongkut’s University of Technology North Bangkok,
where she researched data mining. Her current research interests are information management, information
theory, innovation diffusion, data mining, and data science.
Sakchai Tangwannawit received his B.S. (Computer Education), M.S. (Information Technology), and Ph.D.
(Computer Education) from King Mongkut’s University of Technology North Bangkok (KMUTNB), Bangkok,
Thailand. He has worked as a lecturer for more than 10 years in the field of information technology in
the Faculty of Information Technology, KMUTNB.
Tanapon Jensuttiwetchakul is currently a lecturer at the Faculty of Information Technology, King
Mongkut’s University of Technology North Bangkok, Thailand. He obtained his Ph.D. in Information
Technology in Business from the Faculty of Commerce and Accountancy, Chulalongkorn University.
He received his bachelor’s and master’s degrees in electrical engineering from Chulalongkorn University
in 2006 and 2008, respectively.