The Internet is currently the largest communication network worldwide and the place where technological advances can be observed. It was originally conceived as a network formed mainly by multiple independent networks of arbitrary design. The Internet is where all countries communicate and disseminate information in real time, a phenomenon that directly affects economies, businesses, and society. This article examines the future of the Internet: our research carries out a qualitative prospective analysis of the projects and investigations on which the scientific community is currently working, analyzes that information, and presents the highlighted topics.
Content an Insight to Security Paradigm for BigData on Cloud: Current Trend a...IJECEIAES
The successive growth of collaborative applications producing big data over time creates new opportunities to set up commodities on cloud infrastructure. Many organizations demand an efficient data storage mechanism as well as efficient data analysis. Big Data (BD) also faces security issues for the important data or information shared or transferred over the cloud, including tampering and loss of control over the data. This survey covers important aspects of big data, including its security and privacy issues. It reviews existing research on privacy- and security-preservation mechanisms and the existing tools for them, and discusses upcoming tools that need to focus on performance improvement. From the survey analysis, a research gap is identified and a future research idea is presented.
A Comparative Analysis of Data Privacy and Utility Parameter Adjustment, Usin...Kato Mivule
Kato Mivule, Claude Turner, "A Comparative Analysis of Data Privacy and Utility Parameter Adjustment, Using Machine Learning Classification as a Gauge", Procedia Computer Science, Volume 20, 2013, Pages 414-419, Baltimore MD, USA
Applying Data Privacy Techniques on Published Data in UgandaKato Mivule
Kato Mivule, Claude Turner, "Applying Data Privacy Techniques on Published Data in Uganda", Proceedings of the 2012 International Conference on e-Learning, e-Business, Enterprise Information Systems, and e-Government (EEE 2012), Pages 110-115, Las Vegas, NV, USA.
Two-Phase TDS Approach for Data Anonymization To Preserving Bigdata Privacydbpublications
While Big Data has gradually become a hot topic of research and business and is used everywhere in many industries, concern about Big Data security and privacy has grown accordingly. There is an obvious tension between Big Data security and privacy and the widespread use of Big Data. A variety of privacy-preserving mechanisms have been developed to protect privacy at different stages of the big data life cycle (e.g., data generation, data storage, data processing). The goal of this paper is to provide a complete overview of privacy-preservation mechanisms in big data and to present the challenges for existing mechanisms; we also illustrate the infrastructure of big data and the state-of-the-art privacy-preserving mechanisms at each stage of the big data life cycle. The paper focuses on the anonymization process, significantly improving the scalability and efficiency of top-down specialization (TDS) for data anonymization over existing approaches. We also discuss the challenges and future research directions related to preserving privacy in big data.
New prediction method for data spreading in social networks based on machine ...TELKOMNIKA JOURNAL
Information diffusion prediction is the study of the path along which news, information, or topics spread through structured data such as a graph. Research in this area focuses on two goals: tracing the information diffusion path and finding the members that determine the next path. The major problem of traditional approaches is their use of simple probabilistic methods rather than intelligent ones. Recent years have seen growing interest in applying machine learning algorithms in this field, and deep learning, a branch of machine learning, has been increasingly used for information diffusion prediction. This paper presents a machine learning method based on a graph neural network algorithm, which selects inactive vertices for activation based on neighboring vertices that are active in a given scientific topic. In this method, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: the Digital Bibliography and Library Project (DBLP), Pubmed, and Cora. It attempts to answer the question of who will publish the next article in a specific field of science. Compared with other methods, the proposed method shows 10% and 5% improved precision on the DBLP and Pubmed datasets, respectively.
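A minimal sketch of the neighborhood-based activation idea (not the paper's exact model): score each inactive vertex from the features of its active neighbors with one GNN-style propagation step. The graph, features, and weights below are invented for illustration.

```python
# Score inactive vertices for activation from their active neighbors via
# one round of feature aggregation, in the spirit of a GNN layer.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                    # toy graph with 6 vertices
A = np.array([[0,1,1,0,0,0],
              [1,0,1,1,0,0],
              [1,1,0,0,1,0],
              [0,1,0,0,1,1],
              [0,0,1,1,0,1],
              [0,0,0,1,1,0]], float)     # adjacency matrix
active = np.array([1,1,0,0,0,0], float)  # vertices already active on a topic
X = rng.normal(size=(n, 4))              # per-vertex features (hypothetical)
W = rng.normal(size=(4, 1))              # untrained weights, for illustration

# One propagation: average neighbor features, weighted by neighbor activity.
deg = A.sum(1, keepdims=True)
H = (A @ (X * active[:, None])) / np.maximum(deg, 1)
scores = 1 / (1 + np.exp(-(H @ W).ravel()))   # sigmoid activation score
scores[active == 1] = -np.inf                 # only inactive vertices compete
print("next vertex predicted to activate:", int(np.argmax(scores)))
```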
Kato Mivule - Utilizing Noise Addition for Data Privacy, an OverviewKato Mivule
Kato Mivule, "Utilizing Noise Addition for Data Privacy, an Overview", Proceedings of the International Conference on Information and Knowledge Engineering (IKE 2012), Pages 65-71, Las Vegas, NV, USA.
Effective Feature Selection for Mining Text Data with Side-InformationIJTET Journal
Abstract— Many text documents contain side-information. Many web documents carry meta-data corresponding to different kinds of attributes, such as the source or other information related to the origin of the document. Data such as location, ownership, or even temporal information may be considered side-information. This information may be used for text clustering: it can either improve the quality of the representation for the mining process or add noise to it. When the information is noisy, mining along with the side-information is risky, since the noise can reduce clustering quality; when the side-information is informative, it can improve clustering quality. In the existing system, the Gini index is used as the feature selection method to filter informative side-information from text documents. It is effective to a certain extent, but the number of remaining features is still huge, and feature selection methods are needed to handle the high dimensionality of the data for effective text categorization. The proposed system therefore introduces a novel feature selection method, Effective Feature Selection (EFS), to improve document clustering and classification accuracy, reduce the number of selected features, and improve the accuracy and purity of document clustering with lower time complexity. This three-stage procedure comprises feature subset selection, feature ranking, and feature re-ranking.
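A minimal sketch of the Gini-index term scoring that the existing system uses as a filter: score(t) = sum over classes of P(class | term)^2, so terms concentrated in one class score near 1. The toy documents are invented; this illustrates the first-stage filter only, not the full three-stage EFS.

```python
from collections import Counter, defaultdict

docs = [("cheap meds online", "spam"), ("meeting agenda today", "ham"),
        ("cheap deals now", "spam"), ("project agenda attached", "ham")]

term_class = defaultdict(Counter)          # term -> class occurrence counts
for text, label in docs:
    for term in set(text.split()):
        term_class[term][label] += 1

def gini(term):
    counts = term_class[term]
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

ranked = sorted(term_class, key=gini, reverse=True)
print([(t, round(gini(t), 2)) for t in ranked[:5]])  # most class-pure terms
```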
An Investigation of Data Privacy and Utility Preservation Using KNN Classific...Kato Mivule
Kato Mivule and Claude Turner, An Investigation of Data Privacy and Utility Preservation Using KNN Classification as a Gauge, International Conference on Information and Knowledge Engineering (IKE 2013), July 22-25, Pages 203-204, Las Vegas, NV, USA
Towards A Differential Privacy and Utility Preserving Machine Learning Classi...Kato Mivule
Kato Mivule, Claude Turner, Soo-Yeon Ji, "Towards A Differential Privacy and Utility Preserving Machine Learning Classifier", Procedia Computer Science (Complex Adaptive Systems), 2012, Pages 176-181, Washington DC, USA.
Classifying confidential data using SVM for efficient cloud query processingTELKOMNIKA JOURNAL
Nowadays, organizations widely use cloud database engines from cloud service providers. Privacy remains the main concern: every organization is strictly looking for a more secure environment for its own data. Several studies have proposed different types of encryption methods to protect data in the cloud. However, the daily transactions represented by queries against such databases make encryption an inefficient solution. Therefore, recent studies have presented mechanisms for classifying the data prior to migrating it to the cloud, which reduces the need for encryption and thus enhances efficiency. Yet most classification methods in the literature are based on string-matching approaches, which require exact term matches and do not consider partial matching. This paper takes advantage of N-gram representation along with Support Vector Machine classification. Real data are used in the experiment. After classification, the Advanced Encryption Standard algorithm is used to encrypt the confidential data. Results showed that the proposed method outperformed the baseline encryption method, which emphasizes the usefulness of machine learning techniques for classifying data based on confidentiality.
Paper presented at the International Conference on Data Science and Analytics (ICDSA'21), organized by Rathinam College of Arts and Science, Tamil Nadu, India, on 19 February 2021.
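A minimal sketch of the pipeline shape the abstract describes: character n-gram features plus an SVM to flag confidential records, then encryption only for the flagged ones. Fernet (AES-128 under the hood) stands in for the paper's AES step, and the tiny training set is hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from cryptography.fernet import Fernet

train = ["salary 52000 account 9912", "cafeteria menu for monday",
         "password reset token 8kq1", "office plants watering rota"]
labels = ["confidential", "public", "confidential", "public"]

# Character n-grams allow partial matches that plain string matching misses.
clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)),
                    LinearSVC())
clf.fit(train, labels)

key = Fernet.generate_key()
f = Fernet(key)
for record in ["account 7743 salary 61000", "menu for tuesday"]:
    if clf.predict([record])[0] == "confidential":
        print("stored encrypted:", f.encrypt(record.encode())[:24], "...")
    else:
        print("stored as-is:   ", record)
```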
Impact of big data congestion in IT: An adaptive knowledgebased Bayesian networkIJECEIAES
Real-time systems in information technology are progressing rapidly and are important in every innovative field. Different IT applications simultaneously produce enormous amounts of information that must be handled. In this paper, a novel adaptive knowledge-based Bayesian network algorithm is proposed to deal with the impact of big data congestion in decision processing. A Bayesian network model is used to manage the knowledge structure for the decision-making process. Knowledge in Bayesian networks is typically expressed as an optimal structure, where the analysis task is to find a structure that maximizes a statistically motivated score. In general, available tools search for this optimal structure by means of ordinary search strategies; since this requires an enormous search space, it is a time-consuming method that should be avoided, and the situation becomes critical once big data are involved in the search. An algorithm is introduced to achieve faster computation of the optimal structure by limiting the search space. The proposed algorithm performs a recursive calculation over the search space. The results demonstrate that the proposed algorithm can handle big data with lower processing time and a higher prediction rate.
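A minimal sketch of the search-space-limiting idea (illustrative, not the paper's algorithm): before any score-based structure search, each variable keeps only its top-k candidate parents, ranked by mutual information, so the subsequent search considers far fewer edges. The toy data and k are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 2
data = rng.integers(0, 2, size=(n, 4))          # 4 binary variables
data[:, 3] = data[:, 0] | data[:, 1]            # X3 depends on X0, X1

def mutual_info(x, y):
    joint = np.histogram2d(x, y, bins=2)[0] / len(x)
    px, py = joint.sum(1), joint.sum(0)
    nz = joint > 0
    return (joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])).sum()

# Prune: each variable keeps only its k strongest candidate parents.
candidates = {}
for j in range(4):
    scores = [(mutual_info(data[:, i], data[:, j]), i)
              for i in range(4) if i != j]
    candidates[j] = [i for _, i in sorted(scores, reverse=True)[:k]]
print("restricted parent sets:", candidates)
# A hill-climbing or K2-style search would now only consider these edges.
```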
Multi-dimensional cubic symmetric block cipher algorithm for encrypting big datajournalBEEI
Advanced technology in the internet and social media, communication companies, health care records, and cloud computing applications has made the data around us increase dramatically, every minute and continuously. This continuously renewed big data involves sensitive information such as passwords, PINs, credential numbers, and secret identifiers, which must be maintained with highly secret procedures. This paper proposes a secret multi-dimensional symmetric cipher with six dimensions as a cubic algorithm. The proposed algorithm works with a substitution permutation network (SPN) structure and supports a high data processing rate in six directions. It includes six symmetric round transformations for encrypting the plaintext, where each dimension represents an independent algorithm for big data manipulation. The cipher uses parallel encryption structures on 128-bit data blocks for each dimension in order to handle large volumes of data; it comprises six algorithms working simultaneously, each on 128 bits, according to various irreducible polynomials of order eight. The round transformation includes four main encryption stages, each with a cubic form of six dimensions.
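A minimal sketch of one generic SPN round of the kind such a cipher stacks in each dimension: key mixing, a 4-bit S-box layer, then a bit permutation. The 16-bit block, S-box, and permutation are toy parameters for illustration, not the paper's 128-bit cipher.

```python
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]   # PRESENT-style toy S-box
PERM = [(4 * i) % 15 if i != 15 else 15 for i in range(16)]  # bit shuffle

def spn_round(block16: int, round_key: int) -> int:
    x = block16 ^ round_key                         # 1) key mixing (XOR)
    y = 0
    for nib in range(4):                            # 2) 4 parallel S-boxes
        y |= SBOX[(x >> (4 * nib)) & 0xF] << (4 * nib)
    z = 0
    for bit in range(16):                           # 3) bit permutation
        z |= ((y >> bit) & 1) << PERM[bit]
    return z

state = 0b1010_0011_0101_1100
print(f"{spn_round(state, 0x3A94):016b}")
```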
Frequency and similarity aware partitioning for cloud storage based on space ...redpel dot com
Frequency and similarity aware partitioning for cloud storage based on space time utility maximization model.
Big data is characterized by its increasing volume, velocity, variety, and veracity. All these characteristics make processing big data a complex task, so such data must be processed differently, for example with the map-reduce framework. When an organization exchanges data for mining useful information from big data, the privacy of the data becomes an important problem. In the past, several privacy-preserving algorithms have been proposed; of these, anonymizing the data has been the most efficient. Anonymizing a dataset can be done through several operations, such as generalization, suppression, anatomy, specialization, permutation, and perturbation. These algorithms are all suitable for datasets that do not have the characteristics of big data. To preserve the privacy of large datasets, an algorithm was recently proposed that applies the top-down specialization approach for anonymizing the dataset, with scalability increased by applying the map-reduce framework. In this paper we survey the growth of big data, its characteristics, the map-reduce framework, and privacy-preserving mechanisms, and we propose future directions for our research.
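A minimal sketch of the top-down specialization (TDS) idea the survey centers on: start from the most general value of a quasi-identifier and specialize it only while every equivalence class still holds at least k records. The ages, band widths, and k = 2 below are invented for illustration.

```python
from collections import Counter

ages = [23, 27, 34, 36, 52, 55, 58]
K = 2

def generalize(age, level):
    """level 0 = 'Any', 1 = 30-year bands, 2 = 10-year bands."""
    if level == 0:
        return "Any"
    width = 30 if level == 1 else 10
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

level = 0
while level < 2:
    trial = Counter(generalize(a, level + 1) for a in ages)
    if min(trial.values()) < K:       # specializing would break k-anonymity
        break
    level += 1                        # safe: specialize one more level

print("chosen level:", level)
print(sorted(set(generalize(a, level) for a in ages)))
```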
Context, Causality, and Information Flow: Implications for Privacy Engineerin...Sebastian Benthall
The creators of technical infrastructure are under social and legal pressure to comply with expectations that can be difficult to translate into computational and business logics. The dissertation presented in this talk bridges this gap through three projects that focus on privacy engineering, information security, and data economics, respectively. These projects culminate in a new formal method for evaluating the strategic and tactical value of data. This method relies on a core theoretical contribution building on the work of Shannon, Dretske, Pearl, Koller, and Nissenbaum: a definition of information flow as a channel situated in a context of causal relations.
A forecasting of stock trading price using time series information based on b...IJECEIAES
Big data is a large set of structured or unstructured data that can be collected, stored, managed, and analyzed with existing database management tools, and the term also denotes the technique of extracting value from these data and interpreting the results. Big data has three characteristics: the size of the data (volume), the speed of data generation (velocity), and the variety of information forms (variety). Time series data are obtained by collecting and recording data generated over the flow of time. If analysis of these time series data finds characteristics of the data, those features help in understanding and analyzing the time series. The concept of distance is the simplest and most obvious way of dealing with similarities between objects, and the commonly used, widely known method for measuring distance is the Euclidean distance. This study analyzes the similarity of stock price flows using 793,800 closing prices of 1,323 companies in Korea; Visual Studio and Excel were used as analysis tools to calculate the Euclidean distance. We selected “000100” as the target domestic company and prepared for big data analysis. As a result of the analysis, the shortest Euclidean distance is to the company with code “143860”, with a calculated value of 11.147. Based on the results of the analysis, the limitations of the study and theoretical implications are presented.
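A minimal sketch of the study's similarity measure: the Euclidean distance between closing-price series, used to find the company whose price flow is closest to the target. The prices below are made-up stand-ins for the Korean market data.

```python
import numpy as np

closes = {"000100": np.array([102., 104., 101., 108., 110.]),
          "143860": np.array([101., 105., 100., 109., 111.]),
          "005930": np.array([ 80.,  95.,  70., 120.,  60.])}

target = closes["000100"]
dist = {code: float(np.linalg.norm(series - target))
        for code, series in closes.items() if code != "000100"}
best = min(dist, key=dist.get)
print(best, round(dist[best], 3))   # nearest neighbour by Euclidean distance
```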
Meliorating usable document density for online event detectionIJICTJOURNAL
Online event detection (OED) has seen a rise in the research community, as it can provide quick identification of possible events happening in the world. Through these systems, potential events can be indicated well before they are reported by the news media, by grouping similar documents shared over social media by users. Most OED systems use textual similarity for this purpose. Similar documents that may indicate a potential event are further strengthened by the replies made by other users, thereby improving the potentiality of the group. However, these documents are at times unusable as independent documents, since they may replace previously appearing noun phrases with pronouns, leading OED systems to fail while grouping these replies into their suitable clusters. In this paper, a pronoun resolution system that tries to replace pronouns with relevant nouns over social media data is proposed. Results show significant improvement in performance using the proposed system.
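A deliberately naive sketch of the idea (not the paper's system): make a reply usable on its own by substituting third-person pronouns with the most recent proper noun from the post it answers, here using spaCy's part-of-speech tags.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def resolve(parent: str, reply: str) -> str:
    nouns = [t.text for t in nlp(parent) if t.pos_ == "PROPN"]
    if not nouns:
        return reply
    antecedent = nouns[-1]                       # last-mentioned entity
    out = []
    for tok in nlp(reply):
        word = antecedent if tok.lower_ in {"he", "she", "it", "they"} \
               else tok.text
        out.append(word + tok.whitespace_)
    return "".join(out)

print(resolve("Heavy flooding hit Chennai tonight.",
              "It is getting worse near the station."))
```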
Technology analysis for internet of things using big data learningeSAT Journals
Abstract The internet of things (IoT) is an internet among things that communicate through advanced communication protocols without human operation. The main idea of IoT is to reach and improve a common goal through the intelligent networking of things. IoT is an integrated technology built from several sub-technologies, such as wireless sensors or semantics. IoT technology has evolved with its environment, based on information communication technology and social infrastructure, so we need to understand the technological evolution of IoT in the future. In this paper, we analyze IoT technology to find its technological relationships. We use patents and papers related to IoT, and consider big data learning as a tool for IoT technology analysis. To verify the performance of our proposed methodology, we present a case study using the collected patent and paper documents. Keywords: Big data learning, Internet of things, Technology analysis, and Patent analysis
In this paper we observe that we have entered an era of Big Data. Through better analysis of the large volumes of data that are becoming available, there is the potential for making faster advances in many scientific disciplines and improving the profitability and success of many enterprises. However, many technical challenges described in this paper must be addressed before this potential can be fully realized. The challenges include not just the obvious issues of scale, but also heterogeneity, lack of structure, error-handling, privacy, timeliness, provenance, and visualization, at all stages of the analysis pipeline from data acquisition to result interpretation.
A STUDY OF TRADITIONAL DATA ANALYSIS AND SENSOR DATA ANALYTICSijistjournal
The growth of smart, intelligent devices known as sensors generates large amounts of data. Over a time span, these generated data reach such a large volume that they are designated big data, and the data repository holds unstructured data. Traditional data analytics methods are well developed and widely used to analyze structured data and, to a limited extent, semi-structured data, which involves additional processing overhead. The methods used to analyze unstructured data are different because of their distributed computing approach, whereas centralized processing is possible for structured and semi-structured data. The undertaken work is confined to an analysis of both varieties of methods. The result of this study is intended to introduce the methods available to analyze big data.
Building a semantic-based decision support system to optimize the energy use ...Gonçal Costa Jutglar
The reduction of carbon emissions in cities is a systemic problem that involves multiple scales and domains and the collaboration of experts from various fields. The smart cities approach can contribute to improving the energy efficiency of urban areas, provided that there is reliable data (from the different domains concerned with carbon emission reduction) to assess their energy performance and to make decisions to improve it. In the SEMANCO project, we applied Semantic Web technologies to solve the interoperability among data, systems, tools, and users in application cases dealing with carbon emission reduction in urban areas. In the OPTIMUS project, the tools and methods developed in SEMANCO are being further enhanced and applied to the development of a decision support system (DSS) to help local administrations optimize the energy use of public buildings.
Social network analysis is a method of big data analysis which reveals the nature of connections between objects, including implicit connections. This is a tool of interest since it can be applied to large data sets, manual processing of which is very labor-intensive, while automated processing through self-learning linguistic engines requires a lot of resources. In this regard, a study was carried out aimed at the development and testing of social network analysis tools and the creation of a research algorithm applicable to a wide range of analytical and search tasks. The current image of Russia and its activities in the Arctic was chosen as a case.

The research algorithm helps to discover implicit patterns and trends, and to relate information flows and events with relevant newsworthy events and news stories, forming a "clear" view of the study object and the key actors with which this object is associated. The work contributes to filling a gap in the scientific literature caused by insufficient development of the applied issues of using social network analysis to solve managerial tasks, while theoretical papers describing the theory and methodology of such analysis are abundant.
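A minimal sketch of one step such an algorithm typically automates: build a graph of who-mentions-whom and rank the key actors by centrality. The edge list is invented for illustration; networkx supplies the graph machinery.

```python
import networkx as nx

mentions = [("news_outlet_a", "ministry"), ("blogger_1", "ministry"),
            ("blogger_1", "news_outlet_a"), ("ngo", "ministry"),
            ("blogger_2", "ngo"), ("news_outlet_b", "ministry")]

G = nx.DiGraph(mentions)
rank = nx.pagerank(G)                     # influence within the flow
top = sorted(rank.items(), key=lambda kv: -kv[1])[:3]
for actor, score in top:
    print(f"{actor:15s} {score:.3f}")     # key actors around the topic
```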
A Systematic Literature Review of Cloud Computing in Ehealth hiij
Cloud computing in eHealth has been an emerging area for only a few years. There is a need to identify the state of the art and pinpoint challenges and possible directions for researchers and application developers. Based on this need, we conducted a systematic review of cloud computing in eHealth. We searched the ACM Digital Library, IEEE Xplore, Inspec, ISI Web of Science, and Springer, as well as relevant open-access journals, for relevant articles. A total of 237 studies were initially found, of which 44 papers met the inclusion criteria. The studies identified three types of studied areas of cloud computing in eHealth: (1) cloud-based eHealth framework design (n=13); (2) applications of cloud computing (n=17); and (3) security or privacy control mechanisms for healthcare data in the cloud (n=14). Most of the studies in the review concerned designs and proofs of concept. Only very few studies have evaluated their research in the real world, which may indicate that the application of cloud computing in eHealth is still very immature. However, our review pinpoints that a hybrid cloud platform with mixed access control and security protection mechanisms will be a main research area for developing citizen-centred home-based healthcare applications.
Novel holistic architecture for analytical operation on sensory data relayed...IJECEIAES
With the increasing adoption of sensor-based applications, there is an exponential rise in sensory data that eventually takes the shape of big data. However, the practicality of executing high-end analytical operations over resource-constrained big data has never been studied closely. After reviewing existing approaches, we find that there is no cost-effective scheme of big data analytics over large-scale sensory data processing that can be used directly as a service. Therefore, the proposed system introduces a holistic architecture where, after knowledge extraction, streamed data can be offered in the form of services. Implemented in MATLAB, the proposed study uses a very simplistic approach, considering the energy constraints of the sensor nodes, and finds that the proposed system offers better accuracy, reduced mining duration (i.e., faster response time), and reduced memory dependencies, proving that it offers a cost-effective analytical solution in contrast to existing systems.
A Novel Frame Work System Used In Mobile with Cloud Based Environmentpaperpublications3
Abstract: In the recent era, efforts have been made in the field of social-based Question and Answer (Q&A) systems, which are used to search for answers to non-factual questions. Traditional search engines like Google and Bing answer only factual questions, for which a direct answer can be retrieved from database servers. A web search engine for a Q&A system does not depend on broadcasting methods or a centralized server for identifying friends on the social network. The problem is addressed by using a mobile Q&A system in which mobile nodes help with accessing the internet; these techniques generate low node overload but higher server bandwidth cost and a high cost of mobile internet access. Recently, technical experts proposed a new method called the Distributed Social-Based Mobile Q&A System (SOS), which provides much faster and quicker responses to the asker. SOS enables mobile users to forward questions in a decentralized manner in order to get effective, capable, and potential answers from other users. SOS is a lightweight knowledge engineering technique used to find the right person who is ready and willing to answer questions; this type of search reduces the searching time and computational cost of the mobile nodes. In this paper we propose a new method, a mobile Q&A system in a cloud-based environment, through which data can be transmitted from the cloud server to the centralized server at any time.
Story of Bigdata and its Applications in Financial Institutionsijtsrd
The importance of BigData is indeed nothing new, but being able to manage data efficiently is just now becoming more attainable. Although data management has evolved considerably since the 1800s, advancements made in recent years have made the process even more efficient. Data mining techniques are much used in the banking industry, helping banks compete in the market and provide the right product to the right customer. Collecting and combining different sources of data into a single, significantly volumetric golden source of truth can be achieved by applying the right combination of tools. In this paper the authors introduce BigData technologies in brief, along with their applications. Phani Bhooshan | Dr. C. Umashankar "Story of Bigdata and its Applications in Financial Institutions" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6, October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29145.pdf Paper URL: https://www.ijtsrd.com/computer-science/database/29145/story-of-bigdata-and-its-applications-in-financial-institutions/phani-bhooshan
Square transposition: an approach to the transposition process in block cipherjournalBEEI
The transposition process is needed in cryptography to create a diffusion effect in the data encryption standard (DES) and advanced encryption standard (AES) algorithms, the standard information security algorithms of the National Institute of Standards and Technology. The problem with the DES and AES algorithms is that their transposition index values form patterns rather than random values. This condition makes it easier for a cryptanalyst to look for relationships between ciphertexts, because some processes are predictable. This research designs a transposition algorithm called square transposition. Each process uses an 8 × 8 square as a place to insert and retrieve 64 bits. The pairing of an insertion scheme and a retrieval scheme with unequal flows is an important factor in producing a good transposition. The square transposition can generate random, non-patterned indices, so that transposition can be done better than in DES and AES.
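A minimal sketch of the square-transposition idea: insert 64 bits into an 8 × 8 square with one scheme (row by row) and retrieve them with a differently flowing scheme, here a clockwise spiral. The spiral is an illustrative pairing, not necessarily the paper's exact schemes.

```python
def spiral_indices(n=8):
    top, bottom, left, right, order = 0, n - 1, 0, n - 1, []
    while top <= bottom and left <= right:
        order += [(top, c) for c in range(left, right + 1)]
        order += [(r, right) for r in range(top + 1, bottom + 1)]
        if top < bottom:
            order += [(bottom, c) for c in range(right - 1, left - 1, -1)]
        if left < right:
            order += [(r, left) for r in range(bottom - 1, top, -1)]
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return order

def square_transpose(bits64):
    square = [[bits64[8 * r + c] for c in range(8)] for r in range(8)]
    return [square[r][c] for r, c in spiral_indices()]

bits = [i % 2 for i in range(64)]
assert sorted(square_transpose(bits)) == sorted(bits)  # a true permutation
print(square_transpose(bits)[:16])
```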
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
Deep neural networks have accomplished enormous progress in tackling many problems. More specifically, the convolutional neural network (CNN) is a category of deep networks that has been a dominant technique in computer vision tasks. Although these deep neural networks are highly effective, the ideal structure is still an issue that needs much investigation. A deep CNN model is usually designed manually by trials and repeated tests, which enormously constrains its application. Many hyper-parameters of the CNN can affect model performance: the depth of the network, the number of convolutional layers, and the number of kernels and their sizes. Therefore, it can be a huge challenge to design an appropriate CNN model that uses optimized hyper-parameters and reduces the reliance on manual involvement and domain expertise. In this paper, a design architecture method for CNNs is proposed that utilizes the particle swarm optimization (PSO) algorithm to learn optimal CNN hyper-parameter values. In the experiments, we used the Modified National Institute of Standards and Technology (MNIST) database of handwritten digit recognition. The experiments showed that our proposed approach can find an architecture that is competitive with state-of-the-art models, with a testing error of 0.87%.
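A minimal sketch of PSO over two CNN hyper-parameters (number of filters, kernel size). The fitness below is a made-up surrogate; in the paper's setting it would be the validation error of a CNN trained on MNIST with those hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
lo, hi = np.array([8., 2.]), np.array([128., 7.])    # search bounds

def fitness(p):                    # hypothetical surrogate for val. error
    filters, kernel = p
    return (filters - 64) ** 2 / 4096 + (kernel - 3) ** 2 / 16

n, iters, w, c1, c2 = 12, 30, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.apply_along_axis(fitness, 1, pos)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.apply_along_axis(fitness, 1, pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (filters, kernel):", gbest.round(2))
```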
Supervised machine learning based liver disease prediction approach with LASS...journalBEEI
In this contemporary era, the use of machine learning techniques is increasing rapidly in the field of medical science for detecting various diseases such as liver disease (LD). Around the globe, a large number of people die because of this deadly disease. By diagnosing the disease at a primary stage, early treatment can help cure the patient. In this research paper, a method is proposed to diagnose LD using supervised machine learning classification algorithms, namely logistic regression, decision tree, random forest, AdaBoost, KNN, linear discriminant analysis, gradient boosting, and support vector machine (SVM). We also deployed a least absolute shrinkage and selection operator (LASSO) feature selection technique on the dataset to suggest the most highly correlated attributes of LD. The predictions made by the algorithms, with 10-fold cross-validation (CV), are tested in terms of accuracy, sensitivity, precision, and f1-score to forecast the disease. The decision tree algorithm has the best performance, with accuracy, precision, sensitivity, and f1-score values of 94.295%, 92%, 99%, and 96% respectively with the inclusion of LASSO. Furthermore, a comparison with recent studies is shown to prove the significance of the proposed system.
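A minimal sketch of the pipeline shape described: LASSO-based feature selection feeding a decision tree, scored with 10-fold cross-validation. Synthetic data and the alpha value stand in for the liver-disease dataset and the paper's tuning.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)

pipe = make_pipeline(
    SelectFromModel(Lasso(alpha=0.01)),    # keep attributes LASSO retains
    DecisionTreeClassifier(max_depth=5, random_state=0))

scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```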
A secure and energy saving protocol for wireless sensor networksjournalBEEI
Research on wireless sensor networks (WSN) has been conducted extensively, thanks to innovative technologies and research directions addressing the usability of WSN under various schemes. This domain permits dependable tracking of a diversity of environments for both military and civil applications. The key management mechanism is a primary protocol for keeping the privacy and confidentiality of the data transmitted among different sensor nodes in WSNs. Since the nodes are small, they are intrinsically limited by inadequate resources such as battery lifetime and memory capacity. The proposed secure and energy saving protocol (SESP) for wireless sensor networks has a significant impact on the overall network lifetime and energy dissipation. To encrypt sent messages, SESP uses the concept of public-key cryptography: it depends on the sensor nodes' identities (IDs) to prevent messages from being repeated, allowing the security goals of authentication, confidentiality, integrity, availability, and freshness to be achieved. Finally, simulation results show that the proposed approach produces better energy consumption and network lifetime: sensors die after 900 rounds in the proposed SESP protocol, while in the low-energy adaptive clustering hierarchy (LEACH) scheme, sensors die after 750 rounds.
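A minimal sketch of the ID-plus-freshness idea the abstract describes: each message carries the node ID and an increasing counter, authenticated with a MAC, so a repeated (replayed) message is rejected. Illustrative only; SESP itself uses public-key cryptography, and the shared-key HMAC here is a simplification.

```python
import hmac, hashlib

KEY = b"shared-network-key"            # hypothetical network key
last_seen = {}                         # node_id -> highest counter accepted

def seal(node_id: str, counter: int, payload: bytes):
    msg = f"{node_id}|{counter}|".encode() + payload
    return msg, hmac.new(KEY, msg, hashlib.sha256).digest()

def accept(msg: bytes, tag: bytes) -> bool:
    good = hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest())
    if not good:
        return False                               # authentication failed
    node_id, counter, _ = msg.split(b"|", 2)
    fresh = int(counter) > last_seen.get(node_id, -1)
    if fresh:
        last_seen[node_id] = int(counter)
    return fresh                                   # replays are not fresh

m, t = seal("node-17", 1, b"temp=21.5")
print(accept(m, t))   # True  (fresh)
print(accept(m, t))   # False (repeated message rejected)
```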
Plant leaf identification system using convolutional neural networkjournalBEEI
This paper proposes a leaf identification system using a convolutional neural network (CNN). The proposed system can identify five types of local Malaysian leaves: acacia, papaya, cherry, mango, and rambutan. Using a CNN from deep learning, the network is trained for image classification on a database of leaf images captured by mobile phone. ResNet-50 was the architecture used for neural network image classification and for training the network for leaf identification. Recognizing leaf photographs requires several steps, starting with image pre-processing, feature extraction, plant identification, matching and testing, and finally extracting the results, all achieved in MATLAB. The testing sets for the system consist of three types of images: white background, noise added, and random background. Finally, an interface for the leaf identification system was developed as the end software product using MATLAB App Designer. As a result, the accuracy achieved for each training set on the five leaf classes was above 98%, so the recognition process was successfully implemented.
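A minimal sketch of the approach in Python/Keras (the paper used MATLAB): a ResNet-50 backbone with a new 5-way head for the five leaf classes. Data loading is elided; `train_ds` would be a prepared dataset of (image, label) pairs.

```python
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                     # transfer learning: freeze backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # acacia .. rambutan
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, epochs=10)   # with a prepared leaf-image dataset
```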
Customized moodle-based learning management system for socially disadvantaged...journalBEEI
This study aims to develop a Moodle-based LMS with customized learning content and a modified user interface to facilitate pedagogical processes during the covid-19 pandemic, and to investigate how teachers of socially disadvantaged schools perceived its usability and technology acceptance. A co-design process was conducted with two activities: 1) a need assessment phase using an online survey and interview sessions with the teachers, and 2) the development phase of the LMS. The system was evaluated by 30 teachers from socially disadvantaged schools for relevance to their distance learning activities. We employed the computer software usability questionnaire (CSUQ) to measure perceived usability, and the technology acceptance model (TAM) with its 3 original variables (i.e., perceived usefulness, perceived ease of use, and intention to use) and 5 external variables (i.e., attitude toward the system, perceived interaction, self-efficacy, user interface design, and course design). The average CSUQ rating exceeded 5.0 on a 7-point scale, indicating that teachers agreed that the information quality, interaction quality, and user interface quality were clear and easy to understand. The TAM results concluded that the LMS design was judged to be usable, interactive, and well developed. Teachers reported an effective user interface that allows effective teaching operations, leading to adoption of the system in the immediate term.
Understanding the role of individual learner in adaptive and personalized e-l...journalBEEI
The dynamic learning environment has emerged as a powerful platform in modern e-learning systems. Constantly changing learning situations have forced learning platforms to adapt and personalize their learning resources for students. Evidence suggests that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the APLS literature, questions have been raised about the role of the individual characteristics that are relevant for adaptation. With several options, a new problem arises: the attributes of students in APLS often overlap and are not related across studies. Therefore, this study proposes a list of learner model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from literature selected according to the best criteria. We describe the important concepts in student modeling and provide definitions and examples of the data values that researchers have used. We also discuss the implementation of the selected learner model in providing adaptation in dynamic learning.
Prototype mobile contactless transaction system in traditional markets to sup...journalBEEI
One way to prevent and reduce the spread of the covid-19 pandemic is through a physical distancing program. This research aims to develop a prototype contactless transaction system using digital payment mechanisms and QR code technology to be applied in traditional markets. The method used in the development of the electronic market system is a prototyping approach. QR codes and digital payments are used as a solution to minimize the money-exchange contact that is common in traditional markets. The results show that the system built was able to accelerate and facilitate the buying and selling transaction process in a traditional market environment. Alpha testing shows that all system functions run well, while beta testing shows that users can very well accept the system. The results also show acceptance of the usefulness of the system and the optimism of its users about taking advantage of it both technologically and functionally, so it can be part of the digital transformation of the traditional market to an electronic market and one of the solutions for reducing the spread of the current covid-19 pandemic.
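A minimal sketch of the contactless handoff: the seller encodes a payment request as a QR code that the buyer scans with a wallet app. The payload layout is hypothetical; `qrcode` is a common Python library for generating the image.

```python
import json
import qrcode

payment_request = {"merchant_id": "PSR-0042", "stall": "B12",
                   "amount": 45000, "currency": "IDR"}   # invented fields
img = qrcode.make(json.dumps(payment_request))
img.save("stall_b12_payment.png")   # printed or shown on the stall's phone
```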
Wireless HART stack using multiprocessor technique with laxity algorithmjournalBEEI
The use of a real-time operating system (RTOS) is required for the demarcation of industrial wireless sensor network (IWSN) stacks. In the industrial world, a vast number of sensors are utilised to gather various types of data, and the data gathered by the sensors cannot be prioritised ahead of time because all of the information is equally essential. As a result, a protocol stack is employed to guarantee that data is acquired and processed fairly. In IWSN, the protocol stack is implemented using an RTOS. The data collected from IWSN sensor nodes is processed using non-preemptive scheduling and the protocol stack, and then sent in parallel to the IWSN's central controller. The RTOS mediates between hardware and software. Packets must be sent at a certain time, and some packets may collide during transmission; this project is undertaken to get around such collisions. As a prototype, the project is divided into two parts: the first uses an RTOS with the LPC2148 as a master node, while the second serves as a standard data collection node to which sensors are attached. Any controller may be used in the second part, depending on the situation. WirelessHART allows two nodes to communicate with each other.
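A minimal sketch of the laxity rule named in the title: laxity = deadline - now - remaining work, and the packet/task with the least laxity is dispatched first, since it has the least slack before missing its deadline. The task values are invented.

```python
def laxity(task, now):
    return task["deadline"] - now - task["remaining"]

tasks = [{"name": "temp_pkt",  "deadline": 50, "remaining": 10},
         {"name": "vib_pkt",   "deadline": 30, "remaining": 15},
         {"name": "level_pkt", "deadline": 40, "remaining": 5}]

now = 0
next_task = min(tasks, key=lambda t: laxity(t, now))
for t in tasks:
    print(f"{t['name']:10s} laxity={laxity(t, now)}")
print("dispatch:", next_task["name"])   # vib_pkt: laxity 15 is the smallest
```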
Implementation of double-layer loaded on octagon microstrip yagi antennajournalBEEI
A double-layer loaded on the octagon microstrip yagi antenna (OMYA) at the 5.8 GHz industrial, scientific and medical (ISM) band is investigated in this paper. The double layer consists of two double positive (DPS) substrates. The OMYA overlaid with the double-layer configuration was simulated, fabricated, and measured. Good agreement was observed between the computed and measured gain of this antenna. The comparison results show that a 2.5 dB improvement in the OMYA gain can be obtained by applying the double layer on top of the OMYA. Meanwhile, the bandwidth of the measured OMYA with the double layer is 14.6%. This indicates that the double layer can be used to increase the OMYA performance in terms of gain and bandwidth.
The calculation of the field of an antenna located near the human headjournalBEEI
In this work, a numerical calculation was carried out in one of the universal programs for automatic electrodynamic design. The calculation is aimed at obtaining numerical values of the specific absorption rate (SAR). It is the SAR value that can be used to determine the effect of a wireless device's antenna on biological objects; the dipole parameters were selected for GSM1800. Investigation of the influence of the distance to a cell phone on the radiation absorbed in a person's head shows that the effect of electromagnetic radiation on the brain decreases by a factor of three. This is a very important result: the SAR value decreased by almost three times, which is an acceptable result.
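For reference, the standard definition of local SAR (a well-known formula from dosimetry, not taken from the paper itself):

```latex
% Local specific absorption rate at a point in tissue:
%   sigma : tissue conductivity (S/m)
%   E     : RMS electric field strength induced in the tissue (V/m)
%   rho   : tissue mass density (kg/m^3)
\mathrm{SAR} = \frac{\sigma \, |E|^{2}}{\rho} \quad [\mathrm{W/kg}]
```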
Exact secure outage probability performance of uplinkdownlink multiple access...journalBEEI
In this paper, we study uplink-downlink non-orthogonal multiple access (NOMA) systems by considering the secure performance at the physical layer. In the considered system model, the base station acts as a relay to allow two users at the left side to communicate with two users at the right side. Considering imperfect channel state information (CSI), the secure performance needs to be studied since an eavesdropper wants to overhear signals processed at the downlink. To provide a secure performance metric, we derive exact expressions for the secrecy outage probability (SOP) and evaluate the impacts of the main parameters on the SOP metric. The important finding is that higher secrecy performance can be achieved at high signal-to-noise ratio (SNR). Moreover, the numerical results demonstrate that the SOP tends to a constant at high SNR. Finally, our results show that the power allocation factors and target rates are the main factors affecting the secrecy performance of the considered uplink-downlink NOMA systems.
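The paper derives exact closed-form SOP expressions; purely as an illustrative cross-check, the following Monte Carlo sketch estimates secrecy outage over Rayleigh fading, with all parameters assumed.

# Monte Carlo sketch of secrecy outage probability (SOP) over Rayleigh
# fading. All parameters are illustrative assumptions; the paper derives
# exact closed-form expressions instead.
import numpy as np

rng = np.random.default_rng(0)

def sop(snr_user_db=20, snr_eve_db=5, rate=1.0, trials=100_000):
    g_u = rng.exponential(1.0, trials)          # legitimate channel gain
    g_e = rng.exponential(1.0, trials)          # eavesdropper channel gain
    snr_u = 10**(snr_user_db/10) * g_u
    snr_e = 10**(snr_eve_db/10) * g_e
    c_s = np.maximum(np.log2(1+snr_u) - np.log2(1+snr_e), 0.0)  # secrecy capacity
    return np.mean(c_s < rate)                  # fraction of secrecy outages

print(sop())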
Design of a dual-band antenna for energy harvesting applicationjournalBEEI
This report presents an investigation of how to improve a current dual-band antenna to obtain better antenna parameters for energy harvesting applications, and how to develop a new design and validate antenna frequencies operating at 2.4 GHz and 5.4 GHz. At 5.4 GHz, more data can be transmitted compared with 2.4 GHz; however, the 2.4 GHz band radiates over a longer distance, so it can be used farther from the antenna module, whereas the 5 GHz band has a shorter radiation range. The development of this project includes the design and testing of the antenna using computer simulation technology (CST) 2018 software and vector network analyzer (VNA) equipment. In the design process, the fundamental parameters of the antenna are measured and validated in order to identify the better antenna performance.
Transforming data-centric eXtensible markup language into relational database...journalBEEI
eXtensible markup language (XML) has emerged internationally as the format for data representation over the web. Yet, most organizations still utilise relational databases as their database solutions. As such, it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge these two technologies based on node-based and path-based approaches. The node-based approach is suited to annotating each positional node uniquely, while the path-based approach provides summarised path information to join the nodes. On top of that, a new range labelling is also proposed to annotate nodes uniquely while ensuring that the structural relationships between nodes are maintained. If a new node is added to the document, re-labelling is not required, as a new label will be assigned to the node via the proposed labelling scheme. Experimental evaluations indicated that the performance of XML-REG exceeded XMap, XRecursive, XAncestor and Mini-XML concerning storing time, query retrieval time and scalability. This research produces a core framework for XML to relational databases (RDB) mapping, which could be adopted in various industries.
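The XML-REG labelling scheme itself is the paper's contribution; as a generic illustration of the underlying node-plus-path idea, the sketch below shreds a small XML document into relational rows with a simple preorder interval labelling (not the paper's scheme).

# Minimal sketch of shredding an XML document into relational rows with
# a simple preorder/interval labelling. This is NOT the paper's XML-REG
# scheme, only the general node-plus-path idea it builds on.
import sqlite3
import xml.etree.ElementTree as ET

doc = "<store><book><title>XML</title></book></store>"

rows, counter = [], [0]

def walk(elem, path):
    counter[0] += 1
    start = counter[0]                      # preorder id of this node
    my_path = f"{path}/{elem.tag}"
    for child in elem:
        walk(child, my_path)
    # (start, stop) spans all descendant ids, enabling ancestor tests.
    rows.append((start, counter[0], elem.tag, my_path, (elem.text or "").strip()))

walk(ET.fromstring(doc), "")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node(start INTEGER, stop INTEGER, tag TEXT, path TEXT, content TEXT)")
con.executemany("INSERT INTO node VALUES (?,?,?,?,?)", rows)
print(con.execute("SELECT path, content FROM node WHERE tag='title'").fetchall())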
Key performance requirement of future next wireless networks (6G)journalBEEI
Given the massive potential of 5G communication networks and their foreseeable evolution, what should there be in 6G that is not in 5G or its long-term evolution? 6G communication networks are expected to integrate terrestrial, aerial, and maritime communications into a robust network that would be faster and more reliable and could support a massive number of devices with ultra-low latency requirements. This article presents a complete overview of potential 6G communication networks. The major contribution of this study is a broad overview of the key performance indicators (KPIs) of 6G networks, covering the latest progress in the principal areas of research, their applications, and the associated challenges.
Noise resistance territorial intensity-based optical flow using inverse confi...journalBEEI
This paper presents the use of the inverse confidential technique on a bilateral function with territorial intensity-based optical flow to prove its effectiveness in a noisy environment. In general, an image's motion vector is coded by the technique called optical flow, where sequences of images are used to determine the motion vector; however, the accuracy of the motion vector is reduced when the source image sequences are corrupted by noise. This work proves that the inverse confidential technique on a bilateral function can increase the accuracy of motion vector determination by territorial intensity-based optical flow in a noisy environment. We performed tests with several kinds of non-Gaussian noise on several standard image sequences, analyzing the resulting motion vectors in the form of the error vector magnitude (EVM) and comparing them with several noise-resistant techniques in the territorial intensity-based optical flow method.
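EVM is the error measure the evaluation relies on; a minimal sketch of computing it for a dense flow field follows (array shapes assumed):

# Minimal sketch: error vector magnitude (EVM) between an estimated
# optical-flow field and ground truth. Shapes (H, W, 2) are assumed.
import numpy as np

def evm(flow_est: np.ndarray, flow_gt: np.ndarray) -> float:
    # Magnitude of the per-pixel error vector, averaged over the frame.
    err = flow_est - flow_gt
    return float(np.mean(np.sqrt(err[..., 0]**2 + err[..., 1]**2)))

rng = np.random.default_rng(1)
gt = rng.normal(size=(64, 64, 2))
noisy = gt + rng.normal(scale=0.1, size=gt.shape)
print(evm(noisy, gt))   # small value -> estimate close to ground truth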
Modeling climate phenomenon with software grids analysis and display system i...journalBEEI
This study aims to model climate change based on rainfall, air temperature, pressure, humidity, and wind with GrADS software and to create a global warming module. This research uses a define, design, and develop (3D) model. The modeling results for the five climate elements are as follows: the annual average temperature in Indonesia in 2009-2015 was between 29 °C and 30.1 °C; the horizontal distribution of the annual average pressure in Indonesia in 2009-2018 was between 800 mBar and 1000 mBar; and the horizontal distribution of the average annual humidity in Indonesia ranged between 27-57 in 2009 and 2011 and between 30-60 in 2012-2015, 2017, and 2018. During the east monsoon, the wind circulation moved from northern Indonesia to southern Indonesia; during the west monsoon, it moved from southern Indonesia to northern Indonesia. The global warming module produced for SMA/MA is feasible to use; this accords with the validator's score of 69, which is in the appropriate category, and with the 91% questionnaire response of teachers and students.
An approach of re-organizing input dataset to enhance the quality of emotion ...journalBEEI
The purpose of this paper is to propose an approach of re-organizing input data to recognize emotion based on short signal segments and to increase the quality of emotion recognition using physiological signals. MIT's long physiological signal set was divided into two new datasets with shorter, overlapped segments. Three different classification methods (support vector machine, random forest, and multilayer perceptron) were implemented to identify eight emotional states based on statistical features of each segment in these two datasets. By re-organizing the input dataset, the quality of the recognition results was enhanced. The random forest shows the best classification result among the three implemented methods, with an accuracy of 97.72% for eight emotional states on the overlapped dataset. This approach shows that, by re-organizing the input dataset, high recognition accuracy can be achieved without the use of EEG and ECG signals.
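A minimal sketch of the re-organization idea follows: a long signal is cut into shorter, overlapping segments, each summarized by simple statistics and classified with a random forest. Window length, overlap, features, and the synthetic signals are assumptions, not the MIT dataset.

# Minimal sketch: re-organizing a long signal into short, overlapped
# segments and classifying per-segment statistics. All parameters and
# the synthetic signals are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segments(signal, win=256, step=128):          # 50% overlap
    return np.array([signal[i:i+win] for i in range(0, len(signal)-win+1, step)])

def features(seg):
    return [seg.mean(), seg.std(), seg.min(), seg.max()]

rng = np.random.default_rng(0)
calm = rng.normal(0, 1.0, 4096)                   # stand-ins for two states
stress = rng.normal(0, 2.0, 4096)
X = np.array([features(s) for sig in (calm, stress) for s in segments(sig)])
y = np.array([0]*len(segments(calm)) + [1]*len(segments(stress)))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))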
Parking detection system using background subtraction and HSV color segmentationjournalBEEI
In a manual vehicle parking system, finding a vacant parking space is difficult because each space has to be checked directly; when many vehicles park, this takes considerable time or requires many people to handle it. This research develops a real-time parking system to detect vacant spaces. The system is designed using the HSV color segmentation method to determine the background image, while the detection process uses the background subtraction method. Applying these two methods requires image preprocessing using several methods such as grayscaling and blurring (low-pass filtering), followed by thresholding and filtering to obtain the best image for the detection process. In the process, a region of interest (ROI) is determined to focus on the area identified as an empty parking space. The parking detection process achieves a best average accuracy of 95.76%, with a minimum threshold of 0.4 relative to the 255-level pixel range. This value is the best value across 33 test data under several criteria, such as the time of capture, the composition and color of the vehicle, the shape of shadows in the object's environment, and the intensity of light. This parking detection system can be implemented in real time to determine the position of an empty space.
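A minimal OpenCV sketch of the pipeline the abstract describes is given below; paths, the ROI, and threshold values are illustrative assumptions.

# Minimal OpenCV sketch: HSV conversion of a reference background,
# blurring, differencing against the current frame, and thresholding
# over one parking-space ROI. Paths, ROI, and thresholds are assumptions.
import cv2
import numpy as np

background = cv2.imread("empty_lot.jpg")        # assumed reference image
frame = cv2.imread("current_frame.jpg")         # assumed live frame

x, y, w, h = 100, 50, 80, 160                   # assumed ROI of one space
bg_v = cv2.cvtColor(background, cv2.COLOR_BGR2HSV)[y:y+h, x:x+w, 2]
now_v = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[y:y+h, x:x+w, 2]

bg_v = cv2.GaussianBlur(bg_v, (5, 5), 0)        # low-pass filtering
now_v = cv2.GaussianBlur(now_v, (5, 5), 0)

diff = cv2.absdiff(now_v, bg_v)                 # background subtraction
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

ratio = np.count_nonzero(mask) / mask.size      # changed-pixel fraction
print("occupied" if ratio > 0.4 else "vacant")  # 0.4 mirrors the abstract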
Quality of service performances of video and voice transmission in universal ...journalBEEI
The universal mobile telecommunications system (UMTS) has distinct benefits in that it supports a wide range of quality of service (QoS) criteria that users require. The transmission of video and audio in real-time applications places a high demand on the cellular network, so QoS is a major problem in these applications. Providing QoS in the UMTS backbone network necessitates an active QoS mechanism to maintain the necessary level of convenience on UMTS networks. For UMTS networks, investigation models for end-to-end QoS, total transmitted and received data, packet loss, and throughput-provisioning techniques are run and assessed, and the simulation results are examined. According to the results, appropriate QoS adaptation allows for specific voice and video transmission. Finally, by analyzing the existing QoS parameters, the QoS performance of 4G/UMTS networks may be improved.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversarial training.
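The volumization algorithm is the paper's contribution and is not specified in the abstract. Purely as a hypothetical illustration of mapping a 2D image to a 3D volume suitable for 3D convolution, one can one-hot-slice pixel intensities into depth bins:

# Purely hypothetical illustration of "volumizing" a 2D image into a 3D
# tensor by one-hot slicing intensities into depth bins. This is NOT the
# paper's algorithm, which the abstract does not specify.
import numpy as np

def volumize(img_gray: np.ndarray, depth: int = 8) -> np.ndarray:
    """img_gray: (H, W) uint8 -> (depth, H, W) binary volume."""
    bins = (img_gray.astype(np.int32) * depth) // 256   # 0..depth-1 per pixel
    return (np.arange(depth)[:, None, None] == bins).astype(np.float32)

img = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
vol = volumize(img)
print(vol.shape)        # (8, 32, 32), ready for 3D convolution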
Student information management system project report ii.pdfKamal Acharya
Our project is about student management. It mainly covers the various actions related to student details, making it easy to add, edit, and delete student records, and it provides a less time-consuming process for viewing, adding, editing, and deleting the marks of the students.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it is hard to interpret those ingredient lists without a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The project includes various function programs to perform the tasks mentioned above, and data file handling has been used effectively in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration process of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the list of cosmetic products that have reached the minimum quantity, keep track of the expiry date of each cosmetic product, and find the rack number in which a product is placed. It is also a faster and more efficient way of working.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical water scarcity and economic water scarcity.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
An overview of the fundamental roles in hydropower generation and the components involved in wider electrical engineering.
This paper presents the design and construction of hydroelectric dams, covering all involved disciplines and aspects: the hydrologist's survey of the valley before construction, fluid dynamics, structural engineering, generation and mains frequency regulation, and the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Welcome to WIPAC Monthly, the magazine brought to you by the LinkedIn group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news and a celebration of the 13 years since the group was created, we have articles including:
A case study of the use of advanced process control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim around the adoption of digital transformation in the water industry.
Final project report on grocery store management system..pdfKamal Acharya
In today's fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, especially if your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website that retails various grocery products. The project allows users to view the various products available and enables registered users to purchase desired products instantly using the Paytm and UPI payment processors (Instant Pay), or to place an order using the Cash on Delivery (Pay Later) option. It provides easy access for administrators and managers to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, and JavaScript), and MySQL relational databases. The objective of this project is to develop a basic shopping-cart website and to learn about the technologies used to develop such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. The theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The calculation HTML code is included.
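A minimal sketch of the strip-footing form of the theory, q_u = c*Nc + q*Nq + 0.5*gamma*B*Ngamma, follows; the soil values and the bearing-capacity factors (tabulated for an assumed friction angle of 30 degrees) are textbook-style assumptions, not the article's worked example.

# Minimal sketch of Terzaghi's ultimate bearing capacity for a strip
# footing: q_u = c*Nc + q*Nq + 0.5*gamma*B*Ngamma. The values below are
# textbook-style assumptions, not the article's example.
def terzaghi_qu(c, q, gamma, B, Nc, Nq, Ngamma):
    """c: cohesion (kPa), q: surcharge gamma*Df (kPa), gamma: unit weight
    (kN/m^3), B: footing width (m); Nc, Nq, Ngamma: bearing-capacity factors."""
    return c*Nc + q*Nq + 0.5*gamma*B*Ngamma

# Assumed phi = 30 deg (approximate Terzaghi factors: Nc~37.2, Nq~22.5, Ngamma~19.7)
print(terzaghi_qu(c=10, q=18*1.0, gamma=18, B=1.5, Nc=37.2, Nq=22.5, Ngamma=19.7), "kPa")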
Internet Prospective Study
Bulletin of Electrical Engineering and Informatics
ISSN: 2302-9285
Vol. 6, No. 3, September 2017, pp. 235-240, DOI: 10.11591/eei.v6i3.653
Received July 12, 2017; Revised August 18, 2017; Accepted August 31, 2017
Internet Prospective Study
Jesús Alvarez-Cedillo¹*, Elizabeth Acosta-Gonzaga², Mario Aguilar-Fernández³, Patricia Pérez-Romero⁴
¹,²,³ Instituto Politécnico Nacional, Unidad Profesional Interdisciplinaria de Ingeniería y Ciencias Sociales y Administrativas, Av. Té 950 Col. Granjas México, CDMX, México
⁴ Instituto Politécnico Nacional, CIDETEC, Av. Juan de Dios Batiz, esquina con Miguel Othón de Mendizábal Edificio CIDETEC, 07738, CDMX, México
*Corresponding author, e-mail: jaalvarez@ipn.mx
Abstract
The Internet is currently the largest communication network worldwide and is where technological advances can be observed. The original creation of the Internet was based on the idea that this network would be formed mainly by multiple independent networks of arbitrary design. The Internet is the place where all countries communicate and disseminate information in real time, and this phenomenon directly affects economies, businesses, and society. This article examines the future of the Internet: our research carries out a qualitative prospective analysis of the projects and investigations on which the scientific community is currently working; the information is analyzed, and the highlighted topics are presented.
Keywords: internet foresight, FI-PPP, FIWARE, FI-CORE, IoT, smart cities, smart homes
1. Introduction
The Internet started with the project called ARPANET (Advanced Research Projects Agency Network), founded in the seventies by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. ARPANET was a pioneering packet-switched network that would soon include packet satellite networks, terrestrial radio networks and other networks [1-2].
In the 1970s, the traditional method of communication was circuit switching, i.e., networks were interconnected at the circuit level, whose distinctive property was to establish a physical path or circuit and to maintain it throughout the conversation between a point of origin and a point of destination. The path could carry one-way (simplex) or two-way (duplex) communication [3].
In contrast, packet-switched networks do not establish a physical path between the source point and the terminal; instead, the information is transmitted separately in packets [3]. The Transmission Control Protocol (TCP) and the Internet Protocol (IP) are the most widely used data communication protocols; they largely satisfy the need for global data communication. The success of these protocols is due to their characteristics, among which stand out that they are widely available open standards, developed independently of any hardware or operating system. These protocols can unify different hardware and software even when they do not communicate over the Internet [4]. The TCP protocol was ideal for remote session initiation and file transfer applications. However, for some advanced network applications, such as packet voice transmission, it clearly showed that in some cases it could not correct packet loss, and the application had to take care of it [2].
In the early years of the Internet, various applications were proposed, including packet-based voice communication, online chat rooms, social networks and early "worm" programs that showed the concept of agents; viruses were also created [2].
Currently, the Internet has more than 3.6 billion users worldwide [5] and is one of the great success stories; its infrastructure supports the economy and social aspects of the planet. Since it was designed in the 1970s, its limitations are becoming more and more evident, which is why it is necessary to evolve the Internet or to change it by initiating a new project. This work shows the future of the Internet, a not-too-distant future, with a real and not futuristic vision.
2. Current Internet
The initial goal of the creation of the Internet was academic: it was used to share specialized information and research. Today the Internet comprises a large number of services, integrating the most innovative technologies for data, video, and sound processing. Standard data transfer rates are measured in billions of bits per second, i.e., gigabits per second (Gb/s), and resolutions in millions of pixels, to meet the needs of the new entertainment, business, and telecommunications markets.
According to [6-7], the usual online activities worldwide are sending and receiving email, accessing information about goods and services online, reading news, newspapers, and online magazines, e-banking, e-government, and online shopping.
In Europe, in the year 2016, the statistical findings [8] mention that a vast majority of people use the Internet for everyday life, education, work and participation in society, as it allows them to access information and services at any time from anywhere.
Given that a significant amount of information is available on the Internet and can be accessed almost instantaneously, more and more users look for information online and lose interest in reading books the traditional way, which could even lead them to find traditional classroom education boring and monotonous. It is common to use digital media for teaching; this allows traditional school practices, such as visits to museums or libraries, to be modified [6].
3. Research Method
Qualitative research is defined as any research that generates results and discoveries in which statistical procedures and other means of quantification are not used [9].
Qualitative analysis refers to rational and non-mathematical reinterpretation for the purpose of discovering keywords or concepts and relationships in raw data and then organizing them into a theoretical framework [9]. These qualitative methods are used in particular substantive areas about which little or much is known, with the goal of obtaining new knowledge [10].
In [9] there are three main components in qualitative research:
a. The data, which may come from different sources, such as interviews, observations, documents, records, and films.
b. The procedures used to interpret and organize the data, namely: 1) conceptualizing and 2) reducing the data, 3) elaborating categories with regard to their properties and dimensions, and 4) relating the data using a series of propositional sentences; these four steps are known as coding.
c. Written and verbal reports, which can be presented as articles in scientific journals, in talks (for example at conferences), or as books.
According to the purpose of this study, the methodology used for data analysis is the grounded theory proposed by Glaser and Strauss [11] and described in [12], as seen in Figure 1.
Figure 1. Grounded theory proposed by Glaser and Strauss [11]
In [10], grounded theory is a research method in which theory arises from systematically collected data. It does not begin with a preconceived theory but with the data, so that the resulting theory resembles reality. Since the purpose of the authors [10] was to create new ways of understanding reality and to express them theoretically, these methods would help build theories. Given the above, grounded theory is the appropriate method for this study.
We analyzed 2000 internationally refereed scientific articles on the subject of the Internet of the future. To perform the data analysis and coding, we used the qualitative analysis software NVivo 10 [13]. After performing the data analysis, results were obtained in which the most relevant studies in the subject under investigation were identified, in order to detect trends.
To elaborate the data analysis, NVivo 10 stores the information in nodes, which are structured in hierarchies or trees, creating topologies. According to the methodology used, we seek to find the elements that form the keywords or properties and, with these, create the categories.
Also, NVivo 10 uses the constant comparison technique; that is, as coding proceeds, the information found in a text is continuously compared with other coded documents. The categories and properties that emerge from the analysis are combined with the fundamental concepts being sought, that is, with the theory that is emerging. From the main keywords, it is possible to look for more data to strengthen the original theory.
NVivo 10 also shows when theoretical saturation has been reached, that is, the saturation of the elements and categories being analyzed. This allows the search to be focused on the saturated parts and on documents that have not yet reached that level. According to the data introduced for this study, the frequency of the words is shown in Figure 2.
As can be seen in the frequency analysis, the words represented in the cloud show the category of primary technologies that has the most impact at this moment and that, therefore, will also have effects in the future.
Figure 2. Cloud of words generated with NVivo 10
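NVivo 10 is proprietary; as an analogous open sketch of the frequency analysis behind Figure 2, one can count keyword frequencies across a corpus of abstracts (the corpus below is a stand-in, not the study's 2000 articles):

# Analogous open sketch of the word-frequency analysis behind Figure 2
# (NVivo 10 itself is proprietary). The corpus here is a stand-in.
import re
from collections import Counter

corpus = [
    "FIWARE generic enablers for smart cities and IoT services",
    "smart homes and IoT sensor networks on the future internet",
]
stopwords = {"for", "and", "the", "on", "of"}

words = (w for doc in corpus for w in re.findall(r"[a-z]+", doc.lower()))
freq = Counter(w for w in words if w not in stopwords)
print(freq.most_common(5))   # the top terms would feed a word cloud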
4. Results and Analysis
According to the results obtained and represented in the word cloud of Figure 3, it is possible to use the data to estimate the future-oriented research topics of the Internet, as illustrated in Figure 4. The FI-PPP was proposed by the European Commission on May 3, 2011. This research program includes several projects to place Europe at the forefront of the Internet of the future worldwide.
Figure 3. Technologies most relevant to the future of the internet
The FI-PPP program focuses on key areas such as energy, health, and media content. This community develops standard technological components of the Internet of the future (called Generic Enablers) that are reusable, open, and interoperable with evolving infrastructures and technologies [14].
Figure 4. Central projects and technological trends on the internet of the future
The key FI-PPP research project is FI-WARE, whose aim is to promote the overall competitiveness of the EU economy by establishing an innovative infrastructure. This project will include key deliverables such as an open architecture and a reference implementation of a new service infrastructure, based on generic and reusable building blocks. The design will support emerging Future Internet (FI) services [15]. FI-WARE is an open middleware platform developed between 2011 and 2014 for the implementation of new applications and new services related to the Internet of Things, sensor networks, open data and intelligent cities [16].
The results show that the FI-Core project represents 75% of current research on the future of the Internet. The objective of FI-Core is to continue the research of the FI-WARE project, which ended in 2016. The percentage of development of these technologies is shown in Figure 5.
This project will address economic, social and environmental demands with the participation of specific projects such as:
a. Smart utility: FINESCE (Future Internet Smart Utility Services) [17].
b. Future Internet business collaboration networks in agri-food, transport, and logistics.
c. Research of the Social and Technological Alignment on the Internet of the Future (FI-STAR): this project seeks to satisfy the requirements of the global health industry by taking advantage of FI-PPP results.
d. Internet technologies for manufacturing industries.
e. Development of Internet media for large-scale experimentation.
Figure 5. FI-PPP Trends
The race toward intelligent cities has begun. Smart cities will make extensive use of ICTs, based on their infrastructure and the creation of services. These services are more interactive and efficient, based on the cooperation and awareness of Internet citizens. Smart cities are also spaces committed to their environment and ecology, with respect for the environment, history, and culture. This interconnected digital system will become a digital platform to optimize and maximize the economy and well-being in society, and society will be constituted with sustainable development. Figure 6 gives information about the investigation of FI-PPP program trends.
Figure 6. Investigation of FI-PPP program trends
5. Conclusion
Current research shows an uncertain future for geospatial data and processing; instead, research focuses on smart cities. The FI-WARE platform will accelerate the supply of raw materials and the development of technology in small and medium-sized organizations.
It also contributes to the growth and expansion of enterprises through ICT. The real value and success of the FI-WARE implementation depend on its ability to provide unique functional and intuitive enablers that connect to specific usage-area enablers.
Frontier research topics focus on the various phases of FI-PPP, the creation of sustainable IT-based ecosystems, and the consolidation of business and value chains.
Acknowledgments
Our research team thanks the Instituto Politécnico Nacional and the Secretaría de Investigación y Posgrado for the support received for the development of this investigation, CDMX, Mexico.
References
[1] Lievrouw LA. Handbook of New Media: Student Edition. SAGE; 2006. p. 253. 475 pages. ISBN 1412918731.
[2] Hafner K. Where Wizards Stay Up Late: The Origins of the Internet. Simon & Schuster; 1988. ISBN 0684832674.
[3] Herrera PE. Introducción a las Telecomunicaciones Modernas. Serie Electrotecnia, Colección de Textos Politécnicos. Noriega Limusa; 1998. ISBN 968185506X.
[4] Chapman DB, Zwicky ED. Construya Firewalls para Internet. McGraw-Hill; 1997. ISBN 1565921240.
[5] Internet Live Stats. 2017. Retrieved from: http://www.internetlivestats.com/
[6] Méndez M. Primera Revista Electrónica en América Latina Especializada en Comunicación. 2005. Retrieved from: http://www.razonypalabra.org.mx/anteriores/n43/mmendez.html
[7] OECD.org. 2017. Retrieved from: http://www.oecd.org
[8] European Commission, Eurostat. 2017. Retrieved from: http://ec.europa.eu/eurostat/statistics-explained/index.php/Main_Page
[9] Strauss A, Corbin J. Bases de la Investigación Cualitativa: Técnicas y Procedimientos para Desarrollar la Teoría Fundamentada. SAGE Publications Inc., Spanish edition; Editorial Universidad de Antioquia, Colombia; 2002. ISBN 958-655-623-9.
[10] Stern PN. Grounded Theory Methodology: Its Uses and Processes. 1980. DOI: 10.1111/j.1547-5069.1980.tb01455.x
[11] Glaser B, Strauss A. The Discovery of Grounded Theory. Chicago: Aldine Press; 1967.
[12] Carrero V. Análisis Cualitativo de Datos: Aplicación de la Teoría Fundamentada (Grounded Theory). 1998.
[13] Edhlund B, McDougall A. NVivo 10 Essentials: Your Guide to the World's Most Powerful Qualitative Data Analysis Software. Stallarholmen: Form & Kunskap AB; 2013.
[14] RWTH Aachen University. 2017. Retrieved from: http://www.acs.eonerc.rwth-aachen.de
[15] FIWARE. 2017. Retrieved from: https://www.fi-ware.eu
[16] RedIRIS. Red Académica y de Investigación Nacional, España. 2017. Retrieved from: https://www.rediris.es/proyectos/Fi-core/