In a big data environment, the distribution of keys, their management, and their transfer between server users over a public channel are among the most critical data security issues; indeed, key management can matter even more than the strength of the encryption algorithm itself. This paper therefore proposes a new authenticated key management scheme (AKMS) that operates at two levels of security. First, it governs how the user communicates with the server, preventing any attempt to impersonate senders or receivers. Second, it encrypts the transmitted data so that it is unreadable by anyone except the intended receiver; the server thus serves only as a passageway between sender and receiver. The paper analyzes and evaluates the scheme with respect to key security, data security, public-channel transmission, and security isolation, demonstrating the value the AKMS scheme provides. AKMS also achieves very satisfactory results for computation cost, communication cost, and storage overhead, showing that the scheme is appropriate, secure, and practical for protecting users' private data in big data environments.
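The abstract's second security level, where the server acts only as a passageway for ciphertext, can be illustrated with a minimal sketch. This is not the actual AKMS construction: the toy XOR keystream cipher and all names below are purely illustrative, assuming the sender and receiver have already established a shared key.

```python
import hashlib
import secrets

def keystream_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by SHA-256(key || nonce || counter).
    Illustrative only -- not the AKMS construction from the paper."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    # XOR truncates the keystream to the message length.
    return bytes(p ^ k for p, k in zip(data, out))

# Sender and receiver share a key established out of band; the relay
# server only ever sees (nonce, ciphertext) and cannot read the data.
shared_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
ciphertext = keystream_encrypt(shared_key, nonce, b"private big-data record")

# Decryption is the same XOR operation with the same keystream.
plaintext = keystream_encrypt(shared_key, nonce, ciphertext)
assert plaintext == b"private big-data record"
```

The point of the sketch is the trust boundary: everything the server relays is opaque, so compromising the server alone reveals nothing about the payload.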
CLOUD BASED ACCESS CONTROL MODEL FOR SELECTIVE ENCRYPTION OF DOCUMENTS WITH T... (IJNSA Journal)
Cloud computing refers to a type of networked computing in which an application runs on connected remote servers instead of local servers. The cloud can be used to store data, share resources, and provide services. Technically, there is little difference between public and private cloud architectures; however, the security and privacy of data become a major issue when sensitive data is entrusted to third-party cloud service providers. Encryption with fine-grained access control is therefore essential to enforce security in clouds. Several techniques implementing attribute-based encryption for fine-grained access control have been proposed, but under such approaches the key management overhead is high in terms of computational complexity, the secret-sharing mechanisms add further complexity, and they lack mechanisms to handle traitors. The proposed approach addresses these requirements and reduces the overhead of key management and secret sharing by using efficient algorithms and protocols. A traitor tracing technique is also introduced into the cloud computing two-layer encryption environment.
Fragmentation of Data in Large-Scale System For Ideal Performance and Security (Editor IJCATR)
Cloud computing is becoming a prominent trend that offers a number of significant advantages. One of its foundational advantages is pay-per-use, where the customer pays according to the services consumed. At present, growing data generation requires outsourcing large amounts of data, and there is an indefinitely large number of Cloud Service Providers (CSPs). Relying on CSPs is an increasing trend for many organizations and customers, as it reduces the burden of maintenance and local data storage. In cloud computing, however, transferring data to third-party administrative control gives rise to security concerns: within the cloud, data may be compromised through attacks by unauthorized users and nodes. Protecting data in the cloud therefore requires stronger security measures, while also optimizing data retrieval time. The proposed system addresses both security and performance. In the DROPS methodology, files are first divided into fragments, and the fragments are replicated over the cloud nodes. Each node stores only a single fragment of a particular file, ensuring that no meaningful information is revealed to an attacker even after a successful attack. The nodes are separated using T-coloring, preventing an attacker from guessing the locations of the fragments. The DROPS methodology thereby ensures complete data security.
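The fragment-and-separate idea can be sketched as follows. The greedy placement rule below is a simplified stand-in for the T-coloring separation described in the abstract, and all node and file names are hypothetical.

```python
def fragment(data: bytes, n_fragments: int) -> list:
    """Split data into roughly equal-sized fragments."""
    size = -(-len(data) // n_fragments)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def place_fragments(fragments, nodes, min_distance):
    """Assign at most one fragment per node, keeping chosen nodes at
    least `min_distance` apart in the node ordering -- a simplified
    stand-in for the T-coloring separation used by DROPS."""
    placement, last = {}, -min_distance
    for frag_id in range(len(fragments)):
        candidates = list(range(last + min_distance, len(nodes)))
        if not candidates:
            raise ValueError("not enough nodes for this separation")
        chosen = candidates[0]           # greedy: first admissible node
        placement[frag_id] = nodes[chosen]
        last = chosen
    return placement

frags = fragment(b"confidential-dataset-contents", 4)
nodes = [f"node-{i}" for i in range(12)]
plan = place_fragments(frags, nodes, min_distance=3)

# Each fragment lands on a distinct node; no node holds more than one,
# so a single compromised node yields no meaningful information.
assert len(set(plan.values())) == len(frags)
```

Replication (storing each fragment on several well-separated nodes) would repeat the same placement rule per replica; it is omitted here to keep the sketch short.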
Accessing secured data in cloud computing environment (IJNSA Journal)
The number of businesses using cloud computing has increased dramatically over the last few years due to attractive features such as scalability, flexibility, fast start-up, and low cost. Services provided over the web range from using the provider's software and hardware to managing security and other issues. Among the biggest challenges are providing privacy and data security to subscribers of public cloud servers. The efficient encryption technique presented in this paper can be used for secure access to and storage of data on a public cloud server, and for moving and searching encrypted data through communication channels while protecting data confidentiality. The method ensures data protection against both external and internal intruders. Data can be decrypted only with a key provided by the data owner, while the public cloud server is unable to read encrypted data or queries. Answering a query does not depend on its size and is done in constant time. Data access is managed by the data owner, and the proposed scheme allows detection of unauthorized modifications.
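A minimal sketch of how a keyword query over encrypted data can be answered in constant time, assuming a deterministic keyed index rather than the paper's actual scheme. The HMAC-based tokens, key, and record names are all illustrative.

```python
import hmac
import hashlib

def search_token(key: bytes, keyword: str) -> bytes:
    """Deterministic keyed token; the server never sees the keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

owner_key = b"owner-secret-key-32-bytes-long!!"

# Owner-side indexing: the server stores only opaque tokens mapped to
# ciphertext (encryption is elided; the values stand in for encrypted
# records the server cannot read).
server_index = {
    search_token(owner_key, "invoice-2023"): b"<ciphertext-1>",
    search_token(owner_key, "payroll"): b"<ciphertext-2>",
}

# Answering a query is a single hash-table lookup: O(1) in the number
# of stored records, so answer time does not depend on data size.
result = server_index.get(search_token(owner_key, "payroll"))
assert result == b"<ciphertext-2>"
```

Only the key holder can form valid tokens, which matches the abstract's claim that the server can read neither the data nor the queries; a deterministic index does leak repeated-query patterns, which real schemes mitigate.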
Cloud computing is the most emerging trend in information technology nowadays. It attracts organizations due to its advantages of scalability, throughput, easy and cheap access, and on-demand scaling of SaaS, PaaS, and IaaS. Alongside the salient features of the cloud environment come the big challenges of privacy and security. This paper reviews different security issues such as trust, confidentiality, authenticity, encryption, key management, and resource sharing, along with the efforts made to overcome them.
A Review on Key-Aggregate Cryptosystem for Climbable Knowledge Sharing in Clo... (Editor IJCATR)
Data sharing is an important functionality in cloud storage. This article shows how to securely, efficiently, and flexibly share data with others in cloud storage. It describes new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, while encompassing the power of all the keys being aggregated. In other words, the secret-key holder can release a constant-size aggregate key for flexible choices of a ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. Formal security analysis of the schemes is provided in the standard model, and other applications are described; in particular, the schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.
Proposed system for data security in distributed computing in using triple d... (IJECEIAES)
Cloud computing is considered a distributed computing paradigm in which resources are provided as services. In cloud computing, applications do not run on a user's personal computer but run and are stored on distributed servers on the Internet, whose resources are shared in an open environment. This increases security problems such as data confidentiality, data integrity, and data availability, so adopting data encryption is very important for securing users' data. In this paper, a comparative study is conducted between two security algorithms on a cloud platform called eyeOS. The study found that the Rivest-Shamir-Adleman (3kRSA) algorithm outperforms the triple data encryption standard (3DES) algorithm with respect to complexity and output bytes. The main drawback of 3kRSA is its computation time, while 3DES is faster. This matters for storing large amounts of data in cloud computing: the key distribution and authentication of asymmetric encryption are important, as are the speed, data integrity, and data confidentiality of symmetric encryption, and the approach also enables required computations to be executed on the encrypted data.
EFFECTIVE METHOD FOR MANAGING AUTOMATION AND MONITORING IN MULTI-CLOUD COMPUT... (IJNSA Journal)
Multi-cloud is an advanced version of cloud computing that allows its users to utilize different cloud systems from several Cloud Service Providers (CSPs) remotely. Although it is a very efficient computing facility, threat detection, data protection, and vendor lock-in are the major security drawbacks of this infrastructure, and these factors act as a catalyst in promoting serious cyber-crimes in the virtual world. This paper reviews the privacy and safety issues of a multi-cloud environment. The objective of the research is to analyze logical automation and monitoring provisions, such as monitoring cyber-physical systems (CPS), home automation, automation in Big Data Infrastructure (BDI), disaster recovery (DR), and secret protection. The results indicate that it is possible to avoid the security snags of a multi-cloud interface by adopting these scientific solutions methodically.
Survey on encryption techniques in delay and disruption ... (INFOGAIN PUBLICATION)
A delay and disruption tolerant network (DTN) is used for long-range communication in computer networks where there is no direct connection between sender and receiver and no Internet facility. Delay tolerant networks generally perform store-and-forward, so intermediate nodes can view a message; the solution is to use encryption techniques to protect it. In the early stages of DTN, the RSA, DES, and 3DES encryption algorithms were used, but nowadays attribute-based encryption (ABE) techniques are used. Attribute-based encryption can be classified into two types: key-policy attribute-based encryption (KP-ABE) and ciphertext-policy attribute-based encryption (CP-ABE). This paper performs a categorized survey of the different encryption techniques present in delay tolerant networks, which is helpful for researchers proposing modified encryption techniques. Finally, the paper compares the performance and effectiveness of the different encryption algorithms.
Security Check in Cloud Computing through Third Party Auditor (ijsrd.com)
In cloud computing, data owners host their data on cloud servers, and users (data consumers) access the data from those servers. Because of this outsourcing, an independent auditing service is required to check the integrity of the data in the cloud. Some existing remote integrity-checking methods can only serve static archived data and therefore cannot be used by an auditing service when the data in the cloud is dynamically updated. An efficient and secure dynamic auditing protocol is thus required to convince data owners that their data is correctly stored in the cloud. This paper first designs an auditing framework for cloud storage systems with a privacy-preserving auditing protocol, then extends the protocol to support dynamic data operations efficiently and securely in the random oracle model.
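A toy spot-check illustrates the challenge-response idea behind such auditing. Real protocols are privacy-preserving and avoid retrieving full blocks; this sketch, with hypothetical block names, is only conceptual.

```python
import hashlib
import secrets

# Owner-side: keep a small digest per block before outsourcing the data.
blocks = [b"block-A", b"block-B", b"block-C"]
digests = [hashlib.sha256(b).hexdigest() for b in blocks]

cloud_storage = list(blocks)  # what the server actually holds

def audit(block_index: int) -> bool:
    """Spot-check one block against the owner's stored digest. A
    simplified stand-in for a challenge-response auditing protocol;
    real schemes never fetch the whole block or expose its content."""
    response = cloud_storage[block_index]
    return hashlib.sha256(response).hexdigest() == digests[block_index]

# Challenge a randomly chosen block, as an auditor would.
assert audit(secrets.randbelow(len(blocks)))

# A dynamic update re-computes only the digest of the changed block,
# which is what makes the protocol usable for dynamically updated data.
cloud_storage[1] = b"block-B-v2"
digests[1] = hashlib.sha256(b"block-B-v2").hexdigest()
assert audit(1)
```

Random spot-checks catch a misbehaving server with high probability without re-reading the whole dataset, which is the core efficiency argument of remote integrity checking.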
Secure Data Sharing In an Untrusted Cloud (IJERA Editor)
Cloud computing is a huge area that provides many services on a pay-as-you-go basis, one fundamental service being data storage. The cloud provides cost efficiency and an efficient solution for sharing resources among cloud users. A secure and efficient data sharing scheme for groups in the cloud is not an easy task: on one hand, customers are not ready to share their identity, but on the other hand they want to enjoy the cost efficiency provided by the cloud. The scheme needs to provide identity privacy, multiple ownership, and dynamic data sharing without being affected by the number of revoked cloud users. In this paper, any member of a group can fully enjoy the data storing and sharing services of the cloud. A secure data sharing scheme for dynamic cloud users is proposed, using group signatures and dynamic broadcast encryption so that any user in a group can share information in a secured manner. Additionally, a permission option is proposed for security reasons: file access permissions (read, write, and delete) are generated by the admin and given to users using a Role-Based Access Control (RBAC) algorithm. The owner can provide files with options and accept users accordingly. Revocation of a cloud user is a function performed by the admin for security purposes. The encryption computational cost and storage overhead do not depend on the number of revoked users. Security is analyzed by proofs, and a cloud efficiency report is produced using CloudSim.
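The admin-generated read/write/delete permissions with user revocation can be sketched with a minimal role table. The role names and users below are hypothetical, not the paper's implementation.

```python
# Hypothetical roles for illustration; the paper's admin-generated
# permissions are read, write and delete.
ROLE_PERMISSIONS = {
    "owner":  {"read", "write", "delete"},
    "member": {"read", "write"},
    "viewer": {"read"},
}

user_roles = {"alice": "owner", "bob": "viewer"}
revoked = {"carol"}  # revocation list maintained by the admin

def can_access(user: str, action: str) -> bool:
    """Grant an action only if the user is not revoked and the user's
    role includes the requested permission."""
    if user in revoked or user not in user_roles:
        return False
    return action in ROLE_PERMISSIONS[user_roles[user]]

assert can_access("alice", "delete")
assert not can_access("bob", "write")
assert not can_access("carol", "read")
```

Note that revoking a user here only touches the revocation set, mirroring the abstract's claim that cost does not grow with the number of revoked users; the cryptographic enforcement via group signatures and broadcast encryption is beyond this sketch.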
E-Mail Systems In Cloud Computing Environment Privacy, Trust And Security Chal... (IJERA Editor)
In this paper, SMCSaaS is proposed as a secure email system based on a Web Service and Cloud Computing model. The model offers the end-to-end security, privacy, and non-repudiation of PKI without the associated infrastructure complexity. The proposed model controls cloud computing risks such as insecure application programming interfaces, malicious insiders, data loss or leakage, shared technology vulnerabilities, account, service, and traffic hijacking, and unknown risk profile.
A cloud storage system for sharing data securely with privacy preservation an... (eSAT Journals)
Cloud computing provides well-known services for storing user data on cloud servers and draws attention to a broad set of technologies, rules, and controls deployed to provide security for applications and data. As more and more firms use the cloud, security in the cloud environment is becoming a very important issue, and companies should work with partners that follow best practices of cloud security and facilitate transparency in their solutions. Many security solutions today depend on authentication, but they do not solve the privacy problems that arise when sharing data in the cloud environment: a data access request may itself expose a user's private data, whether or not the request is approved. This paper proposes a system that addresses this problem by using data anonymity when sending data access requests to the data owner, and by providing a data auditing facility to detect fraud in the integrity of users' shared data. Keywords: cloud computing, privacy preservation, data integrity, data sharing, authentication.
A data quarantine model to secure data in edge computing (IJECEIAES)
Edge computing provides an agile data processing platform for latency-sensitive and communication-intensive applications through a decentralized cloud and geographically distributed edge nodes. Gaining centralized control over the edge nodes can be challenging due to security issues and threats. Among these, data integrity attacks can lead to inconsistent data and intrude on edge data analytics, and further intensification of an attack makes it difficult to mitigate and to identify the root cause. This paper therefore proposes a new data quarantine model that mitigates data integrity attacks by quarantining intruders. Efficient quarantine-based security solutions in clouds, ad-hoc networks, and computer systems motivated adopting the approach in edge computing. The data acquisition edge nodes identify intruders and quarantine all suspected devices through dimensionality reduction. During quarantine, the proposed model builds reputation scores to determine falsely identified legitimate devices and sanitizes their affected data to regain data integrity. As a preliminary investigation, this work identifies an appropriate machine learning method for dimensionality reduction, linear discriminant analysis (LDA), which achieves 72.83% quarantine accuracy and 0.9 seconds training time, more efficient than other state-of-the-art methods. In future work, this will be implemented and validated with ground-truth data.
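The reputation-scoring step during quarantine might look like the following sketch. The threshold, device names, and scoring rule are invented for illustration and are not taken from the paper.

```python
# Hypothetical release threshold; the paper builds reputation scores
# during quarantine to find falsely identified legitimate devices.
RELEASE_THRESHOLD = 0.7

quarantined = {"dev-1": [], "dev-2": [], "dev-3": []}

def record_observation(device: str, consistent: bool) -> None:
    """Log whether a quarantined device's readings agreed with its peers."""
    quarantined[device].append(1.0 if consistent else 0.0)

def reputation(device: str) -> float:
    """Fraction of consistent observations; 0.0 if nothing observed yet."""
    obs = quarantined[device]
    return sum(obs) / len(obs) if obs else 0.0

for _ in range(10):
    record_observation("dev-1", True)    # behaves consistently
    record_observation("dev-2", False)   # keeps injecting bad data

# Devices whose reputation recovers are released as false positives;
# the rest stay quarantined for data sanitization.
released = [d for d in quarantined if reputation(d) >= RELEASE_THRESHOLD]
assert released == ["dev-1"]
```

The sketch shows only the bookkeeping; deciding which devices enter quarantine in the first place is the LDA dimensionality-reduction step the abstract evaluates.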
Cloud Auditing With Zero Knowledge Privacy (IJERA Editor)
Cloud computing is a recent technology that provides various services through the Internet. A cloud server allows users to store their data in the cloud without worrying about its correctness and integrity. Cloud data storage has many advantages over local storage: users can upload data and access it anytime, anywhere, without any additional burden of storage and maintenance. But since the data is stored at a remote place, how do users get confirmation about the stored data? Cloud data storage should therefore have a mechanism that verifies the storage correctness and integrity of the data stored in the cloud. The major problem of cloud data storage is security, and many researchers have proposed new algorithms to resolve it. In this paper, we propose Shamir's secret sharing algorithm for privacy preservation in cloud storage security, achieving confidentiality, integrity, and availability of the data. The scheme supports data dynamics, where the user can perform operations such as insert, update, and delete, as well as batch auditing, where multiple users' requests for storage-correctness checks are handled simultaneously, reducing communication and computing cost.
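Shamir's (k, n) secret sharing itself can be shown compactly. This is the textbook construction over a prime field, not the paper's full auditing scheme: any k shares reconstruct the secret, while fewer reveal nothing.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for small secrets

def split(secret: int, n: int, k: int):
    """Create n shares with threshold k from a random degree-(k-1)
    polynomial whose constant term is the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, -1, PRIME) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = split(secret, n=5, k=3)
assert reconstruct(shares[:3]) == secret
assert reconstruct(shares[1:4]) == secret
```

Distributing the five shares across independent servers gives the availability and confidentiality properties the abstract claims: any three servers suffice to recover the data, and two colluding servers learn nothing.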
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using the switching strategy, the low-order harmonics in the output current and voltage are not produced, and consequently the size of the output filter is reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and a favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in MG test systems, and the simulation results are discussed.
More Related Content
Similar to An authenticated key management scheme for securing big data environment
Cloud Computing is the most emerging trend in Information Technology now days. It is attracting the organizations due to its advantages of scalability, throughput, easy and cheap access and on demand up and down grading of SaaS, PaaS and IaaS. Besides all the salient features of cloud environment, there are the big challenges of privacy and security. In this paper, a review of different security issues like trust, confidentiality, authenticity, encryption, key management and resource sharing are presented along with the efforts made on how to overcome these issues.
A Review on Key-Aggregate Cryptosystem for Climbable Knowledge Sharing in Clo...Editor IJCATR
The Data sharing is an important functionality in cloud storage. In this article, we show how to securely, efficiently, and
flexibly share data with others in cloud storage. We describe new public-key cryptosystems which produce constant-size ciphertexts
such that efficient delegation of decryption rights for any set of ciphertexts are possible. The novelty is that one can aggregate any set
of secret keys and make them as compact as a single key, but encompassing the power of all the keys being aggregated. In other
words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, but the
other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or be stored in
a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also
describe other application of our schemes. In particular, our schemes give the first public-key patient controlled encryption for flexible
hierarchy, which was yet to be known.
Proposed system for data security in distributed computing in using triple d...IJECEIAES
Cloud computing is considered a distributed computing paradigm in which resources are provided as services. In cloud computing, applications do not run from a user's personal computer but are run and stored on distributed servers on the Internet. The resources of cloud infrastructures are shared openly over the Internet, which increases security problems such as data confidentiality, data integrity, and data availability, so adopting data encryption is very important for securing users' data. In this paper, a comparative study is conducted between two security algorithms on a cloud platform called eyeOS. The comparative study found that the Rivest-Shamir-Adleman (3kRSA) algorithm outperforms the triple data encryption standard (3DES) algorithm with respect to complexity and output bytes. The main drawback of the 3kRSA algorithm is its computation time, while 3DES is faster than 3kRSA. This is useful for storing large amounts of data in cloud computing: the key distribution and authentication of asymmetric encryption, and the speed, data integrity, and data confidentiality of symmetric encryption, are both important, and the scheme also enables required computations to be executed on the encrypted data.
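As an aside on the asymmetric side of this comparison, a toy textbook RSA with three small primes can make the cost structure concrete. The abstract does not define 3kRSA precisely, so this is only a sketch of RSA-style modular-exponentiation encryption with a three-prime modulus; the parameters are tiny and completely insecure, for illustration only.

```python
# Toy textbook RSA with three small primes, illustrating why
# asymmetric encryption costs more computation than a symmetric
# cipher. Insecure toy parameters; not the paper's 3kRSA.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

p, q, r = 61, 53, 71          # three small primes (toy multi-prime modulus)
n = p * q * r
phi = (p - 1) * (q - 1) * (r - 1)
e = 17                        # public exponent, coprime to phi
d = modinv(e, phi)            # private exponent

m = 42
c = pow(m, e, n)              # encrypt: c = m^e mod n
assert pow(c, d, n) == m      # decrypt: m = c^d mod n
```

The expensive steps are the two modular exponentiations, which is why asymmetric schemes trade speed for the key-distribution and authentication benefits the abstract mentions.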
EFFECTIVE METHOD FOR MANAGING AUTOMATION AND MONITORING IN MULTI-CLOUD COMPUT... (IJNSA Journal)
Multi-cloud is an advanced version of cloud computing that allows its users to utilize different cloud systems from several Cloud Service Providers (CSPs) remotely. Although it is a very efficient computing facility, threat detection, data protection, and vendor lock-in are the major security drawbacks of this infrastructure. These factors act as a catalyst in promoting serious cyber-crimes of the virtual world. Privacy and safety issues of a multi-cloud environment are overviewed in this research paper. The objective of this research is to analyze some logical automation and monitoring provisions, such as monitoring cyber-physical systems (CPS), home automation, automation in Big Data Infrastructure (BDI), disaster recovery (DR), and secret protection. The results of this research investigation indicate that it is possible to avoid the security snags of a multi-cloud interface by adopting these scientific solutions methodically.
5 ijaems jan-2016-16-survey on encryption techniques in delay and disruption ... (INFOGAIN PUBLICATION)
Delay- and disruption-tolerant networking (DTN) is used for long-range communication in computer networks where there is no direct connection between the sender and receiver and no Internet facility. Delay-tolerant networks generally perform store-and-forward techniques, so an intermediate node can view the message; the possible solution is to use encryption techniques to protect the message. In the early stages of DTN, the RSA, DES, and 3DES encryption algorithms were used, but nowadays attribute-based encryption (ABE) techniques are used. Attribute-based encryption techniques can be classified into two kinds: key-policy attribute-based encryption (KP-ABE) and ciphertext-policy attribute-based encryption (CP-ABE). In this paper we perform a categorized survey of the different encryption techniques present in delay-tolerant networks. This categorized survey is very helpful for researchers proposing modified encryption techniques. Finally, the paper compares the performance and effectiveness of the different encryption algorithms.
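The KP-ABE vs. CP-ABE distinction can be illustrated at the access-decision level. Real CP-ABE binds a policy to the ciphertext cryptographically with pairings; the sketch below only models how a ciphertext-side policy tree is evaluated against a key's attribute set, and the policy format is an invented assumption.

```python
# Toy access-policy evaluation in the CP-ABE spirit: the ciphertext
# carries a policy over attributes, and a key (an attribute set) can
# "decrypt" only if its attributes satisfy the policy. Real CP-ABE
# enforces this with pairing-based cryptography; this sketch only
# models the access decision itself.

def satisfies(policy, attrs):
    # policy node: ("attr", name) | ("and", p1, p2) | ("or", p1, p2)
    op = policy[0]
    if op == "attr":
        return policy[1] in attrs
    if op == "and":
        return satisfies(policy[1], attrs) and satisfies(policy[2], attrs)
    if op == "or":
        return satisfies(policy[1], attrs) or satisfies(policy[2], attrs)
    raise ValueError("unknown node: %r" % (op,))

policy = ("and", ("attr", "doctor"),
          ("or", ("attr", "cardiology"), ("attr", "admin")))
assert satisfies(policy, {"doctor", "cardiology"})
assert not satisfies(policy, {"doctor"})       # missing the OR branch
assert not satisfies(policy, {"cardiology"})   # missing "doctor"
```

In KP-ABE the roles are reversed: the policy lives in the key and the attribute set labels the ciphertext, but the same evaluation applies.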
Security Check in Cloud Computing through Third Party Auditor (ijsrd.com)
In cloud computing, data owners store their data on cloud servers and users (data consumers) can access the data from those servers. Due to the data outsourcing, however, an independent auditing service is required to check the integrity of the data in the cloud. Some existing remote integrity-checking methods can only serve static archived data and thus cannot be used by the auditing service, since the data in the cloud can be dynamically updated. Therefore, an efficient and secure dynamic auditing protocol is required to convince data owners that their data are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems with a privacy-preserving auditing protocol. Then, we extend our auditing protocol to support dynamic data operations, which is efficient and secure in the random oracle model.
Secure Data Sharing In an Untrusted Cloud (IJERA Editor)
Cloud computing is a huge area that provides many services on a pay-as-you-go basis. One of the fundamental services provided by the cloud is data storage. The cloud provides cost efficiency and an efficient solution for sharing resources among cloud users. A secure and efficient data-sharing scheme for groups in the cloud is not an easy task: on one hand customers are not ready to share their identity, but on the other hand they want to enjoy the cost efficiency provided by the cloud. The scheme needs to provide identity privacy, multiple ownership, and dynamic data sharing without being affected by the number of revoked cloud users. In this paper, any member of a group can fully enjoy the data storing and sharing services of the cloud. A secure data-sharing scheme for dynamic cloud users is proposed, which uses group signature and dynamic broadcast encryption techniques so that any user in a group can share information in a secured manner. Additionally, a permission option is proposed for security reasons: file access permissions are generated by the admin and given to the user using a role-based access control (RBA) algorithm. The file access permissions are read, write, and delete. The owner can provide files with options and accept the users using those options. The revocation of a cloud user is a function performed by the admin for security purposes. The encryption computation cost and storage overhead do not depend on the number of revoked users. We analyze the security by proofs and produce a cloud efficiency report using CloudSim.
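The permission model described above (admin-assigned read/write/delete with user revocation) can be sketched as a plain lookup. The role names and function signature are illustrative assumptions, not the paper's actual RBA algorithm.

```python
# Minimal sketch of role-based access control with revocation:
# the admin assigns roles, each role maps to a set of file
# permissions, and every request is checked against that mapping.
# Revoked users lose all access regardless of role.

ROLE_PERMS = {
    "owner":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(user, user_role, action, revoked_users):
    if user in revoked_users:
        return False
    return action in ROLE_PERMS.get(user_role, set())

revoked = {"mallory"}
assert is_allowed("alice", "editor", "write", revoked)
assert not is_allowed("bob", "viewer", "delete", revoked)
assert not is_allowed("mallory", "owner", "read", revoked)  # revoked
```

Note that, as in the abstract, the cost of a revocation here is independent of the number of revoked users: it is one set insertion.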
E-Mail Systems In Cloud Computing Environment Privacy, Trust And Security Chal... (IJERA Editor)
In this paper, SMCSaaS is proposed to secure an email system based on a Web Service and Cloud Computing model. The model offers the end-to-end security, privacy, and non-repudiation of PKI without the associated infrastructure complexity. The proposed model controls risks in cloud computing such as insecure application programming interfaces; malicious insiders; data loss or leakage; shared technology vulnerabilities; account, service, and traffic hijacking; and unknown risk profile.
A cloud storage system for sharing data securely with privacy preservation an... (eSAT Journals)
Abstract: Cloud computing provides well-known services for storing user data on cloud servers, and it draws attention to a broad set of technologies, rules, and controls deployed to provide security for applications and data. As more and more firms use the cloud, security in the cloud environment is becoming a very important issue. Companies should work with partners following best practices of cloud security that facilitate transparency for their solutions. A number of security solutions today depend on authentication, but they do not solve the privacy problems of sharing data in the cloud environment: a data access request from a user may itself expose the user's private data, whether or not the request is approved. In this paper we propose a system that addresses the above-mentioned problem. In the proposed system we use the concept of data anonymity for sending data access requests to the data owner, and we also provide a data-auditing facility to detect fraud against the integrity of users' shared data. Keywords: Cloud computing, privacy preservation, data integrity, data sharing, authentication
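The data-auditing idea above can be illustrated with a plain digest check. Real cloud-auditing protocols avoid downloading the data by using homomorphic authenticators; this stdlib sketch only shows the integrity comparison itself, with invented file contents.

```python
import hashlib

# Hedged sketch of integrity auditing: the owner keeps a digest of
# each shared file, and an auditor later recomputes the digest to
# detect tampering. This only illustrates the check, not a full
# remote-auditing protocol.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"patient record v1"
stored_tag = digest(original)            # kept by the data owner

assert digest(b"patient record v1") == stored_tag   # intact
assert digest(b"patient record v2") != stored_tag   # tampering detected
```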
A data quarantine model to secure data in edge computing (IJECEIAES)
Edge computing provides an agile data-processing platform for latency-sensitive and communication-intensive applications through a decentralized cloud and geographically distributed edge nodes. Gaining centralized control over the edge nodes can be challenging due to security issues and threats. Among several security issues, data integrity attacks can lead to inconsistent data and intrude on edge data analytics. Further intensification of the attack makes it challenging to mitigate and to identify the root cause. Therefore, this paper proposes a new data quarantine model to mitigate data integrity attacks by quarantining intruders. The efficient security solutions using quarantine in clouds, ad-hoc networks, and computer systems have motivated its adoption in edge computing. The data-acquisition edge nodes identify the intruders and quarantine all the suspected devices through dimensionality reduction. During quarantine, the proposed model builds reputation scores to determine falsely identified legitimate devices and sanitizes their affected data to regain data integrity. As a preliminary investigation, this work identifies an appropriate machine learning method, linear discriminant analysis (LDA), for dimensionality reduction. The LDA achieves 72.83% quarantine accuracy and a 0.9-second training time, which is more efficient than other state-of-the-art methods. In future work, this would be implemented and validated with ground-truth data.
Cloud Auditing With Zero Knowledge Privacy (IJERA Editor)
Cloud computing is a recent technology which provides various services through the Internet. A cloud server allows users to store their data on the cloud without worrying about the correctness and integrity of the data. Cloud data storage has many advantages over local data storage: users can upload their data to the cloud and access it anytime and anywhere without any additional burden, and they do not have to worry about the storage and maintenance of cloud data. But as the data is stored at a remote place, how will users get confirmation about the stored data? Hence cloud data storage should have some mechanism to verify the storage correctness and integrity of the data stored on the cloud. The major problem of cloud data storage is security, and many researchers have proposed new algorithms to resolve this security problem. In this paper, we propose Shamir's secret-sharing algorithm for privacy preservation of data storage security in cloud computing, achieving confidentiality, integrity, and availability of the data. It supports data dynamics, where the user can perform operations on data such as insert, update, and delete, as well as batch auditing, where multiple user requests for storage-correctness checks are handled simultaneously, reducing communication and computing cost.
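The core primitive named above, Shamir's (t, n) secret sharing, can be sketched in a few lines over a prime field: a random polynomial of degree t-1 hides the secret in its constant term, and any t shares reconstruct it by Lagrange interpolation. The field modulus and share counts below are illustrative.

```python
import random

# Shamir (t, n) secret sharing over a prime field: any t shares
# reconstruct the secret; fewer reveal nothing about it.

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, t, n):
    # random degree t-1 polynomial with f(0) = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(1234567890, t=3, n=5)
assert reconstruct(shares[:3]) == 1234567890
assert reconstruct(shares[1:4]) == 1234567890
```

The modular inverse uses Fermat's little theorem (`pow(den, P - 2, P)`), which is valid because P is prime.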
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and give recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate-change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using this switching strategy, low-order harmonics in the output current and voltage are not produced, and consequently the size of the output filter can be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
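The droop control method mentioned above reduces to two linear relations: frequency droops with active power and voltage with reactive power. The gains and set points in this sketch are assumed values, not the paper's.

```python
# Minimal sketch of the P-f / Q-V droop relations used for
# islanded-microgrid voltage and frequency control. Nominal values
# and droop gains are illustrative assumptions.

F0, V0 = 50.0, 1.0        # nominal frequency (Hz) and voltage (p.u.)
KP, KQ = 0.01, 0.05       # droop gains (assumed values)

def droop(p, q, p0=0.0, q0=0.0):
    f = F0 - KP * (p - p0)    # f = f0 - kp * (P - P0)
    v = V0 - KQ * (q - q0)    # V = V0 - kq * (Q - Q0)
    return f, v

f, v = droop(p=10.0, q=2.0)
assert abs(f - 49.9) < 1e-9   # 50 - 0.01 * 10
assert abs(v - 0.9) < 1e-9    # 1 - 0.05 * 2
```

Each DG measures its own output power and adjusts its set points locally, which is why droop control needs no communication between units.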
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
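The NARX structure can be illustrated through its linear special case, an ARX(1) one-step predictor y[k] = a*y[k-1] + b*u[k-1] fitted by least squares; the real NARX model wraps these regressors in a nonlinear map. The synthetic system below stands in for battery current/voltage data and is purely illustrative.

```python
# ARX(1) identification by least squares (2x2 normal equations),
# the linear special case of the NARX idea: predict y[k] from the
# previous output y[k-1] and the previous exogenous input u[k-1].

def fit_arx1(y, u):
    s11 = s12 = s22 = t1 = t2 = 0.0
    for k in range(1, len(y)):
        phi1, phi2, yk = y[k - 1], u[k - 1], y[k]
        s11 += phi1 * phi1; s12 += phi1 * phi2; s22 += phi2 * phi2
        t1 += phi1 * yk;    t2 += phi2 * yk
    det = s11 * s22 - s12 * s12          # solve the 2x2 system
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

# Synthetic noiseless system with known a = 0.9, b = 0.5
u = [1.0 if k % 7 < 3 else -1.0 for k in range(200)]
y = [0.0]
for k in range(1, 200):
    y.append(0.9 * y[k - 1] + 0.5 * u[k - 1])

a, b = fit_arx1(y, u)
assert abs(a - 0.9) < 1e-6 and abs(b - 0.5) < 1e-6
```

With noiseless data the least-squares fit recovers the true parameters exactly; a NARX model would replace the linear combination with a nonlinear function of the same lagged regressors.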
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level about smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and the Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences on the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). The multi-criteria decision-making analysis, based on the comparison of all criteria together, yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal-processing-based methods, and 20.8% for computational-intelligence-based methods. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal-processing-based methods are the most appropriate islanding detection method to select and implement in a power system with respect to the aforementioned factors. The proposed hierarchy model is studied and examined using Expert Choice software.
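The percentage weights quoted above are the kind of output AHP produces from a pairwise-comparison matrix. The sketch below uses the common geometric-mean approximation of the principal eigenvector on an invented 3x3 judgment matrix, not the paper's actual comparisons.

```python
# AHP priority weights via the geometric-mean (row) approximation:
# take the geometric mean of each row of the pairwise-comparison
# matrix, then normalize so the weights sum to 1.

def ahp_weights(matrix):
    n = len(matrix)
    gmeans = []
    for row in matrix:
        g = 1.0
        for v in row:
            g *= v
        gmeans.append(g ** (1.0 / n))
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Invented judgments: criterion A is 3x as important as B
# and 5x as important as C (reciprocals below the diagonal).
M = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
w = ahp_weights(M)
assert abs(sum(w) - 1.0) < 1e-9
assert w[0] > w[1] > w[2]          # ranking follows the judgments
```

Expert Choice and similar tools compute the exact principal eigenvector; for consistent matrices the geometric-mean method gives the same result.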
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m2, the results show that the proposed method reduces the total harmonic distortion (THD) of the injected current to the grid by approximately 46% and 38% compared to conventional methods, respectively. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and improve transient response, two traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
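The classic P&O loop that the fuzzy and modified variants build on can be sketched in a few lines: perturb the operating voltage, observe the power, and reverse direction when the power falls. The PV curve and step size here are toy assumptions, chosen only to show the oscillation around the maximum power point that the abstracts aim to reduce.

```python
# Basic perturb-and-observe (P&O) MPPT on a toy concave
# power-voltage curve with its peak at 30 V.

def pv_power(v):
    return -(v - 30.0) ** 2 + 900.0   # toy P(V) curve, MPP at 30 V

def perturb_and_observe(v0, step=0.5, iters=200):
    v, p, direction = v0, pv_power(v0), 1
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction    # power fell: reverse direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe(v0=20.0)
assert abs(v_mpp - 30.0) <= 1.0       # settles into oscillation near the MPP
```

The steady-state oscillation is proportional to the fixed step size, which is exactly the limitation that adaptive-step and fuzzy-logic variants address.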
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results, which show that the proposed controller is effective, with small synchronous tracking errors and a significantly reduced chattering phenomenon.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting the comprehensive debugging tools needed to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments, by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapidly and remotely monitoring and receiving solar cell systems' status parameters (solar irradiance, temperature, and humidity) are critical issues in enhancing their efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique, based on an embedded system using the NodeMCU ESP8266 (ESP-12E), was carried out experimentally. Three different regions in Egypt (the cities of Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live via Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m2, respectively, during the solar day. The accuracy and rapidity of the monitoring results obtained using the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions make Luxor and Cairo more suitable places than El-Beheira to build a solar cell system station.
An efficient security framework for intrusion detection and prevention in int... (IJECEIAES)
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention use supervised learning, which requires a large amount of labeled data for training; such complex datasets are impossible to source in a large network like the IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ... (IJECEIAES)
This research develops an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. The data is then sent in real time to the IoT broker Eclipse Mosquitto using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application through an application programming interface (API) service. The experiments produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and the other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model's predicted values on the test data. The best results are obtained using a two-layer LSTM configuration, each layer with 60 neurons and a lookback setting of 6. This model produces an R2 value of 0.934, a root mean square error (RMSE) of 0.015, and a mean absolute error (MAE) of 0.008. In addition, the R2 value was also evaluated for each attribute used as input, with values between 0.590 and 0.845.
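The two error metrics reported above, RMSE and MAE, have simple definitions that are worth making concrete; the prediction values below are invented, not the paper's data.

```python
import math

# Standard-library computation of RMSE and MAE on a small invented
# set of true vs. predicted incubator temperatures.

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [36.5, 36.7, 36.6, 36.8]
y_pred = [36.4, 36.7, 36.8, 36.7]
assert abs(mae(y_true, y_pred) - 0.1) < 1e-9    # mean absolute error
assert rmse(y_true, y_pred) > mae(y_true, y_pred)  # RMSE weights large errors more
```

RMSE penalizes large deviations more heavily than MAE, which is why both are usually reported together, as in the abstract.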
A review on internet of things-based stingless bee's honey production with im... (IJECEIAES)
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey with a unique flavor profile. A problem identified is that many stingless bee colonies have collapsed due to weather, temperature, and environment, so it is critical to understand the relationship between stingless bee honey production and environmental conditions in order to improve honey production. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous research works have been analyzed and compared to identify the research gaps, and a framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image-detection methods, the top three reported efficiencies are 98.67% for a CNN, 97.7% for densely connected convolutional networks with YOLO v3, and 99.81% for DenseNet201 convolutional networks. This study is significant in assisting researchers to develop a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero... (IJECEIAES)
The internet of things (IoT) is a revolutionary innovation in many aspects of our society, including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. The performance and design of the suggested framework are validated using the compact Contiki/Cooja simulator, and the simulation findings are evaluated based on the detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy, 0.99 mJ, at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbers (IJECEIAES)
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well timedomain signals work in the detection of epileptic signals using intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) databases, and Temple University Hospital (TUH) EEG. To test the viability of this approach, two types of experiments were carried out. Firstly, binary class classification (preictal, ictal, postictal each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7% and the highest accuracy of 94.02% for classification in the TUH EEG database when comparing interictal stage to preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration speeds and their maneuverability leads to a dangerous driving style for some drivers. In these conditions, the development of a method that allows you to track the behavior of the driver is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features: About the environment, the driver's behavior and the characteristics of the functioning of the car, collected using OBD II. The generated data set is sent to the Kohonen network, where clustering is performed according to driving style and degree of danger. Getting the driving characteristics into a particular cluster allows you to switch to the private indicators of an individual driver and considering individual driving characteristics. The application of the method allows you to identify potentially dangerous driving styles that can prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution of greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been completed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object inherent features; This study addresses the research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities; Lastly, a soft-margins kernel is proposed for multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
An authenticated key management scheme for securing big data environment
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 3, June 2022, pp. 3238-3248
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i3.pp3238-3248
Journal homepage: http://ijece.iaescore.com
An authenticated key management scheme for securing big data environment
Thoyazan Sultan Algaradi, Boddireddy Rama
Department of Computer Science, Kakatiya University, Warangal, India
Article history: Received Dec 17, 2020; Revised Jan 19, 2022; Accepted Feb 8, 2022
ABSTRACT
If data security issues in a big data environment are considered, then the distribution of keys, their management, and the ability to transfer them between server users over a public channel are among the most critical issues to address; indeed, the importance of key management may outweigh the strength of the encryption algorithm. This paper therefore proposes a new scheme, the authenticated key management scheme (AKMS), which works through two levels of security. The first concerns how the user communicates with the server, preventing any attempt to impersonate senders or receivers. The second makes the transmitted data opaque by encrypting it so that it is unreadable by anyone except the intended receiver; the server's function is thus limited to a passageway for communication between sender and receiver. The presented work discusses concepts related to analysis and evaluation, namely key security, data security, public channel transmission, and the security isolation inquiry, which demonstrate the value the AKMS scheme carries. The AKMS scheme also achieves very satisfactory results in computation cost, communication cost, and storage overhead, showing that it is appropriate, secure, and practical for protecting users' private data in big data environments.
Keywords: Authenticated; Big data environment; Diffie–Hellman; Key exchange; Key management; Security
This is an open access article under the CC BY-SA license.
Corresponding Author:
Thoyazan Sultan Algaradi, Boddireddy Rama
Department of Computer Science, Kakatiya University
Warangal City, Telangana State, 506001, India
Email: yaz.sul77@gmail.com, rama.abbidi@gmail.com
1. INTRODUCTION
The remarkable growth in data production has made data processing difficult [1]. However, thanks to the rapid development of environmental technologies, transmitting sensitive information over the internet has become easy using the new technologies that fall under big data today [2], [3]. Big data has become an important and modern topic, attracting researchers to a rich field that yields prominent studies, thoughts, and innovative ways to address its problems and cover its gaps [4]. Big data networks now occupy great importance through the usefulness they offer users [5], [6]: a user can store data in a cloud service and share it with others [7]. Companies and employees likewise gain great convenience from cloud computing, a modern and widespread technology that is considered a big data environment. From this standpoint, the great importance of securing this environment [8] and preserving the privacy of its users becomes clear, which calls for thorough study of the most prominent problems of this dilemma [9].
This creates the need for appropriate solutions that ensure privacy, credibility, and security validation while preserving users' rights [10], [11]. By considering how users are authenticated to ensure a more reliable connection, the scheme encrypts data before sending it to the other party, making it difficult for attacks to penetrate. The power to do so depends heavily on how well encryption keys are configured and managed between users, and on how smoothly and securely they are passed [12]. Successful key management is therefore essential for any encryption system [13], [14]. Key management consists of three steps: key exchange, key storage, and key use [15]. Before any secured communication, users must set up the details of the cryptography. In a symmetric-key system, this may require exchanging identical keys; in other systems, it may require possessing the other party's public key, which can be exchanged openly. This is what is known as key exchange. The keys must then be stored securely to keep the connection safe from any unauthorized attempt to use them, which is known as key storage. Finally, key use refers to the major issue of how long a key is to be used, and therefore how frequently it is replaced.
The security of the keys used in an encryption system is directly reflected in the security of the system itself [16]. The keys and their secure management are among the most important factors to consider in encryption and decryption, independent of any vulnerability arising from the key algorithm itself or the device used [17]. Moreover, a cryptographic system could even be public, were it not for the threats to sensitive user information should a key be lost or wrong [18], [19]. Some systems depend on third parties: traditional internet-style key exchange and infrastructure-based key distribution protocols are considered impractical at large scale because of the obscure network topology before deployment, limited communication range, intermittent sensor-node operation, and network dynamics [20]. Thus, transferring the key over the network is the only practical and appropriate option, but the problem is how to protect the key during transfer and keep it safe from any attempt to steal or lose it [21], [22].
Fan et al. [7] proposed a secure key management scheme deployed in big data environments to protect user data and privacy, in which the keys are divided into three layers. In this layered structure, upper keys encrypt lower keys to guarantee key security; the server and others can learn nothing of the user's key, and the user has to remember only the login password. Their proposal reduced encryption time and improved key exchange and key distribution. The present research starts from this point, aiming to propose a scheme that ensures the safe transfer and management of keys so as to protect users' privacy and data integrity. It protects users' data and privacy by managing keys safely in a big data environment, while maintaining efficiency, convenience, and security. The proposal establishes two levels of security during data transfer and key management among the server's users. One level concerns communication between the user and the server: it verifies a user during communication and prevents any impersonation attempt. The other maintains data security even from the server itself, so that the intended receiving user is the only one able to decrypt and read the data.
The remaining parts of this paper are organized as follows: section 2 introduces the material and method, explaining the proposed approach with a detailed account of its conception and phases, detailing the system model, and describing ciphertext sharing. Section 3 gives an analysis and evaluation, discussing key security, data security, public channel transmission, and the security isolation inquiry, and compares the scheme with related work on big data security in terms of computation cost, communication cost, and storage cost. Finally, section 4 provides the conclusion, followed by the references.
2. MATERIAL AND METHOD
In this section, the authenticated key management scheme (AKMS) is formulated in four parts. First, the idea of the AKMS scheme is explained in a way that helps the reader understand how it works. Then the conception of the AKMS approach is presented through its three phases: the registration phase, the login and verification phase, and the communication phase. The system model is then explained in depth. Finally, ciphertext sharing is presented to explain the procedures AKMS follows.
2.1. Term and definition
The AKMS scheme depends on several phases, each with its own steps that introduce new symbols and parameters to serve their purpose, ensure efficiency, and work in the best manner. This section clarifies them to aid the reader's understanding. Table 1 gives a brief definition of the critical AKMS parameters that will be used.
2.2. Work of AKMS solution
The purpose of this research is to find an approach that manages keys and transfers them between users securely while ensuring data confidentiality, thereby solving the problem of data protection and privacy assurance. The AKMS scheme manages keys between users in a big data
network in the cloud, ensuring security using the Diffie–Hellman key exchange. It guarantees the safe transmission of data between the users involved and the server, with the data then received by the intended user. Users communicate with the server via distinct keys that vary from user to user, created in advance during the registration phase. This adds a second layer of security, because the data is already encrypted in a form that even the server itself cannot read, and no one can decrypt it except the receiving user predefined by the sender. The AKMS scheme consists of three main parts: cloud server, user, and client (browser). After registering, the user encrypts the data with a random unique key before sending it, and then uses the key agreed between the user and the server to send the encrypted data together with the ciphertext of the random key. What distinguishes this design is that even if an attacker can break the message, it obtains nothing more than opaque encrypted data of no benefit to it. The power of the user registration phase, integrated with the power of the Diffie–Hellman key exchange and random key generation, is the core of the AKMS scheme, providing complete protection of users' data during transfer, including the preservation of their privacy. In other words, an essential feature of the AKMS scheme is the encryption already performed on the client side. The file is first encrypted with a randomly generated key suited to the encryption algorithm in use; this randomly generated key is then encrypted with the secret shared key agreed between sender and receiver, created using the Diffie–Hellman key exchange. The encrypted file, together with the encrypted random key, is sent to the server under the key predefined between the user and the server. Moreover, the cloud service provider knows nothing of the user's information and key.
Table 1. Term and definition used by the AKMS scheme
Term – Definition
ID – User identity, the username of the registered user.
pw – User's password, which is kept safely on the user side.
b – Random private value created for each user on the server side.
MustKey – Master key of the registered user, created on the server side.
PrivA – Private key of user A, required to establish a shared key between sender A and the receiver.
PubA – Public key of user A, required to establish a shared key between sender A and the receiver.
g, P – g is a primitive root modulo P (often called the generator); both are well-known values agreed beforehand.
A, B – Corresponding public keys.
H() – One-way hash function.
K_{A,s} – The agreed key between user A and the server.
K_{A,B} – The agreed key between user A and user B.
File – The file uploaded to the server, made available for the receiver to download.
filekey – A randomly generated key on the user side, used to encrypt the file before transfer.
2.3. The conception of the AKMS solution
2.3.1. Registration phase
The registration phase is one of the most important stages most systems rely on to protect user privacy and keep users safe. In this phase, the user transfers his personal information, a username and password, while performing mathematical operations that contribute to more robust security and prevent any penetration during the registration process. Each user must choose a valid password (pw) and a unique username (ID). Figure 1 shows the dialogue of this phase:
Algorithm:
1. User-A side: enter the user ID (ID_A) and password (pw).
2. Send ID_A from user-A to the server side.
3. Server side: generate a random number b and calculate Mustkey = SHA-2(ID_A || b).
4. Send Mustkey from the server to the user-A side.
5. User-A side: calculate privA = f(pw), pubA = g^privA mod P, and A = Mustkey^pubA.
6. Send A from user-A to the server side.
7. Server side: calculate B = Mustkey + Mustkey^b and pubA = log A / log Mustkey.
8. Server side: store (ID_A, B, b, pubA).
9. Repeat the same process to register all users.
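The registration exchange above can be sketched in Python. The paper does not define the function f(pw) or the sizes of g, P, b, and Mustkey, so the hash-based f and the deliberately tiny parameters below are assumptions of this sketch, chosen only so the numbers stay readable:

```python
import hashlib
import math
import secrets

# Public Diffie-Hellman parameters, agreed beforehand (toy-sized here).
P, g = 467, 2

# --- User-A side: credentials; privA = f(pw) with an assumed f ---
user_id, pw = "alice", "alice-secret"
priv_a = int(hashlib.sha256(pw.encode()).hexdigest(), 16) % 100 + 2
pub_a = pow(g, priv_a, P)                     # pubA = g^privA mod P

# --- Server side: random b and master key Mustkey = SHA-2(ID || b) ---
b = secrets.randbelow(5) + 2                  # kept tiny in this sketch
mustkey = int(hashlib.sha256(f"{user_id}||{b}".encode()).hexdigest(), 16) % 50 + 2

# --- User-A side: blind pubA under the master key and send A ---
A = mustkey ** pub_a                          # A = Mustkey^pubA

# --- Server side: recover pubA and store the registration record ---
recovered_pub_a = round(math.log(A) / math.log(mustkey))  # log A / log Mustkey
B = mustkey + mustkey ** b                    # B = Mustkey + Mustkey^b
record = (user_id, B, b, recovered_pub_a)     # store (ID_A, B, b, pubA)
assert recovered_pub_a == pub_a
```

Note that pubA itself never crosses the channel; the server recovers it from the blinded value A via the logarithm in step 7.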
Figure 1. Registration phase of AKMS scheme
2.3.2. Login and verification phase
Before transferring data, the user must log in to the server by entering the username (ID), keeping the password (pw) on the user side, and naming any previously registered user he wants to deal with. The server looks up the sender's name in its database and retrieves its random b and B; it also looks up the name of the intended receiver to get that user's public key. It then derives the MustKey from the random b and sends it, along with B and pubB, to the sender. The server continues by computing the A value and the secret key, and hashes it to obtain the key between the sender and the server. The user performs corresponding mathematical operations, using the MustKey to extract the secret key connecting him to the server. Verification succeeds only when this secret key equals the one obtained at the server; otherwise the two cannot communicate. The sender also uses the receiving user's public key to extract, in complete secrecy, the private secret key between them, which even the server does not know. Figure 2 shows the dialogue of this phase:
Figure 2. Login phase of the AKMS scheme
Algorithm:
1. User-A side: enter the user ID (ID_A), password (pw), and receiver user ID (ID_B).
2. Send ID_A and ID_B from user-A to the server side.
3. Server side: get the b and B values of ID_A, as well as pubB of ID_B.
4. Calculate Mustkey = SHA-2(ID_A || b).
5. Send Mustkey, B, and pubB from the server to the user-A side.
6. User-A side: calculate privA = f(pw), pubA = g^privA mod P, S = (B − Mustkey)^(2·pubA), and K_{A,s} = H(S).
7. Server side: calculate A = Mustkey^pubA, S = (A · Mustkey^pubA)^b, and K_{A,s} = H(S).
8. Repeat the same process to authenticate the receiver user (ID_B) and obtain its key with the server.
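A minimal check that steps 6 and 7 above yield the same session key on both sides; the small integers stand in for real SHA-2 digests and random values, and SHA-256 is an assumed instance of the hash H:

```python
import hashlib

# Toy stand-ins for values established at registration: the server
# stores (b, B); the user re-derives pubA from the password.
mustkey, b, pub_a = 7, 3, 5
B = mustkey + mustkey ** b          # stored on the server at registration

# --- User-A side (step 6): knows pubA and receives Mustkey, B ---
S_user = (B - mustkey) ** (2 * pub_a)
K_As_user = hashlib.sha256(str(S_user).encode()).hexdigest()

# --- Server side (step 7): knows b and re-derives A = Mustkey^pubA ---
A = mustkey ** pub_a
S_server = (A * mustkey ** pub_a) ** b
K_As_server = hashlib.sha256(str(S_server).encode()).hexdigest()

# Verification succeeds only when both sides hold the same K_{A,s}:
assert K_As_user == K_As_server     # both S values equal Mustkey^(2*b*pubA)
```

The equality holds because B − Mustkey = Mustkey^b and A · Mustkey^pubA = Mustkey^(2·pubA), so both sides compute Mustkey^(2·b·pubA).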
2.3.3. Communication phase
The verification process gives the sending user two secret keys: one for his communication with the server, and one for his interaction with the receiving user, agreed using the Diffie–Hellman key exchange. In this third stage, the user prepares the file to be sent and encrypts it with a random key (filekey), usually chosen to suit the encryption algorithm in use. The scheme then encrypts this filekey with the private key established between the sending user and the intended receiving user. Finally, the scheme sends the encrypted data and the encrypted filekey to the server under the private key between the user and the server itself. Figure 3 shows the dialogue of this phase:
Figure 3. Communication phase of the AKMS scheme
Algorithm:
1. User-A side: generate a random filekey, encrypt the original file with filekey, obtain the secret key K_{A,B} between the sender and receiver, encrypt filekey with K_{A,B}, and encrypt both with the K_{A,s} key.
2. Send ((file)_filekey, (filekey)_{K_{A,B}})_{K_{A,s}} from user-A to the server side.
3. Server side: recover (file)_filekey and (filekey)_{K_{A,B}}, then encrypt them again using K_{B,s}.
4. Send ((file)_filekey, (filekey)_{K_{A,B}})_{K_{B,s}} from the server to the user-B side.
5. User-B side: calculate the secret key K_{A,B}, recover the file key, and finally recover the original file.
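The steps above can be sketched end to end. The paper leaves the symmetric cipher open, so the SHA-256 counter-keystream XOR below is a purely illustrative stand-in, and the 32-byte key sizes are assumptions of this sketch:

```python
import hashlib
import secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 counter keystream.

    Stands in for whichever symmetric algorithm AKMS deploys;
    applying it twice with the same key recovers the plaintext.
    """
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Keys assumed from the earlier phases: K_{A,B} (sender-receiver) and
# the per-user server keys K_{A,s} and K_{B,s}.
k_ab = secrets.token_bytes(32)
k_as = secrets.token_bytes(32)
k_bs = secrets.token_bytes(32)

# --- Sender A (step 1 and 2) ---
file_data = b"confidential report"
filekey = secrets.token_bytes(32)                  # random filekey
c_file = keystream_xor(file_data, filekey)         # (file)_filekey
c_key = keystream_xor(filekey, k_ab)               # (filekey)_{K_A,B}
envelope_to_server = keystream_xor(c_file + c_key, k_as)

# --- Server (steps 3 and 4): re-wrap for B, never seeing the file ---
inner = keystream_xor(envelope_to_server, k_as)
envelope_to_b = keystream_xor(inner, k_bs)

# --- Receiver B (step 5) ---
inner_b = keystream_xor(envelope_to_b, k_bs)
c_file_b, c_key_b = inner_b[:-32], inner_b[-32:]
filekey_b = keystream_xor(c_key_b, k_ab)           # needs K_{A,B}
assert keystream_xor(c_file_b, filekey_b) == file_data
```

Even after unwrapping K_{A,s}, the server holds only ciphertext: without K_{A,B} it can recover neither filekey nor the file.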
2.4. Detail of system model
2.4.1. Generate mustkey
The random value b generated in the AKMS scheme is considered one of the most important items that must be generated for each user. The server stores it for later use to derive the master key, and through the master key it identifies the private key (K_{A,s}) between itself and the concerned user. As in any big data environment application, registration requires the user to enter a valid password and a valid username (ID); in AKMS, however, the user keeps the password to himself and sends only the username (ID). The scheme generates the master key on the server side itself as a hash value: the SHA-2 hashing algorithm combines the username (ID) with a random number (b) generated on the server side. This helps distinguish users in case there are users with the same name [23].
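A small sketch of this server-side derivation; SHA-256 (one member of the SHA-2 family) and the exact serialization of ID and b are assumptions, since the paper specifies only SHA-2 over the concatenation ID || b:

```python
import hashlib
import secrets

def derive_mustkey(user_id: str, b: int) -> str:
    # Mustkey = SHA-2(ID || b); the same (ID, b) pair always yields
    # the same master key, which is why the server stores b.
    return hashlib.sha256(f"{user_id}||{b}".encode()).hexdigest()

# Registration: pick a fresh b per user and store it server-side.
b_alice = secrets.randbits(128)
mk_registration = derive_mustkey("alice", b_alice)

# Login: re-derive from the stored b; the result matches registration.
mk_login = derive_mustkey("alice", b_alice)
assert mk_login == mk_registration

# Two users with the same username still get distinct master keys,
# because their server-side b values differ.
b_other = secrets.randbits(128)
assert derive_mustkey("alice", b_other) != mk_registration
```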
2.4.2. Generate PrivA, PubA, and K_{A,B}
The AKMS scheme guarantees confidentiality in transferring keys between users, in a manner that even the server itself cannot learn. The scheme generates a random key (fileKey) used to encrypt the file before transfer, and the user encrypts fileKey with the unified key created between him and the intended receiving user. That key relies on the Diffie–Hellman key exchange, whose mathematical operations let both sender and receiver agree on a secret shared key (K_{A,B}) that is never sent directly [24]. For that reason, the AKMS scheme generates two keys for each concerned user, one private (PrivA) and one public (PubA). The private key depends entirely on the unshared password used by the sender, while the public key is generated from the private key as indicated in (1) and (2), and is finally sent to and stored on the server.

PrivA = f(pw)   (1)
PubA = g^PrivA mod P   (2)

On the other side, the receiving user takes the same steps with his own password to generate his private and public keys. With each user knowing the other party's public key, and through the Diffie–Hellman key exchange, the sender and receiver can generate a unified key through which they can communicate separately from the server.
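The agreement on K_{A,B} follows textbook Diffie–Hellman and can be sketched as below; the tiny prime and the hash-based stand-in for the unspecified f(pw) are illustrative assumptions, not deployment-sized choices:

```python
import hashlib

# Public parameters agreed beforehand, toy-sized for readability.
P, g = 467, 2

def derive_priv(password: str) -> int:
    # Stand-in for PrivA = f(pw): the paper does not define f, so we
    # hash the password into an exponent for illustration.
    return int(hashlib.sha256(password.encode()).hexdigest(), 16) % (P - 2) + 2

priv_a = derive_priv("alice-secret")   # formula (1): PrivA = f(pw)
priv_b = derive_priv("bob-secret")
pub_a = pow(g, priv_a, P)              # formula (2): PubA = g^PrivA mod P
pub_b = pow(g, priv_b, P)              # both public keys are stored on the server

# Each party combines its own private key with the other's public key;
# the resulting K_{A,B} is never transmitted, and the server (which
# sees only pub_a and pub_b) cannot compute it.
k_ab_sender = pow(pub_b, priv_a, P)
k_ab_receiver = pow(pub_a, priv_b, P)
assert k_ab_sender == k_ab_receiver
```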
2.4.3. Generate the corresponding public keys (A, B) and K_{A,s}
The server cannot learn any information about the transmitted data: the data is encrypted on the sender side, and no one can decrypt it except the intended receiving user, using the secret shared key agreed between them. Moreover, the AKMS scheme adds another level of data security to ensure that the already-encrypted data reaches the server safely. Hence the importance of a unique secret key agreed between the user and the server for use during communication: each user has his own agreed secret key for dealing with the server, denoted K_{A,s} for user A. This key is obtained via A and B, the corresponding public keys between the user and the server, from which the right key can be calculated, as formulas (3) to (6) show:

A = Mustkey^pubA   (3)
B = Mustkey + Mustkey^b   (4)
S = (A · Mustkey^pubA)^b = (B − Mustkey)^(2·pubA)   (5)
K_{A,s} = H(S)   (6)
2.4.4. Ciphertext sharing
If user-A wants to send a file to share with user-B, the scenario is as follows. User-A logs in by entering his
name and password, then specifies user-B as the user concerned with receiving the file, and chooses the file
to send. The server checks for the presence of user-B in its database, then sends user-B's public key (Pub_B)
together with user-A's own master key (Mustkey-A) to the user-A side. The master key Mustkey-A is used to
find the shared secret key between user-A and the server (K_(A,s)). Moreover, the scheme generates a random
key (filekey), compatible with the encryption algorithm in use, to encrypt the file on the user-A side, and by
using user-B's public key (Pub_B) it derives the shared key that connects the two users (K_(A,B)) via the
Diffie–Hellman key exchange technique. The random key is encrypted under the shared key K_(A,B); then
the ciphertext file, along with the filekey ciphertext, is sent to the server using K_(A,s) and stored there. On
the other side, user-B performs the same steps to log in and obtain his own K_(A,B) and K_(B,s). User-B
downloads the stored file and the filekey ciphertext from the server using the key K_(B,s) that connects
them, uses K_(A,B) to decrypt the filekey ciphertext, and finally uses the recovered filekey to obtain the file
that user-A shared.
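The sharing scenario above can be sketched end to end. The xor_cipher stand-in, the key sizes, and the bundle layout are illustrative assumptions; the paper leaves the concrete symmetric algorithm open:

```python
import hashlib, os

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic keystream from a key; a toy construction for the demo."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a keystream (encrypt == decrypt)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Keys assumed already established: K_AB (user-to-user, via Diffie-Hellman)
# and K_AS (user-A with the server; user-B holds K_BS symmetrically).
k_ab, k_as = os.urandom(32), os.urandom(32)

# Sender side (user-A): a fresh random filekey encrypts the file; K_AB
# protects the filekey; K_AS wraps the bundle for transport to the server.
file_data = b"quarterly report"
filekey = os.urandom(32)
ct_file = xor_cipher(filekey, file_data)
ct_filekey = xor_cipher(k_ab, filekey)
transport = xor_cipher(k_as, ct_file + ct_filekey)

# The server stores and forwards the bundle without being able to read it.
# Receiver side (user-B): unwrap the transport layer, recover the filekey
# with K_AB, then decrypt the file itself.
bundle = xor_cipher(k_as, transport)
ct_file2, ct_filekey2 = bundle[:len(ct_file)], bundle[len(ct_file):]
filekey2 = xor_cipher(k_ab, ct_filekey2)
plain = xor_cipher(filekey2, ct_file2)
```

Note how the server only ever handles ciphertext: removing the transport layer with K_AS still leaves the file and filekey encrypted under keys the server never sees.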
Int J Elec & Comp Eng, Vol. 12, No. 3, June 2022: 3238-3248 (ISSN: 2088-8708)
3. RESULT AND DISCUSSION
3.1. Analysis and evaluation
3.1.1. Keys security
The AKMS scheme depends on transferring keys indirectly, using mathematical formulas that make it very
difficult for attackers to break. It is also careful not to send any values that might lead back to the main keys,
so all transmitted values an attacker may obtain carry no meaning or benefit. As observed in the registration
and login phases, AKMS provides zero-knowledge security by using the unified key K = H(S), which is the
hash result calculated on both sides based on formulas (7) and (8):
S = (A * mustkey^(Pub_A))^b    (7)
S = (B - mustkey)^(2*Pub_A)    (8)
To evaluate either equation, an attacker needs the value Pub_A, which is never shared. Moreover, Pub_ID
depends on Priv_A, which in turn depends directly on the user's own password. This ensures that no user can
obtain the correct key to connect to the server unless he presents the correct password used during his
registration phase, which is never shared; thus any impersonation attempt also fails. Concerning the
communication phase that transfers the file, the AKMS scheme encrypts the file using a generated random
encryption key (filekey) compatible with the encryption algorithm in use. It then encrypts this key to obtain
the filekey ciphertext in a way that only the concerned receiving user can break, using the Diffie–Hellman
key exchange technique, whose security is well established. In this way, what reaches the server is the
encrypted file along with the filekey ciphertext, and both are delivered unchanged to the concerned receiving
user without being decrypted.
3.1.2. Data security
In general, the AKMS scheme is designed to ensure the confidentiality of data and to preserve security and
privacy. It guarantees double security by using two levels: the first between the user and the server, the
second between the sending user and the counterpart user concerned with receiving the data. Moreover, the
scheme encrypts the file before sending it, using the randomly generated filekey, then uploads the result to
the server for storage. Only the concerned user can decrypt the file; what is sent is useless to anyone else, and
even the server cannot understand it. The user therefore encrypts the randomly generated filekey using a key
shared with the concerned receiving user, obtained through the Diffie–Hellman key exchange technique, so
that the key is transferred to the receiver secretly. This maintains and guarantees the preservation of security
and the protection of data.
3.1.3. Public channel transmission
The AKMS scheme provides a double level of security: the first between the user and the server, the other
between the sending user and the concerned receiving user. The shared keys at the first level are reached in a
complex and indirect way, since each user has his own unshared password, which distinguishes him from
others and protects him against impersonation attempts both on his side and on the server side. Moreover,
user data is sent in its ciphertext form, using a random key (filekey) that is generated each time in a way
compatible with the encryption algorithm used. Finally, the filekey ciphertext is produced such that only the
concerned receiving user can break it, using the Diffie–Hellman key exchange technique. Thus, even if the
big data environment's channel is attacked, it is difficult for the attacker to break the ciphertext data
transferred between the user and the server, since the user is the only one who can construct the correct key
between himself and the server; anyone who does break the channel will find only ciphertext data of no
benefit.
3.1.4. Security isolation inquiry
The cloud server's role in the communication phase is to store the ciphertext of users' data and keys
throughout the whole process. The scheme transfers the data in a ciphertext form that no one other than the
concerned user can understand. The server also maintains data security during transfer and enforces strict
registration and login stages that are difficult to break. This means the server's function remains that of a
pass-through only, doing no more than verifying the authenticity of users and managing the keys between
them. Thus, the data stored on the cloud is isolated from the server.
3.1.5. Security features
The security analysis of AKMS proved the strength of the approach against most common attacks, through a
powerful and strict method that enables the user to register himself with the server in a robust and secure
manner and to obtain his key for communicating with the server over the untrusted channel.
Table 2 summarizes some of the security features of the AKMS scheme.
Table 2. Security features of the AKMS scheme
Attacks Avoided
Password guessing attack Yes
Replay attack Yes
Man in the middle attack Yes
Brute-force attack Yes
Dictionary attack Yes
Insider attack Yes
Impersonation attack Yes
User anonymity attack Yes
3.2. Performance analysis
In this section, the efficiency of AKMS demonstrated with compared to some of the related works
concerns to big data security, as this be through a discussion of performance analysis by computing the cost
of computation, communication, and storage overhead. The meaning of notations used in the comparison is
given in Table 3.
Table 3. The notation used in the computation cost comparison
Symbols Notation for
𝑇𝐻 Hash function
𝑇𝐸 Exponential operation
𝑇𝑆 Symmetric key encryption/decryption
3.2.1. Computation cost
Table 4 shows the computation cost results for the AKMS scheme and the related works: the static
knowledge-based authentication mechanism (SKAM) [6], the token-based authentication security scheme
(TASS) [25], and the novel authentication framework for Hadoop (NAFH) [23], giving a relative comparison
for the registration phase; Table 5 does the same for the login and communication phases.
Table 4. The computation cost of the registration phase
AKMS (The Proposed) SKAM TASS NAFH
2𝑇𝐻 + 𝑇𝐸 2𝑇𝑆 + 3𝑇𝐻 + 𝑇𝐸 2𝑇𝐻 + 𝑇𝐸 4𝑇𝑆 + 𝑇𝐻
Table 5. The computation cost of login and communication
AKMS (The Proposed) SKAM TASS NAFH
3𝑇𝑆 + 𝑇𝐻 + 𝑇𝐸 8𝑇𝑆 + 2𝑇𝐻 5𝑇𝑆 + 6𝑇𝐻 + 5𝑇𝐸 21𝑇𝑆
The total computation cost of the registration phase, covering both the user and the server, is
2T_H + T_E. This is a reasonably good value, since this phase runs only once per user, during his registration
with the server. The total computation cost of the login and communication phases for the user and server is
3T_S + T_H + T_E, which is more efficient than the related works it is compared with, in terms of these
operation counts.
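The operation counts in Tables 4 and 5 can be tallied programmatically as a quick sanity check; note that this raw total ignores the differing real costs of the three primitive operations:

```python
from collections import Counter

# Operation counts transcribed from Tables 4 and 5:
# TH = hash, TE = exponentiation, TS = symmetric enc/dec.
registration = {
    "AKMS": Counter(TH=2, TE=1),
    "SKAM": Counter(TS=2, TH=3, TE=1),
    "TASS": Counter(TH=2, TE=1),
    "NAFH": Counter(TS=4, TH=1),
}
login_comm = {
    "AKMS": Counter(TS=3, TH=1, TE=1),
    "SKAM": Counter(TS=8, TH=2),
    "TASS": Counter(TS=5, TH=6, TE=5),
    "NAFH": Counter(TS=21),
}

# Total primitive operations per scheme in the login/communication phases,
# the phases that run most frequently.
totals = {name: sum(ops.values()) for name, ops in login_comm.items()}
```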
3.2.2. Communication cost
The communication cost is usually calculated for measuring the efficiency of login and verification
phases, as it is frequently executing than other phases. The following values suggested as the basis from
which the values of the concerned terms calculated. Where the identity ID, password pw, salt, secret one-way
hash function, visualization function, ticket, related message as answers all recommended imposed to being
128-bits long, while the random number, sentences, and the encrypted session key all 1024-bits long, other
value as the timestamp is of 32-bits long.
In the AKMS scheme, the communication overhead is 3872 = (5*160 + 3*1024) bits. As noted, this is better
than the other related works. Table 6 shows the communication cost for the AKMS scheme and related
works such as SKAM [6], TASS [25], and NAFH [23], calculated in the same way and assuming that the
same variables in each of them have the same values. The values in Table 6 are graphically drawn in
Figure 4 to clearly show the different communication costs of each work, measured in bits.
Figure 4. Communication cost of login and communication phase
Table 6. Communication cost of login and communication
AKMS (The Proposed) SKAM TASS NAFH
3872 5376 9280 6912
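The AKMS entry in Table 6 can be reproduced from the stated breakdown of five 160-bit values plus three 1024-bit values (the other schemes' breakdowns are not itemized in this section, so only the AKMS figure is checked):

```python
# AKMS communication overhead per Section 3.2.2: five 160-bit values
# plus three 1024-bit values transmitted during login/communication.
akms_comm_bits = 5 * 160 + 3 * 1024
print(akms_comm_bits)  # 3872, matching Table 6
```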
3.2.3. Storage overhead
The AKMS scheme works by registering certain user values that the server maintains in its database, then
using them in future login phases to verify the user. It stores (ID, B, b, PubID) in the server database, so the
maximum storage cost is 2368 = (2*160 + 2*1024) bits. Table 7 provides the comparison results of the
AKMS scheme against the related works SKAM [6], TASS [25], and NAFH [23], and shows that the storage
cost of AKMS is relatively reasonable and could become lower if the keys used were reduced in size. The
values in Table 7 are graphically drawn in Figure 5 to clearly show the different storage costs of each work,
measured in bits.
Table 7. Storage cost of registration phases
AKMS (The Proposed) SKAM TASS NAFH
2368 1504 6112 3392
Figure 5. Storage overhead
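The AKMS storage figure in Table 7 likewise follows from the stated per-record layout, counting the four stored values (ID, B, b, PubID) as two 160-bit and two 1024-bit quantities:

```python
# AKMS server-side storage per user, per Section 3.2.3: two 160-bit
# values plus two 1024-bit values kept in the server database.
akms_storage_bits = 2 * 160 + 2 * 1024
print(akms_storage_bits)  # 2368, matching Table 7
```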
4. CONCLUSION
This paper presented an authenticated key management scheme for securing the big data environment
(AKMS), which protects users' data and privacy by managing keys safely in a big data environment while
remaining efficient, convenient, and secure. The AKMS scheme works through two levels of security. The
first concerns how the user communicates with the server, in a way that prevents any attempt to penetrate the
users who send or receive the data. The second makes the transmitted data vague so that no one can read it
except the concerned user. The scheme starts from the users' registration phase on the server, where a
common unique key, depending entirely on the password, is established for each user to communicate with
the server, and all communication between them then takes place using only this key. Second, AKMS
encrypts the data before transfer using a random encryption key (filekey) chosen to suit the encryption
algorithm used. It then encrypts this key as well, obtaining the filekey ciphertext, under the key shared
between the sender and the concerned receiving user, which both parties reach via the Diffie–Hellman key
exchange technique. AKMS ensures that any data sent cannot be penetrated by any party, including the
server, thus providing complete protection of users' data during transfer, including the preservation of their
privacy. Moreover, this paper discussed some concepts related to analysis and evaluation, such as keys
security, data security, public channel transmission, and security isolation inquiry, which demonstrated the
rich value the AKMS scheme carries. AKMS also achieved very satisfactory results regarding computation
cost, communication cost, and storage overhead.
REFERENCES
[1] A. Castiglione, A. De Santis, A. Castiglione, and F. Palmieri, “An efficient and transparent one-time authentication protocol with
non-interactive key scheduling and update,” in 2014 IEEE 28th International Conference on Advanced Information Networking
and Applications, May 2014, pp. 351–358, doi: 10.1109/AINA.2014.45.
[2] M. E. Hodeish, L. Bukauskas, and V. T. Humbe, “A new efficient TKHC-based image sharing scheme over unsecured channel,”
Journal of King Saud University - Computer and Information Sciences, Aug. 2019, doi: 10.1016/j.jksuci.2019.08.004.
[3] M. E. Hodeish and V. T. Humbe, “An optimized halftone visual cryptography scheme using error diffusion,” Multimedia Tools
and Applications, vol. 77, no. 19, pp. 24937–24953, Oct. 2018, doi: 10.1007/s11042-018-5724-z.
[4] U. Narayanan, V. Paul, and S. Joseph, “A novel approach to big data analysis using deep belief network for the detection of
android malware,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 16, no. 3, pp. 1447-1454, Dec. 2019,
doi: 10.11591/ijeecs.v16.i3.pp1447-1454.
[5] T. S. Algaradi and B. Rama, “A novel blowfish based-algorithm to improve encryption performance in hadoop using mapreduce,”
International Journal of Scientific & Technology Research, vol. 8, no. 11, pp. 2074–2081, 2019.
[6] T. S. Algaradi and B. Rama, “Static knowledge-based authentication mechanism for hadoop distributed platform using kerberos,”
International Journal on Advanced Science, Engineering and Information Technology, vol. 9, no. 3, p. 772, Jun. 2019, doi:
10.18517/ijaseit.9.3.5721.
[7] K. Fan, S. Lou, R. Su, H. Li, and Y. Yang, “Secure and private key management scheme in big data networking,” Peer-to-Peer
Networking and Applications, vol. 11, no. 5, pp. 992–999, Sep. 2018, doi: 10.1007/s12083-017-0579-z.
[8] M. Fire, D. Kagan, A. Elyashar, and Y. Elovici, “Social privacy protector-protecting users’ privacy in social networks,” in
Proc. of the Second International Conference on Social Eco-Informatics (SOTICS), Venice, Italy, 2012.
[9] T. S. Algaradi and B. Rama, “A new encryption scheme for performance improvement in big data environment using
mapreduce,” Journal of Engineering Science and Technology, vol. 16, no. 5, pp. 3772–3791, 2021.
[10] N. IrshadHussain, B. Choudhury, and S. Rakshit, “A novel method for preserving privacy in big-data mining,” International
Journal of Computer Applications, vol. 103, no. 16, pp. 21–25, Oct. 2014, doi: 10.5120/18159-9378.
[11] T. S. Algaradi and B. Rama, “Big data security: a progress study of current user authentication schemes,” in 2018 4th
International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), Sep. 2018,
pp. 68–75, doi: 10.1109/iCATccT44854.2018.9001946.
[12] A. M. I. Alkuhlani and S. B. Thorat, “Lightweight anonymity-preserving authentication and key agreement protocol for the
internet of things environment,” 2018, pp. 108–125.
[13] S. H. A. Refish and S. W. Shneen, “E-PAC: efficient password authentication code based RMPN method and diffie-hellman
algorithm,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 19, no. 1, p. 485, Jul. 2020, doi:
10.11591/ijeecs.v19.i1.pp485-491.
[14] F. I. Kandah, O. Nichols, and L. Yang, “Efficient key management for big data gathering in dynamic sensor networks,” in 2017
International Conference on Computing, Networking and Communications (ICNC), Jan. 2017, pp. 667–671, doi:
10.1109/ICCNC.2017.7876209.
[15] S. Xu, M. Matthews, and C.-T. Huang, “Security issues in privacy and key management protocols of IEEE 802.16,” in
Proceedings of the 44th annual southeast regional conference on - ACM-SE 44, 2006, p. 113, doi: 10.1145/1185448.1185474.
[16] G. A. Al-Rummana, A. H. A. Al Ahdal, and G. N. Shinde, “A robust user authentication framework for bigdata,” in 2021 Third
International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Feb. 2021,
pp. 1256–1261, doi: 10.1109/ICICV50876.2021.9388505.
[17] M. Ajtai, “Public key cryptosystem and associated method utilizing a hard lattice with O(n log n) random bits for security,” US
8462940 B2[P], 2006.
[18] D. Denning, “Is quantum computing a cybersecurity threat?,” American Scientist, vol. 107, no. 2, 2019, Art. no. 83, doi:
10.1511/2019.107.2.83.
[19] E. Hamida, H. Noura, and W. Znaidi, “Security of cooperative intelligent transport systems: standards, threats analysis and
cryptographic countermeasures,” Electronics, vol. 4, no. 3, pp. 380–423, Jul. 2015, doi: 10.3390/electronics4030380.
[20] L. Eschenauer and V. D. Gligor, “A key-management scheme for distributed sensor networks,” in Proceedings of the 9th ACM
conference on Computer and communications security - CCS ’02, 2002, p. 41, doi: 10.1145/586110.586117.
[21] R. Zhou, Y. Lai, Z. Liu, and J. Liu, “Study on authentication protocol of SDN trusted domain,” in 2015 IEEE Twelfth
International Symposium on Autonomous Decentralized Systems, Mar. 2015, pp. 281–284, doi: 10.1109/ISADS.2015.29.
[22] D. Dolev and A. Yao, “On the security of public key protocols,” IEEE Transactions on Information Theory, vol. 29, no. 2,
pp. 198–208, Mar. 1983, doi: 10.1109/TIT.1983.1056650.
[23] P. K. Rahul and T. GireeshKumar, “A novel authentication framework for hadoop,” In Artificial Intelligence and Evolutionary
Algorithms in Engineering Systems, Springer, New Delhi, 2015, pp. 333–340, doi: 10.1007/978-81-322-2126-5_37.
[24] E. Bresson, O. Chevassut, D. Pointcheval, and J.-J. Quisquater, “Provably authenticated group Diffie-Hellman key exchange,” in
Proceedings of the 8th ACM conference on Computer and Communications Security - CCS ’01, 2001, Art. no. 255, doi:
10.1145/501983.502018.
[25] Y.-S. Jeong and Y.-T. Kim, “A token-based authentication security scheme for hadoop distributed file system using elliptic curve
cryptography,” Journal of Computer Virology and Hacking Techniques, vol. 11, no. 3, pp. 137–142, Aug. 2015, doi:
10.1007/s11416-014-0236-5.
BIOGRAPHIES OF AUTHORS
Thoyazan Sultan Algaradi received the Bachelor’s degree in Computer Science
and Engineering, majoring in Computer Engineering, from AlHodeidah University, Yemen, in
2014; the M.Sc. degree in Computer Science, majoring in Computer Networks, from the
School of Computational Science at S.R.T.M. University, Nanded, India, in 2017; and the
Ph.D. degree in Big Data Security from the Department of Computer Science, Kakatiya
University, Warangal, India, in September 2021. He has published more than 5 refereed
research papers. His research interests include information and network security, especially
cryptography, authentication, and key management, as well as the security of big data, IoT,
and cloud computing. He can be contacted at email: yas.sul77@gmail.com.
Boddireddy Rama received her Ph.D. degree in Computer Science from Sri
Padmavathi Mahila Viswa Vidhyalayam (India) in 2008. She has been an Assistant Professor
of Computer Science at Kakatiya University since 2010. She has published more than 80
refereed research papers and 1 book, and has 10 patents filed and pre-published. She is the
recipient of a Best Paper Award at an international conference for the paper “A data mining
perspective for forest fire prediction”. Her interests include artificial intelligence and data
mining. She previously headed the Department of Computer Science and is presently
chairman of the board of studies in computer science, network director, and in charge of the
website of Kakatiya University. To her credit, 5 Ph.D.s have been awarded and 3 more
submitted under her guidance. She can be contacted at email: rama.abbidi@gmail.com.