This document summarizes a research paper that proposes using blockchain technology for authentication in Hadoop instead of the traditional Kerberos protocol. It describes some security issues with Kerberos, such as single point of failure and replay attacks. The authors created a model for a distributed authentication mechanism using blockchain concepts that is integrated with the HDFS client. Key features of the blockchain authentication method include decentralized authentication without keys, an unalterable record of transactions, zero single points of failure, and prevention of data theft. The implementation uses a private blockchain to store user information that is verified for authentication to access data on HDFS.
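The core idea, a tamper-evident ledger of user records checked at authentication time, can be sketched in a few lines. The following is a minimal illustration, not the paper's actual implementation; the class and field names (`AuthChain`, `pwd_hash`) are invented for the sketch, and a real system would store salted password digests or certificates rather than bare hashes.

```python
import hashlib
import json


class AuthBlock:
    """One block recording a user registration event."""

    def __init__(self, index, prev_hash, payload):
        self.index = index
        self.prev_hash = prev_hash
        self.payload = payload  # e.g. {"user": ..., "pwd_hash": ...}
        self.hash = self.compute_hash()

    def compute_hash(self):
        body = json.dumps({"i": self.index, "p": self.prev_hash,
                           "d": self.payload}, sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()


class AuthChain:
    """Hash-linked ledger of user credentials: any tampering breaks the chain."""

    def __init__(self):
        self.blocks = [AuthBlock(0, "0" * 64, {"genesis": True})]

    def register(self, user, password):
        pwd_hash = hashlib.sha256(password.encode()).hexdigest()
        prev = self.blocks[-1]
        self.blocks.append(AuthBlock(prev.index + 1, prev.hash,
                                     {"user": user, "pwd_hash": pwd_hash}))

    def authenticate(self, user, password):
        """Walk the chain backwards; the latest matching record decides."""
        pwd_hash = hashlib.sha256(password.encode()).hexdigest()
        for b in reversed(self.blocks):
            if b.payload.get("user") == user:
                return b.payload["pwd_hash"] == pwd_hash
        return False

    def verify_chain(self):
        """True only if every link and every block hash is still consistent."""
        return all(self.blocks[i].prev_hash == self.blocks[i - 1].hash and
                   self.blocks[i].hash == self.blocks[i].compute_hash()
                   for i in range(1, len(self.blocks)))
```

Because each block's hash covers the previous block's hash, altering any stored credential invalidates every later block, which is the "unalterable record" property the paper relies on.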
Protecting Global Records Sharing with Identity Based Access Control List (Editor IJCATR)
Generally, information is stored in a database, and sensitive information is encrypted before being outsourced to a service provider. Requests are sent to the service provider through SQL queries, but query expressiveness is limited by whatever software-based cryptographic constructs are deployed for server-side processing of the encrypted data. Data sharing through a service provider is emerging as a promising technique for allowing users to access data, yet the growing number of customers storing their data with service providers increasingly challenges users' privacy and data security. TrustedDB is an outsourced database prototype that allows clients to execute SQL queries with privacy and under regulatory compliance constraints by leveraging server-hosted, tamper-proof trusted hardware at crucial query-processing stages, thereby removing limits on the types of supported queries. It focuses on providing a dependable and secure data-sharing service that gives users dynamic access to their information. TrustedDB is built and runs on real hardware, and its performance and costs are evaluated here.
Learn Basics & Advances of Hyperledger - 101 Blockchains (JackSmith435850)
Hyperledger is an open-source blockchain project that supports the development of blockchain applications. It was initiated in 2015 by the Linux Foundation and does not define a specific blockchain standard, instead focusing on collaborative development. Hyperledger Fabric is the most notable project in Hyperledger and uses smart contracts and permissioned ledgers. It differs from other blockchains by requiring members to enroll through a Membership Service Provider.
Blockchain technology is a type of distributed database that achieves consensus in an open network without a central authority. However, blockchain may not always be needed, as other distributed database solutions can often achieve the same goals more efficiently. The unique property of blockchain is enabling consensus in a permissionless network, but for permissioned networks a traditional distributed database may suffice. While blockchain technology has potential benefits, its applications are currently limited to scenarios where no trusted third party is available. Other existing distributed database technologies can often meet use case needs without blockchain's limitations.
This document discusses using Intel's Software Guard Extensions (SGX) for reaching consensus in distributed ledger technologies. SGX creates a hardware-based trusted execution environment that could offer a new way to reach consensus with fewer participants than proof-of-work algorithms while maintaining security. However, SGX also has limitations, including known side-channel attacks and functional issues like vendor lock-in. The document evaluates the promises and risks of using SGX for consensus, concluding that while SGX research is improving security, it may not be suitable for applications requiring very strong confidentiality due to inherent risks that cannot be mathematically proven away.
Matching Identity Management Solutions to Self-Sovereign Identity Principles (Tommy Koens)
We analyzed nearly 50 (blockchain-based) digital identity management solutions and matched them against Self-Sovereign Identity (SSI) management principles and additional requirements.
Distributed ledger technical research in the Central Bank of Brazil (mustafa sarac)
This document summarizes a report by the Central Bank of Brazil on its distributed ledger technology research. It provides an overview of the bank's research process, including analyzing potential use cases, examining platforms to develop prototypes, and addressing perceived privacy issues. The bank studied relevant projects, built experiments to test responses to privacy concerns, and hopes the lessons learned will inform future decisions on this technology. It also summarizes previous work by other financial institutions exploring distributed ledgers.
SecCloudPro: A Novel Secure Cloud Storage System for Auditing and Deduplication (IJCERT)
In this paper, we present integrity auditing and secure deduplication over cloud data using a novel secure framework. Data outsourced to cloud storage is usually only semi-trusted: because of weak cryptosystems and a lack of security while storing or sharing at the cloud level, data may be exposed or modified by attackers. To protect clients' data privacy and security, we propose an advanced secure framework, SecCloudPro, which keeps the cloud system secure and verifiable by using a third-party auditor (TPA) on behalf of the cloud server. Our framework also performs data deduplication in a secure way in order to save cloud storage space as well as data transfer capacity (bandwidth).
Security Check in Cloud Computing through Third Party Auditor (ijsrd.com)
In cloud computing, data owners host their data on cloud servers and users (data consumers) can access it from those servers. Because of this outsourcing, an independent auditing service is required to check data integrity in the cloud. Some existing remote integrity checking methods can only serve static archived data and therefore cannot be used for auditing, since data in the cloud can be dynamically updated. Thus an efficient and secure dynamic auditing protocol is required to convince data owners that their data are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems with a privacy-preserving auditing protocol. Then we extend our auditing protocol to support dynamic data operations, and show that it is efficient and secure in the random oracle model.
Password-Authenticated Key Exchange Scheme Using Chaotic Maps towards a New A... (dbpublications)
The document proposes a new password-authenticated key agreement protocol using chaotic maps towards a multiple servers to server architecture in the standard model. The proposed protocol aims to solve issues with single-point security, efficiency, and failure in centralized registration centers by adopting a multiple servers to server architecture. The protocol provides perfect forward secrecy and resistance to dictionary attacks while allowing weak passwords. A security proof is given for the standard model and an efficiency analysis is presented.
Access control in decentralized online social networks applying a policy hidi... (IGEEKS TECHNOLOGIES)
The document proposes a policy-hiding cryptographic scheme for access control in decentralized online social networks that aims to achieve both privacy and performance. Existing DOSNs reveal access policies but some cryptographic variants hide policies at the cost of performance. The proposed scheme uses predicate encryption with a univariate polynomial construction for access policies that drastically improves performance while leaking some policy information. Bloom filters are also used to decrease decryption time and indicate decryptable objects. The goal is to enable privacy-preserving access control without compromising usability in resource-constrained DOSN environments.
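The Bloom-filter optimization mentioned above is worth making concrete: before attempting an expensive decryption, a client probes a compact bit array that says "definitely not decryptable" or "possibly decryptable". This is a generic sketch of that pattern, not the paper's construction; the sizes and hash scheme are illustrative choices.

```python
import hashlib


class BloomFilter:
    """Compact set membership: false positives possible, no false negatives."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # the whole filter as one big integer

    def _positions(self, item):
        # Derive k bit positions by hashing the item with k different prefixes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # False means "certainly absent", so the client can skip the
        # costly decryption attempt for that object.
        return all(self.bits >> p & 1 for p in self._positions(item))
```

The asymmetry is what makes this useful for the scheme above: a "no" answer safely skips work, while the rare false "yes" only costs one wasted decryption attempt.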
This document describes BigchainDB, a scalable blockchain database. BigchainDB combines the key benefits of distributed databases and blockchains, with an emphasis on scale. It is built on an existing distributed database to inherit high throughput, capacity, low latency, and querying abilities. BigchainDB also adds blockchain characteristics like decentralized control, immutability, and the ability to create and transfer digital assets. The goal is to provide a decentralized database at scale, filling a gap in existing blockchain technologies.
Identity based proxy-oriented data uploading and remote data integrity checking in public cloud (Kamal Spring)
More and more clients would like to store their data on PCS (public cloud servers) along with the rapid development of cloud computing. New security problems have to be solved in order to help more clients process their data in the public cloud. When a client is restricted from accessing PCS, he will delegate a proxy to process his data and upload it. On the other hand, remote data integrity checking is also an important security problem in public cloud storage: it lets clients check whether their outsourced data is kept intact without downloading the whole data. From these security problems, we propose a novel proxy-oriented data uploading and remote data integrity checking model in identity-based public key cryptography: ID-PUIC (identity-based proxy-oriented data uploading and remote data integrity checking in public cloud). We give the formal definition, system model and security model. Then, a concrete ID-PUIC protocol is designed using bilinear pairings. The proposed ID-PUIC protocol is provably secure based on the hardness of the CDH (computational Diffie-Hellman) problem, and it is also efficient and flexible. Based on the original client's authorization, the proposed ID-PUIC protocol can realize private remote data integrity checking, delegated remote data integrity checking and public remote data integrity checking.
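The spot-checking idea behind remote integrity checking, verifying random blocks rather than downloading everything, can be shown with a toy challenge-response exchange. This is not the ID-PUIC pairing-based protocol; it is a simplified MAC-based stand-in, and for brevity the verifier recomputes from the original data, whereas a real scheme keeps only small precomputed tags.

```python
import hashlib
import hmac
import secrets

BLOCK = 4  # toy block size in bytes


def split_blocks(data):
    """Split the file into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]


def prove(data, key, indices):
    """Server side: MAC over the challenged blocks, in challenge order."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in indices:
        mac.update(split_blocks(data)[i])
    return mac.hexdigest()


def check(original, stored, key, num_challenges=2):
    """Owner side: random spot-check without retrieving all the data."""
    n = len(split_blocks(original))
    indices = [secrets.randbelow(n) for _ in range(num_challenges)]
    return prove(stored, key, indices) == prove(original, key, indices)
```

The point of the random indices is that a server which has discarded or corrupted data cannot predict which blocks will be challenged, so each challenge catches corruption with probability proportional to the fraction of damaged blocks.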
IRJET - A Novel and Secure Approach to Control and Access Data in Cloud St... (IRJET Journal)
This document proposes a novel approach to securely control and access data stored in the cloud using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). The approach aims to address abuse of access credentials by tracing malicious insiders and revoking their access. It presents two new CP-ABE frameworks that allow traceability of malicious cloud clients, identification of misbehaving authorities, and auditing without requiring extensive storage. The frameworks provide fine-grained access control and can revoke credentials of traced attackers.
Benchmark and comparison between Hyperledger and MySQL (TELKOMNIKA JOURNAL)
In this paper, we report benchmarking results for Hyperledger, a distributed ledger derived from blockchain technology. A method to evaluate Hyperledger on limited infrastructure is developed. The measured infrastructure consists of 8 nodes with a load of up to 20,000 transactions. Hyperledger consistently completed all evaluations: for 20,000 transactions, the run time was 74.30 s, the latency 73.40 ms, and the throughput 257 tps. The benchmarking shows that Hyperledger performs better than a database system in a high-workload scenario. We found that the maximum data volume in one transaction on the Hyperledger network is around ten (10) times that of MySQL, and that the time spent processing a single transaction in the blockchain network is 80-200 times faster than MySQL. This initial analysis can provide an overview for practitioners making decisions about adopting blockchain technology in their IT systems.
Security Mechanisms for Precious Data Protection of Divergent Heterogeneous G... (RSIS International)
This paper describes the security technologies and components used in a Grid computing environment. The Grid Security Infrastructure (GSI) implemented in the Globus Toolkit is described in detail. The main focus is on techniques for identification, authentication, and authorization based on X.509 certificates and the SSL/TLS protocols. Finally, a solution for group-based access control over grid resources is presented, built on top of the Globus Toolkit.
The Fabric platform is intended as a foundation for developing blockchain applications, products, or solutions. Fabric is a private and permissioned system that delivers a high degree of confidentiality, resiliency, flexibility, and scalability. It adopts a modular architecture and supports pluggable implementations of different components such as consensus and membership services. Like other blockchain technologies, Fabric has a ledger and smart contracts, and it is a system by which participants manage their transactions. A smart contract in Fabric is known as chaincode, and it is in the chaincode that the business logic is embedded. The following features impart a high degree of security and privacy to the Fabric framework.
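The chaincode-plus-MSP pattern described above can be mimicked in a few lines. Real Fabric chaincode is written in Go, Node.js, or Java against the Fabric shim API; the Python sketch below only illustrates the shape of the pattern, business logic operating on a key-value world state behind a membership check, with all names invented for the example.

```python
class Chaincode:
    """Toy chaincode: business logic reads/writes a key-value world state,
    and only enrolled identities (an MSP analogue) may invoke it."""

    def __init__(self, members):
        self.members = set(members)  # identities enrolled via the "MSP"
        self.state = {}              # the world state

    def invoke(self, caller, fn, *args):
        # Permissioned entry point: reject callers outside the membership list,
        # then dispatch to the named chaincode function.
        if caller not in self.members:
            raise PermissionError(f"{caller} is not an enrolled member")
        return getattr(self, fn)(*args)

    def create_asset(self, asset_id, owner):
        self.state[asset_id] = owner

    def transfer(self, asset_id, new_owner):
        if asset_id not in self.state:
            raise KeyError(asset_id)
        self.state[asset_id] = new_owner

    def read(self, asset_id):
        return self.state.get(asset_id)
```

In real Fabric the membership check is done cryptographically by the MSP against X.509 certificates, and state changes only commit after endorsement and ordering; the sketch collapses all of that into one object to show where the business logic lives.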
This curriculum vitae outlines Debakshi Chakraborty's experience as a software tester with over 6 years of experience in manual and automated testing. She has experience testing web, mobile, and desktop applications across various domains including banking, finance, telecom, and utilities. She is proficient in testing methodologies, tools like HP ALM, Jira, and languages like C#, SQL. Her experience includes roles as a test lead, test manager and analyst with responsibilities like requirements gathering, test planning, execution, and reporting.
This document proposes an architecture for authorization in constrained environments. It defines three levels - constrained, less-constrained, and principal. The constrained level consists of clients and resource servers with limited capabilities. The less-constrained level includes client and authorization servers with more capabilities. The principal level defines resource owners and requesting parties. It also outlines protocols, authorization granularity levels, and tasks related to authorization and authentication in constrained networks.
Attribute based encryption with privacy preserving in clouds (Swathi Rampur)
This document proposes a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. The key points are:
1) It allows for multiple key distribution centers (KDCs) so the architecture is decentralized. This prevents any single point of failure.
2) It provides anonymous authentication of users storing data in the cloud so their identity is protected from the cloud.
3) Only authorized users with valid attributes can access data, so it enables fine-grained and distributed access control. The scheme is resilient against replay and collusion attacks.
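The "no single point of failure" claim in point 1 rests on splitting key material across multiple KDCs so that no one center can act alone. A minimal way to illustrate that idea is n-of-n XOR secret sharing, which is a generic textbook construction and not the paper's actual scheme: each KDC holds one share, and any subset smaller than all of them learns nothing about the key.

```python
import secrets


def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


def split_key(key, n):
    """n-of-n XOR sharing: n-1 random shares, plus one share chosen so
    that all n shares XOR back to the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]


def recombine(shares):
    """XOR all shares together to recover the key."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```

Because the first n-1 shares are uniformly random, any proper subset is statistically independent of the key; only the full set reconstructs it. A threshold scheme (e.g. Shamir's) would additionally tolerate KDC failures, which n-of-n XOR does not.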
iaetsd Robots in oil and gas refineries (Iaetsd Iaetsd)
This document discusses attribute-based encryption in cloud computing with outsourced revocation. It proposes a pseudonym generation scheme for identity-based encryption and outsourced revocation in cloud computing. The scheme offloads most key generation operations to a Key Update Cloud Service Provider during key issuing and updating, leaving only simple operations for the Private Key Generator and users. It aims to reduce computation overhead at the Private Key Generator while using an untrusted cloud service provider.
This document summarizes a workshop on private blockchains, use cases, and advanced analytics. The agenda includes an introduction, a discussion of Hyperledger Fabric designs and use cases, an overview of the Hyperledger Fabric-Samples repository, integrating Splunk for analytics on Hyperledger Fabric environments, showcasing analytics use cases in Splunk by generating transactions and failures in Hyperledger Fabric, and concluding remarks. Additional topics include advanced data integrity using blockchain and Splunk for Ethereum. The workshop aims to provide information on permissioned blockchain designs, real-world enterprise use cases, and cognitive analytics capabilities for blockchain environments.
The document proposes improvements to an existing CP-ABE scheme for secure data sharing in distributed networks. It aims to address issues of reliability and load balancing. The existing scheme relies heavily on a single key generation center, which poses security risks if the center fails or becomes corrupted. The proposed approach introduces factors like multiple key generation centers and data storage centers to improve reliability. It also uses a 2-party computation protocol to solve the key escrow problem without relying on a single trusted authority. The goal is to develop a more secure and efficient attribute-based encryption system for distributed data sharing that can scale to handle large numbers of users and requests.
This document discusses security issues with Hadoop and available solutions. It identifies vulnerabilities in Hadoop including lack of authentication, unsecured data in transit, and unencrypted data at rest. It describes current solutions like Kerberos for authentication, SASL for encrypting data in motion, and encryption zones for encrypting data at rest. However, it notes limitations of encryption zones for processing encrypted data efficiently with MapReduce. It proposes a novel method for large scale encryption that can securely process encrypted data in Hadoop.
Messages addressed to specific users can be decrypted by the Key Generation Centre (KGC), since it generates their private keys. The data owner wants the data delivered only to the specified user and not to any unauthorized person; that is, the owner wants their private data accessible only to authorized users. We propose attribute-based encryption together with an escrow mechanism (a written agreement delivered to a third party) to overcome this problem. Attribute-Based Encryption (ABE) is a type of public-key encryption in which a user's private key and the ciphertext depend upon attributes. It is a promising cryptographic approach.
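To make the ABE interface concrete, here is a deliberately simplified stand-in: it derives a symmetric key from the set of required attributes and refuses to decrypt unless the user's attribute set satisfies the policy. This is only an interface sketch; real ABE enforces the policy cryptographically (the key simply cannot be derived without qualifying attribute keys), whereas here the check is an explicit guard, and all function names are invented.

```python
import hashlib


def _keystream(key, n):
    """Expand a 32-byte key into an n-byte XOR keystream via SHA-256 counter mode."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]


def attr_key(attributes):
    """Derive a symmetric key from a set of required attributes."""
    canon = ",".join(sorted(attributes)).encode()
    return hashlib.sha256(canon).digest()


def encrypt(plaintext, required_attrs):
    key = attr_key(required_attrs)
    return bytes(p ^ k for p, k in zip(plaintext, _keystream(key, len(plaintext))))


def decrypt(ciphertext, user_attrs, required_attrs):
    """Decryption succeeds only if the user holds every required attribute."""
    if not set(required_attrs) <= set(user_attrs):
        raise PermissionError("attribute policy not satisfied")
    key = attr_key(required_attrs)
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(key, len(ciphertext))))
```

The gap between this sketch and real ABE is exactly the escrow problem the abstract raises: here anyone who knows the policy can derive the key, while in real ABE the KGC issues per-user attribute keys, which is what makes the KGC a trusted (and escrow-prone) party.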
Big data as a service (BDaaS) platforms are widely used by various organizations for handling and processing the high volume of data generated by internet of things (IoT) devices. Data generated by these IoT devices are kept as big data with the help of cloud computing technology. Researchers are putting effort into providing a more secure and protected access environment for data on the cloud, and blockchain technology has emerged as a useful tool for creating a safe, distributed, and decentralised cloud environment. In this research paper, we propose a system that uses blockchain technology to regulate the data access provided by BDaaS platforms. We secure the data access policy using a modified form of the ciphertext-policy attribute-based encryption (CP-ABE) technique with the help of blockchain technology, and we design algorithms that combine CP-ABE with blockchain for secure data access in BDaaS. The proposed smart-contract algorithms are implemented using the Eclipse 7.0 IDE, and the cloud environment is simulated with the CloudSim tool. Key generation time, encryption time, and decryption time are measured and compared with an access control mechanism that does not use blockchain technology.
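A smart contract that mediates access, as in the abstract above, typically does two things: anchor the (hash of the) access policy on chain, and record every access decision in an append-only log. The following is a hypothetical sketch of that contract logic in plain Python, not the paper's implementation; class and method names are invented, and a real deployment would run this as chaincode or an Ethereum contract.

```python
import hashlib
import json


class AccessContract:
    """Toy smart contract: stores the hash of each object's access policy and
    keeps an append-only, hash-linked log of access decisions."""

    def __init__(self):
        self.policies = {}  # object_id -> policy hash anchored "on chain"
        self.log = []       # hash-linked audit trail

    def _append(self, event):
        prev = self.log[-1]["hash"] if self.log else "0" * 64
        body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
        self.log.append({"event": event, "prev": prev,
                         "hash": hashlib.sha256(body.encode()).hexdigest()})

    def _policy_digest(self, policy_attrs):
        return hashlib.sha256(",".join(sorted(policy_attrs)).encode()).hexdigest()

    def register_policy(self, object_id, policy_attrs):
        """Anchor the policy hash so it cannot be silently swapped later."""
        self.policies[object_id] = self._policy_digest(policy_attrs)
        self._append({"op": "register", "obj": object_id})

    def request_access(self, object_id, user_attrs, claimed_policy):
        """Grant only if the claimed policy matches the anchored hash and
        the user's attributes satisfy it; log the decision either way."""
        ok = (self.policies.get(object_id) == self._policy_digest(claimed_policy)
              and set(claimed_policy) <= set(user_attrs))
        self._append({"op": "access", "obj": object_id, "granted": ok})
        return ok
```

Anchoring only the policy hash keeps the policy itself off chain (the confidentiality the CP-ABE layer provides), while the hash-linked log gives the tamper-evident audit trail that distinguishes this design from a plain access-control database.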
A Survey on Access Control Mechanisms using Attribute Based Encryption in cloud (ijsrd.com)
Cloud computing is an emerging computing technology that enables users to remotely store their data in a cloud so as to enjoy scalable services on demand, and users can outsource their resources to a server (also called the cloud) over the Internet. Security is one of the major issues holding back the growth of cloud computing, and complications with data privacy and data protection continue to plague the market. Attribute-based encryption (ABE) can be used for log encryption. This survey focuses on the different security issues around data access in cloud environments.
Discusses building a trust solution for HealthIT or other regulated enterprises with blockchain, using Hyperledger with HBase for off-blockchain storage for scaling, prototyped on Bluemix.
The document proposes improvements to the reliability and load balancing of an existing CP-ABE encryption scheme for secure data sharing in distributed networks. The existing scheme addresses key escrow and revocation issues but lacks reliability factors and load balancing. The proposed approach aims to improve security and efficiency by introducing reliability factors like multiple key generation centers and distributed data storage. It also aims to balance load across the network to avoid overloading any single node. The document reviews related work and outlines the proposed framework, including use of an access structure to define user attributes and access policies for decryption.
Password-Authenticated Key Exchange Scheme Using Chaotic Maps towards a New A...dbpublications
The document proposes a new password-authenticated key agreement protocol using chaotic maps towards a multiple servers to server architecture in the standard model. The proposed protocol aims to solve issues with single-point security, efficiency, and failure in centralized registration centers by adopting a multiple servers to server architecture. The protocol provides perfect forward secrecy and resistance to dictionary attacks while allowing weak passwords. A security proof is given for the standard model and an efficiency analysis is presented.
Access control in decentralized online social networks applying a policy hidi...IGEEKS TECHNOLOGIES
The document proposes a policy-hiding cryptographic scheme for access control in decentralized online social networks that aims to achieve both privacy and performance. Existing DOSNs reveal access policies but some cryptographic variants hide policies at the cost of performance. The proposed scheme uses predicate encryption with a univariate polynomial construction for access policies that drastically improves performance while leaking some policy information. Bloom filters are also used to decrease decryption time and indicate decryptable objects. The goal is to enable privacy-preserving access control without compromising usability in resource-constrained DOSN environments.
This document describes BigchainDB, a scalable blockchain database. BigchainDB combines the key benefits of distributed databases and blockchains, with an emphasis on scale. It is built on an existing distributed database to inherit high throughput, capacity, low latency, and querying abilities. BigchainDB also adds blockchain characteristics like decentralized control, immutability, and the ability to create and transfer digital assets. The goal is to provide a decentralized database at scale, filling a gap in existing blockchain technologies.
Identity based proxy-oriented data uploading andKamal Spring
More and more clients would like to store their data to PCS (public cloud servers) along with the rapid development of cloud computing. New security problems have to be solved in order to help more clients process their data in public cloud. When the client is restricted to access PCS, he will delegate its proxy to process his data and upload them. On the other hand, remote data integrity checking is also an important security problem in public cloud storage. It makes the clients check whether their outsourced data is kept intact without downloading the whole data. From the security problems, we propose a novel proxy-oriented data uploading and remote data integrity checking model in identity-based public key cryptography: IDPUIC (identity-based proxy-oriented data uploading and remote data integrity checking in public cloud). We give the formal definition, system model and security model. Then, a concrete ID-PUIC protocol is designed by using the bilinear pairings. The proposed ID-PUIC protocol is provably secure based on the hardness of CDH (computational Diffie-Hellman) problem. Our ID-PUIC protocol is also efficient and flexible. Based on the original client’s authorization, the proposed ID-PUIC protocol can realize private remote data integrity checking, delegated remote data integrity checking and public remote data integrity checking.
IRJET- A Novel and Secure Approach to Control and Access Data in Cloud St...IRJET Journal
This document proposes a novel approach to securely control and access data stored in the cloud using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). The approach aims to address abuse of access credentials by tracing malicious insiders and revoking their access. It presents two new CP-ABE frameworks that allow traceability of malicious cloud clients, identification of misbehaving authorities, and auditing without requiring extensive storage. The frameworks provide fine-grained access control and can revoke credentials of traced attackers.
Benchmark and comparison between hyperledger and MySQLTELKOMNIKA JOURNAL
In this paper, we report the benchmarking results of Hyperledger, a Distributed Ledger, which is the derivation Blockchain Technology. Method to evaluate Hyperledger in a limited infrastructure is developed. Themeasured infrastructure consists of 8 nodes with a load of up to 20000 transactions/second. Hyperledger consistently runs all evaluation, namely, for 20,000 transactions, the run time 74.30s, latency 73.40ms latency, and 257 tps. The benchmarking of Hyperledger shows better than a database system in a high workload scenario. We found that the maximum size data volume in one transaction on the Hyperledger network is around ten (10) times of MySQL. Also, the time spent on processing a single transaction in the blockchain network is 80-200 times faster than MySQL. This initial analysis can provide an overview for practitioners in making decisions about the adoption of blockchain technology in their IT systems.
Security Mechanisms for Precious Data Protection of Divergent Heterogeneous G...RSIS International
This paper describes the security technologies and components used in a Grid computing environment. The Grid Security Infrastructure (GSI) implemented in the Globus Toolkit is described in detail. The main focus is on methods for identification, authentication, and authorization based on X.509 certificates and the SSL/TLS protocols. Finally, a solution for group-based access control over grid resources, built on top of the Globus Toolkit, is presented.
The Fabric platform is intended as a foundation for developing blockchain applications, products, or solutions. Fabric is a private and permissioned system that delivers a high degree of confidentiality, resiliency, flexibility, and scalability. It adopts a modular architecture and supports pluggable implementations of components such as consensus and membership services. Like other blockchain technologies, Fabric has a ledger and smart contracts, and it is a system by which participants manage their transactions. A smart contract in Fabric is known as chaincode, and it is in the chaincode that the business logic is embedded. The following features give the Fabric framework its high degree of security and privacy.
This curriculum vitae outlines Debakshi Chakraborty's experience as a software tester with over 6 years of experience in manual and automated testing. She has experience testing web, mobile, and desktop applications across various domains including banking, finance, telecom, and utilities. She is proficient in testing methodologies, tools like HP ALM, Jira, and languages like C#, SQL. Her experience includes roles as a test lead, test manager and analyst with responsibilities like requirements gathering, test planning, execution, and reporting.
This document proposes an architecture for authorization in constrained environments. It defines three levels - constrained, less-constrained, and principal. The constrained level consists of clients and resource servers with limited capabilities. The less-constrained level includes client and authorization servers with more capabilities. The principal level defines resource owners and requesting parties. It also outlines protocols, authorization granularity levels, and tasks related to authorization and authentication in constrained networks.
Attribute based encryption with privacy preserving in cloudsSwathi Rampur
This document proposes a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. The key points are:
1) It allows for multiple key distribution centers (KDCs) so the architecture is decentralized. This prevents any single point of failure.
2) It provides anonymous authentication of users storing data in the cloud so their identity is protected from the cloud.
3) Only authorized users with valid attributes can access data, so it enables fine-grained and distributed access control. The scheme is resilient against replay and collusion attacks.
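The decentralization idea in point 1 — no single KDC holding the whole secret — can be illustrated with n-of-n XOR key sharing. This is a toy sketch of the principle, not the paper's attribute-based construction; function names are my own.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(master: bytes, n_kdcs: int) -> list:
    # n-of-n XOR sharing: each KDC holds one random-looking share;
    # any subset smaller than all n shares reveals nothing about the key.
    shares = [secrets.token_bytes(len(master)) for _ in range(n_kdcs - 1)]
    final = master
    for s in shares:
        final = xor(final, s)
    return shares + [final]

def combine(shares: list) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor(out, s)
    return out

master = secrets.token_bytes(16)
shares = split_key(master, 3)
assert combine(shares) == master
```

Note the trade-off: n-of-n sharing removes the single point of trust but makes every KDC a single point of *availability*; real multi-authority schemes use threshold or attribute-split constructions instead.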
iaetsd Robots in oil and gas refineriesIaetsd Iaetsd
This document discusses attribute-based encryption in cloud computing with outsourced revocation. It proposes a pseudonym generation scheme for identity-based encryption and outsourced revocation in cloud computing. The scheme offloads most key generation operations to a Key Update Cloud Service Provider during key issuing and updating, leaving only simple operations for the Private Key Generator and users. It aims to reduce computation overhead at the Private Key Generator while using an untrusted cloud service provider.
This document summarizes a workshop on private blockchains, use cases, and advanced analytics. The agenda includes an introduction, a discussion of Hyperledger Fabric designs and use cases, an overview of the Hyperledger Fabric-Samples repository, integrating Splunk for analytics on Hyperledger Fabric environments, showcasing analytics use cases in Splunk by generating transactions and failures in Hyperledger Fabric, and concluding remarks. Additional topics include advanced data integrity using blockchain and Splunk for Ethereum. The workshop aims to provide information on permissioned blockchain designs, real-world enterprise use cases, and cognitive analytics capabilities for blockchain environments.
The document proposes improvements to an existing CP-ABE scheme for secure data sharing in distributed networks. It aims to address issues of reliability and load balancing. The existing scheme relies heavily on a single key generation center, which poses security risks if the center fails or becomes corrupted. The proposed approach introduces factors like multiple key generation centers and data storage centers to improve reliability. It also uses a 2-party computation protocol to solve the key escrow problem without relying on a single trusted authority. The goal is to develop a more secure and efficient attribute-based encryption system for distributed data sharing that can scale to handle large numbers of users and requests.
This document discusses security issues with Hadoop and available solutions. It identifies vulnerabilities in Hadoop including lack of authentication, unsecured data in transit, and unencrypted data at rest. It describes current solutions like Kerberos for authentication, SASL for encrypting data in motion, and encryption zones for encrypting data at rest. However, it notes limitations of encryption zones for processing encrypted data efficiently with MapReduce. It proposes a novel method for large scale encryption that can securely process encrypted data in Hadoop.
Messages addressed to specific users can be decrypted by the Key Generation Centre (KGC), since it generates their private keys. A data owner wants data delivered only to the specified user and not to any unauthorized person; that is, the owner makes private data accessible only to authorized users. To overcome this, we propose attribute-based encryption combined with an escrow mechanism, meaning a written agreement delivered to a third party. Attribute-based encryption (ABE) is a type of public-key encryption in which a user's private key and the ciphertext depend on attributes. It is a promising cryptographic approach.
Big data as a service (BDaaS) platforms are widely used by organizations for handling and processing the high volume of data generated by different internet of things (IoT) devices. Data generated from these IoT devices is kept as big data with the help of cloud computing technology. Researchers are putting effort into providing a more secure and protected access environment for data available on the cloud. Blockchain technology has emerged as a useful tool for creating a safe, distributed, and decentralised environment in the cloud. In this research paper, we propose a system that uses blockchain technology to regulate data access provided by BDaaS platforms. We secure the data access policy using a modified form of the ciphertext policy attribute-based encryption (CP-ABE) technique with the help of blockchain technology. For secure data access in BDaaS, algorithms have been created that combine CP-ABE with blockchain technology. The proposed smart contract algorithms are implemented in the Eclipse 7.0 IDE, and the cloud environment is simulated on the CloudSim tool. Key generation time, encryption time, and decryption time have been measured and compared with an access control mechanism without blockchain technology.
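The access-policy idea behind CP-ABE can be sketched as a policy tree over attributes. Important caveat: real CP-ABE enforces the policy *cryptographically* inside the ciphertext, whereas this toy (with names of my own choosing) merely evaluates the tree the way a smart-contract gatekeeper might:

```python
# Toy evaluation of a CP-ABE-style access structure (AND/OR tree over attributes).
def satisfies(policy, attrs: set) -> bool:
    op = policy[0]
    if op == "ATTR":
        return policy[1] in attrs            # leaf: user must hold the attribute
    if op == "AND":
        return all(satisfies(p, attrs) for p in policy[1:])
    if op == "OR":
        return any(satisfies(p, attrs) for p in policy[1:])
    raise ValueError(f"unknown node {op!r}")

# Example policy: doctor AND (cardiology OR admin)
policy = ("AND", ("ATTR", "doctor"),
                 ("OR", ("ATTR", "cardiology"), ("ATTR", "admin")))
assert satisfies(policy, {"doctor", "cardiology"})
assert not satisfies(policy, {"doctor", "radiology"})
```

In the paper's design, a decision like this would be anchored on-chain so that policy evaluations are auditable and tamper-evident.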
A Survey on Access Control Mechanisms using Attribute Based Encryption in cloudijsrd.com
Cloud computing is an emerging computing technology that enables users to store their data remotely in a cloud so as to enjoy scalable services on demand, outsourcing their resources to a server (also called the cloud) over the Internet. Security is one of the major issues holding back the growth of cloud computing, and complications with data privacy and data protection continue to plague the market. Attribute-based encryption (ABE) can be used for log encryption. This survey focuses on the different security issues around data access in the cloud environment.
Discuss building a trust solution for HealthIT or other regulated enterprises with blockchain using Hyperledger with Hbase for off-blockchain storage for scaling prototyped on Bluemix.
The document proposes improvements to the reliability and load balancing of an existing CP-ABE encryption scheme for secure data sharing in distributed networks. The existing scheme addresses key escrow and revocation issues but lacks reliability factors and load balancing. The proposed approach aims to improve security and efficiency by introducing reliability factors like multiple key generation centers and distributed data storage. It also aims to balance load across the network to avoid overloading any single node. The document reviews related work and outlines the proposed framework, including use of an access structure to define user attributes and access policies for decryption.
Several data security methodologies have appeared with the recent adoption and spread of data sharing. One of the most interesting and definitive approaches is ciphertext-policy attribute-based encryption (CP-ABE). CP-ABE supports expressive access policies and their updates, and it is used to control the outsourcing of shared data; this solution lets the encryptor control access through an access formula. A lack of reliability weakens the system, so we strengthen CP-ABE by introducing additional components, of which the key generation center (KGC) and the data storing center are the most important. The KGC raises the key escrow problem: since the KGC can decrypt users' data at will, it poses a threat to data sharing systems, which is unacceptable in a distributed scheme where the KGC is not fully trusted. Alongside key escrow, we also address key revocation, whose delays create windows of vulnerability. These issues are solved by exploiting the characteristics of the architecture: the key escrow problem is resolved using a two-party computation (2PC) protocol, and key revocation is handled using proxy encryption.
IRJET - A Secure Access Policies based on Data Deduplication SystemIRJET Journal
This document summarizes a research paper on a secure access policies based data deduplication system. The system uses attribute-based encryption and a hybrid cloud model with a private cloud for deduplication and a public cloud for storage. It allows defining access policies for encrypted data files. When a user uploads a duplicate file, the system checks for a matching file and replaces it with a reference to the existing copy to save storage. The system provides file and block-level deduplication for efficient storage and uses cryptographic techniques like MD5, 3DES and RSA for encryption, tagging and access control of encrypted duplicate data across clouds.
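The dedup-by-reference step described above is essentially content addressing: hash the file, and if the digest is already known, store only a reference. A minimal sketch (my own class and method names; SHA-256 stands in for the MD5 tagging the paper mentions, since MD5 is collision-broken):

```python
import hashlib

class DedupStore:
    """Content-addressed store: duplicate uploads become references."""
    def __init__(self):
        self.blobs = {}   # digest -> file bytes (public-cloud storage)
        self.refs = {}    # filename -> digest (private-cloud dedup index)

    def upload(self, name: str, data: bytes) -> bool:
        # Returns True if new bytes were stored, False if deduplicated.
        digest = hashlib.sha256(data).hexdigest()
        is_new = digest not in self.blobs
        if is_new:
            self.blobs[digest] = data
        self.refs[name] = digest      # duplicate -> just a reference
        return is_new

store = DedupStore()
assert store.upload("a.txt", b"hello") is True
assert store.upload("b.txt", b"hello") is False   # duplicate detected
assert len(store.blobs) == 1                      # only one copy kept
```

The paper layers encryption and access policies on top of this, which is the hard part: naive encryption with per-user keys makes identical plaintexts hash differently, defeating dedup.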
The document proposes a novel scheme for privacy-preserving public auditing of integrity for shared data stored in the cloud. It utilizes ring signatures to generate verification metadata (i.e. homomorphic authenticators) that allow a public auditor to efficiently audit the correctness of shared data without retrieving the entire data files. The proposed scheme ensures the identity of the signer on each block is kept private from public verifiers during the auditing process. It also extends the scheme to support simultaneous auditing of multiple tasks to improve efficiency. The results demonstrate the scheme achieves privacy-preserving public auditing with high security and negligible overhead.
IRJET- Blockchain based Data Sharing FrameworkIRJET Journal
This document proposes a blockchain-based framework for data sharing. It discusses challenges with traditional centralized data sharing approaches. Blockchain provides an opportunity to address issues of trust, accuracy, and reliability through its decentralized and distributed ledger approach. The proposed framework uses blockchain as the backbone, allowing different parties and ecosystems to securely share data. Key entities are issuers who share data and verifiers who access it. Hashed data is stored on the blockchain to ensure integrity and provenance. The framework aims to address technical and regulatory challenges to data sharing through a decentralized approach.
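The "hashed data stored on the blockchain to ensure integrity and provenance" mechanism boils down to a hash chain: each block commits to the previous block and to a payload digest. A toy ledger (illustrative only; names are mine, and real frameworks add consensus, signatures, and Merkle trees):

```python
import hashlib, json

def block_hash(block: dict) -> str:
    # Deterministic digest of a block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Minimal hash-chained ledger: issuers append payload hashes, verifiers audit."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "data_hash": ""}]  # genesis

    def append(self, payload: bytes):
        self.chain.append({
            "index": len(self.chain),
            "prev": block_hash(self.chain[-1]),               # link to predecessor
            "data_hash": hashlib.sha256(payload).hexdigest(), # commit to the data
        })

    def verify(self) -> bool:
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append(b"shared record v1")
ledger.append(b"shared record v2")
assert ledger.verify()
ledger.chain[1]["data_hash"] = "00"   # tampering anywhere breaks every later link
assert not ledger.verify()
```

Note that only hashes go on-chain; the data itself stays with the issuer, which is how the framework sidesteps both storage cost and regulatory constraints on raw data.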
IRJET - Confidential Image De-Duplication in Cloud StorageIRJET Journal
This document proposes a confidential image de-duplication system for cloud storage. It introduces a hybrid cloud architecture using both public and private clouds. To provide greater security, the private cloud employs tiered authentication. The system performs de-duplication by comparing hash values of files generated using MD5 and SHA algorithms, to detect duplicate files and reduce storage usage. It encrypts files using AES before storage in the cloud. The private cloud server manages encryption keys and performs de-duplication checks by comparing file hashes and contents. This allows detection of duplicate files while preserving data privacy through encryption.
Towards Secure Data Distribution Systems in Mobile Cloud Computing: A SurveyIRJET Journal
This document summarizes 6 research papers related to security in mobile cloud computing. It discusses issues like data integrity, authentication, and access control when mobile devices' data and computations are integrated with cloud computing. Several cryptographic techniques are described that can help ensure privacy and security, such as proxy provable data possession, attribute-based encryption, and proxy re-encryption. The document concludes that while mobile cloud computing provides benefits, security of user data shared in the cloud is the main challenge, and various frameworks have been proposed but no single system addresses all security aspects.
Providing user security guarantees in public infrastructure cloudsKamal Spring
The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants – insulated from the minutiae of hardware maintenance – rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis with proofs about protocol resistance against attacks in the defined threat model. The protocols allow trust to be established by remotely attesting host platform configuration prior to launching guest virtual machines and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. Presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.
System Design Specifications: There are various methods of pro.docx deanmtaylor1545
System Design Specifications
There are various methods of protecting data-in-transit, also referred to as data-in-motion. However, the most significant vulnerability with cloud storage is not securing the data in transit; it is the security of the data at rest. Therefore, before transmitting data, it is essential to ensure the data is encrypted with tools such as VeraCrypt, which is a tool that enables the use of encrypted containers to protect data at rest.
Secure encryption must be used in order to maintain confidentiality and integrity when transmitting data between the cloud server and the client. Encryption will ensure that only users with the key that was used to encrypt the data will be able to decrypt the data and view the contents (Alsulami, Alharbi, & Monowar, 2015). One method of encryption would be a technique such as the hybrid cryptographic scheme shown in Figure 1.
Figure 1. Hybrid Cryptographic Scheme.
As we see in Figure 1, Alice is sending an encrypted message to Bob using the hybrid cryptographic scheme, which utilizes a combination of public key crypto, secret key crypto, and a hash function. Alice's private key and the hash function are used to create a digital signature, and Bob's public key is combined with a random session key and public key crypto to create the encrypted session key. Alice's message and the random session key are used in conjunction with the hash function and secret key crypto to produce the encrypted message.
The combination of the encrypted message and the encrypted session key is what is known as the digital envelope. The hash function is a one-way algorithm that uses no key; instead it computes a fixed-length hash value from the plaintext, making it impossible to recover either the contents or the length of the plaintext, and thus providing a digital fingerprint that ensures the integrity of the file. Bob recovers the hash value by decrypting the digital signature with Alice's public key. Then Bob recovers the secret session key using his private key and decrypts the encrypted message. If the resultant hash value differs from the value supplied by Alice, Bob knows that the message has been altered; if the hash values match, Bob can have confidence that the message he received is identical to the one Alice sent (Kessler, 2019).
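The envelope flow can be traced end to end in code. This is a deliberately toy sketch: textbook-size RSA primes and a SHA-256 keystream stand in for real public and secret key crypto (never use parameters like these in practice), and every name is my own.

```python
import hashlib, secrets

def make_keypair(p, q, e=17):
    # Textbook RSA: public (e, n), private (d, n). Toy primes only!
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)

alice_pub, alice_priv = make_keypair(61, 53)   # signer's key pair
bob_pub, bob_priv = make_keypair(89, 97)       # recipient's key pair

def keystream_xor(key: int, data: bytes) -> bytes:
    # Stand-in "secret key crypto": XOR with a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key.to_bytes(4, "big")
                                 + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# --- Alice builds the digital envelope ---
message = b"meet at noon"
session_key = secrets.randbelow(bob_pub[1])          # random session key
enc_message = keystream_xor(session_key, message)    # encrypted message
enc_session_key = pow(session_key, *bob_pub)         # wrapped with Bob's public key
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % alice_pub[1]
signature = pow(digest, *alice_priv)                 # signed with Alice's private key

# --- Bob opens the envelope ---
recovered_key = pow(enc_session_key, *bob_priv)      # unwrap with Bob's private key
plaintext = keystream_xor(recovered_key, enc_message)
check = int.from_bytes(hashlib.sha256(plaintext).digest(), "big") % alice_pub[1]
assert plaintext == message
assert pow(signature, *alice_pub) == check           # integrity + authenticity
```

If an attacker flips a byte of `enc_message`, the recomputed hash no longer matches the signature, which is exactly the "altered message" detection described above.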
Now that the messages are encrypted, we will need to use a secure means of transmitting the messages from point A to point B. Various protocols can provide security, such as Hypertext Transfer Protocol Secure (HTTPS), which is a variant of HTTP that adds a layer of security through an SSL or TLS protocol connection (“What is HTTPS,” n.d.). SSL ensures that before communication is established between a client browser and a cloud server, an encrypted link is created between the two (“What is Secure Sockets Layer,” n.d.). TLS is more efficient and secure than SSL as it has stronger message authentication, key mat.
Enhancing Password Manager Chrome Extension through Multi Authentication and ...ijtsrd
The document describes a proposed enhancement to password manager Chrome extensions through multi-authentication and device logs. The proposed system would use PGP encryption and require 2FA for authentication. It would provide cross-device authentication and store user credentials in a secure manner. The system would use Angular, Node.js, MongoDB, and include modules for signup, login, and storing credentials. Implementing this as a Chrome extension initially could later be expanded to mobile or desktop apps to provide a more secure open-source password manager.
IMPLEMENTING BLOCKCHAIN ASSISTED PUBLIC KEY ENCRYPTION TECHNIQUE IN CLOUD COM...IRJET Journal
This document proposes a technique for securing documents stored in cloud computing using blockchain assisted public key encryption. It discusses storing encrypted documents on the cloud server, but storing the secret keys to decrypt the documents in a protected blockchain. It also discusses encrypting keywords from the documents using a separate algorithm and storing them on the cloud server. This is done to improve security by separating the encrypted documents, keywords, and decryption keys across different servers rather than storing all information in one location. The proposed method uses AES encryption for documents, Caesar encryption for keywords, and stores the secret decryption keys in a protected blockchain to improve security of documents and keywords stored in the cloud.
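Of the pieces above, the keyword layer is simple enough to show concretely. A Caesar shift (as the paper uses for keywords) is trivially implementable; this sketch covers only that layer, omitting the AES document encryption and the blockchain key store, and offers no real security:

```python
def caesar_encrypt(text: str, shift: int) -> str:
    # Shift letters by `shift` positions, preserving case; leave other chars alone.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def caesar_decrypt(text: str, shift: int) -> str:
    return caesar_encrypt(text, -shift)

assert caesar_encrypt("medical report", 3) == "phglfdo uhsruw"
assert caesar_decrypt("phglfdo uhsruw", 3) == "medical report"
```

A deterministic keyword cipher like this is what makes server-side keyword matching possible at all; the paper's security argument rests on separating the keyword index, the AES ciphertexts, and the decryption keys across different stores, not on the strength of the shift itself.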
Enabling Integrity for the Compressed Files in Cloud ServerIOSR Journals
This document proposes a scheme for enabling data integrity for compressed files stored in cloud servers. The scheme encrypts some bits of data from each data block using an RSA algorithm and polynomial hashing to generate hash values. These hash values are stored at the client and used to verify integrity by checking responses from the cloud server against the stored hashes. The scheme aims to minimize computational and storage overhead for clients by compressing files, encrypting only some data bits, and requiring clients to store just two secret functions rather than the full data. This allows integrity checks with low bandwidth consumption suitable for thin clients like mobile devices.
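The polynomial-hashing half of the scheme can be sketched with a rolling polynomial hash evaluated at a secret point. This is a simplification under my own naming (the client here keeps per-block hashes rather than the paper's two secret functions, and the RSA layer is omitted):

```python
import secrets

P = (1 << 61) - 1          # Mersenne prime modulus for the hash

def poly_hash(block: bytes, x: int) -> int:
    # Treat the block's bytes as polynomial coefficients; evaluate at secret x mod P.
    h = 0
    for b in block:
        h = (h * x + b) % P
    return h

x = secrets.randbelow(P - 2) + 2            # client's secret evaluation point
blocks = [b"chunk-%d" % i for i in range(4)]
client_hashes = [poly_hash(b, x) for b in blocks]   # small state kept client-side

# Audit: recompute over the cloud's copy without trusting the server.
cloud_copy = list(blocks)
assert all(poly_hash(cb, x) == h for cb, h in zip(cloud_copy, client_hashes))
cloud_copy[2] = b"corrupted"
assert any(poly_hash(cb, x) != h for cb, h in zip(cloud_copy, client_hashes))
```

Keeping `x` secret is what stops the server from forging blocks that collide with the stored hashes, and the tiny client state is what makes the scheme fit thin clients like mobile devices.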
CHAPTER 12 Integrating Non-Blockchain Apps with Ethereum EstelaJeffery653
Chapter 12: Integrating Non-Blockchain Apps with Ethereum

In this chapter:
» Exploring differences between blockchain and databases
» Identifying differences between blockchain and traditional applications
» Integrating traditional applications with Ethereum
» Testing and deploying integrated blockchain apps

Although you can build entirely blockchain-based applications, it is far more likely that your applications will be a combination of traditional and blockchain components. You learn in Chapter 3 that some use cases lend themselves well to blockchain apps but others do not. In this book, we chose to highlight one use case, supply chain, because blockchain offers clear advantages over traditional methods. However, even a comprehensive supply chain application will likely run partially as a traditional application and partially on the blockchain.

Many emerging blockchain apps consist of core components that operate as smart contracts and other components that operate as traditional applications that interact with users and provide supporting functionality. This hybrid approach to application development requires the capability to integrate the two different development models. In other words, to develop hybrid applications that run partially on the blockchain, you need to know how to design them to talk with each other and operate seamlessly.

Distributed application design and development isn't new. In fact, some of the difficulties with distributed applications led to the need for technologies like blockchain. Remember that blockchain technology doesn't solve all application problems, but it does have its place. Now that you know how to develop dApps for the Ethereum blockchain, in this chapter you learn how to integrate your smart contracts with applications that do not include blockchain technology. The capability to integrate blockchain and non-blockchain applications makes it possible to develop applications that use the right technology for a wide range of needs.

Comparing Blockchain and Database Storage

In Chapter 2, you learn about some of the differences between storing data in a blockchain and a database. Both technologies can store data, but clear differences exist between the two. One of the first obstacles you might encounter when asked to integrate blockchain with an existing application is determining what data you should migrate to the blockchain.

Traditional applications store most of their data in a database. Databases provide fast access to shared data. Blockchains can also provide access to shared data, but they may not be as fast as a database. As you learn in Chapter 2, there are other differences as well. It is important that you understand the relative strengths of each data storage technique to make good design decisions for integrating blockchain ...
This document summarizes a research paper that examines pricing strategy in a two-stage supply chain consisting of a supplier and retailer. The supplier offers a credit period to the retailer, who then offers credit to customers. A mathematical model is formulated to maximize total profit for the integrated supply chain system. The model considers three cases based on the relative lengths of the credit periods offered at each stage. Equations are developed to represent the profit functions for the supplier, retailer and overall system in each case. The goal is to determine the optimal selling price that maximizes total integrated profit.
The document discusses melanoma skin cancer detection using a computer-aided diagnosis system based on dermoscopic images. It begins with an introduction to skin cancer and melanoma. It then reviews existing literature on automated melanoma detection systems that use techniques like image preprocessing, segmentation, feature extraction and classification. Features extracted in other studies include asymmetry, border irregularity, color, diameter and texture-based features. The proposed system collects dermoscopic images and performs preprocessing, segmentation, extracts 9 features based on the ABCD rule, and classifies images using a neural network classifier to detect melanoma. It aims to develop an automated diagnosis system to eliminate invasive biopsy procedures.
This document summarizes various techniques for image segmentation that have been studied and proposed in previous research. It discusses edge-based, threshold-based, region-based, clustering-based, and other common segmentation methods. It also reviews applications of segmentation in medical imaging, plant disease detection, and other fields. While no single technique can segment all images perfectly, hybrid and adaptive methods combining multiple approaches may provide better results. Overall, image segmentation remains an important but challenging task in digital image processing and computer vision.
This document presents a test for detecting a single upper outlier in a sample from a Johnson SB distribution when the parameters of the distribution are unknown. The test statistic proposed is based on maximum likelihood estimates of the four parameters (location, scale, and two shape) of the Johnson SB distribution. Critical values of the test statistic are obtained through simulation for different sample sizes. The performance of the test is investigated through simulation, showing it performs well at detecting outliers when the contaminant observation represents a large shift from the original distribution parameters. An example application to census data is also provided.
This document summarizes a research paper that proposes a portable device called the "Disha Device" to improve women's safety. The device has features like live location tracking, audio/video recording, automatic messaging to emergency contacts, a buzzer, flashlight, and pepper spray. It is designed using an Arduino microcontroller connected to GPS and GSM modules. When the button is pressed, it sends an alert message with the woman's location, sets off an alarm, activates the flashlight and pepper spray for self-defense. The goal is to provide women a compact, one-click safety system to help them escape dangerous situations or call for help with just a single press of a button.
- The document describes a study that constructed physical fitness norms for female students attending social welfare schools in Andhra Pradesh, India.
- Researchers tested 339 students in classes 6-10 on speed, strength, agility and flexibility tests. Tests included 50m run, bend and reach, medicine ball throw, broad jump, shuttle run, and vertical jump.
- The results showed that 9th class students had the best average time for the 50m run. 10th class students had the highest flexibility on average. Strength and performance generally improved with increased class level.
This document summarizes research on downdraft gasification of biomass. It discusses how downdraft gasifiers effectively convert solid biomass into a combustible producer gas. The gasification process involves pyrolysis and reactions between hot char and gases that produce CO, H2, and CH4. Downdraft gasifiers are well-suited for biomass gasification due to their simple design and ability to manage the gasification process with low tar production. The document also reviews previous studies on gasifier configuration upgrades and their impact on performance, and the principles of downdraft gasifier operation.
This document summarizes the design and manufacturing of a twin spindle drilling attachment. Key points:
- The attachment allows a drilling machine to simultaneously drill two holes in a single setting, improving productivity over a single spindle setup.
- It uses a sun and planet gear arrangement to transmit power from the main spindle to two drilling spindles.
- Components like gears, shafts, and housing were designed using Creo software and manufactured. Drill chucks, bearings, and bits were purchased.
- The attachment was assembled and installed on a vertical drilling machine. It is aimed at improving productivity in mass production applications by combining two drilling operations into one setup.
The document presents a comparative study of different gantry girder profiles for various crane capacities and gantry spans. Bending moments, shear forces, and section properties are calculated and tabulated for 'I'-section with top and bottom plates, symmetrical plate girder, 'I'-section with 'C'-section top flange, plate girder with rolled 'C'-section top flange, and unsymmetrical plate girder sections. Graphs of steel weight required per meter length are presented. The 'I'-section with 'C'-section top flange profile is found to be optimized for biaxial bending but rolled sections may not be available for all spans.
This document summarizes research on analyzing the first ply failure of laminated composite skew plates under concentrated load using finite element analysis. It first describes how a finite element model was developed using shell elements to analyze skew plates of varying skew angles, laminations, and boundary conditions. Three failure criteria (maximum stress, maximum strain, Tsai-Wu) were used to evaluate first ply failure loads. The minimum load from the criteria was taken as the governing failure load. The research aims to determine the effects of various parameters on first ply failure loads and validate the numerical approach through benchmark problems.
This document summarizes a study that investigated the larvicidal effects of Aegle marmelos (bael tree) leaf extracts on Aedes aegypti mosquitoes. Specifically, it assessed the efficacy of methanol extracts from A. marmelos leaves in killing A. aegypti larvae (at the third instar stage) and altering their midgut proteins. The study found that the leaf extract achieved 50% larval mortality (LC50) at a concentration of 49 ppm. Proteomic analysis of larval midguts revealed changes in protein expression levels after exposure to the extract, suggesting its bioactive compounds can disrupt the midgut. The aim is to identify specific inhibitor proteins in the midgut.
This document presents a system for classifying electrocardiogram (ECG) signals using a convolutional neural network (CNN). The system first preprocesses raw ECG data by removing noise and segmenting the signals. It then uses a CNN to extract features directly from the ECG data and classify arrhythmias without requiring complex feature engineering. The CNN architecture contains 11 convolutional layers and is optimized using techniques like batch normalization and dropout. The system was tested on ECG datasets and achieved classification accuracy of over 93%, demonstrating its effectiveness at automated ECG classification.
This document presents a new algorithm for extracting and summarizing news from online newspapers. The algorithm first extracts news related to the topic using keyword matching. It then distinguishes different types of news about the same topic. A term frequency-based summarization method is used to generate summaries. Sentences are scored based on term frequency and the highest scoring sentences are selected for the summary. The algorithm was evaluated on news datasets from various newspapers and showed good performance in intrinsic evaluation metrics like precision, recall and F-score. Thus, the proposed method can effectively extract and summarize online news for a given keyword or topic.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
International Journal of Research in Advent Technology, Vol.7, No.7, July 2019
E-ISSN: 2321-9637
Available online at www.ijrat.org
doi: 10.32622/ijrat.77201924

Developing Blockchain Authentication for Hadoop using HDFS Client
Dr. Pramod Patil, Dr. Jyoti Rao, Mithun Kankal
Abstract: The Apache Hadoop information system uses the Kerberos authentication protocol, provided by MIT, for authentication. The Kerberos protocol has several unresolved security issues, such as a single point of failure, DDoS attacks, and replay attacks. These illustrate the potential security threats and Big Data vulnerabilities involved in using Hadoop. The authors present the weaknesses of Kerberos implementations and identify authentication requirements that can enhance the security of Big Data in distributed environments. The proposed mechanism offers a new perspective: using blockchain in Hadoop for authentication instead of Kerberos. The mechanism relies on the emerging technology of blockchain, which overcomes the shortcomings of Kerberos. Using basic blockchain concepts, the authors created an HDFS client model of a blockchain-based authentication mechanism for a Big Data framework that can coexist with an existing Hadoop setup. A private blockchain methodology is described and implemented, suitable for a private organization's Hadoop deployment. The mechanism also provides basic operational features for the blockchain admin and an HDFS client for end users, along with a distributed local authentication mechanism using blockchain.
Keywords: Big Data, Distributed Authentication, Hadoop, Security, Blockchain, Decentralized Authentication, Private Blockchain, Blockchain Admin.
I. INTRODUCTION
Security of Big Data is significant because of the continuously increasing exchange of sensitive information. Data is gathered from various autonomous sources, where it is often combined and analyzed to produce knowledge. Hence, this data is a valuable asset in the present economy. Concerns have centered on the security and protection of sensitive data, which is exposed to new threats to information security, and adopting existing conventional security measures is not sufficient. The present authentication arrangement of Apache Hadoop exposes the whole Big Data solution to security issues because of the Kerberos framework's vulnerabilities. Limitations of Kerberos are evident in version 4 and early drafts of version 5; a single point of failure, replay attacks, key exposure, the lack of a secret-key arrangement for Kerberos verification, and time synchronization are the vulnerabilities identified. This paper presents these structural drawbacks and identifies authentication requirements, such as blockchain, that can improve the security of Big Data in distributed environments.

Manuscript revised on June 19, 2019 and published on August 10, 2019.
Dr. Pramod Patil, HOD & Professor, Dept. of Computer Engineering, DPU, Pune. Email: pdpatiljune@gmail.com
Dr. Jyoti Rao, Associate Professor, Dept. of Computer Engineering, DPU, Pune. Email: jyoti.aswale@gmail.com
Mithun Kankal, Dept. of Computer Engineering, DPU, Pune. Email: kankal.mithun@gmail.com
The intent is to take advantage of blockchain technology, which is decentralized and distributed in nature. It also uses password-less/keyless authentication, stores data as hashes, and does not need any third party or central database. Blockchain authentication has been introduced in a number of different sectors, but it was first used for transaction verification in Bitcoin. Here the authors focus on using blockchain as an authentication provider and describe how basic blockchain concepts can be used to develop a model for a distributed authentication mechanism for Hadoop, integrated with the HDFS client. The use of blockchain technology as an authentication provider is at a very early stage right now, but it is growing at a rapid pace. Blockchain uses a key pair for registering a user's identity. User information, whether a name or any other personal information, is stored in hash form. After that, whenever the user tries to access the system, the user's information is verified against the blockchain hashes to check whether the provided information is true.
II. LITERATURE REVIEW
A. Kerberos and Hadoop Cluster
A Hadoop cluster is a set of tightly or loosely connected computers that work together as a single system. In simple terms, a cluster of computers used to deploy Hadoop is called a Hadoop cluster. It is a computational cluster built for storing and analyzing huge amounts of structured, semi-structured, and unstructured data in a distributed computing environment. These clusters run on commodity computers, which are available at low cost.
Hadoop works with a group of computers, and each individual computer runs an independent operating system. Authentication for an individual computer works within the OS boundary; however, a Hadoop cluster works across those boundaries, so Hadoop needs a separate network-based authentication system. Unfortunately, Hadoop does not have a built-in authentication capability to authenticate users and propagate their identity. So, the community had the following options:
1. Develop a built-in network-based authentication capability within Hadoop.
2. Integrate with some other third-party system purposely designed to provide network-based authentication.
The community decided to go with the second option. So, Hadoop uses Kerberos, a third-party tool developed by MIT as part of Project Athena, for authentication and identity propagation. By default Hadoop has no authentication mechanism, and all machines in the cluster believe every user credential presented; to overcome this vulnerability, Hadoop uses Kerberos to provide a way of verifying the identity of users. Kerberos authentication and identity verification are implemented via a client/server model, as shown in the following figure.
Fig 1: Hadoop Cluster and Kerberos (Narayanan Nov
2013)
The need for and use of Big Data/Hadoop have exposed a wide assortment of security challenges as a result of Kerberos. The following are a few of the Kerberos challenges considered before the blockchain arrangement is presented:
Password-based Authentication
Key Exposure
Single Point of Failure
Time Synchronization
Denial-of-Service (DoS) Attacks
B. Blockchain
A blockchain, initially block chain, is a persistently
developing rundown of records that are connected and
verified utilizing cryptography. It is additionally called as
Blocks. Each block contains a cryptographic hash of the
previous block, transaction information and timestamp of
transaction. Blocks hold legitimate transaction data in form
of hash and encoded into a Merkle tree.
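The block structure described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation; the class and field names are chosen for clarity, and the Merkle tree is omitted, with a single SHA-256 hash standing in for the block digest.

```python
import hashlib
import json
import time

class Block:
    """A minimal block: index, timestamp, transaction data, and the
    cryptographic hash of the previous block, as described above."""

    def __init__(self, index, data, previous_hash):
        self.index = index
        self.timestamp = time.time()
        self.data = data              # e.g. user information for authentication
        self.previous_hash = previous_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        # Hash over the block contents; changing any field changes the digest.
        payload = json.dumps(
            {"index": self.index, "timestamp": self.timestamp,
             "data": self.data, "previous_hash": self.previous_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# Build a two-block chain: a genesis block and one block linked to it.
genesis = Block(0, {"user": "genesis"}, "0")
block1 = Block(1, {"user": "user1"}, genesis.hash)
```

Because each block embeds the previous block's hash, editing any earlier block invalidates every block after it, which is the linkage property the next paragraph relies on.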
A blockchain is a distributed, decentralized, public digital ledger used to record transactions in blocks across multiple networked computers, so that any record stored in a block cannot be altered retroactively without altering all subsequent blocks stored across the connected computers. It allows the participants involved in transactions to audit and verify transaction records independently. A blockchain's decentralized database is managed autonomously using multiple computers connected in a peer-to-peer network and a distributed timestamping server.
Fig 2: Structure of Generic Blockchain (Kalogeropoulos
2018)
Fig. 2 presents the structure of a generic blockchain. It illustrates that blocks contain ordered transactions, each with a timestamp, data, hash, and previousHash. Each block records transactional data, and each transaction is linked to the previous one to maintain an ordered structure. The recorded data can be information related to anything, such as user information for identity authentication, banking transactions, etc. As a consequence, a user can be authenticated against data by verifying the information stored in a block, and transactions can be traced back in time to check their authenticity. In general, a blockchain can possess different characteristics in terms of accessibility, and a blockchain's distributed database can be used in different ways depending on the use case. A classification of these features is presented in the following table, as suggested by (Garzik 2015).
Table 1. Designs and characteristics of Blockchain (Garzik 2015)

Blockchain type    Characteristics
Public             No restrictions on reading or submitting transactions for inclusion.
Private            Direct access to data and submission of transactions is limited to a predefined list of entities.
Permission-less    No restrictions on the identities of transaction processors.
Permissioned       Only a predefined list of subjects with known identities can process transactions.
C. Consensus Algorithms
For reaching consensus, the blockchain uses a proof-of-work algorithm: the cryptographic hash of each block must be smaller than a target value for the block to be considered valid, and a nonce is included in the block for this purpose. Under the proof-of-work method, changing the data in one block requires a huge amount of computation, since all successors of that block must be rewritten. In addition, when the chain branches, the longest chain is accepted by the network and the shorter branches are discarded. This process makes the data in blocks practically unmodifiable, and the more blocks are built on top of the block containing the data, the harder overwriting that data becomes.
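The nonce search described above can be sketched as follows. This is an illustrative toy, not the paper's code: the "hash smaller than a target" condition is expressed here as requiring a fixed number of leading zero hex digits, and the payload string is an arbitrary stand-in for serialized block contents.

```python
import hashlib

def proof_of_work(block_payload: str, difficulty: int = 3):
    """Search for a nonce such that SHA-256(payload + nonce) starts with
    `difficulty` zero hex digits, i.e. the hash falls below a target value."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_payload}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Mine a toy block payload at a low difficulty.
nonce, digest = proof_of_work("index:1|data:user1|prev:abc123", difficulty=3)
```

Finding the nonce takes thousands of hash evaluations even at this low difficulty, while anyone can verify the result with a single hash, which is exactly the asymmetry that makes rewriting a chain of blocks expensive.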
III. BLOCKCHAIN AUTHENTICATION FOR HADOOP
Hadoop is a distributed framework whose primary challenge is the complexity of managing large deployments, so new approaches to security are required. Authentication and data access control should be managed by a strong, flexible, scalable, and decentralized authentication mechanism that denies any malicious user access to Big Data servers. Hence, the new strategy needs to overcome the security flaws of the existing implementation. This section briefly discusses the new approach, which uses blockchain to enhance the verification of Big Data.
The authentication of Big Data using blockchain is based on the creation of a new HDFS client/gateway interface using the existing APIs of the Hadoop ecosystem and a new custom Python API based on the concepts of a blockchain decentralized distributed database. The integration of the Hadoop ecosystem and blockchain is a major challenge for this implementation, as blockchain is still evolving in the identification and authentication areas.
The HDFS client interface has been built in Python, along with blockchain functionality created using Python libraries. From the available blockchain types, the authors use a private blockchain to implement the HDFS client used for user authentication in Hadoop, where a user's information is stored as data (a transaction) in a block. Using basic blockchain features, the authors created a new authentication mechanism that provides the following features, which overcome the shortcomings of Kerberos and are described in detail in the following subsections:
Decentralized Authentication
Unbreakable Record
Zero Single Point of Failure
No Session Keys
Prevent Data Theft
The authentication process has several steps, described in the subsequent subsections. These steps extract the user's information from the login details and verify it against the data (transactions) present in the blockchain's distributed, decentralized database. The user is able to access data present on HDFS only upon successful verification.
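The verify-login-against-stored-hashes step can be sketched as below. This is a simplified illustration under the assumption stated earlier that user information is kept in hash form; the function names and the flat in-memory chain are hypothetical, standing in for the blockchain database.

```python
import hashlib

def register(chain, username):
    """Append a block-like record holding the SHA-256 hash of the user's name."""
    chain.append({"user_hash": hashlib.sha256(username.encode()).hexdigest()})

def authenticate(chain, username):
    """Hash the login name and check it against the hashes stored in blocks;
    access to HDFS would be granted only when a match is found."""
    h = hashlib.sha256(username.encode()).hexdigest()
    return any(block["user_hash"] == h for block in chain)

chain = []
register(chain, "user1")
```

Because only hashes are stored and compared, the chain never holds the name in the clear, matching the hash-based storage the paper describes.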
The following figure shows how the HDFS client, along with the blockchain authentication layer, fits into the Hadoop ecosystem.
Fig 3: Hadoop and Blockchain (Architecture)
The following are the blockchain features that overcome the shortcomings of Kerberos systems.
A. Decentralized Authentication
Decentralized authentication replaces verification mechanisms that depend on username/password-derived keys and client-side SSL certificates with elliptic-curve cryptographic key pairs; this is the same approach used in blockchain technology. It removes central databases where user information is stored and managed centrally, which are vulnerable to attackers who compromise entire credential stores. In this authentication mechanism, the user's password is used only on the user's own machine, to unlock the private key.
The private key is never transferred or exposed through the network or server, and cannot be exchanged over a side channel between server and client. This authentication protocol is based on digital signatures, which provide verifiable identity confirmation based on public keys. A user is confirmed when a transaction or message has been signed by an approved private key. The implication is that the exact identity of the owner is immaterial: whoever holds the private key is treated as the owner.
B. Unbreakable Record
Blockchain technology is a decentralized, distributed database used between groups of non-trusting parties without the need for any middleman or governing authority to manage it in a neutral way, in contrast to Oracle or any other relational system. The blockchain's distributed database maintains a chain of records that grows irreversibly, with each record in the ordered list termed a block. Each block contains a nonce, transaction information, a date and time record, and a link to the previous data block.
The blocks are computationally infeasible to alter or revert to a past state, which makes it feasible to secure blocks of records against break-ins and fraudulent activity. Because of the hashing of records, transaction information cannot be modified once it is written to a block. Neither an administrator nor an end user of the data is permitted to change or remove any data stored in blocks. Every copy of the record in the blockchain must be identical throughout the network; consensus is then achieved using a proof-of-work protocol in the mining procedure. A proof of work is a piece of data that is hard to produce yet easy for other clients to verify. This makes a blockchain distributed database suitable for recording sensitive data, for example personally identifiable information and medical and financial data.
C. Zero Single Point of Failure
Blockchain is a decentralized, distributed database or data-storage technology that maintains a chain of blocks, or records, that continuously grows in an ordered fashion. It removes the risks of storing data centrally and reduces the vulnerability to single-point failure or exploitation by network attackers.
Each blockchain server, or node, connected in the blockchain network contains a copy of the blockchain. The integrity of the data is maintained by massive replication of the database and is cryptographically trusted. Using a blockchain for client verification in a framework creates a tamper-resistant digital identity and potentially diminishes the effectiveness of phishing attacks.
The decentralized and distributed nature of the blockchain network would make it practically impossible for the infrastructure to fail under an excess of requests. Hence, an authentication strategy based on blockchain technology is resistant to DDoS attacks and hard to compromise.
D. No Session Keys
Using the SIN protocol is considered more secure than the session-key sharing over the network used in the current Kerberos authentication protocol. The SIN can be shared with everyone openly, as its corresponding private key is secured and stored on the client side; it is never transmitted over the wire and is not shared with any client or entity. During a verification step, the server validates a client by checking the client's shared public key against the client's digital signature and the SIN shared previously. It checks the SIN against the previous nonce in the blockchain's block record to guard against replay attacks, and thereby validates the client request. The benefit of using a SIN in an identification mechanism is its portability: the same identification method can be used on different devices without exposing the client's session keys and credentials over the network.
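The replay-attack check above, where a request is rejected if its nonce has been seen before, can be illustrated with a small guard object. This is a deliberately simplified stand-in: the real mechanism consults the nonce recorded in the blockchain, whereas this sketch keeps the seen nonces in a plain set, and the class name is invented for the example.

```python
import secrets

class ReplayGuard:
    """Record each request nonce; a nonce presented twice is a replay."""

    def __init__(self):
        self.seen = set()

    def accept(self, nonce: str) -> bool:
        if nonce in self.seen:
            return False          # replayed request: reject
        self.seen.add(nonce)      # first use: record and accept
        return True

guard = ReplayGuard()
nonce = secrets.token_hex(16)     # fresh random nonce for one request
```

A captured request re-sent by an attacker carries an already-recorded nonce and is refused, while every legitimate request uses a fresh nonce.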
E. Prevent Data Theft
The increase in the number of data-theft and hacking incidents has caused alarm about access to personal sensitive data, in particular financial information such as bank account details, credit cards, and health or medical records. Pentland, a professor at MIT, has investigated blockchain to build Enigma, which could potentially allow blockchain distributed databases to hold sensitive data and process it without risking exposure to malicious attackers. Enigma is described as a peer-to-peer network enabling different users or organizations to jointly store and run computations on data while keeping the data completely private.
Blockchain technology makes it harder to break into a system, as opposed to technology that does not deter theft at all. A complete blockchain implementation and infrastructure enhances the privacy, freedom, and security of data in transport.
IV. IMPLEMENTATION DETAILS
The blockchain-generated authentication data is secure because of the blockchain network architecture, and consequently it cannot be forged or modified. In a blockchain, data handling is completely transparent: if a record is not verified, it is automatically rejected. The authors used a private blockchain mechanism to implement the authentication process in the form of an HDFS client. A private blockchain is also called a permissioned blockchain; in this type of network there is a restriction on who is allowed to participate, whether in recording financial transactions or in any other transactions used to record user information. In a private blockchain, we usually know who the individual users are, which organization they come from, and what role they are associated with. Here we also assume that users behave fairly, and that if they misbehave in any way they will suffer the consequences.
The following are some of the benefits of a private blockchain:
Enterprise or Single Organization and Permissioned.
Identities are Known
Lighter Blockchain and Faster Transaction Speed.
Participants are Pre-approved.
Better Scalability.
More Efficient Consensus Process.
Using blockchain for the authentication process in the form of an HDFS client adds an additional layer to the Big Data analytics process. This additional authentication layer for Hadoop comprises two main modules:
Blockchain Admin Module.
Blockchain Authentication Module using HDFS Client.
A. Data Flow Diagram
The high-level functionality of these modules is represented in the following data flow diagrams of the system.
Fig 4: Level 0 Data Flow Diagram
Fig 5: Level 1 User Authentication DFD
V. RESULT AND DISCUSSION
A. Blockchain Admin Module
The authors implemented the Blockchain Admin Module as part of a private, permissioned blockchain network, where a user can participate in the blockchain and access the HDFS file system only after completing a pre-approved user access provisioning process. The operational features implemented for the blockchain admin module are shown in the following figure.
Fig 6: Blockchain Admin Module Options.
User Registration in Blockchain Database
This operational feature is used to add a new user to the blockchain database for Hadoop authentication. However, the user is added only temporarily until the newly added user is validated and verified by the mining operation. To add a user permanently, the blockchain admin has to execute Operation 1 followed by Operation 2. If the process of adding a new user fails, the blockchain admin has to run mining before adding any new user. Here each user's information is stored in a block, and each block contains the 64-character (SHA-256) hash of the previous block along with the transaction data, which holds the information about the user.
Blockchain Mining for User Access Provisioning
This operational feature verifies the existing blockchain and validates its contents before adding any new user information to it. The new user information is added to the blockchain upon successful validation. This operation obtains consensus from the other participating nodes in the blockchain before adding the new user information in a new block at the end of the chain.
Displaying Blocks of Blockchain
This feature displays the list of users in the Hadoop blockchain database as a chain of blocks, showing each block's hash code along with the user information. Each row consists of one block from the blockchain, represented by an index value 0, 1, 2, etc.; the index is incremented as new blocks are added to the blockchain database. The index-0 block is the genesis block, and each subsequent block contains user information along with the 64-character hash of the previous block.
Fig 7: Displaying Blocks of Blockchain
Validation of Blockchain
The validation feature lets the blockchain admin verify that the entries in the blockchain database are intact and that no one has tampered with or modified them. The validation operation is lightweight compared with the mining process, as it takes fewer resources. It passes only when the previous-block hash stored in the current block matches the newly calculated hash of the previous block. If this operation fails, the node on which it failed is considered a faulty node; if any node is declared faulty, all attempts to access data on HDFS will fail. Validation will likewise fail if someone has modified, or tried to modify, entries in the blockchain.
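The validation walk just described, comparing each block's stored previous-hash against a freshly computed hash of the previous block, can be sketched as below. This is an illustrative sketch; the function names and the plain-dict block layout are assumptions for the example.

```python
import hashlib
import json

def block_hash(block):
    """Recompute the SHA-256 digest of a block's stable fields."""
    payload = json.dumps({k: block[k] for k in ("index", "data", "previous_hash")},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def validate(blocks):
    """Return the index of the first block whose stored previous-hash does not
    match the recomputed hash of its predecessor, or None if the chain is intact."""
    for i in range(1, len(blocks)):
        if blocks[i]["previous_hash"] != block_hash(blocks[i - 1]):
            return i
    return None

# A tiny two-block chain to validate.
b0 = {"index": 0, "data": "genesis", "previous_hash": "0"}
b1 = {"index": 1, "data": "user1", "previous_hash": block_hash(b0)}
blocks = [b0, b1]
```

Validation only recomputes one hash per block, which is why it is so much cheaper than mining: there is no nonce search, just a linear walk down the chain.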
Listing Users from Blockchain Database
This feature shows the list of users in the blockchain database who are authorized to access Hadoop using the HDFS client.
Disabling Hadoop User
This operational feature allows the blockchain admin to disable a user's Hadoop access for any reason, so that the user is not able to access the HDFS file system through the authentication module.
Displaying Disabled Hadoop User List
This operational feature allows the blockchain admin to display the list of users whose access has been disabled and who cannot access Hadoop in any way using the HDFS client.
Enabling User
This operational feature allows the blockchain admin to re-enable Hadoop access for a user on the disabled list.
B. Blockchain Authentication Module using HDFS Client
The authors implemented the Blockchain Authentication Module in two parts. The first part is the private blockchain, where a user can participate in the blockchain and access the HDFS file system only after the pre-approval and user access provisioning process. The second part is the HDFS client, which contains the interface to HDFS and interacts through a command-line interface. The following figure shows the use of the blockchain-based HDFS client to access an HDFS directory after successful user authentication using the blockchain.
Fig 8: Successful User Authentication using Blockchain
Here, for example, user2 tries to access Hadoop but is not able to, because user2 does not exist in the blockchain authentication database, and so receives an authentication-failed message.
Fig 9: Unsuccessful User Authentication using
Blockchain
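The gatekeeping behavior shown in Figs. 8 and 9, where the client runs an HDFS command only for a user the blockchain has authenticated, can be sketched as a thin wrapper. This is a hypothetical sketch, not the paper's client: the function name is invented, the authentication result is passed in as a boolean, and `hdfs dfs -ls` is the standard HDFS shell command assumed to be on the PATH.

```python
import subprocess

def hdfs_ls(path, authenticated):
    """Run `hdfs dfs -ls <path>` only for an authenticated user; otherwise
    refuse with the client's authentication-failed behavior."""
    if not authenticated:
        raise PermissionError(
            "Authentication failed: user not found in blockchain database")
    cmd = ["hdfs", "dfs", "-ls", path]          # standard HDFS shell invocation
    return subprocess.run(cmd, capture_output=True, text=True)
```

An unauthenticated call (the user2 case above) never reaches the HDFS command at all; the refusal happens entirely in the client layer.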
C. Comparison:
Table 2. Comparison between Kerberos and Blockchain Authentication

Feature                    Kerberos                    Blockchain Authentication
Authentication type        Centralized                 Decentralized
Authentication mechanism   Password based              Password-less
Session key                Time-based session key      No session key
Failure mode               Single point of failure     Decentralized, no single point of failure
Exposure to attacks        Brute force, DDoS, etc.     Unbreakable/un-hackable
VI. CONCLUSION AND FUTURE SCOPE
This paper has presented the common security problems associated with Kerberos. Kerberos is used in large networks such as the internet and increasingly in a variety of systems, including Big Data environments, where security vulnerabilities are common and the system is highly exposed due to the shortcomings of Kerberos. These Kerberos limitations have been addressed here.
New solutions are needed for Big Data environments in an era of greater security requirements, as Big Data systems are being integrated with many other systems. Blockchain technology, first introduced by Bitcoin, has provided scalable security solutions to various fields across multiple sectors, and it is being researched as a way to solve other common security issues. We have likewise used blockchain technology to build an authentication mechanism for Hadoop.
Existing authentication mechanism using Kerberos, positions
Big Data systems to depend on many security risks and
vulnerabilities. The mechanism for Hadoop authentication
using blockchain is based on distributed and decentralized
infrastructure and that is scalable, reliable and has no single
point failure.
Therefore, the utilizing the advantages of blockchain
technology could be leveraged to harden security systems,
including distributed authentication and no single point
failure of Big Data system and there is no failure due to
centralized servers as the mechanism is based on distributed
technique. Hence author tried to build a new identity system
and authentication framework for big data in form of HDFS
Client which is based on blockchain technology. This
authentication mechanism is built with cloudera quickstart
vm along with python libraries which are used to create
blockchain. Currently this mechanism is works for one node
and in future work we can extends and build it for Hadoop
cluster which has multiple hosts in it.
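The paper states only that the blockchain was created with Python libraries on the Cloudera QuickStart VM; a minimal sketch of the kind of hash-chained block structure such a private blockchain could use to store user records (all names here are assumptions, not the paper's implementation):

```python
import hashlib
import json

def make_block(index, prev_hash, user_record):
    """Create a block whose hash covers its contents and the previous
    block's hash, so altering any stored user record breaks the chain."""
    body = {"index": index, "prev_hash": prev_hash, "record": user_record}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    return body

def chain_is_valid(chain):
    """Recompute every hash and check each block's link to its parent."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "prev_hash", "record")}
        if block["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(0, "0", {"user": "admin"})
chain = [genesis, make_block(1, genesis["hash"], {"user": "user1"})]
print(chain_is_valid(chain))        # True
chain[1]["record"]["user"] = "eve"  # tampering with a stored record
print(chain_is_valid(chain))        # False
```

Linking each block to its predecessor's hash is what gives the mechanism the unalterable transaction record the paper relies on: tampering with any stored user record invalidates the chain.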
REFERENCES
[1] Nazri Abdullah, Anne Håkansson, Esmiralda Moradian, "Blockchain
based Approach to Enhance Big Data Authentication in Distributed
Environment," in: International Conference on Ubiquitous and Future
Networks (ICUFN), pp. 887–892.
[2] Mithun Kankal, Pramod Patil, "An Adaptive Authentication Based on
Blockchain for Bigdata Hadoop Framework," International Journal of
Engineering and Techniques (IJET), Vol. 5, Issue 1, pp. 89–94,
January–February 2019, ISSN: 2395-1303, www.ijetjournal.org
[3] "Cloud Security Alliance, CSA Releases the Expanded Top Ten Big
Data Security & Privacy Challenges." [Online]. Available:
https://cloudsecurityalliance.org/media/news/csa-releases-the-expanded-top-ten-big-data-security-privacy-challenges/.
[Accessed: 19-Jan-2016].
[4] “Welcome to ApacheTM Hadoop®!” [Online]. Available:
https://hadoop.apache.org/. [Accessed: 12-Jan-2016].
[5] S. M. Bellovin and M. Merritt, "Limitations of the Kerberos
Authentication System," SIGCOMM Comput. Commun. Rev., vol. 20,
no. 5, pp. 119–132, Oct. 1990.
[6] D. Davis and D. E. Geer, “Kerberos Security with Clocks Adrift.” in
USENIX Security, 1995.
[7] D. E. Denning and G. M. Sacco, "Timestamps in Key Distribution
Protocols," Commun. ACM, vol. 24, no. 8, pp. 533–536, Aug. 1981.
[8] “Intel-hadoop/project-rhino,” GitHub. [Online]. Available:
https://github.com/intel-hadoop/project-rhino. [Accessed: 23-Mar-
2016].
[9] "Lightweight Directory Access Protocol," Wikipedia, the free
encyclopedia. [Online]. 20-Mar-2016.
[10] [Online] Available:
https://101blockchains.com/consensus-algorithms-blockchain/
AUTHORS PROFILE
Mithun Kankal
Hadoop administrator and developer with 9+ years of
industry experience in big data and big data analytics
using Hadoop and the Hadoop ecosystem. Working as a
researcher and implementer of various new
technologies such as Cloudera, AWS, Ansible, Terraform,
etc.
Dr. Pramod Patil
An alumnus of COEP Pune, Pramod holds a Master's in
Computer Engineering and a Ph.D. from COEP. He has a
total of 14 years of experience in academics, research
and industry. He has held various positions such as HOD,
Associate Professor, Assistant Professor and Lecturer
during his tenure. He is recognized as a Post Graduate
Teacher in Computer Engineering at the University of Pune.
Dr. Jyoti Rao
A total of 12.5 years of teaching experience in Computer
Engineering at DYPIET, Pimpri, Pune 18. Ph.D. from
Vignan University, Guntur, Andhra Pradesh. Approved
PG Teacher. Development proficiency in Microsoft
technologies, C++ and C#. Developed a live project
for BMC Software Pune during 2007-2008. Worked on
Unix, Solaris and Linux at IUCAA during 2000-2001.