In this paper, we report benchmarking results for Hyperledger, a distributed ledger derived from blockchain technology. A method to evaluate Hyperledger on limited infrastructure is developed. The measured infrastructure consists of 8 nodes with a load of up to 20,000 transactions per second. Hyperledger consistently completes all evaluations: for 20,000 transactions, the run time is 74.30 s, the latency 73.40 ms, and the throughput 257 tps. The benchmarks show that Hyperledger performs better than a database system in high-workload scenarios. We found that the maximum data volume in one transaction on the Hyperledger network is around ten (10) times that of MySQL. Also, processing a single transaction in the blockchain network is 80-200 times faster than in MySQL. This initial analysis can give practitioners an overview for deciding on the adoption of blockchain technology in their IT systems.
Blockchain and Services – Exploring the Links, by Ingo Weber
In this keynote talk, given at the ASSRI Symposium 2018, I explore four different facets of the relationship between Blockchain and Services.
First, application-level service interfaces for interaction with Blockchain-based applications enable easy integration with existing infrastructure. Second, service composition can be achieved through smart contracts, and enable different approaches to orchestrations and choreographies. Third, Blockchain-aaS offerings cover infrastructure operation, but can go beyond that. And finally, microservice principles can be applied to smart contract design.
Software Architecture and Model-Driven Engineering for Blockchain, by Ingo Weber
This talk was given at the August SydEthereum meetup, and gives an overview of our Blockchain research (Data61, CSIRO). The focus is on Software Architecture and Model-Driven Engineering. In addition to some approaches and tooling, it mentions some of the empirical work on availability of write transactions on Ethereum.
Yao Yao, Jack Rasmus-Vorrath, Ivelin Angelov
https://github.com/yaowser/basic_blockchain
https://www.slideshare.net/YaoYao44/blockchain-security-and-demonstration/
Distributed ledger technology over a network of computers, providing an alternative to centralized systems
Distributed Database
Peer-to-Peer Transmission
Transparency with Pseudonymity
Records are immutable
Computational Logic
https://www.youtube.com/watch?v=5ArZxRdhyPc
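The features listed above (distributed database, peer-to-peer transmission, immutable records) rest on one mechanism: each block stores the hash of its predecessor, so editing any earlier record breaks every later link. A minimal sketch of that hash chain, in plain Python with invented transaction strings:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the canonical JSON form of a block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash: str, data: str) -> dict:
    return {"prev_hash": prev_hash, "data": data}

# Build a three-block chain: each block commits to the previous one's hash.
genesis = make_block("0" * 64, "genesis")
b1 = make_block(block_hash(genesis), "alice pays bob 5")
b2 = make_block(block_hash(b1), "bob pays carol 2")
chain = [genesis, b1, b2]

def verify(chain: list) -> bool:
    """Walk the chain and recompute each link."""
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

assert verify(chain)

# Tampering with an earlier block invalidates every later link.
genesis["data"] = "genesis (edited)"
assert not verify(chain)
```

This is only the tamper-evidence half of the story; the consensus protocols described elsewhere on this page decide which chain of blocks the network accepts.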
Blockchain: Background and Data61 Research OverviewIngo Weber
My keynote slides at the Korean National Blockchain Conference, giving an overview of our research in Software Architecture, Model-Driven Engineering, Dependability / Availability, and Business Process Execution in the context of Blockchain.
Fair and trustworthy: Lock-free enhanced Tendermint blockchain algorithm, by TELKOMNIKA JOURNAL
Blockchain technology is primarily used to make online transactions secure by maintaining a distributed and decentralized ledger of records across multiple computers. Tendermint is a general-purpose blockchain engine composed of two parts: Tendermint Core and the blockchain application interface. The application interface makes Tendermint suitable for a wide range of applications. In this paper, we analyze and improve Practical Byzantine Fault Tolerance (PBFT), the consensus algorithm underlying the Tendermint blockchain. First, to avoid the negative effects of locks, we propose a lock-free algorithm for blockchain in which the proposal and voting phases are concurrent while the commit phase is sequential; this design allows parallelism. Secondly, a new methodology is used to decide the size of the voter set, a subset of the blockchain nodes, further investigating block sensitivity and the trustworthiness of nodes. Thirdly, to select the voter-set nodes fairly, we employ a random walk algorithm. Fourthly, we obtain the wait-freedom property by using a timeout, after which all blocks are eventually committed or aborted. In addition, we discuss voting conflicts and consensus issues that serve as a correctness property, and provide some supporting techniques.
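The design in this abstract — concurrent voting, a sequential commit decision, a randomly selected voter subset, and a timeout guaranteeing progress — can be sketched in a few lines. This is a toy stand-in, not the paper's algorithm: honest/dishonest flags replace real block validation, and `random.sample` stands in for the random-walk voter selection.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed
from concurrent.futures import TimeoutError as VoteTimeout

def cast_vote(honest: bool, block: str) -> bool:
    # Stand-in for a node independently validating the proposed block.
    return honest

def commit_round(voters, block, timeout=2.0):
    """Voting runs concurrently; the commit decision itself is sequential,
    mirroring the lock-free design described above."""
    yes = 0
    with ThreadPoolExecutor(max_workers=max(len(voters), 1)) as pool:
        futures = [pool.submit(cast_vote, v, block) for v in voters]
        try:
            for fut in as_completed(futures, timeout=timeout):
                if fut.result():
                    yes += 1
        except VoteTimeout:
            pass  # wait-freedom: votes arriving after the timeout are ignored
    # Byzantine threshold: commit only with more than two-thirds agreement.
    return "committed" if 3 * yes > 2 * len(voters) else "aborted"

# Voter set: a random subset of all nodes (the paper selects fairly via a
# random walk; random.sample is merely a stand-in for that).
all_nodes = [True] * 90 + [False] * 10   # assume 10% untrustworthy nodes
voters = random.sample(all_nodes, 7)
print(commit_round(voters, "block-42"))
```

With 7 honest voters the round commits; with only 4 of 7 honest (4 ≤ 2/3 of 7 rounded up), it aborts.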
Blockchain Essentials for Enterprise Architects, by Gokul Alex
My session on Blockchain: Protocols, Platforms, Principles and Paradigms, presented in the Bengaluru Chamber of Industries and Commerce #BCIC Talk Series held at VMware Software India in collaboration with WomenWhoCode. This presentation is a compilation of essential concepts about Bitcoin, Ethereum, IPFS, Hyperledger and R3 Corda.
Applying Blockchain Technology for Digital Transformation, by Gokul Alex
My virtual webinar session on applying Blockchain Technology for Digital Transformation of Contemporary Business Models in the UL Talks Series organised by ULTS, the IT Subsidiary of ULCCS. This presentation is a journey through the basic concepts of Blockchain Technology and a compilation of interesting business cases around Blockchain Technology.
The basic idea of decentralization is to distribute control and authority to the peripheries of an organization instead of one central body being in full control of the organization.
Details on how InfiniteChain is the most practical solution to connect your centralized environment with a public blockchain. Ethereum-based InfiniteChain addresses the blockchain bottlenecks of privacy, transaction speed and block bloat.
BLOCKCHAIN-BASED SMART CONTRACTS: A SYSTEMATIC MAPPING STUDY, by csandit
An appealing feature of blockchain technology is smart contracts. A smart contract is executable code that runs on top of the blockchain to facilitate, execute and enforce an agreement between untrusted parties without the involvement of a trusted third party. In this paper, we conduct a systematic mapping study to collect all research that is relevant to smart contracts from a technical perspective. The aim of doing so is to identify current research topics and open challenges for future studies in smart contract research. We extract 24 papers from different scientific databases. The results show that about two thirds of the papers focus on identifying and tackling smart contract issues. Four key issues are identified, namely codifying, security, privacy and performance issues. The rest of the papers focus on smart contract applications or other smart contract related topics. Research gaps that need to be addressed in future studies are provided.
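The abstract defines a smart contract as executable code that enforces an agreement without a trusted third party. As a minimal illustration of the idea — in plain Python rather than an on-chain language such as Solidity, with invented party names and amounts — here is a toy escrow whose state machine releases funds only when the agreed conditions hold:

```python
class EscrowContract:
    """Toy escrow between two untrusted parties (illustrative sketch only;
    real smart contracts run on-chain, e.g. Solidity on Ethereum)."""

    def __init__(self, buyer: str, seller: str, price: int, deadline: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deadline = deadline
        self.balance = 0
        self.state = "AWAITING_PAYMENT"

    def deposit(self, sender: str, amount: int) -> None:
        # The contract code, not a third party, checks the payment terms.
        assert self.state == "AWAITING_PAYMENT"
        assert sender == self.buyer and amount == self.price
        self.balance, self.state = amount, "AWAITING_DELIVERY"

    def confirm_delivery(self, sender: str):
        # The buyer's confirmation releases the funds to the seller.
        assert self.state == "AWAITING_DELIVERY" and sender == self.buyer
        payout, self.balance = self.balance, 0
        self.state = "COMPLETE"
        return self.seller, payout

    def refund(self, now: int):
        # After the deadline, the buyer can reclaim undelivered funds.
        assert self.state == "AWAITING_DELIVERY" and now > self.deadline
        payout, self.balance = self.balance, 0
        self.state = "REFUNDED"
        return self.buyer, payout

c = EscrowContract("alice", "bob", price=100, deadline=20)
c.deposit("alice", 100)
print(c.confirm_delivery("alice"))  # ('bob', 100)
```

On a real chain, the assertions would be enforced by every validating node, which is what removes the need for the trusted intermediary.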
Blockchain Scalability - Architectures and Algorithms, by Gokul Alex
My presentation on 'Blockchain Scalability - Architectures and Algorithms' for the TechAthena Digital Community Webinar.
Blockchain scalability is one of the most significant concerns for a minimum viable blockchain implementation. It is one of the key aspects determining the relevance and feasibility of blockchain technology for a particular use case. This session will cover the fundamental aspects of distributed computing that determine the contours of scalability.
Subsequently, the session will outline the parameters and metrics related to blockchain scalability in detail. In this context, the session will deep-dive into the architectural and algorithmic techniques that enable a scalable blockchain. Architectural techniques such as vertical scaling and horizontal scaling will be explained in detail, as will design techniques such as State Channels, Sharding, SideChains, off-chain computation, and block size and time optimization.
In summary, this session will conclude with the implications of, and trade-offs between, blockchain scalability, security, simplicity and interoperability. Looking forward to your views and thoughts!
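Of the scaling techniques the session lists, sharding is the easiest to sketch: transactions are deterministically partitioned across shards (here by hashing an invented transaction id), so disjoint sets can be validated in parallel. A minimal sketch, not any particular chain's sharding scheme:

```python
import hashlib
from collections import Counter

def shard_of(tx_id: str, num_shards: int) -> int:
    """Deterministically map a transaction to a shard by hashing its id,
    so each shard validates a disjoint transaction set in parallel."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# With a good hash, load spreads roughly evenly across shards.
txs = [f"tx-{i}" for i in range(1000)]
load = Counter(shard_of(t, 4) for t in txs)
print(dict(load))
```

Real sharded designs add the hard part this sketch omits: cross-shard transactions and per-shard consensus.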
Blockchain technology has revolutionized data storage and privacy. The decentralized data storage technique in blockchain introduced the distributed ledger system. The main motive of blockchain is to avoid third-party authorization and validation processes and intermediaries. This research shows the different areas where blockchain can be implemented, offers some guidelines, and discusses which factors need to be considered while deploying distributed ledgers. Nitin | Dr. Lakshmi J. V. N | Sharique Raza, "A Study on Applications of Blockchain", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 4, Issue 6, October 2020. URL: https://www.ijtsrd.com/papers/ijtsrd33693.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/33693/a-study-on-applications-of-blockchain/nitin
A SYSTEMATIC MAPPING STUDY ON CURRENT RESEARCH TOPICS IN SMART CONTRACTS, by ijcsit
An appealing feature of blockchain technology is smart contracts. A smart contract is executable code that runs on top of the blockchain to facilitate, execute and enforce an agreement between untrusted parties without the involvement of a trusted third party. In this paper, we conduct a systematic mapping study to collect all research that is relevant to smart contracts from a technical perspective. The aim of doing so is to identify current research topics and open challenges for future studies in smart contract research. We extract 24 papers from different scientific databases. The results show that about two thirds of the papers focus on identifying and tackling smart contract issues. Four key issues are identified, namely codifying, security, privacy and performance issues. The rest of the papers focus on smart contract applications or other smart contract related topics. Research gaps that need to be addressed in future studies are provided.
Confidential Computing - Analysing Data Without Seeing Data, by Maximilian Ott
How can people collaborate over data analysis without disclosing their data to each other? This seminar will cover an end-to-end solution to this problem, including privacy preserving entity resolution and the application of partial homomorphic encryption and Rademacher observations to private linear classification tasks.
In particular we will show that it is possible to learn from data, while keeping the data confidential, both with and without the entity resolution step. We will give a brief overview of potential applications and give some practical examples of how these approaches can be used.
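The seminar mentions partial homomorphic encryption as one of its building blocks. The classic example is the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a party can aggregate values it cannot read. A toy sketch with tiny, insecure primes (chosen only so the arithmetic is visible; this is not the seminar's actual construction):

```python
import math
import random

# Toy Paillier cryptosystem (tiny primes, illustrative only -- NOT secure).
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n, then multiply by mu mod n.
    return (pow(c, lam, n2) - 1) // n * mu % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
a, b = encrypt(3), encrypt(4)
assert decrypt(a * b % n2) == 3 + 4
print("homomorphic sum recovered:", decrypt(a * b % n2))
```

The random factor r makes encryption probabilistic, so two encryptions of the same value look unrelated, which matters for the privacy-preserving protocols the talk describes.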
An interesting way to reach consensus is by means of trusted hardware, for example Intel's Software Guard Extensions (SGX). In this paper we explore how SGX can be applied to reach consensus in a distributed network. Furthermore, we discuss some of its applications, and its pros and cons.
computerweekly.com 17-23 September 2019 – When people int....docx, by mccormicknadine86
computerweekly.com, 17-23 September 2019
When people interact with each other, for example via financial transactions, sharing legal documents or trading through supply chains, they need a high level of confidence that the data recording their interaction is accurate and true.
A distributed ledger makes it possible to build applications where multiple parties can execute transactions online without the need to trust a central authority or indeed each other.
Over the past few years, the number of use cases for distributed ledgers, and their more specialised form, blockchains, has been increasing, as has the technology to support the underlying infrastructure and build applications on top of it.
With a distributed ledger, every user has their own full, or in some cases partial, copy of the database, referred to as a node, which can be a physical device, a virtual machine or a software container. Each node runs the relevant software to provide the infrastructure management and the relevant application, including the ability to complete "smart contracts" that negotiate the direct exchange of assets between participating nodes.
Consensus
For a transaction to proceed, all nodes must verify a transaction and agree its order on the ledger. Doing so is termed "consensus", which is necessary, for example, to avoid double counting or overspending when it comes to financial assets.
Consensus involves four steps, from the transaction being initiated to it being committed on all nodes with a timestamp providing a unique cryptographic signature. These steps can be completed in seconds or minutes, depending on the technology.
Blockchains are distinguished from other distributed ledgers in being updated by adding blocks of new transactions to create an immutable, tamper-proof log of sensitive activity. The right to write blocks may require proof-of-work, which can be time- and resource-intensive, the aim being to prevent, for example, mass updates by bots.
Nomenclature has become confusing as the two terms, distributed ledger and blo ...
[Standfirst: "Inside blockchain and its various applications" – Bob Tarzey explores the technology around blockchain shaping how businesses use data. Buyer's guide to blockchain, part 2 of 3.]
How Integrated Process Management Completes the Blockchain Jigsaw, by Cognizant
Blockchain, or distributed ledger technology, makes digital transactions safer for all parties, provided that organizations apply traditional business orchestration and integrated process management to tightly connect legacy systems of record with emerging blockchain networks, promoting trust and true collaboration across their value chains.
Blockchain Computing: Prospects and Challenges for Digital Transformation. Pr..., by eraser Juan José Calderón
Blockchain Computing: Prospects and Challenges for Digital Transformation . Professor Syed Akhter Hossain.
Abstract:
A revolutionary, trustable, sharable computing outcome, the blockchain is essentially a distributed database of records, or public ledger, of all transactions originated from digital events and shared among participating parties within a computing framework. Each transaction in the public ledger is verified by consensus of a majority of the participants in the system. Once recorded, information can never be erased or altered. The blockchain contains a certain and verifiable record of every single transaction ever made during business operations. In a general sense, the blockchain can be described simply as a way of storing the information of a transaction between multiple parties in a trustable way: recording, sharing, storing and redistributing content in a secure and decentralized manner, owned, run and monitored by everybody without anyone controlling it, and avoiding any kind of modification or abuse by a central authority. Blockchain technology is non-controversial, has worked flawlessly over the last few years, is being successfully applied to both financial and non-financial applications, and has been listed as the most important invention since the Internet itself. Meanwhile, digital transformation is taking off as a rapid agent of change as part of global business convergence. In this article, detail of blockchain technologies is presented from the pe
Blockchain for AI: Review and Open Research Challenges. K. SALAH, M. H. REHMA..., by eraser Juan José Calderón
Blockchain for AI: Review and Open Research Challenges
K. SALAH, M. H. REHMAN, N. NIZAMUDDIN and A. Al-Fuqaha
ABSTRACT
Recently, artificial intelligence (AI) and blockchain have become two of the most trending and disruptive technologies. Blockchain technology has the ability to automate payments in cryptocurrency and to provide access to a shared ledger of data, transactions, and logs in a decentralized, secure, and trusted manner. Also, with smart contracts, blockchain has the ability to govern interactions among participants with no intermediary or trusted third party. AI, on the other hand, offers intelligence and decision-making capabilities for machines similar to those of humans. In this paper, we present a detailed survey on blockchain applications for AI. We review the literature, tabulate, and summarize the emerging blockchain applications, platforms, and protocols specifically targeting the AI area. We also identify and discuss open research challenges of utilizing blockchain technologies for AI.
A decentralized consensus application using a blockchain ecosystem, by IJECEIAES
Consensus is a critical operation of any decision-making process. It involves a set of eligible members whose decisions need to be honored by taking their acknowledgment before making any decision. The traditional consensus process follows a centralized architecture, which the members need to rely on and trust. The proposed system aims to develop a secure decentralized consensus application in an untrusted environment by making use of blockchain technology along with smart contracts and the InterPlanetary File System (IPFS).
Blockchain - Use Cases for the Nigerian Economy and Potential Legal Risks, by Rilwan Shittu
It is not uncommon for people to misconceive the use, relevance or applicability of new and emerging technologies - this has been the case with the Blockchain technology as a lot of people only have a vague or inaccurate understanding of what it really means and what it can do. Some have even gone as far as saying “Blockchain is the future” and “Blockchain will save Nigeria”, etc.
This article presents a practical overview of how the technology works; the potential use cases in the Nigerian economy; the legal risks and the role of regulatory agencies in aiding the adoption and growth of the technology.
Read more below:
Blockchain - Use Cases for the Nigerian Economy and Potential Legal Risks, by Damilola A. Oyebayo
It is not uncommon for people to misconceive the use, relevance or applicability of new and emerging technologies - this has been the case with the Blockchain technology as a lot of people only have a vague or inaccurate understanding of what it really means and what it can do. Some have even gone as far as saying “Blockchain is the future” and “Blockchain will save Nigeria”, etc.
This article presents a practical overview of how the technology works; the potential use cases in the Nigerian economy; the legal risks and the role of regulatory agencies in aiding the adoption and growth of the technology.
Read more below:
Amazon products reviews classification based on machine learning, deep learni..., by TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave reviews about it. These reviews are very helpful to store owners and product manufacturers for improving their work processes as well as product quality. An automated system is proposed in this work that operates on two datasets, D1 and D2, obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW), and global vectors (GloVe) and Word2vec, respectively. Four machine learning (ML) models, support vector machines (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB), two deep learning (DL) models, convolutional neural network (CNN) and long short-term memory (LSTM), and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML and DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1, with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% on word embedding features in the case of D2. The proposed model shows better overall performance on D2 than on D1.
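The TF-IDF feature extraction step the abstract describes is easy to sketch end to end. Below is a stdlib-only toy: TF-IDF vectors plus a nearest-neighbour cosine classifier on four invented reviews. It stands in for the much heavier SVM/LR/MNB/CNN/LSTM/BERT pipelines the paper actually evaluates.

```python
import math
from collections import Counter

# Tiny invented training set (the paper uses large Amazon datasets).
train = [
    ("great product works well", "pos"),
    ("love it excellent quality", "pos"),
    ("terrible broke fast", "neg"),
    ("awful waste of money", "neg"),
]

docs = [t.split() for t, _ in train]
df = Counter(w for d in docs for w in set(d))   # document frequency
N = len(docs)

def tfidf(tokens):
    """Term frequency times smoothed inverse document frequency."""
    tf = Counter(tokens)
    return {w: (c / len(tokens)) * math.log((1 + N) / (1 + df[w]) + 1)
            for w, c in tf.items()}

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = [(tfidf(t.split()), label) for t, label in train]

def predict(text: str) -> str:
    v = tfidf(text.split())
    return max(vecs, key=lambda pair: cosine(v, pair[0]))[1]

print(predict("excellent product great quality"))  # pos
print(predict("terrible waste"))                   # neg
```

Swapping the nearest-neighbour rule for a trained classifier (SVM, logistic regression, multinomial Naïve Bayes) over the same TF-IDF vectors gives the classical half of the paper's comparison.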
Design, simulation, and analysis of microstrip patch antenna for wireless app..., by TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna operating at 3.6 GHz was built and its performance tested. Rogers RT/Duroid 5880, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm, serves as the substrate for the examined antenna. The computer simulation technology (CST) Studio Suite is utilized to model the proposed antenna design. The goals of this study were a wider bandwidth, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. The simulation also revealed a side-lobe level of -28.8 dB, much better than those of earlier works. This makes the antenna a solid contender for wireless technology and more robust communication.
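A first design step for such a patch, not stated in the abstract but standard in the transmission-line model (see e.g. Balanis), is computing the patch width from the design frequency and substrate permittivity: W = c / (2 f0) · sqrt(2 / (εr + 1)). With the abstract's own values (3.6 GHz, εr = 2.2):

```python
import math

# Standard transmission-line-model patch width (a textbook starting point;
# the paper's final dimensions come from CST optimization, not this formula).
c = 3e8          # speed of light, m/s
f0 = 3.6e9       # design frequency, Hz
eps_r = 2.2      # Rogers RT/Duroid 5880 relative permittivity

W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))
print(f"patch width ≈ {W * 1e3:.1f} mm")  # ≈ 32.9 mm
```

The patch length follows from the effective permittivity and a fringing-field correction, which full-wave simulation then refines.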
Design and simulation of an optimal enhanced PI controller for congestion avoida..., by TELKOMNIKA JOURNAL
In this paper, the snake optimization algorithm (SOA) is used to find the optimal gains of an enhanced controller for controlling congestion in computer networks. An M-file and the Simulink platform are adopted to evaluate the response of the active queue management (AQM) system, and a comparison with two classical controllers is made; all tuned controller gains are obtained using the SOA method, and the fitness function chosen to monitor system performance is the integral time absolute error (ITAE). Transient analysis and robustness analysis are used to show the proposed controller's performance. Two robustness tests are applied to the AQM system: one by varying the queue size value over different periods, and the other by changing the number of transmission control protocol (TCP) sessions by ±20% from the original value. The simulation results reflect stable and robust behavior, with the best performance clearly achieving the desired queue size without noise or transmission problems.
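The control loop the abstract describes — a PI controller driving the queue toward a target size, with ITAE as the cost the optimizer minimizes — can be sketched on a toy plant. This is a simple integrator stand-in, not the paper's TCP/AQM model, and the gains below are illustrative rather than SOA-tuned:

```python
def pi_track(kp=0.5, ki=0.1, target=50.0, steps=200):
    """Discrete PI loop driving a toy integrator plant to the target queue
    size; also accumulates the ITAE cost an optimizer like SOA would minimize."""
    x, integ, itae = 0.0, 0.0, 0.0
    for k in range(steps):
        error = target - x
        integ += error
        itae += (k + 1) * abs(error)   # integral of time * |error|, discretized
        u = kp * error + ki * integ    # PI control law
        x = x + u                      # integrator plant: state += input
    return x, itae

x, itae = pi_track()
print(round(x, 2))  # 50.0 -- the loop settles on the target queue size
```

An SOA-style tuner would repeatedly call `pi_track` with candidate (kp, ki) pairs and keep the pair with the smallest returned ITAE.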
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
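The Lagrange interpolation underlying this kind of key reconstruction can be sketched over a prime field; the small prime and polynomial below are illustrative only, not the paper's elliptic-curve parameters:

```python
def lagrange_interpolate(points, x, p):
    """Evaluate the unique degree-(n-1) polynomial through `points` at `x`, mod prime p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# secret-sharing style check: f(X) = 7 + 3X + 2X^2 over GF(97)
f = lambda X: (7 + 3 * X + 2 * X * X) % 97
pts = [(x, f(x)) for x in (1, 2, 3)]
print(lagrange_interpolate(pts, 0, 97))  # recovers the constant term: 7
```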
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) toward IB. Prior research emphasizes that the most essential component in explaining IB adoption behavior is the behavioral intention to use IB. TAM is helpful for understanding how the elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that can justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to gather sufficient theoretical support from the existing literature for the essential elements and their relationships, in order to unearth new insights about the factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
Smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure for controlling the smart antenna pattern to accommodate specified requirements, such as steering the beam toward the desired signal and placing deep nulls in the directions of unwanted signals. The conventional LMS (C-LMS) has drawbacks such as slow convergence speed and high steady-state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust the step size. Computer simulation outcomes illustrate that the given model has a fast convergence rate as well as a low steady-state mean square error.
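The idea of adapting the LMS step size can be sketched as below; a simple error-magnitude rule stands in for the paper's fuzzy controller, and the 4-tap channel is a made-up identification task:

```python
import numpy as np

rng = np.random.default_rng(0)

def vs_lms(x, d, taps=4, mu_min=0.005, mu_max=0.05):
    """LMS whose step size grows with |error| (a stand-in for the fuzzy rule base)."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]       # current input vector, newest first
        e = d[n] - w @ u                      # a priori error
        mu = mu_min + (mu_max - mu_min) * min(1.0, abs(e))  # big error -> big step
        w += mu * e * u                       # LMS weight update
    return w

# identify a hypothetical 4-tap channel from noisy observations
h = np.array([0.6, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = vs_lms(x, d)
print(np.round(w, 2))  # close to h
```

Large errors early in adaptation get a large step (fast convergence); small steady-state errors shrink the step (low fluctuation), which is the same trade-off the fuzzy controller targets.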
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameter based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to a big data issue as many Internet of Things (IoT) devices are in the network. Based on previous research, most frequency spectrum was used, but some spectrums were not used, called spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require license users to have any prior signal understanding. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, the wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that the wavelet-based sensing has higher precision in detection and the interference towards primary user can be decreased.
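For contrast, the baseline energy detector that the paper compares against can be sketched as follows; the threshold uses a Gaussian approximation of the noise-only statistic, and the tone parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy_detect(samples, noise_var):
    """Energy detector: compare average power to a ~3-sigma noise-only threshold."""
    n = len(samples)
    energy = np.mean(samples ** 2)
    thresh = noise_var * (1 + 3.0 * np.sqrt(2.0 / n))  # Gaussian approximation
    return bool(energy > thresh)

noise_var = 1.0
t = np.arange(2000)
tone = np.sqrt(0.5) * np.sin(2 * np.pi * 0.1 * t)   # weak primary-user tone
noise = rng.normal(0.0, 1.0, 2000)
print(energy_detect(noise, noise_var), energy_detect(tone + noise, noise_var))
```

At very low SNR the signal's energy contribution falls under the threshold's noise-uncertainty margin, which is exactly the regime where the wavelet-based detector is claimed to do better.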
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is one where one or more computers attack or target a server by flooding it with internet traffic. As a result, the server cannot be accessed by legitimate users. Such an attack causes enormous losses for a company, because it can reduce user trust and damage the company’s reputation, losing customers due to downtime. One of the services at the application layer that users can access is the web-based lightweight directory access protocol (LDAP) service, which provides safe and easy access to directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset, collected on a complex computer network at the application layer, to get fast and accurate results when dealing with unbalanced data. Based on the results obtained, DDoS attack detection using a deep learning approach on imbalanced data performs better for binary classes when implemented with the synthetic minority oversampling technique (SMOTE). On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in the multiclass case when implemented with the adaptive synthetic (ADASYN) method.
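A minimal, hand-rolled version of the SMOTE idea (synthetic minority points interpolated between minority-class neighbours) can be sketched on made-up flow features; real work would use a tested library implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def smote(minority, n_new, k=5):
    """Minimal SMOTE: synthesize points along segments between minority neighbours."""
    out = []
    for _ in range(n_new):
        x = minority[rng.integers(len(minority))]
        d = np.linalg.norm(minority - x, axis=1)
        nn = np.argsort(d)[1:k + 1]              # k nearest neighbours, excluding x
        neighbour = minority[rng.choice(nn)]
        out.append(x + rng.random() * (neighbour - x))  # random point on the segment
    return np.array(out)

benign = rng.normal(0, 1, size=(500, 4))   # majority-class flow features (made up)
attack = rng.normal(3, 1, size=(25, 4))    # rare attack flows (made up)
synthetic = smote(attack, n_new=475)
print(len(attack) + len(synthetic), len(benign))  # 500 500
```

After oversampling, both classes contribute equally to the loss, which is why the deep model stops collapsing onto the majority class.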
The appearance of uncertainties and disturbances often affects the characteristics of both linear and nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems suffering from mismatched uncertainties and exogenous disturbances. The perturbation to the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and stated in Proposition 1. Two types of mathematical expressions are presented to distinguish a system with matched uncertainty from a system with mismatched uncertainty. Lastly, two-dimensional numerical examples are provided to exercise the proposed proposition. The outcome shows that the matching condition has the ability to transform the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low-power multipliers is rising steadily in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. The HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and a power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first-zero-finding (FZF) logic has the lowest power consumption of 1.4 µW and a PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the Doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both bit error rate (BER) and throughput performance. In this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results show a massive improvement over the conventional system and significant gains with massive MIMO, including the best throughput and BER. Compared to conventional systems, the improved system has a throughput that is around 22% higher, and it achieves around 25% fewer errors than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, the aim is to obtain, from a single antenna, two different asymmetric radiation patterns at two different frequencies: one from an antenna shaped like the cross-section of a parabolic reflector (a fan-blade type antenna) and one with cosecant-squared radiation characteristics. For this purpose, a fan-blade type antenna is designed first, and then its reflective surface is completed to the shape of the reflective surface of the cosecant-squared antenna using a frequency selective surface designed to provide the characteristics suitable for the purpose. The designed frequency selective surface provides transmission as close to perfect as possible at the 4 GHz operating frequency, while acting as a band-stop filter for electromagnetic waves at the 5 GHz operating frequency, where it serves as a reflective surface. Thanks to this frequency selective surface used as a reflective surface in the antenna, a fan-blade type radiation characteristic is obtained at the 4 GHz operating frequency, while a cosecant-squared radiation characteristic is obtained at the 5 GHz operating frequency.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber-based optical sensor for iron detection is demonstrated in this paper. The sensor head consists of an unclad optical fiber with an unclad length of 1 cm and a straight structure. The results show a linear relationship between the output light intensity and the iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity achieved are 0.0328/ppm and 0.9824, respectively, at a wavelength of 690 nm. At the same wavelength, other performance parameters are also studied: the resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm, respectively. This iron sensor is advantageous in that it does not require any reagent for detection, making it simpler and more cost-effective to implement for iron sensing.
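The reported sensitivity and LOD follow from a standard linear calibration; the intensity readings below are fabricated to mimic the paper's slope, and the 3-sigma LOD criterion is a common convention, not necessarily the authors' exact method:

```python
import numpy as np

# hypothetical intensity readings at known iron concentrations (ppm)
conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
intensity = np.array([1.000, 0.968, 0.935, 0.901, 0.870, 0.836])

slope, intercept = np.polyfit(conc, intensity, 1)   # least-squares calibration line
residuals = intensity - (slope * conc + intercept)
r = np.corrcoef(conc, intensity)[0, 1]

# common 3-sigma criterion: LOD = 3 * (residual std) / |slope|
lod = 3 * np.std(residuals, ddof=2) / abs(slope)
print(f"sensitivity={abs(slope):.4f}/ppm  R^2={r**2:.4f}  LOD={lod:.3f} ppm")
```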
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses progressive learning for the structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM maintains robust accuracy while updating on new data and new-class data in an online training situation. This robustness arises from the householder block exact QR decomposition recursive least squares (HBQRD-RLS) used within PSTOS-ELM. The method is suitable for applications with streaming data that often encounter new-class data. Our experiment compares the accuracy and accuracy robustness of PSTOS-ELM during data updates with the batch extreme learning machine (ELM) and the structural tolerance online sequential extreme learning machine (STOS-ELM), both of which must retrain on the data when new-class data arrives. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also being able to update on new-class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
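The FFT-based extraction of relaxation/attention indicators can be sketched as follows; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not the device's actual output:

```python
import numpy as np

fs = 512                         # Hz; an assumed headset sampling rate
t = np.arange(0, 2, 1 / fs)
# synthetic EEG: strong alpha (10 Hz, relaxation) + weak beta (20 Hz) + noise
eeg = 4 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 20 * t)
eeg += 0.5 * np.random.default_rng(0).standard_normal(len(t))

freqs = np.fft.rfftfreq(len(eeg), 1 / fs)    # frequency axis for the real FFT
psd = np.abs(np.fft.rfft(eeg)) ** 2          # power spectrum

def band_power(lo, hi):
    return float(psd[(freqs >= lo) & (freqs < hi)].sum())

alpha, beta = band_power(8, 13), band_power(13, 30)
print("relaxed" if alpha > beta else "attentive")  # alpha dominates here
```

In the full system, features like these band powers (per eye gesture) would be fed to the multilayer neural network classifier.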
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
Level set models are frequently employed for image segmentation. They offer the best solution to overcome the main limitations of deformable parametric models. However, the challenge in applying those models to medical images still lies in removing blur at image edges, which directly affects the edge indicator function, prevents images from being segmented adaptively, and causes incorrect analysis of pathologies that precludes a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving a system of two-dimensional partial differential equations (PDEs). The first PDE performs restoration using Euler's equation, similar to anisotropic smoothing based on a regularized Perona-Malik filter that eliminates noise while preserving edge information, in accordance with the contours detected by the second equation, which segments the image based on the solutions of the first. This approach allows developing a new algorithm that overcomes the drawbacks of the studied model. The proposed method yields clear segments that can be used in any application. Experiments on many medical images, in particular blurry images with high information loss, demonstrate that the developed approach produces superior segmentation results in both quantity and quality compared to models presented in previous works.
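The restoration half of such a system, a Perona-Malik style anisotropic diffusion, can be sketched as follows; the conductance parameters and test image are illustrative, not the paper's regularized formulation:

```python
import numpy as np

def perona_malik(img, iters=20, k=0.3, lam=0.2):
    """Anisotropic diffusion: smooth within regions, diffuse weakly across edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / k) ** 2)       # Perona-Malik conductance function
    for _ in range(iters):
        dn = np.roll(u, -1, 0) - u            # differences to the four neighbours
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0      # image with one sharp edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = perona_malik(noisy)
print(f"error before: {np.abs(noisy - clean).mean():.3f}, "
      f"after: {np.abs(den - clean).mean():.3f}")
```

Because the conductance collapses at strong gradients, the step edge survives while the noise inside each flat region is averaged away, which is what keeps the edge indicator function usable for the segmentation PDE.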
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversarial training.
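The 2D-to-3D lift can be illustrated with a toy stand-in; stacking shifted copies is not the authors' volumization algorithm, merely a shape-level sketch of producing a volume that a 3D convolution could consume:

```python
import numpy as np

def volumize(img, depth=8):
    """Illustrative 2D->3D lift: stack circularly shifted copies of the image.
    (A hypothetical stand-in; the paper's actual volumization differs.)"""
    return np.stack([np.roll(img, shift=s, axis=0) for s in range(depth)])

img = np.arange(32 * 32, dtype=float).reshape(32, 32)  # stands in for one CIFAR channel
vol = volumize(img)
print(vol.shape)  # (8, 32, 32): a volume ready for a 3D convolution layer
```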
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, especially when your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, and JavaScript), and MySQL relational databases. The objective of this project is to develop a basic shopping cart website for consumers and to learn about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it is tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration processes of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees quickly identify the cosmetic products that have reached the minimum quantity, keep track of the expiry date of each product, and find the rack number in which a product is placed. It is also a faster and more efficient way of working.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Overview of the fundamental roles in hydropower generation and the components involved in wider electrical engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist’s survey of the valley before construction, through all involved disciplines (fluid dynamics, structural engineering, generation, and mains frequency regulation), to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co-editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
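The hierarchical structure can be illustrated with a toy sketch; the subsystem names, the capacity figure, and the load-shedding rule are all hypothetical, standing in for the paper's actual models and controls:

```python
from dataclasses import dataclass, field

@dataclass
class TwinBlock:
    """Lower-level digital twin of one physical subsystem (illustrative only)."""
    name: str
    load_kw: float = 0.0

    def update(self, measured_kw: float) -> None:
        self.load_kw = measured_kw            # ingest a measurement from hardware

@dataclass
class SystemTwin:
    """Higher-level twin aggregating block states into a system-level response."""
    blocks: list = field(default_factory=list)
    capacity_kw: float = 1000.0

    def total_load(self) -> float:
        return sum(b.load_kw for b in self.blocks)

    def reconfigure(self) -> str:
        # hypothetical rule: shed load when aggregate demand exceeds capacity
        return "shed_load" if self.total_load() > self.capacity_kw else "nominal"

blocks = [TwinBlock("propulsion"), TwinBlock("radar"), TwinBlock("hotel")]
system = SystemTwin(blocks)
for b, kw in zip(blocks, (600.0, 300.0, 250.0)):
    b.update(kw)
print(system.total_load(), system.reconfigure())  # 1150.0 shed_load
```

The point of the hierarchy is visible even in this sketch: each block only tracks its own subsystem, while the system twin composes their states to drive control decisions back to the hardware.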
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible with the IDM8000 CCR. Backplane-mounted serial and TCP/Ethernet communication module for CCR remote access. IDM8000 CCR remote control over serial and TCP protocols.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Benchmark and comparison between hyperledger and MySQL
TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 18, No. 2, April 2020, pp. 705~715
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v18i2.13743
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
Benchmark and comparison between hyperledger and MySQL
Onno W. Purbo, Sriyanto, Suhendro, Rz Abd. Aziz, Riko Herwanto
Institut Informatika dan Bisnis Darmajaya Bandar Lampung, Indonesia
Article Info
Article history: Received Jul 27, 2019; Revised Dec 31, 2019; Accepted Feb 5, 2020
ABSTRACT
In this paper, we report the benchmarking results of Hyperledger, a Distributed Ledger derived from Blockchain technology. A method to evaluate Hyperledger in a limited infrastructure is developed. The measured infrastructure consists of 8 nodes with a load of up to 20,000 transactions. Hyperledger consistently completes all evaluations; for 20,000 transactions, it achieves a run time of 74.30 s, a latency of 73.40 ms, and a throughput of 257 tps. The benchmark shows that Hyperledger performs better than a database system in a high-workload scenario. We found that the maximum data volume in one transaction on the Hyperledger network is around ten (10) times that of MySQL. Also, the time spent on processing a single transaction in the blockchain network is 80-200 times faster than MySQL. This initial analysis can provide an overview for practitioners making decisions about the adoption of blockchain technology in their IT systems.
Keywords: Blockchain, Hyperledger, Latency, MySQL, Throughput
This is an open access article under the CC BY-SA license.
Corresponding Author:
Riko Herwanto,
Institut Informatika dan Bisnis Darmajaya Bandar Lampung, Indonesia.
Email: rikoherwanto@darmajaya.ac.id
1. INTRODUCTION
In this work, Hyperledger Fabric [1], a Distributed Ledger Technology (DLT) [2, 3] implementation from the Linux Foundation, is benchmarked. A DLT manages a ledger through peer-to-peer networks using consensus mechanisms and smart contracts. Hyperledger is an implementation of a Blockchain framework that is used to develop applications with a modular architecture [4]. Hyperledger is an open-source project for Blockchain and related tools. DLT provides new models of trust and business opportunities; for this reason, it is emerging in many fields, such as Financial Technology (Fintech) [5] and health services [6], including government organizations [7]. Unfortunately, due to complex peer-to-peer interactions, DLT performance is more difficult to assess than that of a centralized system [8]. This work reports on an attempt to benchmark a DLT and compare it to a database system. In comparing distributed blockchains with relational databases, relational databases are more mature; thus, there are more options among relational database frameworks than among permissioned Blockchain frameworks [9]. The available permissioned Blockchains are all in early development stages, and more options are likely to become available in the future. Cloud support is also more extensive for relational databases.
Blockchain is relatively new compared to distributed databases. This work shows that Blockchain technology is comparable to older techniques in terms of latency [10]. In some cases its performance is better, and when the consistency model is taken into account, there will likely soon be several meaningful use cases in which a permissioned blockchain is a better choice than a distributed database [11]. Blockchain is a new technology for sharing transactional data and computations without relying on third-party facilities. Blockchain uses a different architecture from a traditional database or protocol. This difference in architecture leads to differences in performance, cost, and security, but little work predicts the performance of blockchain-based
systems [12]. In this paper, we have evaluated Hyperledger Fabric v1.0. The assessment shows that Hyperledger Fabric v1.0 with more than two nodes performs better on all evaluation metrics than with a single node. We are also interested in comparing the performance of blockchain platforms and public blockchains with a traditional database (MySQL).
Blockchain [13], initially Block Chain, is a growing list of records, called blocks, which are linked and cryptographically secured. Each block usually contains a cryptographic hash of the previous block [14], a timestamp, and transaction data [15]. Blockchain is designed to be resistant to data modification. The Blockchain is an open distributed ledger that records transactions between two or more parties efficiently and in a verifiable and permanent way [16]. A peer-to-peer network collectively manages the distributed ledger by following specific protocols for confirming communication between nodes. Once recorded, the data in a block cannot be changed retroactively without changing all subsequent blocks, which requires the majority consensus of the network. The Blockchain is technically defined as a ledger that records transactions, maintained in a distributed network of mutually non-trusting peers, in which each peer stores a copy of the ledger.
From its beginning, Blockchain was designed to be safe and secure by design, and it has become an example of a distributed computing system with high Byzantine fault tolerance (BFT) [17]. Decentralized consensus can be achieved by a blockchain [9]. Thus, Blockchain is suitable for recording events, medical records [18, 19], and other record-management activities, such as identity management [20-24], transaction processing, documentation of evidence, or traceability [25]. In short, Blockchain is a ledger system that records every transaction in the form of a decentralized database network. Each data transaction is recorded in a block entity, and each block is connected (chained) to a pre-existing block. Blockchain was first described by Satoshi Nakamoto when he introduced the world's first cryptocurrency, Bitcoin, in 2008. Since then, Blockchain has been the technology that supports Bitcoin.
One of the neat features of Blockchain is the ability to embed logical computation. When a specific criterion is fulfilled, the Blockchain may automatically carry out a transaction. For example, companies may program their Blockchain account to make automatic raw-material procurement payments once the truck carrying the material has entered the company complex. Blockchain technology has opened up opportunities for new types of applications that allow elegant data sharing across organizational boundaries, where all entities can collectively own and manage shared data [26-28]. Though often confused with, or seen as an alternative to, relational databases or big data solutions, Blockchain is not a substitute for them. Blockchain is very attractive for applications that require multi-party reconciliation, trusted intermediation, transparency, auditability, and high-level integrity. Blockchain is publicly used in Bitcoin, Ethereum, and other cryptocurrencies as they emerge. Enterprise-scale Blockchain applications are emerging and may soon be in production-scale deployment.
2. RESEARCH METHOD
The complex DLT architecture is divided into four layers, i.e., the network, node, ledger, and application layers, to facilitate further analysis. On each layer, several metrics and influence factors are determined. Different metrics, influenced by different factors, are measured for benchmarking. The concepts of primary and simulated DLT workloads are introduced. Based on this analysis, a framework was designed that builds the distributed foundation of the testing environment and carries out reproducible measurements. The framework is designed so that the evaluated technology and the test environment are easily exchanged, and it was implemented as a benchmarking tool. For the performance measurement and evaluation of Hyperledger Fabric, the experiments were carried out in a controlled laboratory environment. Evaluation of the measurement results provides information about the performance effects of four factors, specifically changes in transaction rate, workload, and block size, as well as the impact of packet loss. The measurements show that factors at each layer can directly affect the performance of the entire network: increased transaction rates, demanding workloads, unfavorable memory configurations, or unfavorable network conditions. In this work, the Hyperledger blockchain platform, Hyperledger Fabric v1.0, was evaluated. The experiment was carried out on an i7 laptop (8 GB RAM, 500 GB SSD) and a machine with 3 Core 2 Duo CPUs (4 GB RAM, 160 GB HDD), running Ubuntu 18.04 LTS.
− The first experiment analyzed the performance of the Hyperledger Fabric v1.0 platform. Platform performance was assessed in terms of execution time, latency, and throughput by varying the workload, i.e., the number of transactions and simultaneous requests, up to 20,000 transactions. A transaction is measured from its submission for consensus by the peers until it is added to a block. Execution time is the time the platform needs to add and execute transactions successfully. Throughput can be defined as the number of successful transactions per second. Finally, latency in the blockchain can be measured as the time it takes a specific platform to respond to each transaction.
− The second experiment compares Hyperledger with a relational database, in this case MySQL, under the same data load.
− Throughput. Throughput is defined as the quantity of data successfully transferred between nodes per unit of time, usually measured in seconds, as shown in Figure 1. Throughput is often defined for individual connections or sessions, but in some cases the total network throughput is determined. Ideally, throughput should equal capacity, which depends on the physical-layer technology used. Network capacity must be sufficient to handle the offered load, even at peak network traffic. Theoretically, throughput increases as the offered load increases, up to the maximum network capacity. In practice, however, throughput depends on the access method (for example, token passing or carrier sensing), the network load, and the error rate. Figure 1 shows both the ideal situation, in which throughput increases linearly with the offered load, and the actual behavior, in which throughput decreases once the offered load exceeds a certain maximum point.
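The saturation behavior described above can be sketched as a simple model. The function below is illustrative only; the degradation slope of 0.5 past capacity is an assumption, not a measurement from this paper:

```go
package main

import "fmt"

// observedThroughput models the curve of Figure 1: below capacity the
// network keeps up with the offered load; above it, contention makes the
// achieved throughput fall off. The 0.5 degradation slope is an assumption.
func observedThroughput(offeredTPS, capacityTPS float64) float64 {
	if offeredTPS <= capacityTPS {
		return offeredTPS // ideal region: throughput tracks offered load
	}
	t := capacityTPS - 0.5*(offeredTPS-capacityTPS) // saturated region
	if t < 0 {
		return 0
	}
	return t
}

func main() {
	// Capacity of 257 tps taken from the results reported later in the paper.
	for _, load := range []float64{100, 257, 400} {
		fmt.Printf("offered %.0f tps -> %.1f tps\n", load, observedThroughput(load, 257))
	}
}
```

With these assumed numbers, throughput tracks the load up to 257 tps and then declines.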
Figure 1. Throughput workflow
− Latency. Latency is the time needed to send a message from one end of the network to the other, or the time lag in delivering data packets from the sender to the recipient. The higher the time lag or latency, the higher the risk of access failure. Network latency is also often interpreted as the level of delay in data and voice communication networks. Latency is measured strictly as time; for example, a network may take 24 milliseconds (ms) to send a message from one end to the other. In general, latency has three components: propagation delay, transit delay, and queuing delay.
2.1. Experiment platform
In this paper, an evaluation framework for a private distributed ledger is designed and developed. For this purpose, several layers have been defined: the network layer, the node and ledger layers above it, and the application layer on top. For each of these layers, metrics and factors have been identified that make it possible to measure or influence the performance of DLT networks. Workloads have also been defined that either stress individual aspects of the DLT or represent realistic use cases. The four phases of the experiments are:
− Design phase
The aim is to define a framework that runs experiments on the DLT, supporting technology configuration, the actual measurements, and the evaluation of results. The design phase determines the objectives, i.e., throughput and latency, and the work is divided into three further phases (testbed preparation, measurement, and evaluation), which are discussed below.
− Testbed preparation phase
A testbed is prepared before the benchmarking process. This includes the installation and configuration of the software (OS, Docker engine, network) necessary to facilitate the following steps. After that, the Ledger hosts, equipped with the Go language and Docker CE, are needed to benchmark the DLT. The testbed includes several tools, such as Chaincode-Payload-Size, Chaincode-Scalability, and Channel-Scalability, which are used during the measurement phase on the experiment hosts.
− Measurement phase
Figure 2 describes the measurement phase, starting with the initial configuration step. When running an experiment, any final configuration changes, such as network interference, are first applied to the experiment hosts. After this, all monitors are initialized on the hosts where the experiment takes place. This might include
recording network traffic or resource usage, for example. The workload execution is initiated by the Orchestration hosts, which do not disturb the process and wait until it is finished. The Benchmark hosts take over and run the experiments on the Ledger hosts according to the workload definition. All previous configuration and initialization steps allow the system to run experiments without external intervention; all changes during an experiment are either timed or directly induced by the workload execution on the Ledger hosts. After the workload execution is complete, the experiment is shut down. The monitors on the Ledger hosts are stopped, and any interference applied during the measurement configuration step is removed, for example, any artificial network loss placed on the network, so that subsequent jobs on the testbed run uninterrupted. Finally, all collected information is extracted from the Ledger hosts and gathered at the Orchestration hosts. This makes it possible to immediately reset the Ledger hosts, but not the Orchestration hosts, for further measurements.
Figure 2. Measurement workflow
− Evaluation phase
The last phase is the evaluation phase, in which the collected data must be evaluated, starting with a preprocessing step. Results coming from several hosts go through the following steps:
− Simplifying further processing, such as converting to standard file formats.
− Cleaning any duplicated traffic recorded on multiple hosts.
− Normalizing time steps.
− Integrating and connecting the various data sources.
The processed data can then be evaluated with respect to the relevant metrics. Several evaluation methods are defined in this work, and the format of the preprocessed data makes it easy to add further evaluation methods:
− Transaction & read latency: measures the time for an issued transaction to be completed and its response to become available to the issuing application. The maximum, minimum, and average latency for the test cycle are provided.
− Transaction & read throughput: measures the flow rate of all transactions through the system, in transactions per second, during the cycle.
In the experiment, transactions are first executed to pre-populate the chain/ledger with Blocks. This is the starting point for estimating how the system behaves when enough data is in the system. Then, read-write transactions are executed, where each transaction randomly reads and modifies Blocks. In each experiment, one (or two) parameters vary (marked with a dashed line) while the other parameters are kept fixed. The OS
file system cache is not cleared between the insert transactions and the read-write transactions, on the assumption that, in practical settings, most of the live data will come from the file system cache. In a separate experiment, clearing the file system cache before running the read-write experiment decreased the observed throughput by about 17%. The experiments use the following parameters:
− Total number of chains (default: 10)
− Number of transactions simulated in parallel on each chain (default: 10)
− Total Blocks across all chains, with keys distributed evenly (default: 20,000)
− Number of keys randomly read and modified by each transaction (default: 4)
− Value size of each Block (default: 200 bytes)
− Number of transactions in each block (default: 50)
A Chaincode script was developed to set the default experiment configuration. JavaScript Object Notation (JSON), a concise, text-based, human-readable format representing simple data structures and associative arrays (called objects), is used to send structured data over the network connection. The configuration itself is set by the following Go code:
type configuration struct {
	chainMgrConf *chainmgmt.ChainMgrConf // chain manager settings (data dir, number of chains)
	batchConf    *chainmgmt.BatchConf    // batching/block settings
	dataConf     *dataConf               // key-value data settings
	txConf       *txConf                 // transaction workload settings
}

func defaultConf() *configuration {
	conf := &configuration{}
	conf.chainMgrConf = &chainmgmt.ChainMgrConf{DataDir: "/tmp/fabric/ledgerPerfTests", NumChains: 1}
	conf.batchConf = &chainmgmt.BatchConf{BatchSize: 10, SignBlock: false}
	conf.txConf = &txConf{numTotalTxs: 20000, numParallelTxsPerChain: 2, numWritesPerTx: 4, numReadsPerTx: 4}
	conf.dataConf = &dataConf{numKVs: 20000, kvSize: 200}
	return conf
}
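As noted above, each experiment varies one (or two) parameters while keeping the others fixed. A self-contained sketch of such a sweep, using a local stand-in for the data configuration (the chainmgmt types belong to an external Fabric test package):

```go
package main

import "fmt"

// dataConf mirrors the data configuration used in the benchmark script;
// other fields are omitted for brevity. This is a stand-in, not the
// chainmgmt package itself.
type dataConf struct {
	numKVs int // total Blocks/keys in the ledger
	kvSize int // value size per Block, in bytes
}

// sweepKVSize builds one configuration per candidate value size, keeping
// every other parameter at its default, as the experiments do.
func sweepKVSize(sizes []int) []dataConf {
	confs := make([]dataConf, 0, len(sizes))
	for _, s := range sizes {
		confs = append(confs, dataConf{numKVs: 20000, kvSize: s})
	}
	return confs
}

func main() {
	for _, c := range sweepKVSize([]int{200, 400, 800}) {
		fmt.Printf("numKVs=%d kvSize=%dB\n", c.numKVs, c.kvSize)
	}
}
```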
3. RESULT AND DISCUSSION
3.1. Performance results of distributed ledger technology (Hyperledger Fabric)
Figure 3 shows that the DLT transaction rate increases up to around 257 transactions per second (tps). We simulated a network with eight (8) nodes, with a constant one (1) second distance between nodes. We generated different block levels varying from 0 to 10 blocks, with the block size set to 50 transactions per block and a transaction arrival rate of 65 transactions per second. Throughput depends directly on two (2) parameters: the block size, i.e., the number of bytes of transactions each block can contain, and the inter-block time interval, i.e., the average time the system needs to produce a new block. To increase the Hyperledger throughput, one may increase the block size to fit more transactions, or reduce the inter-block time interval so that blocks are processed at a higher rate. In other words, the transactions per second (tps) is affected by the increased number of nodes.
As shown in Figure 3, throughput increases roughly linearly with the number of transactions, up to around 257 tps. At the same time, as shown in Figure 4, latency decreased to 100 ms, about a 62 percent decrease in latency per node. Latency is the time lag required to deliver data packets from the sender to the recipient; the higher the delay, the higher the chance of access failure. Fortunately, latency decreases as more nodes join the transaction.
Figure 5 shows that the number of parallel transactions on a chain does not affect the overall throughput. We simulated the influence of the number of chains on the output by running the test with different numbers of chains. In this experiment, we used four simulated parallel transactions on each chain, 20,000 Blocks across the chains with keys distributed evenly, four keys randomly read and modified by each transaction, a Block size of 200 bytes, and 50 transactions in each Block. Since the processing of each Block can be performed simultaneously, a run with ten (10) chains has 25 percent higher throughput than a run with one (1) chain. Thus, the more chains used in the process, the higher the throughput.
This is expected, considering that Chaincodes, the scripts that process inter-block transactions, can be installed on multiple nodes, which links the code between nodes, allows parallel chaincode execution, and thus affects execution time.
Figure 6 shows that increasing the number of chains increases overall throughput; the main reason appears to be the parallel validation and block-commit processes. In this work, we used ten (10) chains, four (4) simulated parallel transactions in each chain, 20,000 Blocks across all chains with keys distributed evenly, four (4) keys read and modified randomly by each transaction, a Block size of 200 bytes, and 50 transactions in each Block.
As shown in Figure 3, increasing the number of chains increases the throughput, and the transactions per second (tps) increases with the number of nodes. Since each node holds a copy of the ledger, this enables an increase in throughput. However, as shown in Figure 6, when the number of chains is below 100, the throughput does not appear to be significantly affected, at around 38 percent of transactions per second. If the number of chains is increased up to 5 times, the throughput percentage rises to around 62 percent.
Figure 3. Throughput
Figure 4. Latency
Figure 7 shows that increasing the number of Blocks operated on in each transaction reduces the overall throughput. In this work, we used ten chains, ten transactions simulated in parallel on each chain, 20,000 Blocks across all chains with keys distributed evenly, four keys read and modified randomly by each transaction, a Block size of 200 bytes, and 50 transactions in each Block, which together constitute the volume of transactions to be processed. Any additional Block reduces overall throughput, since transactions are ordered, grouped, and sent as Blocks to the peers, and each peer processes one Block at a time.
Figure 8 shows that an increase in Block size reduces overall throughput. In this work, we used ten chains, four simulated parallel transactions on each chain, 20,000 Blocks across all chains with keys distributed evenly, four (4) keys read and modified randomly by each transaction, a Block size of 200 bytes, and 50 transactions in each Block. As the Block size increases, latency increases, because the arrival rate grows with the block size; although a larger Block can hold more transactions, the added latency limits the throughput of the transactions as a whole.
In the experiment shown in Figure 9, we used ten (10) chains, four (4) simulated parallel transactions on each chain, 20,000 Blocks across all chains with keys distributed evenly, four (4) keys read and modified randomly by each transaction, a Block size of 200 bytes, and 50 transactions in each Block. An increase in the transaction size increases the overall throughput. This increase is likely due to writing many transactions to each Block simultaneously, and it appears to occur above 100 transactions.
Figure 5. Throughput vs number of parallel processes
Figure 6. Throughput vs number of chains
Figure 7. Throughput vs number of blocks per transaction
Figure 8. Throughput vs block size
Figure 9. Throughput vs number of transactions
3.2. Comparing hyperledger and a relational database
In this section, Hyperledger and a relational database, MySQL, are compared in two ways: reading and writing data. This comparison provides practical insights for combining blockchain technology with centralized databases. In this work, Hyperledger Fabric is used for the blockchain and MySQL for the relational database. For this arrangement, we have two criteria. The first is whether the tested object is functionally complete. The second is that Hyperledger Fabric implements the various functions and executions [16]. MySQL, likewise, can store various types of data.
Figure 10 shows the execution time of each transaction. The Blockchain spends more time on larger data volumes, reaching 2027 ms with 6 KB of data, while the average read/write time of MySQL is 1.22 ms. We therefore found that the time the Blockchain spends reading and writing data is about 1660 times that of MySQL. Figure 11 shows the relationship between throughput and the data volume per transaction for the Blockchain and the MySQL database. As data volumes increase and exceed the 2 KB threshold, the Hyperledger throughput increases.
Comparing the results, Hyperledger shows higher performance with much larger data volumes and a linear relationship with the data; it is found that Hyperledger is 80-200 times faster than MySQL. As the data volume in one transaction increases, the required processing time increases; time and data volume show an exponential relationship. Note that there are eight nodes in the blockchain, and the linear relationship may be a result of the number of nodes.
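The quoted per-transaction times can be cross-checked against the roughly 1660x factor reported above with a quick arithmetic check:

```go
package main

import "fmt"

func main() {
	blockchainMs := 2027.0 // reported blockchain R/W time for 6 KB of data
	mysqlMs := 1.22        // reported average MySQL read/write time
	// The ratio comes out close to the reported ~1660x gap.
	fmt.Printf("ratio ≈ %.0f\n", blockchainMs/mysqlMs)
}
```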
Figure 10. Data volume vs time R/W
Figure 11. Throughput vs data volume
3.3. Findings
From the test results, it was found that the maximum data volume in one transaction on the Hyperledger network is around 10 times that of MySQL, and the time spent processing one transaction on the blockchain network is 80-200 times faster than MySQL. Figure 3 shows that the DLT transaction rate increased to around 257 transactions per second. Reducing the chaincode load that causes queuing delays lessens the load accumulated on the node or network from previous transactions. Figure 4 shows a slight increase in transaction latency with workload at a load of 10 KB; heavier workloads show an earlier and more significant increase in transaction latency. The results also show that transaction performance improves when the block size is increased: reducing the block from 10 transactions to 5 transactions doubles the average transaction latency at high transaction rates. Increased transaction rates, heavy workloads, unfavorable block sizes, and high network losses have all been confirmed to limit the performance of distributed ledgers. This work also shows that performance depends on parameters and conditions at different layers. Some of these parameters can be modified to improve performance: adjusting block sizes to the individual requirements of the ledger application may improve performance, while other settings, such as the workloads or the network characteristics under which the distributed ledger runs, cannot be easily controlled.
4. CONCLUSION AND FUTURE WORK
In this paper, we report a structured experimental approach to characterize the performance of the Hyperledger Blockchain platform. The framework developed in this paper was built to be extended: factors, metrics, and workloads can be added, and it can be extended with various network limitations such as limited network speed or network delays. The metrics should include information on the load of each node, to enable appropriate statements about congestion in the distributed ledger; in this work, this feature has been partially implemented, and the workload can easily be increased. We found that the read throughput of the system is linear, while the write process is almost linear at low transaction rates below 1,000 transactions per second. The read and write latency is affected by the number of nodes. Increasing the number of participating nodes may yield better throughput and latency, but may also require infrastructure scaling to achieve the desired performance; further infrastructure-scaling experiments should be carried out on the testbed. Evaluating the limits of the distributed ledger setup, and adjusting the ledger parameters to optimize its performance in a constrained running environment, may also be of interest. The performance results, i.e., throughput, execution time, and latency, indicate that Hyperledger is consistently better than MySQL. We found that the maximum data volume in one transaction on the Hyperledger network is around ten (10) times that of MySQL, and the time spent processing one transaction on the blockchain network is 80-200 times faster than MySQL.
In general, Hyperledger produces the same performance for a given number of nodes regardless of the load. However, Hyperledger performance is affected as the number of nodes, and thus the number of Blocks, the Block size, and the number of transactions, is changed. To achieve higher throughput and greater efficiency, the block interval should be made as small as possible; however, we found that the block interval for a Hyperledger-based Blockchain protocol should not be less than 12 s, which ensures fast propagation and low latency. This result implies that the blockchain may be more suitable for data-intensive applications/systems.
REFERENCES
[1] E. Androulaki, et al., "Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains," Proc. of the Thirteenth EuroSys Conf., pp. 1-15, 2018.
[2] H. Sukhwani, N. Wang, K. S. Trivedi, and A. Rindos, "Performance Modeling of Hyperledger Fabric (Permissioned Blockchain Network)," IEEE 17th Int. Symp. on Network Computing and Applications, pp. 1-8, 2018.
[3] M. Walport, "Distributed Ledger Technology: Beyond Block Chain," UK Government Office for Science, Tech. Rep., 2016.
[4] S. Davidson, P. D. Filippi, and J. Potts, "Disrupting Governance: The New Institutional Economics of Distributed Ledger Technology," SSRN Electronic Journal, pp. 1-27, 2016.
[5] S. Chishti and J. Barberis, "The FinTech Book: The Financial Technology Handbook for Investors, Entrepreneurs and Visionaries," John Wiley & Sons, 2016.
[6] J. Cunningham and J. Ainsworth, "Enabling Patient Control of Personal Electronic Health Records Through Distributed Ledger Technology," Stud Health Technol Inform, vol. 245, pp. 45-48, 2018.
[7] D. Genkin, D. Papadopoulos, and C. Papamanthou, "Privacy in Decentralized Cryptocurrencies," Communications of the ACM, vol. 61, no. 6, pp. 78-88, 2018.
[8] I. Kocsis, et al., "Towards Performance Modeling of Hyperledger Fabric," International IBM Cloud Academy Conf., 2017.
TELKOMNIKA Telecommun Comput El Control: Benchmark and comparison between Hyperledger and MySQL (Onno W. Purbo)
[9] D. Chays and Y. Deng, “Demonstration of AGENDA Tool Set for Testing Relational Database Applications,” Proc.
of 25th Int. Conf. on Software Engineering, pp. 802-803, 2003.
[10] R. M. R. Yasaweerasinghelage, M. Staples and I. Weber. “Predicting Latency of Blockchain-Based Systems Using
Architectural Modelling and Simulation,” IEEE Int. Conf. on Software Architecture, pp. 253-256, 2017.
[11] M. Rauchs, et al., “Distributed Ledger Technology Systems: A Conceptual Framework,” SSRN Electronic
Journal, 2018.
[12] Q. Nasir, I. A. Qasse, M. A. Talib, and A. B. Nassif, “Performance Analysis of Hyperledger Fabric Platforms,”
Security and Communication Networks, vol. 2018, pp. 1-15, 2018.
[13] A. O’Dowd, “Hands-On Blockchain with Hyperledger: Building Decentralized Applications with Hyperledger
Fabric and Composer,” Packt Publishing, 2018.
[14] A. Baliga, “Performance Characterization of Hyperledger Fabric,” Crypto Valley Conference on Blockchain
Technology, pp. 65-74, 2018.
[15] B. White, et al., “An Integrated Experimental Environment for Distributed Systems and Networks,” Proc. of the 5th
symposium on Operating systems design and implementation. Boston, Massachusetts, vol. 36, pp. 255-270, 2002.
[16] E. Buchman, “Tendermint: Byzantine Fault Tolerance in the Age of Blockchains,” Department of Engineering
Systems and Computing, University of Guelph, 2016.
[17] V. Dhillon, D. Metcalf, and M. Hooper, “The Hyperledger Project,” In Blockchain Enabled Applications, Apress,
Berkeley, pp. 139-149, 2017.
[18] S. Nakamoto, “Bitcoin: A Peer-to-peer Electronic Cash System,” BITCOIN, pp. 1-9. 2008.
[19] C. Cachin, “Architecture of the Hyperledger Blockchain Fabric,” In Workshop on Distributed Cryptocurrencies and
consensus ledgers, vol. 310, pp. 4-11, 2016.
[20] C. Decker and R. Wattenhofer, “Information Propagation in the Bitcoin Network,” 13th IEEE International
Conference on Peer-to-Peer Computing, pp. 1-10, 2013.
[21] J. Cunningham and J. Ainsworth, “Enabling Patient Control of Personal Electronic Health Records Through
Distributed Ledger Technology,” Stud Health Technol Inform, vol. 245, pp. 45-48, 2018.
[22] R. Bolze, et al., “Grid'5000: A Large Scale and Highly Reconfigurable Experimental Grid Testbed,”
The International Journal of High Performance Computing Applications, vol. 4, pp. 481-494, 2006.
[23] K. Croman, et al., “On Scaling Decentralized Blockchains,” In International Conference on Financial Cryptography
and Data Security, pp. 106-125, 2016.
[24] R. Maull, et al., “Distributed Ledger Technology: Applications and Implications,” Strategic Change, vol. 26, no. 5,
pp. 481-489, 2017.
[25] N. Papadis, et al., “Stochastic Models and Wide-Area Network Measurements for Blockchain Design and Analysis,”
IEEE Conf. on Comp. Comm, pp. 2546-2554, 2018.
[26] P. Sajana, M. Sindhu, and M. Sethumadhavan, “On Blockchain Application: Hyperledger Fabric and Ethereum,”
International Journal of Pure and Applied Mathematics, vol. 118, no. 18, pp. 2965-2970, 2018.
[27] S. Pongnumkul, C. Siripanpornchana and S. Thajchayapong, “Performance Analysis of Private Blockchain Platforms
in Varying Workloads,” 26th Int. Conf. on Comp. Commu. and Net, pp. 1-6, 2017.
[28] P. Thakkar, “Performance Benchmarking and Optimizing Hyperledger Fabric Blockchain Platform,” IEEE 26th Int.
Symp. on Modeling, Analysis, and Simulation of Comp. and Telecomm. Sys., pp. 264-276, 2018.