This document describes the design and implementation of a trade finance application built on the Hyperledger Fabric permissioned blockchain platform. It discusses the architecture of blockchain-based applications in general and this trade finance application specifically. Key aspects covered include identifying different types of software connectors (linkage, arbitrator, event, adaptor) that are important building blocks in the architecture. The trade finance application uses connectors like the blockchain facade connector and block/transaction event connector to interface between layers and handle asynchronous event propagation. Overall the document aims to provide insights into architectural considerations and best practices for developing blockchain-based applications.
Cloud Security and Data Integrity with Client Accountability Framework (IDES Editor)
Cloud-based services provide efficient and seamless ways for data sharing across the cloud. The fact that data owners no longer possess their data makes it very difficult to assure data confidentiality and to enable secure data sharing in the cloud. Despite all its advantages, this remains a major limitation that acts as a barrier to the wider deployment of cloud-based services. One possible way of ensuring trust in this respect is the introduction of an accountability feature in the cloud computing scenario. The cloud framework requires the promotion of distributed accountability for such a dynamic environment [1]. Some works suggest an accountable framework that ensures distributed accountability for data sharing by generating only a log of data access, without any embedded feedback mechanism for owner permission towards data protection [2]. The proposed system is an enhanced client accountability framework which provides additional client-side verification for each access, towards enhanced security of data. The integrity of the data residing with the cloud service provider is also maintained by secured outsourcing. In addition, JAR (Java Archive) files are authenticated to ensure file protection and to maintain a safer environment for data sharing. Analysis of the framework's various functionalities demonstrates both its accountability and security features in an efficient manner.
Distributed reflection denial of service attack: A critical review (IJECEIAES)
As the world becomes increasingly connected, the number of users grows exponentially and "things" go online, the prospect of cyberspace becoming a significant target for cybercriminals is a reality. Any host or device that is exposed on the internet is a prime target for cyberattacks. Denial-of-service (DoS) attacks account for the majority of these cyberattacks. Although various solutions have been proposed by researchers to mitigate this issue, cybercriminals always adapt their attack approach to circumvent countermeasures. One of the modified DoS attacks is known as the distributed reflection denial-of-service (DRDoS) attack. This type of attack is considered a more severe variant of the DoS attack and can be conducted over the transmission control protocol (TCP) and the user datagram protocol (UDP). However, the attack is not effective over TCP because the three-way handshake prevents it from passing through the network layer to the upper layers of the network stack. UDP, on the other hand, is a connectionless protocol, so most DRDoS attacks pass through UDP. This study aims to examine and identify the differences between TCP-based and UDP-based DRDoS attacks.
Named Data Networking (NDN) is a recently designed Internet architecture that uses data names instead of locations, making a fundamental change in the abstraction of network services from "delivering packets to specific destinations" to "retrieving data with specific names". This fundamental change creates new opportunities and intellectual challenges in all areas, especially network routing and communication, communication security, and privacy. The focus of this dissertation is on the forwarding plane introduced by NDN. Communication in NDN is done by exchanging Interest and Data packets.
A Critical Survey on Privacy Prevailing in Mobile Cloud Computing: Challenges... (Rida Qayyum)
With the explosive growth of mobile applications and the widespread practice of cloud computing, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. But privacy is the main concern for a mobile user in the modern era. In the current study, we address the privacy challenges faced by mobile users while outsourcing their data to a service provider for storage and processing. A security-conscious mobile user needs to protect fundamental privacy factors such as personal data, real identity, current location and the actual query sent to the cloud vendor's server while availing different cloud services. Under these privacy metrics, we evaluate the existing approaches that confront the privacy challenge in mobile cloud computing. The primary focus of this study is to present a critical survey of recent privacy protection techniques. Towards that objective, the current study conducts a comparative analysis of these state-of-the-art methods with their strong points, privacy level and scalability. After analysis, this paper suggests that the pseudo-random permutation method could be a promising solution for preserving user personal information and data query privacy in MCC more efficiently. Primarily, the purpose of the survey is to motivate further advancement of the suggested method. Furthermore, we present future research directions in the mobile cloud computing paradigm.
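The pseudo-random permutation idea suggested by that survey can be illustrated with a toy keyed Feistel construction: a secret-keyed, collision-free and invertible mapping that masks record identifiers or query indices. This is only an illustrative sketch under assumed parameters (32-bit values, 4 rounds, a SHA-256 round function), not the paper's actual scheme.

```python
import hashlib

def _round(value: int, key: bytes, i: int) -> int:
    # Round function: hash the 16-bit half-block with the key and round number.
    data = key + i.to_bytes(1, "big") + value.to_bytes(2, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

def prp_encrypt(x: int, key: bytes, rounds: int = 4) -> int:
    # 4-round Feistel network over 32-bit values: a keyed pseudo-random
    # permutation, so distinct inputs always map to distinct outputs.
    left, right = x >> 16, x & 0xFFFF
    for i in range(rounds):
        left, right = right, left ^ _round(right, key, i)
    return (left << 16) | right

def prp_decrypt(y: int, key: bytes, rounds: int = 4) -> int:
    # Run the rounds in reverse; only the key holder can unmask values.
    left, right = y >> 16, y & 0xFFFF
    for i in reversed(range(rounds)):
        left, right = right ^ _round(left, key, i), left
    return (left << 16) | right

key = b"owner-secret"               # hypothetical user-held key
ids = [1, 2, 3, 1000]
masked = [prp_encrypt(i, key) for i in ids]
assert len(set(masked)) == len(ids)                   # permutation: no collisions
assert [prp_decrypt(m, key) for m in masked] == ids   # invertible with the key
```

Because the mapping is a permutation rather than a plain hash, the cloud provider sees only masked identifiers while the user can still recover the originals.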
Enhanced security framework to ensure data security in cloud using security b... (eSAT Journals)
Abstract: Data security and access control are challenging research problems in cloud computing. Cloud service users upload their private and confidential data to the cloud. As data is transferred between server and client, it must be protected from unauthorized entries into the server by authenticating users and giving the data high security priority. Experts therefore always recommend using different passwords for different logins. No ordinary person can possibly follow that advice and memorize all their usernames and passwords; that is where password managers come in. The purpose of this paper is to secure data from unauthorized persons using the Security Blanket algorithm.
Cloud computing is the technology which enables obtaining resources such as services, software and hardware over the internet. With cloud storage, users can store their data remotely and enjoy on-demand services and applications from the configurable resources. Cloud data storage has many benefits over local data storage. Users should be able to use cloud storage as if it were local, without worrying about the need to verify its integrity. The problem lies in ensuring the security and integrity of users' data. So here, public auditability for cloud storage is proposed, whereby users can resort to a third-party auditor (TPA) to check the integrity of their data. This paper presents the various privacy issues that arise while the user's data is stored in cloud storage and audited by the TPA. Without appropriate security and privacy solutions designed for clouds, this computing paradigm could become a big failure. A privacy-preserving public auditing scheme using a ring signature process is given for a secure cloud storage system. The paper analyses various techniques to solve these issues and to provide privacy and security for data in the cloud.
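The core auditing idea above can be sketched as a toy challenge: the owner keeps a digest of the file before outsourcing it, and an auditor later re-checks the stored copy against that digest. This is purely illustrative; it downloads the whole file, whereas real PDP and ring-signature schemes verify a short cryptographic proof without retrieving the data.

```python
import hashlib
import os

def make_tag(data: bytes) -> str:
    # Owner computes a digest of the file before outsourcing it.
    return hashlib.sha256(data).hexdigest()

class CloudServer:
    """Toy stand-in for the cloud storage provider."""
    def __init__(self, data: bytes):
        self._data = data
    def retrieve(self) -> bytes:
        return self._data
    def corrupt(self) -> None:
        # Simulate silent data corruption by flipping one bit.
        self._data = self._data[:-1] + bytes([self._data[-1] ^ 1])

def tpa_audit(server: CloudServer, tag: str) -> bool:
    # The auditor re-hashes the retrieved data and compares it with
    # the owner's tag; it never needs the owner's secrets.
    return hashlib.sha256(server.retrieve()).hexdigest() == tag

data = os.urandom(1024)
tag = make_tag(data)
server = CloudServer(data)
print(tpa_audit(server, tag))   # True: stored data is intact
server.corrupt()
print(tpa_audit(server, tag))   # False: tampering is detected
```

The point of the paper's scheme is to get the same detection guarantee with constant-size proofs and without revealing the data to the auditor.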
Performance Analysis of Internet of Things Protocols Based Fog/Cloud over Hig... (Istabraq M. Al-Joboury)
The Internet of Things (IoT) is becoming the future of a global data field in which embedded devices communicate with each other, exchange data and make decisions through the Internet. IoT could improve the quality of life in smart cities, but a massive amount of data from different smart devices could slow down or crash database systems. In addition, transferring IoT data to the Cloud for monitoring and feedback generation leads to high delay at the infrastructure level. Fog Computing can help by offering services closer to edge devices. In this paper, we propose an efficient system architecture to mitigate the problem of delay. We provide performance analysis of response time, throughput and packet loss for the MQTT (Message Queue Telemetry Transport) and HTTP (Hyper Text Transfer Protocol) protocols on Cloud- or Fog-based servers, with a large volume of data from an emulated traffic generator working alongside one real sensor. We implement both protocols in the same architecture, with low-cost embedded devices connected to local and Cloud servers on different platforms. The results show that HTTP response time is 12.1 and 4.76 times higher than MQTT for Fog and Cloud servers located in the same geographical area as the sensors, respectively. The worst performance is observed when the Cloud is public and outside the country region. The throughput results show that MQTT has the capability to carry the data within the available bandwidth and with the lowest percentage of packet loss. We also show that the proposed Fog architecture is an efficient way to reduce latency and enhance performance in Cloud-based IoT.
Authenticated and unrestricted auditing of big data space on cloud through v... (IJMER)
Cloud unlocks a new era in information technology, with the capability of providing customers with a variety of scalable and flexible services. Cloud provides these services through a prepaid system, which helps customers cut down on large investments in IT hardware and other infrastructure. However, from the Cloud viewpoint, customers do not have control over their respective data, so data security is a big issue in using a Cloud service. Present work shows that data auditing can be done by a trusted third-party agent known as an auditor, who can verify the integrity of the data without having ownership of the actual data. This approach has several disadvantages. One of them is the absence of a required verification procedure between the auditor and the service provider, which means any person can ask for verification of a file, putting the auditing at certain risk. Also, in the existing scheme, data updates can be done only as coarse-granular updates, i.e. on whole blocks of uneven size, resulting in repeated communication and updating of the auditor for a whole file block, causing higher communication costs and requiring more storage space. In this paper, the emphasis is on giving a proper breakdown of the types of fixed-granular updates and putting forward a design capable of maintaining authenticated and unrestricted auditing. Based on this system, there is also an approach for remarkably decreasing the communication costs of auditing small updates.
Improve HLA based Encryption Process using fixed Size Aggregate Key generation (Editor IJMTER)
Cloud computing is an innovative idea for IT industries which provides several services to users. In cloud computing, secure authentication and data integrity are major challenges due to internal and external threats. Various techniques are used to improve data security over the cloud. MAC-based authentication is one of them, but it suffers from undesirable systematic demerits: bounded usage and insecure verification, which may pose additional online load to users in a public auditing setting. Reliable and secure auditing is also challenging in the cloud. Existing cloud audit systems are based on the aggregate-key HLA algorithm. This algorithm relies on variable-size aggregate key generation, which encounters security issues at the decryption level. The current scheme generates a long decryption key, which leads to a space complexity problem. To overcome these issues, we improve the HLA algorithm with improved aggregate key generation based on a fixed key size. The improved algorithm generates a constant-size aggregate key, which overcomes the problems of key sharing, security issues and space complexity.
Using blockchain to get ahead of the game: Creating trust and driving operati... (Accenture Insurance)
The rise of blockchain promises to bring disruption to commercial insurance by fundamentally reshaping principles and processes that have governed the industry since the 17th century. Blockchain offers a more efficient alternative to the processes the insurance industry developed as an answer to the absence of mutual trust between affected parties and a lack of end-to-end transaction transparency.
In this report we address how blockchain can create trust and drive operational excellence, and we assess its wider implications for commercial insurance brokers.
computerweekly.com 17-23 September 2019 16W hen people int.docx (mccormicknadine86)
computerweekly.com, 17-23 September 2019

Inside blockchain and its various applications
Bob Tarzey explores the technology around blockchain shaping how businesses use data
BUYER'S GUIDE TO BLOCKCHAIN | PART 2 OF 3

When people interact with each other, for example via financial transactions, sharing legal documents or trading through supply chains, they need a high level of confidence that the data recording their interaction is accurate and true.

A distributed ledger makes it possible to build applications where multiple parties can execute transactions online without the need to trust a central authority or indeed each other.

Over the past few years, the number of use cases for distributed ledgers, and their more specialised form, blockchains, has been increasing, as has the technology to support the underlying infrastructure and build applications on top of it.

With a distributed ledger, every user has their own full, or in some cases partial, copy of the database, referred to as a node, which can be a physical device, a virtual machine or a software container.

Each node runs the relevant software to provide the infrastructure management and the relevant application, including the ability to complete "smart contracts" that negotiate the direct exchange of assets between participating nodes.

Consensus

For a transaction to proceed, all nodes must verify a transaction and agree its order on the ledger. Doing so is termed "consensus", which is necessary, for example, to avoid double counting or overspending when it comes to financial assets.

Consensus involves four steps, from the transaction being initiated to it being committed on all nodes with a timestamp providing a unique cryptographic signature. These steps can be completed in seconds or minutes, depending on the technology.

Blockchains are distinguished from other distributed ledgers in being updated by adding blocks of new transactions to create an immutable, tamper-proof log of sensitive activity.

The right to write blocks may require proof-of-work, which can be time and resource intensive, the aim being to prevent, for example, mass updates by bots.

Nomenclature has become confusing as the two terms, distributed ledger and blo ...
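The tamper-proof property described above comes from each block embedding a hash of its predecessor, so altering any historical block breaks every later link. A minimal, hypothetical sketch (not any particular ledger's implementation):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically (sorted keys).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block records the hash of the previous block,
    # chaining all history together.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain: list) -> bool:
    # Re-derive every link; any mismatch means the log was tampered with.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = []
append_block(chain, ["alice pays bob 5"])
append_block(chain, ["bob pays carol 2"])
print(verify_chain(chain))                           # True
chain[0]["transactions"][0] = "alice pays bob 500"   # rewrite history
print(verify_chain(chain))                           # False
```

Changing the first block leaves its own prev_hash intact but invalidates the link stored in the second block, which is exactly why rewriting an old block requires re-writing every block after it.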
Blockchain in Banking: A Measured Approach (Cognizant)
Here's our foundational view on what the financial services industry needs to consider as organizations move from ideation to experimentation to pilot deployments of blockchain.
How Blockchain Can Reinvigorate Facultative Reinsurance Contract Management (Cognizant)
Blockchain is ideally suited for streamlining and securing the cumbersome facultative reinsurance contract management process by offering trust and transparency and all the benefits of smart contracts.
The Blockchain Imperative: The Next Challenge for P&C Carriers (Cognizant)
Blockchain, a universal ledger and data-storage platform, can help P&C carriers address some of their most critical business challenges and significantly alter the way they operate. Although the technology has yet to achieve widespread adoption in the insurance space, the time is ripe for carriers to begin thinking about, exploring and experimenting with blockchain.
All about Blockchain Technology and its applications in the Finance function (vinodavg)
Blockchain technology is a vast, distributed ledger, operating on millions of devices and capable of recording anything, with identical copies maintained on each of the network's computers. When a new transaction, or an edit to an existing transaction, comes in, a majority of the nodes within the blockchain network must generally execute algorithms to evaluate and verify the history and signature of the proposed transaction. If they come to a consensus that the history and signature are valid, the new transaction is accepted into the ledger; if a majority of nodes do not concede to the addition or modification of the ledger entry, it is denied and not added to the chain. All members can review previous entries and record new transactions. These are grouped into 'blocks', which then form part of a 'chain', thus leading to a 'blockchain'.
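The majority-acceptance rule described above can be sketched as a toy vote among node validators. Everything here (the validation rule, the node setup) is hypothetical and purely illustrative, not a real consensus protocol:

```python
from typing import Callable, Dict, List

Entry = Dict[str, object]
Validator = Callable[[Entry], bool]

def propose_entry(nodes: List[Validator], entry: Entry) -> bool:
    # Each node independently evaluates the proposed ledger entry;
    # it is accepted only if a strict majority of nodes concede.
    votes = sum(1 for validate in nodes if validate(entry))
    return votes > len(nodes) // 2

def make_honest_node() -> Validator:
    # Toy rule: accept entries with a positive amount and a signature field.
    return lambda e: e.get("amount", 0) > 0 and "signature" in e

# Four honest nodes plus one misbehaving node that accepts anything.
nodes: List[Validator] = [make_honest_node() for _ in range(4)]
nodes.append(lambda e: True)

print(propose_entry(nodes, {"amount": 10, "signature": "sig"}))  # True  (5/5 votes)
print(propose_entry(nodes, {"amount": -1, "signature": "sig"}))  # False (1/5 votes)
```

The second call shows why the majority rule matters: one faulty or malicious node voting to accept an invalid entry cannot get it onto the ledger on its own.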
Decrypting Insurance Broking through BlockchainCognizant
Blockchain technology could help brokers maximize their operational efficiencies by using smart contracts to automate key processes, freeing them to focus on value-added services that drive customer loyalty.
The Blockchain: Capital Markets Use Cases. @GreySparkUK
GreySpark Partners presenta un informe, el Blockchain: Capital Markets casos de uso, examinando cómo los bancos de inversión y otras empresas de los mercados financieros potencialmente podrían utilizar la tecnología distribuida libro mayor (DLT) en el futuro. El informe caracteriza a una amplia variedad de diferentes formas de aplicaciones blockchain siendo desarrollado por Fintech empresas de nueva creación a nivel mundial, y se analiza cómo estas aplicaciones podrían eventualmente ser utilizadas por los participantes en los mercados de capital como medio de sustitución de los sistemas de front y back-office existentes y procesos dentro de la buyside y la sellside.
A framework for improving the efficiency of the transparency in financial dom...Dr. C.V. Suresh Babu
National Web Conference on Challenges and Innovation in Engineering and Technology, NWCCIET 2021, organized by Ramco Institute of Technology, Tamil Nadu, India on 19th and 20th March 2021
Blockchain's Smart Contracts: Driving the Next Wave of Innovation Across Manu...Cognizant
By eliminating intermediaries and by enabling smart contracts with embedded, trusted business rules, blockchain offers extraordinary opportunities for manufacturing on every level of the supply chain. To profitably ride this wave of disruptive innovation, any stakeholder in the manufacturing value chain should be familiar with the basics and guidelines for proceeding.
Distributed Ledgers: Possibilities and Challenges in Capital Markets Applicat...Cognizant
Distributed ledgers - blockchain technology - stands to make numerous financial services activities more secure, autonomous, and efficient. Here's a walk-through of a range of potential use cases: IPO issuance, trade agreements and settlements, confirmations, etc. and a strategy for transition.
Our latest white paper, “Blockchain Technology and the Financial Services Market,” covers themes around:
Distributed ledger and blockchain are about to cause major business transformations in the financial services industry
Three of the most promising fields of application are payment transactions, trade finance and over-the-counter markets
Technical challenges and legal frameworks are currently a major obstacle
Many market participants are exploring ways of using blockchain, including established institutions and start-ups firms
Read the entire research report for expert insights and the full Infosys Consulting point-of-view!
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
truth, mutually agreed by the members in the network, and all information is available in real-time.
B. Conventional Trade Finance
In the conventional system, multiparty transactions require third-party intermediaries. The intermediaries that facilitate payment are the respective banks of the exporter and the importer. In this case, the trade arrangement is fulfilled by the trusted relationships between a bank and its client, and between the two banks. Such banks typically have international connections and reputations to maintain. Therefore, a commitment (or promise) by the importer's bank to make a payment to the exporter's bank is sufficient to trigger the process. The goods are dispatched by the exporter through a reputed international carrier after obtaining regulatory clearances from the exporting country's government. Proof of delivery to the carrier is sufficient to clear payment from the importer's bank to the exporter's bank, and such clearance is not contingent on the goods reaching their intended destination (it is assumed that the goods are insured against loss or damage in transit).
Traditional trade finance relies constantly on purely documentary evidence and suffers greatly from inherent inefficiencies in its processes. The risks inherent in transferring goods or making payments in the absence of trusted mediators inspired the involvement of banks and led to the creation of the letter of credit and the bill of lading.
Fig. 1. Conventional Trade Finance Model
C. Blockchain-based Trade Finance
In an ideal trade scenario, only the process of preparing and shipping the goods would take time. A blockchain, on the other hand, with its fast transaction commitments and assurance guarantees, opens possibilities that did not previously exist. As an example, we can introduce payment by instalments, which cannot be implemented in the conventional trade finance framework because there is no guaranteed way of knowing and sharing information about a shipment's progress. Such a variation would be deemed too risky, which is why payments are linked purely to documentary evidence. By getting all participants in a trade agreement onto a single blockchain implementing a common smart contract, we can provide a single shared source of truth that will minimize risk and simultaneously increase accountability.
Another key advantage is the increased trust in the trade finance processes, as all relevant documentary evidence, e.g. letters of credit and invoices, is stored on the blockchain. Hence, it is clearly visible to the network participants, and whatever has been recorded is irreversible [2].
Fig. 2. Blockchain-based Trade Finance Model
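The instalment variation described above can be sketched in plain JavaScript modelling the contract logic. This is an illustrative sketch, not actual Fabric chaincode; the milestone names and release percentages are invented for the example.

```javascript
// Payments released as the shared ledger records shipment milestones.
// Milestone names and percentages are illustrative assumptions.
const milestones = [
  { name: 'goods-dispatched', release: 0.3 },
  { name: 'customs-cleared',  release: 0.3 },
  { name: 'goods-delivered',  release: 0.4 },
];

function instalmentDue(recordedEvents, totalAmount) {
  // Every participant computes the same figure from the same shared
  // ledger state, so there is no dispute about what has been released.
  return milestones
    .filter((m) => recordedEvents.includes(m.name))
    .reduce((sum, m) => sum + m.release * totalAmount, 0);
}

// e.g. after dispatch only, 30% of a 1000-unit payment is due
const due = instalmentDue(['goods-dispatched'], 1000);
```

Because every node evaluates the same deterministic function over the same recorded events, the instalment schedule needs no intermediary to adjudicate it.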
III. APPLICATION ARCHITECTURE
Designing a system based on blockchain poses many challenges, as the designers may face difficulties due to the many variants and configurations the technology can offer [5]. Apart from implementation choices, performance is another area that must be carefully considered. Since a blockchain requires consensus on an agreed state among multiple participants when executing a transaction, a state-changing transaction may not perform as well as a similar transaction against a traditional database.
Xiwei Xu et al. have discussed a detailed architecture of the blockchain as a software connector [4]. This paper discusses the overall blockchain-based application architecture from the software connector perspective. The discussion will focus primarily on the following connector types: arbitrator, linkage, event, and adaptor [10].
A. The Nature of a Hyperledger Fabric Application
Hyperledger Fabric can be viewed as a distributed transaction processing system, with a staged pipeline of operations that may eventually result in a change to the state of the shared replicated ledger maintained by the network peers. A blockchain application is a collection of processes through which a user may submit transactions to, or read state from, a smart contract. Under the hood, a user request is channeled into the different stages of the transaction pipeline, and results are extracted to provide feedback at the end of the process.
An application developed with the smart contract at its core can be viewed as a transaction-processing database application with a set of views or a service API. However, every Hyperledger Fabric transaction is asynchronous, i.e. the result of the transaction will not be available in the same communication session in which it was submitted. This is because a transaction must go through consensus, which requires collective approval by the peers in the network and may take an unbounded amount of time to complete.
B. Application and Transaction Stages
The application and transaction stages can be broken down into the following:
• Instantiation of blockchain
• Initialization of peer network
• Installation of smart contract or chaincode
The first step in the creation of an application is the
instantiation of the blockchain, or the shared ledger itself.
An instance of a blockchain is referred to as a channel, and
therefore the first step in a blockchain application is the
creation of a channel and the bootstrapping of the network
ordering service with the channel's genesis block.
The next step is the initialization of the peer network, whereby all the peer nodes selected to run the application must join the channel, a process that allows each peer to maintain a copy of the ledger. Every peer joined to the channel possesses ledger commitment privileges and may participate in a gossip protocol in order to sync ledger state with the other peers. After the creation of the peer network comes the installation of the smart contract on that network. A subset of the peers joined to the channel will be selected to run the smart contract, i.e. they will possess endorsement privileges. The contract code will be deployed to these peers for subsequent operation. In Fabric parlance, the smart contract is also known as chaincode.
Once the chaincode has been installed on the endorsing peers, it will be initialized as per the logic embedded in it. At this point, the application is up and running. Now, transactions may be sent to the chaincode to either update the state of the ledger (invocations) or to read the ledger state (queries) for the lifetime of the application. The above description is best illustrated by the following diagram.
Fig. 3. The staged pipeline in the creation and operation of a
blockchain application
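The staged pipeline above can be sketched as a short orchestration routine. This is a hedged illustration, not the Fabric SDK's actual API: the stage functions are hypothetical stand-ins for channel creation, peer join, chaincode installation and instantiation (the real SDK calls are promise-based), and here they only record their execution order.

```javascript
// Sketch of the middleware-driven pipeline. Stage names are illustrative
// stand-ins for the corresponding Hyperledger Fabric SDK operations.
const stagesRun = [];
const stage = (name) => () => stagesRun.push(name);

const createChannel = stage('createChannel');          // bootstrap orderer with genesis block
const joinPeersToChannel = stage('joinPeers');         // peers join and sync the ledger
const installChaincode = stage('installChaincode');    // deploy contract to endorsing peers
const instantiateChaincode = stage('instantiate');     // run the chaincode's init logic

function bootstrapApplication() {
  // The stages must run strictly in this order: a channel must exist
  // before peers can join it, and chaincode can only be installed on
  // peers that have already joined the channel.
  createChannel();
  joinPeersToChannel();
  installChaincode();
  instantiateChaincode();
  return stagesRun;
}

const order = bootstrapApplication();
```

After the final stage completes, the application enters its steady state, in which only invocations and queries flow through the pipeline.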
C. Application Model and Architecture
The process of writing a Fabric application begins with chaincode, but we must make judicious decisions about how a client application should interface with that chaincode. How the assets of the chaincode, and the operations of the blockchain network running that chaincode, are exposed ought to be dealt with great care. Significant damage is possible if capabilities such as blockchain bootstrapping and configuration are exposed without restriction.
We propose a three-layer architecture as the standard for a Fabric application. This architecture is a logical view that assigns clear responsibilities to the layers of the application, as illustrated in the following diagram:
Fig. 4. Three-layer architecture of a hyperledger fabric application
At the lowest layer lies the smart contract that operates
directly on the shared ledger, which may be written using
one or more chaincode units. These chaincodes run on the
network peers, exposing a service API for invocations and
queries, and publishing event notifications of transaction
results, as well as configuration changes occurring on the
channel.
In the middle layer lie the functions that orchestrate the various stages of a blockchain application (see Figure 3). This layer plays the role of an arbitrator that streamlines operations and resolves any conflicts between the lowest (blockchain) and topmost (application) layers. Hyperledger Fabric provides a Node.js SDK to perform functions such as channel creation and joining, registration and enrolment of users, as well as chaincode operations.
At the topmost layer lies a user-facing application that exports a service API consisting mostly of application-specific capabilities, though administrative operations such as channel and chaincode operations may also be exposed for system administrators. We refer to this layer simply as the application. This layer will often consist of multiple application stacks tailored to the different participants.
This architecture is meant to serve purely as a guideline. Depending on the complexity of the application, both the number of layers and the verticals may vary. For a very simple application with a small number of capabilities, the middleware and application layers may be compressed into one.
IV. BLOCKCHAIN-BASED APPLICATION WITH SOFTWARE
CONNECTOR TYPES
Software connectors are the fundamental building blocks
of software interactions. A connector is an interaction
mechanism for the components. Connectors include pipes,
repositories, and sockets. For example, middleware can be
viewed as a connector between the components that use the
middleware [6]. Connectors in distributed systems are the
key elements to achieve system properties, such as
performance, reliability, security, etc. Connectors provide
interaction services, which are largely independent of the
functionality of the interacting components [11].
The connectors used can be broken down into two kinds: static and dynamic. Static connectors tie system components together and hold them in that state statically at compile time. In contrast, dynamic connectors allow interactions between system components dynamically at runtime.
We have identified a number of connectors that were applied in the application architecture. These key connectors play significant roles and provide a rich description of the invaluable interactions within the architecture. A summary of each connector, its dimensions, and its values is shown in the table below.
TABLE I. BLOCKCHAIN FAÇADE CONNECTOR

Connector Type   Dimension               Value
Arbitrator       Concurrency Weight      Heavy
Arbitrator       Fault Handling          Authoritative
Arbitrator       Authorization           Access Control List
Adaptor          Invocation Conversion   Translation
Adaptor          Invocation Conversion   Marshalling
Linkage          Binding                 Compile-time
Event            Synchronicity           Asynchronous
Event            Notification            Publish/subscribe
Each connector type and its role are described in more detail in the subsequent sections.
A. Linkage Connector
Linkage connectors are used to tie the system components together and enable the establishment of channels of communication and coordination. They do not necessarily contribute towards enhancing the system but merely serve to monitor, grow and repair it. In our scenario, linkage connectors describe the static dependency relationships among software modules, as shown below.
Fig. 5. Linkage connector showing module dependencies
B. Blockchain Façade Connector
The blockchain façade connector is a commonly used abstraction that allows an application system to access back-end systems, i.e. the blockchain system in our case. It acts as a façade for accessing the blockchain services. It dispatches incoming calls for blockchain system operations; the actual behaviour of this connector depends on the calling application and the requested operations. The façade connector implements the functionality of the middleware shown in Figure 4. The façade is composed of a routes API connector, an application API connector and a blockchain API connector, as illustrated in Figure 6 below.
Fig. 6. Blockchain façade connector
The façade connector is a higher-order connector that provides rich interaction among its contained connectors. The asynchronous nature of the blockchain's ledger-update transactions requires an arbitration layer between the chaincode and the client application. The façade connector performs arbitration of the interaction between the client application and the chaincode, and adaptation by dispatching calls to the routes API, the application API and finally the blockchain API.
In essence, the façade connector also serves to hide as much complexity as possible, allowing the client application to focus on transactions that impact the application rather than the details of the backend blockchain operations. Consider the complexity of a client application invoking a chaincode operation that updates the state of the ledger. As we learned earlier, an operation that changes the state of the ledger requires consensus. Since consensus takes an unbounded amount of time, the result of the operation is communicated back to the client asynchronously via event subscription.
C. Block and Transaction Event Connector
All of the state-changing operations of the ledger are asynchronous. As a result, the block and transaction event connector plays a significant role in allowing the result of invoking chaincode to be communicated to the client application. Two types of events can be generated by the event connector: block events and transaction events. Once the event connector learns about the occurrence of an event, it generates messages for all interested parties and yields control to the components for processing the events. The contents of the event include information such as the block ID, block number, transaction ID and timestamp.
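The shape of such a notification can be sketched as follows, assuming the raw block has already been decoded. The field names mirror the details listed above (block ID, block number, transaction IDs, timestamp) but are illustrative, not the Fabric SDK's exact schema.

```javascript
// Hypothetical payload an event connector might forward to listeners.
// All field names are illustrative assumptions.
function toBlockEventPayload(block) {
  return {
    blockNumber: block.header.number,
    blockId: block.header.dataHash,
    txIds: block.transactions.map((tx) => tx.txId),
    timestamp: block.receivedAt,
  };
}

// Example: a decoded block carrying one trade finance transaction.
const payload = toBlockEventPayload({
  header: { number: 42, dataHash: '9f2c' },
  transactions: [{ txId: 'tx-1001' }],
  receivedAt: '2019-01-01T00:00:00Z',
});
```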
We set up listeners to receive block and transaction events from the event connector as shown below:

var eventPromises = [];
eventhubs.forEach((eh) => {
  let txPromise = new Promise((resolve, reject) => {
    let handle = setTimeout(reject, 40000);
    // Registering block event listener
    …
    // Registering transaction event listener
    …
  });
  eventPromises.push(txPromise);
});
D. Event Adaptor Connector
The event details and the communication channel abstraction used to transmit the events determine the type of adaptation required at each layer. The blockchain API connector uses a different communication channel to transmit events than the application API connector. For example, the blockchain API connector requires an abstraction called EventEmitter to transmit events. In contrast, the application API connector uses a WebSocket to transmit events from the application API connector to the client application.
Blockchain events are very raw in their details, and some of those details may not be required by the next listener down the chain. Hence, adaptation is required to filter them out. As shown in Fig. 6, the façade connector provides two event adaptors. The first is the blockchain event adaptor, which receives raw blockchain events such as transaction and block events. These events are retransmitted to the application event connector through an EventEmitter object. Subsequently, the application event connector adapts them further by transmitting the event details through a WebSocket to the client application.
The first step in the adaptation model is to provide the EventEmitter as the first communication channel and simplify the API the listener uses to listen for events, as shown in the code snippet below:

const { EventEmitter } = require('events')

var blockchainEvents = new EventEmitter()
var on = blockchainEvents.on.bind(blockchainEvents)     // (1)
var emit = blockchainEvents.emit.bind(blockchainEvents) // (2)
module.exports.on = on
module.exports.emit = emit
For ease of use, we simplified the API for listeners by hiding the type of communication channel used to transmit the event, as shown in (1) and (2). For example, instead of calling ClientUtils.blockchainEvents.emit(), the client can simply issue the simplified call ClientUtils.emit(). Next, we set up the blockchain event adaptor to receive events from the blockchain and unmarshall the data associated with each event for the application API connector listener. The unmarshalling is part of the connector's adaptation process. In this scenario, the block data was unmarshalled (a block object created) from the byte streams passed down by the blockchain. The unmarshalling process is shown in (3).
There can be block events as well as transaction events. Registering a listener or callback for newly created block events is shown below:

const blockEventCb = (block) => {
  clearTimeout(handle)
  ClientUtils.emit('block', ClientUtils.unmarshall(block))  // (3)
}
eh.registerBlockEvent(blockEventCb)
Similarly, we can register a callback for listening to transaction events:

const transactionEventCb = (data, code) => {
  clearTimeout(handle)
  if (code !== 'VALID') {
    reject()
  } else {
    ClientUtils.emit('tx', data)
    resolve()
  }
}
// register against the ID of the submitted transaction
eh.registerTxEvent(transactionId, transactionEventCb)
The parameters passed to the listener include a handle to the transaction and a status code, which can be checked to see whether the chaincode invocation result was successfully committed to the ledger. Once the event has been received, the event listener is unregistered to free up system resources. Finally, the application event adaptor receives events from the application API connector and adapts them further by transmitting each event as is through a WebSocket to the client application:
const blockEventCb = (block) => {
  websocket.emit('block', block)
}
const txEventCb = (transaction) => {
  websocket.emit('tx', transaction)
}

// retransmit events to the client application
// after receiving them from the application
// event connector
ClientUtils.on('block', blockEventCb)
ClientUtils.on('tx', txEventCb)
Both block and transaction events can be directly consumed by client applications via a browser or mobile application.
V. DISCUSSION AND CONCLUSION
It is hoped that the software connectors discussed will benefit other software architecture practitioners in recognizing the challenges that might surface in building a blockchain-based application of this scale. We do not claim that our treatment of the software connectors is exhaustive. Other connectors could be discovered and hence shed new light towards a thorough understanding.
Many issues remain for future work. We intend to investigate non-functional system properties such as performance and scalability. Performance especially remains an elusive feature in blockchain networks. In an effort to achieve better performance, Hyperledger Fabric took the extreme approach of having orderer nodes create new blocks, leaving out decentralized consensus. We have yet to conduct any real performance test to measure the performance characteristics of this approach. This is something we intend to do in the near future.
We believe that the identification of primitive building blocks of software connectors, and the comprehensive discussion of the application of those connectors in a blockchain-based application architecture in this paper, form the necessary foundation for building similar architectures in the future.
ACKNOWLEDGMENT
The authors would like to thank MIMOS Berhad for allowing them to carry out this research. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of MIMOS Berhad.
REFERENCES
[1] E. Androulaki et al., "Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains," EuroSys 2018.
[2] A. V. Bogucharskov, I. E. Pokamestov, K. R. Adamova and Zh. N. Tropina, "Adoption of Blockchain Technology in Trade Finance," Nov. 2018.
[3] FinTech Futures, "Why blockchain could revolutionise trade finance documentation," June 2018 (https://www.fintechfutures.com/2018/06/why-blockchain-could-revolutionise-trade-finance-documentation/).
[4] Xiwei Xu, Cesare Pautasso, Liming Zhu, Vincent Gramoli, Alexander Ponomarev and Shiping Chen, "The Blockchain as a Software Connector," 2016.
[5] Xiwei Xu, Ingo Weber, Liming Zhu, Jan Bosch, Cesare Pautasso and Paul Rimba, "A Taxonomy of Blockchain-Based Systems for Architecture Design," April 2017.
[6] S. Omohundro, "Cryptocurrencies, smart contracts, and artificial intelligence," AI Matters, 1(2):19-21, Dec. 2014.
[7] M. Swan, Blockchain: Blueprint for a New Economy. O'Reilly, 2015.
[8] Fred B. Schneider, "Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial," ACM Computing Surveys, vol. 22, issue 4, Dec. 1990, pp. 299-319.
[9] Leslie Lamport, Robert Shostak and Marshall Pease, "The Byzantine Generals Problem," ACM Transactions on Programming Languages and Systems, vol. 4, Dec. 1982.
[10] Nikunj R. Mehta, Nenad Medvidovic and Sandeep Phadke, "Towards a Taxonomy of Software Connectors," 2000.
[11] R. N. Taylor, N. Medvidovic and E. M. Dashofy, Software Architecture: Foundations, Theory, and Practice. Wiley, 2009.