Parallel and Distributed Systems coursework: "Parallel and Distributed Computing: BOINC Grid Implementation" by Rodrigo Neves, Nuno Mestre, Francisco Machado, and João Lopes
This document presents an algorithm for imperceptibly embedding a DNA-encoded watermark into a color image for authentication purposes. It applies a multi-resolution discrete wavelet transform to decompose the image. The watermark, encoded into DNA nucleotides, is then embedded into the third-level wavelet coefficients through a quantization process. Specifically, the watermark nucleotides are complemented and used to quantize coefficients in the middle frequency band, modifying the coefficients. The watermarked image is reconstructed through inverse wavelet transform. Extraction reverses these steps to recover the watermark without the original image. The algorithm aims to balance imperceptibility and robustness through this wavelet-based, blind watermarking scheme.
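The quantization embedding described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: a plain list of floats stands in for the third-level middle-frequency DWT coefficients, and the step size Q, the 2-bits-per-nucleotide encoding, and the remainder layout are all assumed here.

```python
# Sketch of quantization-based watermark embedding into wavelet
# coefficients. Hypothetical parameters: a 1-D coefficient list stands
# in for the third-level middle-frequency band; Q is an assumed step.

NUCS = "ACGT"                       # 2 bits per nucleotide
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def encode_dna(data: bytes) -> str:
    """Encode bytes as a DNA string, 4 nucleotides per byte."""
    return "".join(NUCS[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def decode_dna(dna: str) -> bytes:
    out = []
    for i in range(0, len(dna), 4):
        b = 0
        for ch in dna[i:i + 4]:
            b = (b << 2) | NUCS.index(ch)
        out.append(b)
    return bytes(out)

def embed(coeffs, dna, Q=16.0):
    """Quantize each coefficient so its remainder mod Q encodes one
    complemented nucleotide (a value in 0..3)."""
    marked = list(coeffs)
    for i, ch in enumerate(dna):
        v = NUCS.index(COMPLEMENT[ch])
        marked[i] = Q * (marked[i] // Q) + (v + 0.5) * Q / 4
    return marked

def extract(marked, n, Q=16.0):
    """Blind extraction: recover nucleotides from the remainders alone,
    without the original image/coefficients."""
    dna = ""
    for c in marked[:n]:
        v = int((c % Q) // (Q / 4))
        dna += COMPLEMENT[NUCS[v]]       # undo the complement
    return dna
```

Because extraction reads only the remainder mod Q, the original coefficients are not needed, which is what makes the scheme blind.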
Cassandra framework: a service oriented distributed multimedia — João Gabriel Lima
This document describes the CASSANDRA framework, a distributed multimedia content analysis system. It uses a service-oriented architecture that allows individual analysis components to be integrated and upgraded easily. The system is modular, self-organizing, and real-time. It can dynamically distribute workloads across available devices. The framework allows for flexible integration of new analysis algorithms and coordination of existing algorithms from different domains.
Context-aware systems are extremely complex and heterogeneous. The need for middleware to bind their components together is well recognized, and many attempts to build middleware for context-aware systems have been made.
We give a general introduction to the evolution of middleware and then analyze the requirements and issues for context-aware middleware.
SECURITY FOR SOFTWARE-DEFINED (CLOUD, SDN AND NFV) INFRASTRUCTURES – ISSUES A... — csandit
This document discusses the security challenges of software-defined infrastructures including cloud, SDN, and NFV technologies. It outlines several issues such as insecure interfaces/APIs, malicious insiders, account hijacking, virtualization vulnerabilities, and service interruptions for cloud computing. For NFV, the key challenges discussed are hypervisor security issues that could allow attackers to access VMs and compromise the entire infrastructure. The document argues that these technologies introduce both traditional security risks as well as new technology-specific risks, and that a software-defined security approach is needed to address challenges across integrated cloud, SDN, and NFV platforms.
Kuncoro Wastuwibowo is the Vice Chair of IEEE Indonesia Section and has experience in multimedia services creation at Telkom Indonesia. He also served as Chairman of the IEEE Communications Society Indonesia Chapter from 2009-2011 and as Vice Chair from 2007-2008. He currently works in Senior Service Creation at the Telkom Indonesia Multimedia Division and can be contacted by email at kuncoro@computer.org or on Twitter at @kuncoro.
Martine Penilla Group is an intellectual property law firm specializing in patent prosecution. The firm's attorneys have technical degrees across various engineering fields including electrical, mechanical, chemical, materials science, computer science, physics, mathematics and chemistry. They have extensive experience drafting patent applications across a wide range of technologies including software, computer networking, databases, electronic circuitry and the internet. Based in Silicon Valley, the firm focuses on securing patent rights for clients through legal advice and patent application drafting.
The document summarizes a presentation on research challenges in networked systems. It discusses recommendations from an evaluation of ICT research in Norway, including better aligning research with industry needs. Looking back at topics from 2000, such as wireless sensor networks and voice over IP, it then turns to potential future areas: cloud computing, cyber-physical systems, smart grids, and security. The presentation concludes that security issues will remain important and that energy efficiency is a grand challenge, and calls for more experimental research, interdisciplinary collaboration, and evaluation to address the complexity of these emerging areas.
This document provides an overview of networking technologies including:
1. Networks connect computers across small and large distances using components like applications, computers, networking devices, and cabling. Common network locations include SOHO, branch offices, mobile users, and corporate offices.
2. Topologies define how devices are connected and include star, bus, ring, and mesh. Physical topologies describe the cabling, while logical topologies describe communication. Common examples are Ethernet using star physically but bus logically, and FDDI using dual ring physically and logically.
3. Network types include LANs for close connections using Ethernet, WANs for long-distance connections using services like Frame Relay, and MANs for metropolitan-area coverage between the two.
This document discusses network traffic monitoring using the Winpcap packet capturing tool. It begins with an introduction to enterprise network monitoring and requirements. It then provides an overview of Winpcap, including its architecture and how it works. Key aspects covered include the packet capture driver, Packet.dll, and WinPcap.dll libraries. The document also discusses related tools like Jpcap for Java packet capturing. It concludes with an overview of a sample network traffic monitoring application that implements packet capturing using Winpcap.
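Once a capture driver such as WinPcap delivers a raw frame, a monitoring application must decode its headers before it can report anything. A minimal standard-library sketch of the first decoding step, the Ethernet II header (this is generic packet decoding, not the WinPcap or Jpcap API):

```python
import struct

def parse_ethernet(frame: bytes):
    """Decode the 14-byte Ethernet II header that a capture driver
    such as WinPcap would hand to a monitoring application."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return {"dst": fmt(dst), "src": fmt(src),
            "ethertype": hex(ethertype), "payload": frame[14:]}

# A hand-built example frame: broadcast destination, a made-up source
# MAC, EtherType 0x0800 (IPv4), followed by a placeholder payload.
frame = bytes.fromhex("ffffffffffff" "020000000001" "0800") + b"ip-payload"
info = parse_ethernet(frame)
```

A real application would repeat this pattern inward: after the EtherType identifies IPv4, the same `struct.unpack` approach decodes the IP and transport headers.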
Hadoop World 2011: Security Considerations for Hadoop Deployments - Jeremy Gl... — Cloudera, Inc.
Security in a distributed environment is a growing concern for most industries. Few face security challenges like the Defense Community, who must balance complex security constraints with timeliness and accuracy. We propose to briefly discuss the security paradigms defined in DCID 6/3 by NSA for secure storage and access of data (the “Protection Level” system). In addition, we will describe the implications of each level on the Hadoop architecture and various patterns organizations can implement to meet these requirements within the Hadoop ecosystem. We conclude with our “wish list” of features essential to meet the federal security requirements.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... — ijceronline
This document summarizes mobile cloud computing (MCC), which refers to running applications and storing data on centralized servers in the cloud rather than locally on mobile devices. The key benefits of MCC include overcoming limitations of mobile devices like limited processing power, storage, and battery life. It discusses existing approaches like offloading processing to remote servers and using efficient encoding to reduce data transferred over wireless networks. The document also outlines the architecture of MCC involving a viewer component on mobile devices that acts as a remote display for applications running on cloud servers.
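The offloading decision the summary alludes to is often framed as a latency break-even test: offload only when shipping the input and computing remotely beats computing locally. A hedged sketch, where the device speed, server speed, and uplink bandwidth defaults are entirely hypothetical:

```python
def should_offload(instructions, data_bytes,
                   local_ips=1e9,         # device CPU, instructions/s (assumed)
                   cloud_ips=1e10,        # cloud server CPU (assumed)
                   bandwidth_bps=1e6):    # wireless uplink (assumed)
    """Classic break-even test: offload when transmission time plus
    remote execution time is below local execution time."""
    t_local = instructions / local_ips
    t_offload = data_bytes * 8 / bandwidth_bps + instructions / cloud_ips
    return t_offload < t_local
```

Compute-heavy tasks with small inputs favor offloading; data-heavy tasks with little computation favor local execution. An energy version of the same test substitutes per-instruction and per-bit energy costs for the times.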
1. The document describes a hybrid middleware for an RFID-based parking management system that combines publish-subscribe and group communication in overlay networks.
2. The hybrid middleware focuses its technology development on group communication over P2P networks. A group of peer nodes efficiently handles events from RFID readers and vehicle detectors for processing by services.
3. The simulation results showed the approach improved performance of the P2P network. The implementation provides a lower-cost model for building an electronic parking management system.
A GENERIC FRAMEWORK FOR DEVICE PAIRING IN UBIQUITOUS COMPUTING ENVIRONMENTS — IJNSA Journal
Secure device pairing has recently received significant attention from a wide community of academic and industrial researchers, and a plethora of schemes and protocols have been proposed that use various forms of out-of-band exchange to form an association between two unassociated devices. These protocols and schemes have different strengths and weaknesses – often in hardware requirements, resistance to various attacks, or usability in particular scenarios. From an ordinary user's point of view, the problem then becomes which scheme to choose, or which is best in a particular scenario. We advocate that in a world of modern heterogeneous devices and requirements, mechanisms are needed that allow automated selection of the best protocol without requiring the user to have in-depth knowledge of the minutiae of the underlying technologies. The main argument forming the basis of this research is that integrating a discovery mechanism and several pairing schemes into a single system is more efficient, from both a usability and a security point of view, through dynamic choice of pairing schemes. In pursuit of this, we propose a generic system for secure device pairing by demonstration of physical proximity. The contributions presented in this paper include the design and prototype implementation of the proposed framework, along with a novel Co-Location protocol.
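The automated-selection idea can be illustrated with a toy matcher: given what hardware each device has, pick the strongest scheme both can run. The scheme catalogue, its hardware requirements, and the security ranking below are invented for illustration and are not from the paper:

```python
# Hypothetical catalogue: each pairing scheme lists the hardware both
# devices must have, plus an assumed security ranking (higher = stronger).
SCHEMES = [
    {"name": "numeric-comparison", "needs": {"display", "keypad"}, "rank": 3},
    {"name": "camera-barcode",     "needs": {"display", "camera"}, "rank": 2},
    {"name": "button-press",       "needs": {"button"},            "rank": 1},
]

def select_scheme(dev_a: set, dev_b: set):
    """Pick the strongest pairing scheme whose hardware requirements
    are met by BOTH devices; return None if no scheme is usable."""
    usable = [s for s in SCHEMES
              if s["needs"] <= dev_a and s["needs"] <= dev_b]
    return max(usable, key=lambda s: s["rank"])["name"] if usable else None
```

In the paper's framing, the discovery mechanism would populate the capability sets automatically, so the user never has to reason about schemes at all.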
The document discusses the Computer Architecture Group at the University of A Coruña and its work over the last 10 years on performance and scalability of distributed enterprise applications. The group has been working to achieve scalable performance on high speed clusters, handling millions of messages per second, and reducing application runtime from weeks to hours. The group's research covers areas like cluster/cloud computing, Java communication middleware, GPUs, and GIS/visualization. It develops JavaFastComms products for high performance Java communication and offers professional services in performance analytics, computer engineering, and training.
The document proposes modifications to the AODV routing protocol to prevent denial of service attacks in mobile ad hoc networks. It describes how a malicious node can currently overload the network by flooding route requests. The proposed scheme limits the number of route requests a node can accept or forward to prevent this attack. It also blacklists nodes that exceed the route request limit to isolate misbehaving nodes. Simulations show the proposed approach reduces packet loss compared to the standard AODV protocol when under a denial of service attack.
The document discusses three ways to improve computer performance: working harder with faster hardware, working smarter with optimized algorithms, and getting help from multiple linked computers. It then provides examples of centralized versus distributed systems and defines peer-to-peer (P2P) computing as using distributed algorithms across networked computers. Finally, it introduces the concept of middleware enabling coordination between distributed applications across different locations and platforms.
This document describes a proposed software framework called SmartX that aims to provide advanced network security for the Windows operating system. SmartX seeks to overcome drawbacks of virtual private networks (VPNs) by reducing buffer copies and protocol overhead during network packet transmission. It uses a mutual identity algorithm for authentication between endpoints and 128-bit AES encryption of packets. The framework would reside in the Network Driver Interface Specification (NDIS) and modify packets before transmission to provide secure and efficient communication with reduced processing overhead compared to standard VPNs.
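The mutual-authentication idea can be illustrated with a standard-library challenge-response. Note the stand-in: this sketch uses HMAC-SHA256 because AES is not in the Python standard library, whereas SmartX specifies its own mutual identity algorithm plus 128-bit AES for the packet payloads:

```python
import hashlib
import hmac
import os

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the shared key for a given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(key_a: bytes, key_b: bytes) -> bool:
    """Each endpoint challenges the other with a fresh nonce; the
    association succeeds only if BOTH sides prove key knowledge."""
    ch_a, ch_b = os.urandom(16), os.urandom(16)   # fresh nonces
    ok_b = hmac.compare_digest(respond(key_b, ch_a), respond(key_a, ch_a))
    ok_a = hmac.compare_digest(respond(key_a, ch_b), respond(key_b, ch_b))
    return ok_a and ok_b
```

The constant-time `compare_digest` avoids leaking how many response bytes matched, a detail that matters wherever authentication happens below the application layer.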
Cloud computing: new challenge to the entire computer industry — Studying
This document discusses cloud computing and its architecture. It defines cloud computing as using internet and remote servers to maintain data and applications, allowing more efficient computing through centralized resources. The document outlines the three layers of cloud computing: Applications, Platforms, and Infrastructure. Applications are software delivered as a service. Platforms provide computing platforms and tools without managing underlying hardware. Infrastructure provides virtualized computer systems and resources as a utility service.
Enabling High Level Application Development In The Internet Of Things — Pankesh Patel
The Internet of Things (IoT) combines Wireless Sensor and Actuation Networks (WSANs), pervasive computing, and elements of the "traditional" Internet such as Web and database servers. This leads to the dual challenges of scale and heterogeneity in these systems, which comprise a large number of devices of different characteristics. In view of the above, developing IoT applications is challenging because it involves dealing with a wide range of related issues, such as lack of separation of concerns, the need for domain experts to write low-level code, and the lack of specialized domain-specific languages (DSLs). Existing software engineering approaches cover only a limited subset of the above-mentioned challenges.
In this work, we propose an application development process for the IoT that aims to comprehensively address the above challenges. We first present the semantic model of the IoT, based on which we identify the roles of the various stakeholders in the development process, viz., domain expert, software designer, application developer, device developer, and network manager, along with their skills and responsibilities. To aid them in their tasks, we propose a model-driven development approach which uses customized languages for each stage of the development process: Srijan Vocabulary Language (SVL) for specifying the domain vocabulary, Srijan Architecture Language (SAL) for specifying the architecture of the application, and Srijan Network Language (SNL) for expressing the properties of the network on which the application will execute; each is customized to the skill level and area of expertise of the relevant stakeholder. For the application developer specifying the internal details of each software component, we propose the use of a customized generated framework in a language such as Java. Our DSL-based approach is supported by code-generation and task-mapping techniques in an application development tool we have developed. Our initial evaluation based on two realistic scenarios shows that the use of our techniques/framework succeeds in improving productivity while developing IoT applications.
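The code-generation step in such a model-driven approach can be caricatured in a few lines: a vocabulary declaration is expanded into a framework skeleton that the application developer fills in. The vocabulary entry and the generated skeleton below are invented stand-ins; the actual SVL/SAL/SNL syntax and generated framework are the paper's, not shown here:

```python
def generate_component(name: str, measurements: list) -> str:
    """Expand a (hypothetical) vocabulary entry into a component
    skeleton: one handler stub per declared measurement."""
    lines = [f"class {name}:"]
    for m in measurements:
        lines += [f"    def on_{m}(self, value):",
                  "        raise NotImplementedError  # developer fills in",
                  ""]
    return "\n".join(lines)

# Example: a sensor vocabulary entry with two declared measurements.
skeleton = generate_component("TemperatureSensor", ["reading", "battery"])
```

The point of the pattern is the division of labour: the domain expert writes the declaration, the generator produces the scaffolding, and only the handler bodies require application code.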
Signaling for multimedia conferencing in stand alone mobile ad hoc networks — Alexander Decker
This document summarizes a research paper on signaling for multimedia conferencing in mobile ad hoc networks (MANETs). The paper proposes a novel cluster-based architecture where nodes that act as cluster heads are elected based on their capabilities. The cluster heads establish and manage multimedia conferences within their clusters. The paper discusses the requirements for multimedia conferencing signaling in MANETs, reviews existing solutions, and presents the cluster-based architecture and its implementation using SIP. It also evaluates the performance of the proposed architecture through simulations.
11. Signaling for multimedia conferencing in stand alone mobile ad hoc networks — Alexander Decker
This document summarizes a research paper about signaling for multimedia conferencing in mobile ad hoc networks (MANETs). The key points are:
1. MANETs pose challenges for signaling due to their infrastructureless and dynamic nature. The paper proposes a novel cluster-based signaling architecture to address these challenges.
2. In the proposed architecture, nodes dynamically form application-level clusters for conferencing. Cluster heads are elected based on capabilities, and clusters split based on size. This allows for scalability.
3. The paper implements the architecture using Session Initiation Protocol (SIP) and evaluates performance through simulation using OPNET. Results show the cluster-based approach meets requirements for MANET conferencing signaling.
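The capability-based head election and size-based split from point 2 can be sketched as follows; the capability scores and the size threshold are hypothetical values, not the paper's:

```python
def elect_head(nodes: dict) -> str:
    """Elect the most capable node as cluster head.  `nodes` maps a
    node id to an assumed scalar capability score."""
    return max(nodes, key=nodes.get)

def maybe_split(nodes: dict, max_size: int = 4):
    """Split a cluster that grows beyond max_size (assumed threshold),
    electing a head for each resulting cluster."""
    ids = sorted(nodes, key=nodes.get, reverse=True)   # most capable first
    clusters = [ids[i:i + max_size] for i in range(0, len(ids), max_size)]
    return [{"head": c[0], "members": c} for c in clusters]
```

Keeping each cluster under a fixed size bounds the signaling load on any one head, which is what gives the architecture its scalability.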
The document introduces MedPort, an integrated system that creates secure bridges of communication to simplify the exchange of healthcare information between disparate systems. It does this using existing infrastructure like portable object discovery and lightweight agents and connectors, reducing complexity and costs. The system is designed to be easy to deploy and use to help address challenges in healthcare communications and interoperability. It also aims to provide secure centralized management of resources and information exchange.
This document discusses new innovations from HP in software-defined networking (SDN). It introduces HP's Virtual Application Networks SDN controller, which provides an open and integrated hardware and software SDN solution. The controller supports OpenFlow and HP SDN applications through open APIs. It also notes that HP has expanded its OpenFlow-enabled switch portfolio to include 9 additional switch models, bringing the total to 25 OpenFlow switches and over 15 million OpenFlow ports.
1. The document discusses the journey to cloud computing through three phases - classic data center, virtual data center, and cloud. In a classic data center, components like compute, applications, databases, storage, and networks are physically separate.
2. The next phase is a virtual data center, where virtualization allows multiple operating systems to run simultaneously on a single physical machine. Virtual machines act like physical machines but are logical files.
3. Security is a key challenge in cloud computing due to the use of virtualization. Various security measures like authentication, access control, and virtual machine theft prevention need to be implemented to secure data and systems in the cloud.
The real time publisher subscriber inter-process communication model for dist... — yancha1973
1) The document proposes a real-time publisher/subscriber model for inter-process communication in distributed real-time systems.
2) In the model, processes publish and subscribe to messages using logical handles called distribution tags, without knowledge of senders/receivers.
3) An application programming interface is presented that allows processes to create/destroy tags, publish/receive messages, and query senders/receivers.
4) The model is fault-tolerant, supporting applications like clock synchronization across nodes and allowing processes to be upgraded online.
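The tag-based indirection in points 2) and 3) can be sketched as a tiny in-process bus; the method names below are illustrative, not the paper's actual API:

```python
class TagBus:
    """Sketch of the publisher/subscriber model: processes address
    messages by distribution tag, never by sender or receiver identity,
    which is what lets processes be upgraded or replaced online."""

    def __init__(self):
        self.tags = {}

    def create_tag(self, tag: str):
        self.tags.setdefault(tag, [])

    def destroy_tag(self, tag: str):
        self.tags.pop(tag, None)

    def subscribe(self, tag: str, inbox: list):
        self.tags[tag].append(inbox)

    def publish(self, tag: str, message):
        for inbox in self.tags.get(tag, []):
            inbox.append(message)          # delivered without naming peers
```

A clock-synchronization message published to a "clock" tag reaches every subscribed node, and a restarted node simply re-subscribes; no publisher needs to know it was ever gone.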
Grid computing allows for the sharing and aggregation of distributed computing resources like computers, networks, databases and instruments. It provides a large virtual computing system for end users and applications. Key characteristics include facilitating solutions to large, complex problems across locations and organizations through integrated and collaborative use of heterogeneous resources. Popular applications include medical research, astronomy, climate modeling and more. Examples of operational grids discussed are TeraGrid, Pauá Grid Project and academic research projects like SETI@home.
This document discusses the evolution of grid computing from its origins in parallel and distributed computing. It outlines how early research in parallel programming and distributed systems in the 1980s-1990s led to the development of tools and systems like NOWs, DCE, and CORBA that enabled groups of machines to work together. However, issues around resource discovery, security, fault tolerance remained. A major demonstration called the I-WAY in 1995 helped crystallize the potential of distributed computing. This led to projects like Globus, Legion and Condor in the late 1990s that began developing middleware and services to more seamlessly integrate distributed resources, laying the foundation for modern grid computing.
The document summarizes the NetTop project, which aimed to allow commercial off-the-shelf (COTS) technology to be used safely in high assurance applications. The project developed an architecture using virtual machine monitors (VMMs) to encapsulate and constrain the end-user operating system. It identified the VMware virtualization product as suitable for this due to its efficient operation on x86 hardware. The initial capability developed was a secure remote access solution over the internet. The architecture suggests a near-term approach that can address user requirements like multi-network access and data transfer between isolated networks.
The document summarizes a presentation on research challenges in networked systems. It discusses recommendations from an evaluation of ICT research in Norway, including better aligning research with industry needs. It also looks at past research topics from 2000 and potential future areas like cloud computing, cyber-physical systems, smart grids, and security. The presentation concludes that security issues will remain important and that energy efficiency is a grand challenge, requiring interdisciplinary collaboration to address complexity.
This document provides an overview of networking technologies including:
1. Networks connect computers across small and large distances using components like applications, computers, networking devices, and cabling. Common network locations include SOHO, branch offices, mobile users, and corporate offices.
2. Topologies define how devices are connected and include star, bus, ring, and mesh. Physical topologies describe the cabling, while logical topologies describe communication. Common examples are Ethernet using star physically but bus logically, and FDDI using dual ring physically and logically.
3. Network types include LANs for close connections using Ethernet, WANs for long-distance connections using services like Frame Relay, and MANs for
This document discusses network traffic monitoring using the Winpcap packet capturing tool. It begins with an introduction to enterprise network monitoring and requirements. It then provides an overview of Winpcap, including its architecture and how it works. Key aspects covered include the packet capture driver, Packet.dll, and WinPcap.dll libraries. The document also discusses related tools like Jpcap for Java packet capturing. It concludes with an overview of a sample network traffic monitoring application that implements packet capturing using Winpcap.
Hadoop World 2011: Security Considerations for Hadoop Deployments - Jeremy Gl...Cloudera, Inc.
Security in a distributed environment is a growing concern for most industries. Few face security challenges like the Defense Community, who must balance complex security constraints with timeliness and accuracy. We propose to briefly discuss the security paradigms defined in DCID 6/3 by NSA for secure storage and access of data (the “Protection Level” system). In addition, we will describe the implications of each level on the Hadoop architecture and various patterns organizations can implement to meet these requirements within the Hadoop ecosystem. We conclude with our “wish list” of features essential to meet the federal security requirements.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document summarizes mobile cloud computing (MCC), which refers to running applications and storing data on centralized servers in the cloud rather than locally on mobile devices. The key benefits of MCC include overcoming limitations of mobile devices like limited processing power, storage, and battery life. It discusses existing approaches like offloading processing to remote servers and using efficient encoding to reduce data transferred over wireless networks. The document also outlines the architecture of MCC involving a viewer component on mobile devices that acts as a remote display for applications running on cloud servers.
1. The document describes a hybrid middleware for an RFID-based parking management system that combines publish-subscribe and group communication in overlay networks.
2. The hybrid middleware uses group communication relevant to P2P networks as the focus of its technology development. A group of peer nodes efficiently handle events from RFID readers and vehicle detectors to be processed by services.
3. The simulation results showed the approach improved performance of the P2P network. The implementation provides a lower-cost model for building an electronic parking management system.
A GENERIC FRAMEWORK FOR DEVICE PAIRING IN UBIQUITOUS COMPUTING ENVIRONMENTSIJNSA Journal
Recently secure device pairing has had significant attention from a wide community of academic as well as industrial researchers and a plethora of schemes and protocols have been proposed, which use various forms of out-of-band exchange to form an association between two unassociated devices. These protocols and schemes have different strengths and weaknesses – often in hardware requirements, strength against various attacks or usability in particular scenarios. From ordinary user’s point of view, the problem then becomes which to choose or which is the best possible scheme in a particular scenario. We advocate that in a world of modern heterogeneous devices and requirements, there is a need for mechanisms that allow automated selection of the best protocols without requiring the user to have an in-depth knowledge of the minutiae of the underlying technologies. Towards this, the main argument forming the basis of this research work is that the integration of a discovery mechanism and several pairing schemes into a single system is more efficient from a usability point of view as well as security point of view in terms of dynamic choice of pairing schemes. In pursuit of this, we have proposed a generic system for secure device pairing by demonstration of physical proximity. The contributions presented in this paper include the design and prototype implementation of the proposed framework along with a novel Co-Location protocol.
The document discusses the Computer Architecture Group at the University of A Coruña and its work over the last 10 years on performance and scalability of distributed enterprise applications. The group has been working to achieve scalable performance on high speed clusters, handling millions of messages per second, and reducing application runtime from weeks to hours. The group's research covers areas like cluster/cloud computing, Java communication middleware, GPUs, and GIS/visualization. It develops JavaFastComms products for high performance Java communication and offers professional services in performance analytics, computer engineering, and training.
The document proposes modifications to the AODV routing protocol to prevent denial of service attacks in mobile ad hoc networks. It describes how a malicious node can currently overload the network by flooding route requests. The proposed scheme limits the number of route requests a node can accept or forward to prevent this attack. It also blacklists nodes that exceed the route request limit to isolate misbehaving nodes. Simulations show the proposed approach reduces packet loss compared to the standard AODV protocol when under a denial of service attack.
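The rate-limiting and blacklisting scheme described in this summary can be sketched as a per-neighbor counter over a sliding window; the threshold, window length, and class names below are illustrative assumptions of ours, not values from the paper:

```python
import time
from collections import defaultdict

RREQ_LIMIT = 10   # max route requests accepted per window (illustrative value)
WINDOW = 1.0      # sliding window length in seconds (illustrative value)

class RreqFilter:
    """Per-neighbor RREQ rate limiter with blacklisting, as sketched above."""

    def __init__(self):
        self.history = defaultdict(list)   # node id -> timestamps of recent RREQs
        self.blacklist = set()

    def accept(self, node_id, now=None):
        """Return True if a route request from node_id should be forwarded."""
        now = time.monotonic() if now is None else now
        if node_id in self.blacklist:
            return False
        recent = [t for t in self.history[node_id] if now - t < WINDOW]
        recent.append(now)
        self.history[node_id] = recent
        if len(recent) > RREQ_LIMIT:
            self.blacklist.add(node_id)    # isolate the misbehaving node
            return False
        return True
```

A node exceeding the limit is dropped permanently here; the paper's actual recovery policy for blacklisted nodes is not described in the summary.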
The document discusses three ways to improve computer performance: working harder with faster hardware, working smarter with optimized algorithms, and getting help from multiple linked computers. It then provides examples of centralized versus distributed systems and defines peer-to-peer (P2P) computing as using distributed algorithms across networked computers. Finally, it introduces the concept of middleware enabling coordination between distributed applications across different locations and platforms.
This document describes a proposed software framework called SmartX that aims to provide advanced network security for the Windows operating system. SmartX seeks to overcome drawbacks of virtual private networks (VPNs) by reducing buffer copies and protocol overhead during network packet transmission. It uses a mutual identity algorithm for authentication between endpoints and 128-bit AES encryption of packets. The framework would reside in the Network Driver Interface Specification (NDIS) and modify packets before transmission to provide secure and efficient communication with reduced processing overhead compared to standard VPNs.
Cloud computing: new challenge to the entire computer industryStudying
This document discusses cloud computing and its architecture. It defines cloud computing as using internet and remote servers to maintain data and applications, allowing more efficient computing through centralized resources. The document outlines the three layers of cloud computing: Applications, Platforms, and Infrastructure. Applications are software delivered as a service. Platforms provide computing platforms and tools without managing underlying hardware. Infrastructure provides virtualized computer systems and resources as a utility service.
Enabling High Level Application Development In The Internet Of ThingsPankesh Patel
The Internet of Things (IoT) combines Wireless Sensor and Actuation Networks (WSANs), pervasive computing, and elements of the "traditional" Internet such as Web and database servers. This leads to the dual challenges of scale and heterogeneity in these systems, which comprise a large number of devices of different characteristics. In view of the above, developing IoT applications is challenging because it involves dealing with a wide range of related issues, such as lack of separation of concerns, the need for domain experts to write low-level code, and the lack of specialized domain-specific languages (DSLs). Existing software engineering approaches cover only a limited subset of the above-mentioned challenges.
In this work, we propose an application development process for the IoT that aims to comprehensively address the above challenges. We first present the semantic model of the IoT, based on which we identify the roles of the various stakeholders in the development process, viz., domain expert, software designer, application developer, device developer, and network manager, along with their skills and responsibilities. To aid them in their tasks, we propose a model-driven development approach which uses customized languages for each stage of the development process: Srijan Vocabulary Language (SVL) for specifying the domain vocabulary, Srijan Architecture Language (SAL) for specifying the architecture of the application, and Srijan Network Language (SNL) for expressing the properties of the network on which the application will execute; each customized to the skill level and area of expertise of the relevant stakeholder. For the application developer specifying the internal details of each software component, we propose the use of a customized generated framework using a language such as Java. Our DSL-based approach is supported by code generation and task-mapping techniques in an application development tool developed by us. Our initial evaluation based on two realistic scenarios shows that the use of our techniques/framework succeeds in improving productivity while developing IoT applications.
Signaling for multimedia conferencing in stand alone mobile ad hoc networksAlexander Decker
This document summarizes a research paper on signaling for multimedia conferencing in mobile ad hoc networks (MANETs). The paper proposes a novel cluster-based architecture where nodes that act as cluster heads are elected based on their capabilities. The cluster heads establish and manage multimedia conferences within their clusters. The paper discusses the requirements for multimedia conferencing signaling in MANETs, reviews existing solutions, and presents the cluster-based architecture and its implementation using SIP. It also evaluates the performance of the proposed architecture through simulations.
11.signaling for multimedia conferencing in stand alone mobile ad hoc networksAlexander Decker
This document summarizes a research paper about signaling for multimedia conferencing in mobile ad hoc networks (MANETs). The key points are:
1. MANETs pose challenges for signaling due to their infrastructureless and dynamic nature. The paper proposes a novel cluster-based signaling architecture to address these challenges.
2. In the proposed architecture, nodes dynamically form application-level clusters for conferencing. Cluster heads are elected based on capabilities, and clusters split based on size. This allows for scalability.
3. The paper implements the architecture using Session Initiation Protocol (SIP) and evaluates performance through simulation using OPNET. Results show the cluster-based approach meets requirements for MANET conferencing signaling.
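The capability-based head election and size-based cluster splitting described in point 2 can be sketched minimally; the capability field and the split threshold are our own illustrative assumptions, not details from the paper:

```python
MAX_CLUSTER_SIZE = 8   # illustrative split threshold, not taken from the paper

def elect_cluster_head(nodes):
    """Pick the node with the highest capability score as cluster head."""
    return max(nodes, key=lambda n: n["capability"])

def maybe_split(cluster):
    """Split a cluster in two halves when it exceeds the size limit."""
    if len(cluster) <= MAX_CLUSTER_SIZE:
        return [cluster]
    mid = len(cluster) // 2
    return [cluster[:mid], cluster[mid:]]
```

In the paper's architecture, election and splitting happen at the application level as nodes join a conference; this sketch only captures the two decision rules.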
The document introduces MedPort, an integrated system that creates secure bridges of communication to simplify the exchange of healthcare information between disparate systems. It does this using existing infrastructure like portable object discovery and lightweight agents and connectors, reducing complexity and costs. The system is designed to be easy to deploy and use to help address challenges in healthcare communications and interoperability. It also aims to provide secure centralized management of resources and information exchange.
This document discusses new innovations from HP in software-defined networking (SDN). It introduces HP's Virtual Application Networks SDN controller, which provides an open and integrated hardware and software SDN solution. The controller supports OpenFlow and HP SDN applications through open APIs. It also notes that HP has expanded its OpenFlow-enabled switch portfolio to include 9 additional switch models, bringing the total to 25 OpenFlow switches and over 15 million OpenFlow ports.
1. The document discusses the journey to cloud computing through three phases - classic data center, virtual data center, and cloud. In a classic data center, components like compute, applications, databases, storage, and networks are physically separate.
2. The next phase is a virtual data center, where virtualization allows multiple operating systems to run simultaneously on a single physical machine. Virtual machines act like physical machines but are logical files.
3. Security is a key challenge in cloud computing due to the use of virtualization. Various security measures like authentication, access control, and virtual machine theft prevention need to be implemented to secure data and systems in the cloud.
The real time publisher subscriber inter-process communication model for dist...yancha1973
1) The document proposes a real-time publisher/subscriber model for inter-process communication in distributed real-time systems.
2) In the model, processes publish and subscribe to messages using logical handles called distribution tags, without knowledge of senders/receivers.
3) An application programming interface is presented that allows processes to create/destroy tags, publish/receive messages, and query senders/receivers.
4) The model is fault-tolerant, supporting applications like clock synchronization across nodes and allowing processes to be upgraded online.
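A minimal sketch of the tag-based publish/subscribe idea in points 2) and 3), assuming an in-process message bus; the class and method names are ours, not the paper's API:

```python
from collections import defaultdict

class TagBus:
    """Tag-based publish/subscribe: processes address messages by
    distribution tag, never by sender or receiver identity."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # tag -> list of callbacks

    def create_tag(self, tag):
        self.subscribers.setdefault(tag, [])

    def destroy_tag(self, tag):
        self.subscribers.pop(tag, None)

    def subscribe(self, tag, callback):
        self.subscribers[tag].append(callback)

    def publish(self, tag, message):
        # Deliver to all current subscribers of the tag; the publisher
        # stays anonymous, matching the decoupling described above.
        for cb in list(self.subscribers.get(tag, [])):
            cb(message)
```

The real model targets distributed real-time systems, so delivery would cross node boundaries with timing guarantees; this sketch only shows the addressing-by-tag decoupling.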
Grid computing allows for the sharing and aggregation of distributed computing resources like computers, networks, databases and instruments. It provides a large virtual computing system for end users and applications. Key characteristics include facilitating solutions to large, complex problems across locations and organizations through integrated and collaborative use of heterogeneous resources. Popular applications include medical research, astronomy, climate modeling and more. Examples of operational grids discussed are TeraGrid, Pauá Grid Project and academic research projects like SETI@home.
This document discusses the evolution of grid computing from its origins in parallel and distributed computing. It outlines how early research in parallel programming and distributed systems in the 1980s-1990s led to the development of tools and systems like NOWs, DCE, and CORBA that enabled groups of machines to work together. However, issues around resource discovery, security, and fault tolerance remained. A major demonstration called the I-WAY in 1995 helped crystallize the potential of distributed computing. This led to projects like Globus, Legion and Condor in the late 1990s that began developing middleware and services to more seamlessly integrate distributed resources, laying the foundation for modern grid computing.
The document summarizes the NetTop project, which aimed to allow commercial off-the-shelf (COTS) technology to be used safely in high assurance applications. The project developed an architecture using virtual machine monitors (VMMs) to encapsulate and constrain the end-user operating system. It identified the VMware virtualization product as suitable for this due to its efficient operation on x86 hardware. The initial capability developed was a secure remote access solution over the internet. The architecture suggests a near-term approach that can address user requirements like multi-network access and data transfer between isolated networks.
The Indo-American Journal of Agricultural and Veterinary Sciences is an online international journal published quarterly. It is a peer-reviewed journal focused on disseminating high-quality original research, reviews, and short communications.
This document presents a technical seminar presentation on real-time rain detection and wiper control using embedded deep learning. The presentation contains an abstract, introduction, literature survey, description of the architecture, applications, advantages and disadvantages, and conclusion. It discusses the RAIN project, which developed key software components for reliable distributed systems using off-the-shelf hardware components. The RAIN architecture provides fault-tolerant communication, consistent data sharing, and ability to recover applications if nodes fail.
Department of Computer Application- Advanced computer network
Locations
Resource-Sharing Functions and Benefits
Resource sharing
Network User Applications
Characteristics of a Network
Foundation
Advanced Internetworking
Congestion Control & Resource Allocation
Network Security
Cryptographic Building Blocks
Grid computing involves connecting geographically distributed computers and resources into a single network to create a virtual supercomputer. Key aspects of grid computing include combining computational power from multiple computers, providing single sign-on access to distributed resources, and distributing programs across processes or computers. Popular software for implementing grids includes Globus, Condor, Legion, and NetSolve. Grids are useful for tasks like distributed supercomputing, high-throughput computing, and data-intensive computing.
This document summarizes a student's paper on using reinforcement learning for anomaly detection in software defined networks. The student aims to use machine learning techniques, specifically reinforcement learning, to make network traffic control decisions given certain network attack scenarios. The student's methodology involves using network statistics collected from an OpenFlow switch to define states for a reinforcement learning algorithm. The algorithm is deployed on the application plane of an SDN architecture and aims to identify anomalous traffic flows based on features like flow size and packet counts, then take actions through the controller to stop anomalous traffic from affecting the network. Initial testing of the approach showed potential for detecting ping flood and SYN flood attacks on the simulated network.
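The state/action loop described above can be sketched as a plain Q-learning update over discretized flow features; the thresholds, action names, and learning parameters below are illustrative assumptions, not the student's actual design:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount (illustrative values)
ACTIONS = ["allow", "block"]     # actions pushed down through the controller

q_table = defaultdict(float)     # (state, action) -> estimated value

def discretize(flow_size, packet_count):
    """Map raw OpenFlow statistics to a coarse state, as the summary suggests."""
    return ("big" if flow_size > 1_000_000 else "small",
            "bursty" if packet_count > 1000 else "normal")

def q_update(state, action, reward, next_state):
    """Standard Q-learning update; the reward would come from attack labels."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])
```

In an SDN deployment the agent would sit on the application plane, poll flow statistics from the switch, and issue "block" actions as flow rules via the controller.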
The document discusses Grid Computing, which uses distributed computing resources like computer clusters connected via high-speed networks to provide high computational power. It describes the Globus Toolkit, an open-source software toolkit that provides basic services for building Grids. Key components of the Globus Toolkit allow for resource management, security, data management, and communication. The document also discusses parallel programming using MPI (Message Passing Interface) and potential applications of Grid Computing such as distributed supercomputing, real-time systems, and data-intensive processing.
Department of Computer Application - Advanced computer network
Main office:
Remote locations
Branch offices:
Home offices:
Mobile users
Resource-Sharing Functions and Benefits
Network User Applications
Characteristics of a Network
Foundation
Advanced Internetworking
Congestion Control & Resource Allocation
Network Security
Symmetric Key Encryption
Cryptographic Building Blocks
Cloud computing Review over various scheduling algorithmsIJEEE
Cloud computing has taken an important position in the field of research as well as in government organisations. Cloud computing uses virtual network technology to provide computer resources to end users and customers. Due to the complex computing environment, the use of high-logic and task-scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job scheduling algorithms that are compatible and applicable in the cloud computing environment. In this paper, we review research work recently proposed on the basis of energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
This document provides lecture notes on cloud computing. It begins with an introduction to cloud computing, defining key terms like distributed computing, grid computing, parallel computing, and cloud characteristics. It then discusses the evolution of distributed computing platforms from mainframes to today's internet clouds. The document outlines common cloud computing models including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also covers essential cloud computing characteristics like elasticity, on-demand provisioning, and the benefits of cloud computing.
Crypto Mark Scheme for Fast Pollution Detection and Resistance over NetworkingIRJET Journal
The document proposes a new scheme called HOPVOTE to efficiently detect pollution attacks in networks. HOPVOTE is a packet HOP_VOTE technique that attaches an encrypted key to each packet to verify its integrity at each hop. It aims to rapidly identify polluters and misbehaving data/routes. The scheme uses keybit verification and cache-based recovery to identify and block nodes that drop or modify data, and recovers polluted data for retransmission. Simulation results show the scheme can effectively detect pollution attacks with low overhead. Future work will analyze routing performance under common network applications and flows.
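The per-hop integrity check sketched in this summary can be illustrated with a keyed hash; HOPVOTE's actual key distribution and tag format are not described here, so the shared key and tag layout below are assumptions of ours:

```python
import hashlib
import hmac

HOP_KEY = b"shared-hop-key"   # illustrative key; the paper's key scheme differs

def attach_tag(payload: bytes) -> bytes:
    """Attach a keyed integrity tag to a packet before forwarding."""
    return hmac.new(HOP_KEY, payload, hashlib.sha256).digest() + payload

def verify_hop(packet: bytes):
    """At each hop, check the tag; a mismatch flags the upstream node
    as a polluter and triggers recovery/retransmission."""
    tag, payload = packet[:32], packet[32:]
    expected = hmac.new(HOP_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected), payload
```

Any modification of the payload in transit invalidates the tag at the next hop, which is the property the scheme relies on to localize polluters quickly.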
This document discusses trends in embedded systems. It outlines that embedded systems integrate computer hardware and software onto a single microprocessor board. Key trends in embedded systems include systems-on-a-chip (SoC), wireless technology, multi-core processors, support for multiple languages, improved user interfaces, use of open source technologies, interoperability, automation, enhanced security, and reduced power consumption. SoCs integrate all system components onto a single chip to reduce power usage. Wireless connectivity and multi-core processors improve performance. Embedded systems also support multiple languages and have improved user interfaces.
A Case Study On Implementation Of Grid Computing To Academic InstitutionArlene Smith
This document discusses implementing a grid computing environment in an academic institution. It begins by outlining the vision, strategy, and roadmap for setting up a grid. The basic hardware, software, and human resource requirements are then described. Setting up grid applications is covered, including deploying code and data. An intra-grid topology is proposed as the initial design, with the ability to later expand to extra-grid and inter-grid models. Maintaining and upgrading the grid is also addressed. The goal is to provide a guideline for IT managers on exploring how computer clusters on campus could be linked and shared as a grid to tackle computational problems through coordinated resource sharing.
This document discusses distance evaluation using mobile agent technology. It begins by explaining client-server technology and some of its limitations for distance evaluation, including lack of support for subjective questions, delivery of dynamic content, and offline examinations. It then introduces mobile agent technology as an alternative that can address these limitations. Mobile agents are software processes that can migrate between machines to access resources and services. The document proposes using a mobile agent approach to design and implement a computer assisted testing and evaluation system for distance education that considers the full examination process from paper setting to evaluation. Key advantages of mobile agents for this application include reduced network traffic, asynchronous autonomous interaction, and support for heterogeneous environments.
This document provides an overview of distributed computing paradigms such as cloud computing, jungle computing, and fog computing. It defines distributed computing as utilizing multiple autonomous computers located across different areas to solve large problems. Cloud computing is described as internet-based computing using shared online resources and data storage. Jungle computing combines distributed systems for high performance, while fog computing extends cloud computing to network edges for low latency applications. The document discusses characteristics, architectures, advantages and disadvantages of these paradigms.
This document summarizes a research paper on wireless network intrinsic secrecy. The paper proposes a framework to model wireless networks with inherent secrecy given by physical properties like node spatial distribution, the wireless propagation medium, and total network interference. It develops metrics to measure network secrecy and evaluates how properties like path loss, fading, and interference can enhance secrecy. The analysis provides insights into exploiting inherent properties of wireless networks to improve the security and privacy of communications. Evaluation results show that interference can significantly benefit network secrecy, and they offer a deeper understanding of how natural network properties can be used to enhance it.
A STUDY ON JOB SCHEDULING IN CLOUD ENVIRONMENTpharmaindexing
This document discusses job scheduling algorithms in cloud computing environments. It begins with an introduction to cloud computing and job scheduling challenges. It then reviews several existing job scheduling algorithms that aim to minimize completion time and costs while improving performance and quality of service. These algorithms use approaches like genetic algorithms, priority queues, and workload prediction. The document also discusses issues like priority-based scheduling and balancing mixed workloads. Overall, the document analyzes the problem of job scheduling in clouds and surveys different proposed scheduling algorithms and their objectives.
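As one concrete instance of the completion-time-minimizing heuristics surveyed here, a sketch of the classic min-min scheduling heuristic; the function and data layout are ours, not from any specific surveyed paper:

```python
def min_min_schedule(jobs, n_machines):
    """Greedy min-min heuristic: repeatedly place the pending job that
    achieves the smallest completion time on its best machine.
    `jobs` holds estimated run times; returns (placement, makespan)."""
    finish = [0.0] * n_machines          # current finish time of each machine
    placement = {}
    remaining = list(jobs)
    while remaining:
        # For every pending job, consider every machine; pick the globally
        # smallest completion time.
        completion, machine, job = min(
            ((finish[m] + t, m, t) for t in remaining for m in range(n_machines)),
            key=lambda x: x[0])
        finish[machine] = completion
        placement.setdefault(machine, []).append(job)
        remaining.remove(job)
    return placement, max(finish)
```

Min-min tends to favour short jobs first and is a common baseline against which the genetic-algorithm and priority-queue schedulers mentioned above are compared.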
Similar to "Parallel and Distributed Computing: BOINC Grid Implementation" por Rodrigo Neves, Nuno Mestre, Francisco Machado e João Lopes (20)
The document provides a summary of GMail's main features, including filters, labels, contact management, conversation view, search, attachments, and customization. GMail offers a free, online service with large storage capacity and automated features for organizing email.
The document discusses JavaScript, describing it as:
- Created in 1995 by Netscape and based on the ECMAScript standard.
- A dynamic, weakly typed, object-oriented programming language that is often misunderstood.
- Used for client-side scripting of webpages as well as server-side and application scripting.
- Commonly disliked due to past bad practices, implementations, and browser differences, but these issues are improving over time.
Introduction to Renewable Energy project: "Study of implementing electrical power supply through photovoltaic solar energy for a kindergarten" by Rodrigo Neves, Henrique Evaristo, and Hugo Amorim
Parallel and Distributed Systems project presented on 17 June 2009: "Volunteer Computing With Boinc" by Diamantino Cruz and Ricardo Madeira
This document presents a study on implementing electrical power supply through photovoltaic solar energy for a kindergarten. It provides details on consumption scenarios, proposed solutions such as fixed and tracking solar panels, and conclusions on the feasibility of the project.
Parallel and Distributed Systems project presented on 17 June 2009: "Grid Computing: Boinc Overview" by Rodrigo Neves, Nuno Mestre, Francisco Machado, and João Lopes
Web services are software systems designed to support interoperable machine-to-machine interaction over a network. SOAP (Simple Object Access Protocol) is used for exchanging messages between computers and is necessary for web services to communicate, with XML being the format for SOAP messages. Axis is an implementation of SOAP that works over the Apache Tomcat HTTP server, which can be used to install and configure a web services environment to develop a simple Java web service and J2ME mobile client application.
The document describes the eSpeak speech synthesizer, including its history, characteristics, and uses. It provides natural and intelligible voices for interfaces and is used in computer applications and mobile devices.
The document introduces the TeX typesetting system and its most popular variant, LaTeX. TeX was created by Donald Knuth to make it easy to produce documents of high typographic quality. LaTeX was developed by Leslie Lamport as a user-friendly interface to TeX. The document discusses the advantages of TeX/LaTeX and how to use them.
The document introduces Squid and SquidGuard, a proxy server and content filter. It details the installation and configuration of the services, including ACL rules, reporting, and security. It also shows diagrams of the network before and after the implementation of the transparent proxy.
More from Núcleo de Electrónica e Informática da Universidade do Algarve (12)
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
"Parallel and Distributed Computing: BOINC Grid Implementation" by Rodrigo Neves, Nuno Mestre, Francisco Machado and João Lopes
Parallel and Distributed Computing
BOINC Grid Implementation

Rodrigo Neves, Nuno Mestre, Francisco Machado, and João Lopes
Abstract—With the development of communications and the Internet, distributed computing became an everyday reality for everyone rather than just for a limited group of IT specialists and investors. This development enabled the emergence of several new computational concepts, some even at the cost of non-consensus. This paper briefly surveys some of the current paradigms, like cloud and grid computing and the peer-to-peer and client-server methods. Afterwards, it goes deeper into a detailed review of the Public Resource Computing concept and the BOINC software implementation. Finally, a case study on the Extended BOINC System created by Søttrup and Pederson [18] presents a simple solution that integrates the concepts of PRC and Grid in order to provide simple, scalable, volunteer-based computing power to process QoS-dependent jobs.

Index Terms—parallel computing, distributed computing, grid computing, cloud computing, client server, peer to peer, boinc, extended boinc, public resource computing, quality of service
• R. Neves is with MIEET, Departamento de Engenharia Electrónica e Informática, Faculdade de Ciência e Tecnologia, Universidade do Algarve. E-mail: a25067@ualg.pt
• N. Mestre is with LEI, Departamento de Engenharia Electrónica e Informática, Faculdade de Ciência e Tecnologia, Universidade do Algarve. E-mail: a28997@ualg.pt
• F. Machado is with LEI, Departamento de Engenharia Electrónica e Informática, Faculdade de Ciência e Tecnologia, Universidade do Algarve. E-mail: a28994@ualg.pt
• J. Lopes is with LEI, Departamento de Engenharia Electrónica e Informática, Faculdade de Ciência e Tecnologia, Universidade do Algarve. E-mail: a27981@ualg.pt

1 INTRODUCTION

Distributed systems have been given many definitions throughout the years, but none of these has been consistent with the others. As Andrew S. Tanenbaum and Maarten van Steen suggested [1]:

"A distributed system is a collection of independent computers that appears to its users as a single coherent system."

The first part of this definition deals with hardware, while the second focuses especially on software.

Distributed systems must present a transparent workflow to the user regardless of the differences in hardware and communication methods throughout the connected computers. Other important characteristics of these systems are ease of scalability and high availability. Such systems should easily connect users to resources while hiding the fact that these may be distributed across a network. They should also be open and respect the available standards in order to facilitate future development and scalability and to ensure data and communication security. All these specific issues shall be addressed in further detail later in this paper.

In order to create the illusion of a single system both at high level, for users, and at low level, for network hardware, operating systems and programming languages, the term middleware has been created. It is a software layer that provides abstraction, setting up a uniform computational model for software developers to work on. One of the most widespread middleware packages available is the Common Object Request Broker Architecture (CORBA) [2].

2 IMPORTANT CONCEPTS

In order to better understand the remainder of this paper, it might be interesting to clear up some technical nomenclature in the Parallel and Distributed Systems environment.

• Servers: Typically high-powered workstations, minicomputers or mainframes that hold the information and provide it to clients through request handling;
• Clients: Computers or mainframes that request services from the servers;
• API: An Application Programming Interface is a tool used to provide the end user with an abstract set of operations. Its main attraction is the ability to hide from the user the implementation of such operations;
• SDK: A Software Development Kit is typically a set of development tools that allows a software engineer to create applications for a certain software package. Usually an SDK implements an API;
• Middleware: A software layer that conceals all the heterogeneity in a system in order for software developers to easily work on it;
• Query Languages: Techniques and protocols to get the sought information, the most famous being the Structured Query Language (SQL);
• Virtual Organization: A dynamic set of individuals and/or institutions defined by a batch of
resource-sharing rules. Those rules need to make perfectly clear just what is shared, who is allowed to share, and the conditions under which sharing occurs [9];
• Public Resource Computing: Usually known as PRC, this concept describes the idea of getting anybody with an Internet connection and spare compute power to donate CPU cycles on their computer for a greater project.

3 DISTRIBUTED SYSTEMS CHALLENGES

3.1 Connecting Users and Resources / Concurrency

Resource sharing and distribution in any given multi-user system is always a main concern. In the case of distributed systems this task becomes of utter importance.

In a distributed system, both applications and services provide resources that can be shared among clients. These usually allow multiple client requests to be accepted even though they may be processed one at a time.

If we consider each resource as being encapsulated in an object and the requests as being treated as concurrent threads, it becomes clear that any application or service must carefully manage this concurrency in order to avoid inconsistent results and/or deadlocks.

Though at first glance this situation may require careful analysis and programming, its cost effectiveness may become obvious when sharing expensive resources like high-performance processing and data structures.

3.2 Transparency / Heterogeneity

One of the major concepts and advantages of distributed systems is transparency. It allows a user to access all resources and data that may be scattered around a network without having to worry about where these are actually located. This may prove to be a problem when a virtually infinite combination of hardware resources and operating systems is involved. This ability to provide transparency within all the heterogeneity of the distributed system is usually implemented by a middleware software layer, as described in the introduction.

The concept of transparency may be applied to many aspects of distributed systems [1]:
• Access: Hide differences in data representation and how a resource is accessed;
• Location: Hide where a resource is located;
• Migration: Hide that a resource may move to another location;
• Relocation: Hide that a resource may be moved to another location while in use;
• Replication: Hide that a resource is replicated;
• Concurrency: Hide that a resource may be shared by several competitive users;
• Failure: Hide the failure and recovery of a resource;
• Persistence: Hide whether a (software) resource is in memory or on disk;
• Security: Negotiation of secure access to resources must require a minimum of user intervention, or else users will eventually evade the security in favor of productivity.

3.3 Scalability

Scalability is one of the most important design goals for developers in distributed systems. According to Clifford Neuman [5], a system's scalability may be weighed along three main dimensions:
• Size: Whether it is simple to add more users and resources to the system or not;
• Geographical: Regarding the location of users and resources;
• Administration: Ease of managing the system even when it is shared by multiple administrative organizations.

As a system scales up in any of these three dimensions, it may exhibit problems that could affect performance.

Size scalability problems are related to the increased amount of users and their demands. Such a situation, occurring on centralized services, data and algorithms, will eventually create a bottleneck at the server.

Geographical scaling problems are usually related to the connectivity limitations of the communications. In wide-area networks, access to information is usually made through unreliable connections and virtually always a point-to-point connection.

Administrative scalability issues often relate to multiple-domain trustworthy security certificates. As more organizations join the network, strict administration and management protocols must be defined, as all included parties should play an active role in these processes.

3.4 Security

Usually, the information resources available and maintained in a distributed system have a high intrinsic value to the users; therefore the security of that data is considerably important.

There are three main security components that should be taken into consideration:
• Confidentiality: Protection against unauthorized accesses;
• Integrity: Protection against unauthorized alteration or corruption;
• Availability: Protection against disruption of the communication with the resources.

Preventive actions should be taken in order to ensure these security parameters, like the use of a firewall and careful code development.

3.5 Openness

According to Kazi Farooqui, Luigi Logrippo and Jan de Meer [3], openness is the combination of the previously stated characteristics. Sometimes, however, it is generically referred to as the level of respect for standards that define the syntax and semantics of the provided services.
This important concept, whatever the interpretation, provides the developers with the needed flexibility to add, configure and integrate new components and services. This flexibility also ensures that the addition and removal of components can be made without affecting the stability of the overall system.

4 DISTRIBUTED SYSTEM MODELS

A Distributed System is a set of loosely coupled resources interconnected by a communication network. [4]

4.1 Client-Server

Client-Server is a particular type of distributed system design that clearly distinguishes the relationship between two computers. The server provides some kind of service, such as processing database queries or sending out current stock prices. The client uses the services that are provided by the server, either displaying database query results to the user or making stock purchase recommendations to an investor.

The communication that occurs between the client and the server must be reliable. That is, no data can be dropped and it must arrive on the client side in the same order in which the server sent it. To ensure the reliability between server and client, the communication uses the TCP/IP protocols. The Internet Protocol (IP) suite is a set of communication tools that regulate communication on the Internet and most commercial networks. The Transmission Control Protocol (TCP) is one of the core protocols of this suite. Using TCP, clients and servers can create connections to one another, over which they can exchange data in packets.

Fig. 1. Generic client-server environment

4.2 Peer-To-Peer

The concept of Peer-To-Peer communication, also known as P2P, is based on the idea that each individual node (peer) in the network is both client and server at the same time. Unlike the Client-Server approach, P2P systems rely on an agglomeration of applications and systems altogether in order to have access to distributed resources in a decentralized way. In terms of maintenance costs, P2P has the edge over Client-Server, since the contents and systems are maintained individually by each peer.

P2P systems can be categorized into three main groups: centralized, decentralized and hybrid implementations.

The centralized P2P concept bases its implementation on a central server that executes simple generic functions for the system, like workload scheduling and result validation.

Decentralized P2P systems rely completely on the peers themselves to perform all the functions without intervention of centralized servers. In these cases, all the result validation and communication must be handled by each node and coordinated with its peers.

Hybrid systems are a mix between the previous two implementations. These bring up the concept of super-nodes, made of several regular nodes, which are responsible for the centralized work while their constituents can still work in the regular decentralized way.

4.3 Cloud Computing

Nowadays, data storage and programs are being swept from the desktop computers and corporate server rooms and installed in the computer cloud. Cloud computing emerges from the lesser need of the users to have applications installed on their machines, due to increased communication speed and availability.

Every operating system update cascades into a batch of time- and resource-consuming software revisions. Outsourcing computation through Internet-based services significantly reduces these costs while offering a whole new set of advantages, like mobility and cooperation.

The amount of services and applications provided in the cloud is growing every day and cannot be considered a bunch of simple tools anymore. Companies are starting to acquire cloud-based services for every kind of managerial and business-oriented task.

Growing voices of worry have been expressing their concerns about data privacy and confidentiality, not taking the services' privacy policies as creditable enough. One famous scenario used as an argument is cited by Hayes [17]:

"(...) a government agency presents a subpoena or search warrant to the third party that has possession of your data. If you had retained the physical custody, you might still have been compelled to surrender the information, but at least you would have been able to decide for yourself whether or not to contest the order. The third-party service is presumably less likely to go to court on your behalf. In some circumstances you might not even be informed that your documents have been released."
These kinds of issues will probably never be solved in time to stop or control the growth of the cloud, as we see major software developers and investors (Apache Foundation, Amazon, Adobe, Google, IBM, etc.) trying to keep up with the evolutionary pace.

4.4 Grid Computing

4.4.1 Definition

The term grid was first used as a metaphor for the electric power grid. The intended idea was that access to computation and data should be as easy, pervasive and standard as plugging an appliance into an outlet. Nowadays it is hard to find a consensual definition. Here are some from the most cited authors:

"A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive and inexpensive access to high-end computational capabilities." [6]

"Computational grid is the technology that enables resource virtualization, on-demand provisioning and service (resource) sharing between organizations." [7]

"Grid computing has the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A grid is a type of parallel and distributed system that enables the sharing, selection and aggregation of resources distributed across 'multiple' administrative domains based on their (resources) availability, capacity, performance, cost and users' quality-of-service requirements." [8]

"The problem that underlies the Grid concept is coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations." [9]

Although we can find some common ground, like resource sharing and processing power, these are features not only of a grid computing system but of any kind of distributed system. Due to the lack of consensus and the use of the term "grid" as a marketing slogan (science grid, access grid, knowledge grid, bio grid, campus grid, commodity grid, etc.), Ian Foster suggested a checklist to define what is and is not a grid in his paper "What is the grid? A three point checklist" [11]:

A grid is a system that:
1) coordinates resources that are not subject to centralized control (...)
2) (...) using standard, open, general-purpose protocols and interfaces (...)
3) (...) to deliver nontrivial quality of service.

4.4.1.1 Applying the three-point checklist: Following are some practical examples to help make the concept of grid clear, according to Ian Foster's three-point checklist.
• Sun Grid Engine: "The Grid Engine project is an open source community effort to facilitate the adoption of distributed computing solutions." [12] This system delivers quality of service when installed on a parallel computer or local area network. However, its complete knowledge of system states and user requests, as well as its control over individual components, implements a centralized management system that makes this system fail the first point of Foster's checklist.
• The Web: The Web is open and its general-purpose protocols support access to distributed resources; however, it fails to coordinate those resources to deliver interesting qualities of service.
• TeraGrid: "TeraGrid is an open scientific discovery infrastructure combining leadership class resources at eleven partner sites to create an integrated, persistent computational resource." [13] This system integrates resources from multiple institutions, each with their own policies, uses open and general-purpose protocols to negotiate and manage sharing, and addresses multiple quality-of-service dimensions; therefore it fully fits Ian Foster's three-point checklist.

4.4.2 Usual Features

In this section we strive to describe in detail some of the most important features usually associated with the grid computing method.

4.4.2.1 Volunteer Computing: Most grids use volunteer resources, that is, resources contributed to the grid by anonymous individuals or organizations with no profit intended.

4.4.2.2 Geographically Dispersed: Due to its architecture and volunteer characteristic, some grid resources can be spread throughout the globe.

4.4.2.3 Idle Resources: One of the benefits of using grid computing is that you can exploit resources with low usage rates because, in most organizations, "there are large amounts of under utilized computing resources. Most desktop machines are busy less than 5% of the time over a business day" [14].

4.4.2.4 Inexpensive: Either through volunteering or by using idle resources within a company, it is usually possible to reach a considerable computational performance without major investments in supercomputers or clusters.

4.4.3 Architecture

In an effort to standardize grid architectures, I. Foster, C. Kesselman and S. Tuecke [9] presented an open five-layer
structure (Application, Collective, Resource, Connectivity and Fabric layers) based on the "hourglass model" [10].

In this architecture the narrow neck, represented by the Resource and Connectivity layers, defines a small set of protocols onto which many high-level behaviors, used by the Application and Collective layers, can be mapped (the top of the hourglass). On the other hand, the "neck" protocols can themselves be mapped onto many different underlying technologies (the base of the hourglass, the Fabric layer) (Figure 2).

Fig. 2. Layered Grid Architecture

4.4.3.1 Fabric Layer: This layer provides the resources to which shared accesses are mediated by Grid protocols. The Fabric layer works with resource-specific operations; there is no abstraction at this level. These operations are usually a result of sharing operations at higher layers. There is interdependency between the functions implemented in this layer and the sharing operations supported by the Grid. A rich and more complex set of Fabric functionalities enables more sophisticated sharing operations. Conversely, fewer functionalities and demands at this layer imply a simplified Grid structure.

4.4.3.2 Connectivity Layer: The Connectivity layer defines the core communication and authentication protocols. Communication protocols enable the exchange of data between Fabric layer resources, requiring transport, routing and naming mechanisms. Authentication protocols are used to ensure the security and identity of users and resources. Due to the complex security problems involved and their wide usage, existing protocols and standards should be used to implement this layer.

4.4.3.3 Resource Layer: The Resource layer provides the means to share single resources using information and management protocols. Information protocols are used to obtain details about the structure and state of a single resource. Management protocols are used to negotiate access to the shared resource, specifying its requirements and the operations to be performed. These protocols should also ensure that requested operations respect the individual policies of each resource. This layer is only concerned with individual resources. Global state and atomic actions over multiple resources are issues of the next layer.

4.4.3.4 Collective Layer: The Collective layer contains protocols and services, like APIs and SDKs, which are not associated with specific resources but are global in nature, and which capture interactions across collections of resources. At this layer, individual resource architectures and functionalities are abstracted in order to provide collective functions that can be implemented as persistent services with associated protocols, or as SDKs and APIs designed to be linked to applications.

4.4.3.5 Application Layer: This is the final layer; it is therefore responsible for providing the end user with an abstract interface that includes the user applications and functionalities that operate within a Virtual Organization environment. These applications are constructed using services defined at any layer.

4.5 Quality of Service

In the multimedia communities, Quality of Service (QoS) issues are geared to provide a client with an acceptable level of presentation quality when accessing content. Network QoS deals specifically with providing certain quality levels for network link characteristics between two points. These characteristics are expressed in terms of delay, jitter, packet loss rate and throughput.

Unlike multimedia and network QoS, Grid QoS requires a central information service for up-to-date information on the resources available for use by others. Such information can be interrogated by an application user to determine which resources can be used to execute an operation. In Grid computing, QoS management focuses on providing assurance on resource access while maintaining the security level between domains [22].

4.5.1 QoS in Grid Computing

Once the Grid applications submit their requirements to the management services that schedule jobs as resources become available, these must support a resource manager or scheduler that can receive requests from external applications. Nevertheless, there are several applications that need to get results for their tasks within strict deadlines. Consequently they cannot wait for resources to become available, so it is necessary to reserve Grid resources and services at a particular time. In order to handle complex scientific and business applications, other features are highly desirable, sometimes even required.

A Grid resource management system tries to address the following QoS issues [22]:
• Advanced Resource Reservation: This is important when dealing with scarce resources, as is often the case with end resources made available on the Grid. The system should support mechanisms for advance, immediate, or on-demand resource reservation.
• Reservation Policy: The system should have mechanisms that provide the Grid resource owners with ways of enforcing their policies by governing when, how, and who can use their resource.
• Agreement Protocol: The system should inform the clients of their advance reservation status, and of the resource quality they should expect during the service session.
• Security: The system should prevent malicious users from penetrating or altering data repositories that hold information about reservations, policies and agreement protocols.
• Simplicity: The QoS enhancement should be reasonable and simple in design, so that it requires minimal changes to existing computation, storage or network infrastructure.
• Scalability: The approach should be scalable to a large number of entities, since the Grid is a global-scale infrastructure.
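As a concrete illustration, the reservation-related issues above (advance reservation, owner policy, and the agreement protocol's status reporting) can be sketched as a toy resource manager. This is a hypothetical Python sketch, not part of BOINC or of any real Grid middleware; all names (`ToyResourceManager`, `Reservation`) are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    client: str
    cpus: int
    start: int        # first time slot requested by the client
    duration: int     # number of consecutive slots reserved
    status: str = "pending"   # agreement protocol: the client is told this status

class ToyResourceManager:
    """Toy Grid resource manager: advance reservation plus an owner-defined policy."""

    def __init__(self, total_cpus, policy=lambda r: True):
        self.total_cpus = total_cpus
        self.policy = policy               # reservation policy set by the resource owner
        self.reservations = []

    def _cpus_used(self, slot):
        # CPUs already promised to confirmed reservations covering this slot.
        return sum(r.cpus for r in self.reservations
                   if r.start <= slot < r.start + r.duration)

    def reserve(self, req):
        # Reservation policy: the owner governs who, when, and how.
        if not self.policy(req):
            req.status = "rejected_by_policy"
            return req.status
        # Advance reservation: check capacity over every requested slot.
        if any(self._cpus_used(s) + req.cpus > self.total_cpus
               for s in range(req.start, req.start + req.duration)):
            req.status = "rejected_no_capacity"
            return req.status
        req.status = "confirmed"
        self.reservations.append(req)
        return req.status
```

For example, with 8 CPUs and a policy that blocks a given client, a 6-CPU reservation over slots 0-1 is confirmed, after which a 4-CPU request overlapping slot 1 is rejected for lack of capacity. A real system would of course add authentication and persistence, per the Security bullet above.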
5 BOINC

Fig. 3. BOINC Infrastructure

The Berkeley Open Infrastructure for Network Computing (BOINC) is an open middleware platform that supports scientific research by using resources donated by simple personal computers around the world (core clients). The objective is to gather all this processing power and make a supercomputer which helps the researchers (clients) in several projects. [18]

Using BOINC allows users to achieve high processing performance at low cost. For example, to have 100 TFlops available for one year, Amazon's Elastic Computing Cloud costs 175 million dollars and building a cluster costs 12.4 million dollars, but with BOINC, clients only need, on average, 125,000 dollars [19]. On the other hand, looking at the top500 supercomputers list (November 2008) [21], Roadrunner achieves 1105 TFlops, whereas BOINC has a daily average of 1,700 TFlops and its most relevant project (SETI@Home) has a daily average of 615 TFlops.

5.1 Infrastructure

BOINC provides a set of tools, daemons, a scheduler and a database (Figure 3).

5.1.1 Daemons

BOINC servers use daemons to manage and keep track of their jobs or, in BOINC terms, work units (WUs).
• The "work generator" generates WUs and the corresponding input files.
• The "transitioner" controls and changes the states of each WU.
• The "validator" validates the results of each WU and its redundant copies.
• The "assimilator" regroups the final results and processes them according to the administrator's specification. It could zip and e-mail the results or automatically do post-processing and store those results on a magnetic tape.
• The "file deleter", as the name indicates, checks for completed and assimilated WUs and then deletes temporary input and output files on the server. Only the files are deleted; the database entries are kept, and therefore it is possible to find information even after the project is completed.
• The "feeder" is used to enhance the scheduler's performance and to reduce the queries to the database. It does so by placing WUs from the database into a shared memory.
• The "database purger" removes work-related database entries when they are no longer needed, in order to keep the database from growing to an impractical size.

5.1.2 Scheduler

The scheduler is a CGI program that runs every time a client connects to a project and asks for work. It has to compare the needs of the available WUs with the clients' shared resources in order to match them.

5.1.3 Database

The BOINC database is a MySQL database that stores information about registered users and hosts, applications and their versions, WUs and their results, and other relevant information.

5.2 PRC vs. Grid

According to David P. Anderson [23], both the PRC and Grid Computing methodologies share a common goal: to use the existing resources in the best possible way. There are, however, some important differences between the two.

While a Grid is usually managed and controlled by a single organization, a PRC network relies on separate individuals to share their resources. While this particularity may allow a huge growth in terms of connected nodes, it brings out other liabilities, like the unreliability of the processed results and uncertain processor time. Since each user is a volunteer and therefore allowed to
7. 7
Manage their states;
•
Pull the results from the grid resource broker.
•
These results don’t need validation because the re-
sources from the Grid are considered trustworthy.
6 C ONCLUSION
Throughout the development of this paper, we scanned
the currently available paradigms on distributed com-
puting, their main advantages and limitations. This
process has taken us through part of the history of
computing as we start from the traditional client-server
model and develop it until nowadays cloud and grid
computing concepts.
The evolution of communications and commodity per-
sonal computers brought the distributed computation
to a whole new level while approaching people and
Fig. 4. Extended BOINC Infrastructure science through Public Resource Computing. This new
area of computational resource sharing brought the need
to rethink the Quality of Service requirements in dis-
control the amount of work done, nothing ensures the
tributed networks. PRC proved that a huge ammount
project management that this user will be cooperating
of volunteers can supply an unbeatable system-wide
for a long time or keeping a steady work-flow.
throughput without the need of strict QoS policies. For
Another important difference between the two meth-
instance, the major BOINC based project, and also the
ods relies on the Quality of Service. It is virtually impos-
one that encouraged it’s development, SETI@Home has
sible to ensure a strong QoS on a PRC network due to
produced so far 3 Million+ years of processing time. [20]
slow connections and low availability.
For the reduced ammount of research projects and
processing jobs that require strict deadlines, high com-
5.3 Extended BOINC System munication speed and permanent connectivity there was
As we have seen BOINC only implements two out of the a need to merge the benefits of PRC globalization and
three points in Foster’s checklist. It has a decentralized the QoS that a Grid system could provide. In order to
control over resources and it uses open protocols and address this need, the bridge connector model [18] was
interfaces, but it fails to deliver non-trivial quality of developped to extend the regular BOINC System.
service, because it does not fulfill the information ac-
cessibility and connectivity requirements. R EFERENCES
With this in mind, Søttrup and Pedersen [18] suggested a bridge between BOINC and a private Grid: instead of clients pulling jobs from the BOINC server by connecting to it, the server connects to a resource broker on the Grid (responsible for scheduling, submitting jobs to remote machines, transferring files and logging) and pushes jobs into it. This Grid will be responsible for providing quality of service and, therefore, together they would build a Foster's Grid (Figure 4).
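The pull-to-push inversion described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: `ResourceBroker`, `BridgeDaemon`, and their methods are hypothetical names for this sketch, not the actual BOINC or bridge-connector APIs from [18].

```python
# Minimal sketch of the push-based bridge model: instead of clients
# pulling work, the server-side bridge pushes work units (WUs) to a
# Grid resource broker. All names here are illustrative assumptions.

class ResourceBroker:
    """Stands in for the Grid's resource broker: accepts jobs,
    schedules them on remote machines, and collects results."""
    def __init__(self):
        self.queue = []
        self.results = {}

    def submit(self, wu_id, payload):
        # Scheduling, file transfer and logging would happen here.
        self.queue.append((wu_id, payload))

    def run_all(self, worker):
        # Execute every queued job on some Grid node (simulated).
        while self.queue:
            wu_id, payload = self.queue.pop(0)
            self.results[wu_id] = worker(payload)


class BridgeDaemon:
    """Pushes work units into the Grid via the broker, rather than
    waiting for volunteer clients to request work."""
    def __init__(self, broker):
        self.broker = broker

    def push(self, workunits):
        for wu_id, payload in workunits.items():
            self.broker.submit(wu_id, payload)


broker = ResourceBroker()
bridge = BridgeDaemon(broker)
bridge.push({"wu-1": 3, "wu-2": 4})
broker.run_all(lambda n: n * n)   # toy computation on the "Grid"
print(broker.results)             # {'wu-1': 9, 'wu-2': 16}
```

The key design point is that the broker, not the volunteer client, now decides where and when a job runs, which is what makes QoS guarantees possible on the Grid side.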
5.3.1 BOINC to Grid architecture
In order to process the WUs directly and to meet the specifications of the Grid, a new daemon (the bridge daemon) has to be created and some of the others have to be modified. The transitioner daemon must be adapted so that it won't change the state of the WUs sent into the Grid; manipulating and controlling the several WU states is now the bridge daemon's responsibility. The feeder daemon also needs to be modified so that it won't load the WUs intended for the Grid into shared memory. This way, the bridge daemon has to:
• Push WUs into the Grid;
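A toy model of these daemon changes, assuming a simplified WU record with a hypothetical `target` field; the real BOINC daemons are C++ programs coordinating through the project database, so this only illustrates the division of responsibilities:

```python
# Toy model of the modified daemons. Field names ("target", "state")
# are illustrative assumptions, not the actual BOINC schema.

GRID, LOCAL = "grid", "local"

def feeder_load(workunits, shared_memory):
    """Modified feeder: load only locally-processed WUs into the
    shared-memory segment that volunteer clients are served from."""
    for wu in workunits:
        if wu["target"] != GRID:
            shared_memory.append(wu)

def transitioner(workunits):
    """Modified transitioner: advance the state of local WUs only;
    Grid-bound WU states now belong to the bridge daemon."""
    for wu in workunits:
        if wu["target"] != GRID:
            wu["state"] = "ready"

def bridge_collect(workunits):
    """Bridge daemon: take ownership of Grid-bound WUs so it can
    push them into the Grid and track their states itself."""
    return [wu for wu in workunits if wu["target"] == GRID]

wus = [{"id": 1, "target": LOCAL, "state": "new"},
       {"id": 2, "target": GRID,  "state": "new"}]
shm = []
feeder_load(wus, shm)
transitioner(wus)
print([wu["id"] for wu in shm])                  # [1]: Grid WU never enters shared memory
print(wus[1]["state"])                           # 'new': untouched by the transitioner
print([wu["id"] for wu in bridge_collect(wus)])  # [2]: owned by the bridge daemon
```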
REFERENCES
[1] A. S. Tanenbaum and M. van Steen, Distributed Systems - Principles and Paradigms, International Edition, Pearson, U.S.A.: Prentice Hall, 2002.
[2] G. Coulouris, J. Dollimore and T. Kindberg, Distributed Systems - Concepts and Design, Fourth Edition, Pearson, U.K.: Addison-Wesley, 2005.
[3] K. Farooqui, L. Logrippo, J. de Meer, The ISO Reference Model for Open Distributed Processing - An Introduction, February 14, 1996
[4] A. Silberschatz, P. B. Galvin, and G. Gagne, Operating System Concepts, Seventh Edition, Wiley, U.S.A.: John Wiley & Sons, 2005, Page 611.
[5] B. C. Neuman, Scale in Distributed Systems, Readings in Distributed Computing Systems, IEEE Computer Society Press, 1994
[6] I. Foster, C. Kesselman, The Grid: Blueprint for a New Computing Infrastructure, University of Michigan, U.S.A.: Morgan Kaufmann Publishers, 1999
[7] P. Plaszczak, R. Wellner, Grid Computing: The Savvy Manager's Guide, U.S.A.: Elsevier/Morgan Kaufmann, 2005
[8] IBM Solutions Grid for Business Partners: Helping IBM Business Partners to Grid-enable applications for the next phase of e-business on demand, U.S.A.: IBM, 2002
[9] I. Foster, C. Kesselman, S. Tuecke, The Anatomy of the Grid: Enabling Scalable Virtual Organizations, International J. Supercomputer Applications, 2001
[10] L. Kleinrock, Realizing the Information Future: The Internet and Beyond, National Research Council, U.S.A.: National Academy Press, 1994
[11] I. Foster, What is the Grid? A Three Point Checklist, GRIDToday, July 20, 2002
[12] Grid Engine Project, http://gridengine.sunsource.net,
SunSource.net, Sun, June 6, 2009
[13] About TeraGrid, http://www.teragrid.org/about, TeraGrid, Na-
tional Science Foundation, June 6, 2009
[14] B. Jacob, M. Brown, K. Fukui, N. Trivedi, Introduction to Grid Com-
puting, First Edition, International Business Machines Corporation,
U.S.A.: IBM, International Technical Support Organization, 2005,
Page 8
[15] W. Stallings, Operating Systems: Internals and Design Principles, Sixth Edition, U.S.A.: Prentice Hall, 2008
[16] Introduction to Distributed System Design,
http://code.google.com/intl/pt-PT/edu/parallel/dsd-
tutorial.html, Google Code University, 2009
[17] B.Hayes, Cloud Computing, Communications of the ACM, ACM,
July 2008, Pages 9-11
[18] C. U. Søttrup, J. G. Pedersen, Developing Distributed Computing
Solutions: Combining Grid Computing and Public Computing, M. Sc.
Thesis, Department of Computer Science, University of Copen-
hagen, March 1, 2005
[19] BOINC Documentation Project, Why Use BOINC?,
http://boinc.berkeley.edu/trac/wiki/WhyUseBoinc, University
of California, June 11, 2009
[20] J. Koulouris, The Big BOINC ! Projects and Chronology Page,
http://www.angelfire.com/jkoulouris-boinc, June 11, 2009
[21] Top500.org, Top500 List - November 2008,
http://www.top500.org/list/2008/11/100, Top500 Supercom-
puting Sites, Top500.org, June 11, 2009
[22] R. J. Al-Ali, K. Amin, G. von Laszewski, O. F. Rana, D. W. Walker,
M. Hategan, N. Zaluzec, Analysis and Provision of QoS for Distributed
Grid Applications, Kluwer Academic Publishers, 2004
[23] D. P. Anderson, BOINC: A System for Public-Resource Computing
and Storage, Proceedings of the Fifth IEEE/ACM International
Workshop on Grid Computing, 2004