Rich Miller & Surendra Reddy
Lighthouse is a concept for an intercloud registry service, based on (1) access points established and maintained by cloud instances to disseminate operational metadata, and (2) the use of publish/subscribe (pub/sub) asynchronous messaging as the dominant means of disseminating that metadata among the constituents of the intercloud.
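As a sketch of the pub/sub dissemination pattern described above, the following minimal in-process bus is illustrative only: the class, topic names, and payload fields are invented here, and a real intercloud would use a distributed asynchronous message fabric rather than an in-memory dictionary.

```python
from collections import defaultdict

class MetadataBus:
    """Minimal in-process publish/subscribe bus for operational metadata.

    Illustrative sketch: names and structure are assumptions, not part
    of the Lighthouse proposal itself.
    """
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # Register a consumer's callback for one metadata topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, metadata):
        # Deliver the metadata record to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(metadata)

# A cloud instance announces a rate change; a consumer receives it.
bus = MetadataBus()
received = []
bus.subscribe("iaas.example/rates", received.append)
bus.publish("iaas.example/rates", {"service": "iaas.example", "rate": 0.12})
```

In the intercloud setting, each cloud's Metadata Access Point would play the publisher role and interested constituents the subscriber role.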
6. Lighthouse
Where to start?
• Agreement on identification, location
and ID-Loc resolution
• A registry for the discovery and
description of intercloud constituents
• A mechanism for the delivery of cloud
service descriptive & operational data
• A governance structure for
admission & ejection, assurance,
permissions & entitlements
7. Lighthouse
The concept:
• Each member takes responsibility for
its own metadata access services
• Membership in a communal registry of
metadata access services, with
identification – location resolution
• Agreement on mechanisms for
- pub/sub/search/query
- asynchronous message delivery
8. Lighthouse Scope
Scope is limited to providing the
Service Access Point and related
metadata to service Consumers
10. Intercloud: Use Case #1
• Customer A, EDA company, seeks a list of
IaaS services which claim to provide:
• cloud data management
• Linux OS image management
• Queries the Intercloud registry, which
returns IDs of services that meet the criteria
• Searches IaaS service metadata to make a selection
• Accesses the Service Access Point (SAP) of a
vendor to validate its claims
• Subscribes to Service Access Point for receipt of
service announcements, rate changes, etc
11. Intercloud: Use Case #2
• Customer B, an insurance company, seeks a
single IaaS provider to continuously satisfy
service requirements (constraints)
• E.g. latencies, geography, SLAs etc.
• Queries the Intercloud registry, which
returns IDs of services that meet the criteria
• Searches IaaS metadata to make a selection
• Accesses the SAP of the vendor to create a
Cloud Service Account Instance
• Subscribes to SAP for receipt of relevant
requirement-specific metadata
• Takes specific actions based on timely notifications
(near realtime alerts) via Service Provider APIs and
management functions
12. Intercloud: Use Case #3
• Customer C, a globally distributed online
service, seeks IaaS providers in Europe
and in the USA with specific SLAs.
• Using the Intercloud registry, locates services
meeting needs in two locations.
• Identifies alternative providers for the business
continuity (DR, backup, …) functions.
• Customer C’s application management system
subscribes to failure events & performance sensors
from the IaaS providers.
• Based on monitored event/sensor feeds, C’s service
monitoring application dynamically scales the
resources (computing, networking, and storage)
allocated to its applications up or down
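The scale-up/scale-down step might look like the following decision function; the event fields (`type`, `cpu_load`) and the thresholds are invented for illustration, not taken from any provider's feed format.

```python
def scaling_decision(event):
    """Map a monitored event from a provider feed to a scaling action.

    Field names and thresholds are assumptions for illustration.
    """
    if event.get("type") == "failure":
        # A failure event triggers failover to the alternate provider.
        return "failover"
    load = event.get("cpu_load", 0.0)
    if load > 0.80:
        return "scale-up"
    if load < 0.20:
        return "scale-down"
    return "hold"

# Example: a high-load sensor reading from the subscribed feed.
action = scaling_decision({"cpu_load": 0.92})
```

In the use case above, this logic would sit inside Customer C's monitoring application, driven by the pub/sub event feed rather than by polling.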
13. Intercloud: Use Case #4
• Customer D, a financial services company,
runs applications that are either (or both)
• latency sensitive
• throughput sensitive
• After selecting IaaS provider:
• Sets up the virtual network between on-premise
data center and the IaaS provider cloud.
• Customer D runs their own application mobility
controller within their data center.
• Application Mobility Controller subscribes to
IaaS and data center metadata related to:
• traffic flows, performance metrics
• log feeds from the IaaS cloud service.
14. Intercloud: Use Case #5
• PaaS E, a security broker service, provides an
anti-phishing service for e-mail:
“whitelist”, analytics and forensics
• Operates on behalf of domain holders
• List management and forensics for multiple
receiver services (e.g. web mail services)
• After establishing service w/ receiver:
• Each receiver establishes a metadata access
point (MAP) regarding failed email
• PaaS E publishes notifications of phishing
attempts to subscribers, on behalf of the domain holder
• All new events and changes in state/status
distributed as pub/sub metadata
16. Lighthouse Requirements
• Defines a dynamically extensible set of
identifiers and metadata
• Automatically aggregates and associates
real-time info from many different sources
• Provides real-time pub/sub/search
mechanism for data regarding cloud instances,
their state and their activities
• Scales for cloud to cloud coordination
17. Lighthouse Concept
Autonomous Metadata Access Point
• All interested and authenticated cloud
services, acting in ‘good faith’, provide
their own Metadata Access Point.
• A Metadata Access Point publishes to
the intercloud community any
information about itself.
18. Lighthouse Concept
A Registry of Registries
• Identity and location of individually and
autonomously managed
Metadata Access Services
• Authoritatively establishes the status of
any individual cloud service and its
standing within the Intercloud
community
19. Lighthouse Concept
Process / Event Coordination
• All 'interested' consumers of a cloud’s
MAP Service may subscribe to
metadata updates that result in a
'property' change
• Many systems can coordinate through
a Metadata Access Protocol with no in-
depth knowledge of each other's APIs
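A minimal sketch of the property-change subscription described on this slide, assuming an in-memory MAP with invented method names: subscribers are notified only when a property value actually changes, which is what lets many systems coordinate without knowing each other's APIs.

```python
class MapService:
    """Metadata Access Point that notifies subscribers on property changes.

    Illustrative sketch; a real MAP would be a network service, not an
    in-process object.
    """
    def __init__(self):
        self._properties = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_property(self, name, value):
        # Only an actual change in value triggers notification.
        if self._properties.get(name) != value:
            self._properties[name] = value
            for cb in self._subscribers:
                cb(name, value)

changes = []
mapsvc = MapService()
mapsvc.subscribe(lambda name, value: changes.append((name, value)))
mapsvc.set_property("status", "degraded")
mapsvc.set_property("status", "degraded")  # no change, so no notification
```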
21. Intercloud Registry: Features
• Discovery of a registry’s specific
interfaces / capabilities
• Auditable logging mechanism
• For element / value changes
• For publishing events
22. Intercloud Registry: Features
Forms of Search & Query
• search and report of items based on
(…)
• comparison of object to ‘checklist’ of
elements and parameters
• ‘standing’ search/query established as
subscription
• query and retrieval of items based on
published / recognized (?) data scheme
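The "comparison of object to 'checklist' of elements and parameters" form of query could be sketched as follows; the field names and the checklist semantics (`None` meaning "element must be present, any value") are assumptions made for illustration.

```python
def matches_checklist(obj, checklist):
    """Compare a registry object against a checklist of elements/parameters.

    checklist maps element name -> required value, or -> None meaning
    the element must merely be present. Illustrative only.
    """
    for element, required in checklist.items():
        if element not in obj:
            return False
        if required is not None and obj[element] != required:
            return False
    return True

# Hypothetical service metadata object.
service = {"region": "eu-west", "sla": "99.9", "encryption": "aes-256"}
ok = matches_checklist(service, {"region": "eu-west", "sla": None})
```

A "standing" version of such a query would simply be registered as a subscription, evaluated whenever the object's metadata changes.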
23. Intercloud Registry: Operational
• Distributed MAP Servers:
Each Cloud Service is responsible for
establishing and administering
• its own Registry Server, or
• publication of metadata by a trusted party
• Authoritative compilation of Registries
(and, therefore, of Cloud Services)
• Unambiguous identification
• Authentication method associated with ID
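The operational requirements above (unambiguous identification, the location of the metadata access service, an authentication method bound to the ID, and optional publication by a trusted party) suggest a registry-of-registries entry shaped roughly like this; every field name here is illustrative, not a defined schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RegistryEntry:
    """One entry in the registry of registries. Fields are illustrative."""
    service_id: str                   # unambiguous identifier of the service
    map_location: str                 # where its Metadata Access Service lives
    auth_method: str                  # authentication method bound to the ID
    delegated_to: Optional[str] = None  # trusted party publishing on its behalf

# Hypothetical entry for a cloud service that runs its own MAP.
entry = RegistryEntry(
    service_id="urn:intercloud:iaas-a",
    map_location="https://map.iaas-a.example/",
    auth_method="x509",
)
```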
25. Current Standards/Protocols
Federated UDDI Registry
• Pros:
• Federated UDDI consisting of multiple repositories
that are synchronized periodically.
• Federated UDDI is an efficient solution for service
discovery in distributed service networks.
• Cons:
• too expensive to replicate frequently updated
information
• it is hard to directly utilize this approach to support
discovery of dynamic information
• Governance nightmare…
26. Current Standards/Protocols
Service Location Protocol (SLP)
• Pros:
• agent-based service discovery framework
• designed for service discovery in local area
networks
• extensions to SLP have been proposed targeting
WAN environments
• Cons:
• Not suitable for wide area network environments
• Unsuitable for the Cloud environment due to the scale
and distribution complexities involved
27. Current Standards/Protocols
IF-MAP
• Pros:
• Client-Server based, real-time pub/sub/search
• Designed to disseminate network security info on
objects & events (dynamic state and activity data)
• Easily extensible to components other than network
and security components
• XML-based SOAP protocol
• Supports standardized, dynamic data interchange
• Provides a uniform mechanism to securely
discover, consume, and manage a single
management domain’s metadata.
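A schematic of the publish pattern IF-MAP embodies — an XML body carrying an identifier plus attached metadata — can be sketched as below. This is a deliberately simplified illustration of the shape of such a request, not the normative TCG IF-MAP schema, namespaces, or SOAP envelope.

```python
import xml.etree.ElementTree as ET

def build_publish_request(identifier, metadata_name, metadata_value):
    """Build a schematic IF-MAP-style publish request.

    Element names are simplified stand-ins for illustration only.
    """
    publish = ET.Element("publish")
    update = ET.SubElement(publish, "update")
    ident = ET.SubElement(update, "identifier")
    ident.set("name", identifier)          # the object the metadata attaches to
    meta = ET.SubElement(update, "metadata")
    item = ET.SubElement(meta, metadata_name)
    item.text = metadata_value             # the attached metadata value
    return ET.tostring(publish, encoding="unicode")

# A cloud service publishing a status change about itself.
xml = build_publish_request("cloud-service-a", "status", "available")
```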
28. Current Standards/Protocols
IF-MAP (continued)
• Cons:
• SOAP-based only; heavy messaging structure
• Scaling to Cloud dimensions is a concern
• Needs many extensions to the existing metadata model
• The IF-MAP access point becomes a central authority
• TBD
• Federation to support intercloud scale?
• Wider range of protocols / RESTful interface?
• A MAP-to-MAP (P2P) approach to bi-directional
pub/sub?
• Asynch messaging queues?
• “Economical” message encoding system ?
hierarchical, binary, self-describing
33. Lighthouse: Conceptual Architecture 1
[Diagram: multiple Cloud Service Providers (CSPs), each exposing its own Metadata Access Point (MAP); an InterCloud Registry (IC Registry) exposes its own Metadata Access Point (IC-MAP) to the community.]
34. Lighthouse: Conceptual Architecture 2
[Diagram: as in Architecture 1, with the InterCloud Registries federated under an InterCloud “Root Server” (IC-ROOT), the Metadata Access Point for the registry of registries.]
35. Lighthouse: Call(s) to Action
Rich Miller
Surendra Reddy
Infrastructure 2.0 Working Group
Editor's Notes
Rich, add some talking points: why did you choose the name Lighthouse? Helping us to get to the shore.
12/27/09 RHM: modify use case to include not just finding the directory / registry, but also establishing requirements / criteria
12/27/09 RHM: I’m not sure I understand the reference here to “Customer B.” Are you saying that Customer B (from previous page) is using Customer C’s services? Or, is this a typo and you’re referring to Customer C? I assume it’s the latter, and that you were referring to Customer C.
Customer’s application controller helps them negotiate resources from various cloud service providers, using the Intercloud registry as a matchmaking service via the search and location services it offers. NOTE: the difference between the earlier use case and this one is that timely metadata delivery and metadata consistency are two critical needs here.
Marketplace for Cloud Services offers matchmaking of service providers based on competitive pricing, SLA, and location preferences. 12/27/09 RHM: I’m not sure I would include the Marketplace as part of Lighthouse. The idea of marketplaces, brokers of various kinds, intercloud ‘core services’ … any of the service offerings that require a ‘trusted third party’ … require the existence of the Lighthouse underpinnings: autonomously managed metadata access services; a registry for the discovery of metadata access services; identification – location resolution; and a common set of mechanisms and protocols for messaging and pub/sub/search/query.