Cloud computing is a new computing paradigm that delivers resources as general utilities over the Internet. It offers advantages like no upfront investment, lower operating costs through scalability, and easy access to resources. However, cloud computing also faces challenges that need to be addressed, like standardization and security. The paper surveys the key concepts, architecture, implementations, and research challenges of cloud computing.
This document discusses the convergence of digital TV systems and the role of middleware in supporting applications across different transport platforms. It emphasizes that future digital TV systems will integrate all current TV services and transport platforms into a single core of distributed services. The middleware layer will conceal the transport system from applications, similarly to how operating systems conceal hardware. The document uses NCL (Nested Context Language) and Ginga-NCL as examples of middleware technologies that can provide the necessary support for applications across convergent digital TV systems through their ability to synchronize multimedia, support multiple devices, and adapt content and presentation. Some open issues with these technologies are also raised.
Business implementation of Cloud Computing, by Quaid Sodawala
This document discusses business implementation and security concerns regarding cloud computing. It provides an introduction to cloud computing, explaining that it allows users to access applications from anywhere through connected devices. The key benefits discussed are scalability, cost savings, instant access, and mobility. The document also outlines the evolution of cloud computing from earlier concepts like grid computing and utility computing. It describes the different cloud computing models of public, private, and hybrid clouds and how each carries a different level of security risk. Finally, it notes that there are also different cloud service models beyond these infrastructure models.
Access security on cloud computing implemented in hadoop system, by João Gabriel Lima
This document summarizes a research paper on implementing access security on a Hadoop cloud computing system using fingerprint identification and face recognition. The researchers built a Hadoop cloud computing system connected to mobile and thin client devices over wired and wireless networks. They used fingerprint and face recognition for rapid user identification and verification on the cloud system within 2.2 seconds. The goal was to evaluate the effectiveness and efficiency of access security on the Hadoop cloud computing platform.
This document summarizes a survey on cloud computing and its services. It discusses key aspects of cloud computing including characteristics, types of cloud services (IaaS, PaaS, SaaS), related terminology, and tools for cloud development and simulation. Specifically, it covers CloudSim and eXo IDE as important tools - CloudSim enables simulation of cloud computing environments and eXo IDE provides a development environment for cloud applications. The paper also reviews related work on cloud computing platforms, operating systems, challenges, and management of cloud infrastructure and resources.
This document discusses the future of cloud computing and infrastructure. It covers several topics including:
1. How technological advances will enable machines to make rapid decisions without human biases.
2. The economic benefits cloud computing provides through standardized workloads, rapid provisioning, and usage-based billing.
3. The challenges of cloud computing including security, data privacy, application mobility between cloud providers, and lack of visibility across business processes.
4. Emerging trends in cloud computing like the rise of application containers and platforms as a service, as well as countertrends around vendor strategies and the role of operating systems.
Assessing NoSQL databases for telecom applications, by João Gabriel Lima
This document discusses migrating a telecom application from a relational database to a NoSQL database to take advantage of cloud computing. It presents the data model used for call management, including entities for customers, accounts, services, and plans. It then describes migrating the process of selecting a subscriber's services to Cassandra, a distributed NoSQL database, and compares performance to PostgreSQL in different workloads. The document aims to assess how NoSQL databases can adapt telecom business models to cloud environments.
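The relational-to-Cassandra migration described above typically replaces join-time assembly with a denormalized, query-first layout. The following plain-Python sketch (no real database; all table contents are invented for illustration) contrasts the two access patterns for the "list a subscriber's services" query:

```python
# Hypothetical sketch: relational vs. denormalized access to a
# subscriber's services. Data values are invented for the example.

# Relational style: normalized tables, joined at query time.
customers = {1: {"name": "Alice"}}
subscriptions = [(1, "voicemail"), (1, "roaming")]  # (customer_id, service_id)
services = {"voicemail": {"fee": 2.0}, "roaming": {"fee": 5.0}}

def services_relational(customer_id):
    """Join subscriptions against services for one customer."""
    return {sid: services[sid] for cid, sid in subscriptions if cid == customer_id}

# Cassandra style: one wide row per customer, keyed by the query itself
# ("all services for subscriber X"), duplicating service data per row.
services_by_customer = {
    1: {"voicemail": {"fee": 2.0}, "roaming": {"fee": 5.0}},
}

def services_denormalized(customer_id):
    """Single partition read; no join needed."""
    return services_by_customer.get(customer_id, {})

assert services_relational(1) == services_denormalized(1)
```

The trade-off this illustrates is the one the paper measures: the denormalized read touches one partition, at the cost of duplicating service data across customer rows.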
Cloud computing is a large-scale distributed computing paradigm that delivers dynamically scalable and virtualized resources as a service over the Internet. While cloud computing shares similarities with grid computing in aggregating computing power, it differs in offering abstracted, on-demand services on a massive scale driven by economies of scale. Key differences relate to business models, architecture, and resource management, though both paradigms face challenges in data-intensive applications and security.
Cloud Computing Building A Framework For Successful Transition Gtsi, by jerry0040
The document discusses cloud computing and provides a framework for government agencies to successfully transition to cloud solutions. It defines cloud computing and outlines its key characteristics and deployment models. The benefits of cloud computing for government include increased capacity at lower costs, reduced IT operating costs and resources, and improved collaboration. The document proposes a five-step Cloud Computing Maturity Model for agencies to follow: 1) consolidation of servers and resources, 2) virtualization, 3) automation, 4) utility or on-demand access, and 5) full adoption of cloud solutions. Each step builds upon the previous to help agencies maximize benefits while mitigating risks of the transition.
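The five-step maturity model above is strictly sequential, which can be sketched as a simple ordered progression. The step names come from the summary; the code structure is an assumption for illustration:

```python
# Minimal sketch of the five-step Cloud Computing Maturity Model.
MATURITY_STEPS = [
    "consolidation",   # 1) consolidate servers and resources
    "virtualization",  # 2) virtualize the consolidated estate
    "automation",      # 3) automate provisioning and operations
    "utility",         # 4) utility / on-demand access
    "cloud",           # 5) full adoption of cloud solutions
]

def next_step(current):
    """Each step builds on the previous; return the next one, or None at the top."""
    i = MATURITY_STEPS.index(current)
    return MATURITY_STEPS[i + 1] if i + 1 < len(MATURITY_STEPS) else None

assert next_step("virtualization") == "automation"
```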
1) The document discusses community cloud computing as an alternative model to the centralized vendor-driven cloud computing model. It proposes combining cloud computing with principles from grid computing, digital ecosystems, and green computing.
2) Key concerns with centralized cloud computing include lack of privacy, dependence on large data centers which harms resilience and sustainability, and lack of user control over their data and services.
3) The proposed community cloud computing (C3) model utilizes spare resources of networked personal computers to provide cloud-like services in a decentralized manner, addressing the concerns with centralized clouds.
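The C3 model's core mechanic is placing work on whichever networked PC currently has spare capacity. A toy scheduler under that assumption (peer names and idle thresholds are invented) might look like:

```python
# Illustrative sketch of community-cloud placement: pick the peer with
# the most spare CPU, if any peer is idle enough. Values are invented.
peers = {"pc-a": 0.20, "pc-b": 0.75, "pc-c": 0.50}  # fraction of CPU idle

def place_task(peers, min_idle=0.30):
    """Return the peer with the most spare CPU, or None if all are busy."""
    candidates = {p: idle for p, idle in peers.items() if idle >= min_idle}
    if not candidates:
        return None  # queue the task: no peer has spare capacity
    return max(candidates, key=candidates.get)

assert place_task(peers) == "pc-b"
```

A real C3 system would add the decentralization machinery the paper argues for (peer discovery, replication for resilience), but the placement decision reduces to this kind of spare-capacity selection.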
The document announces the Second International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2011) to be held in Rome, Italy from September 25-30, 2011. It calls for submissions of technical papers, position papers, surveys, and panel proposals on topics related to cloud computing, grids, and virtualization. Suggested topics include cloud technologies, services, platforms, applications, security and privacy challenges, and the relationships between cloud computing, grids, and virtualized environments. The conference aims to explore applications of cloud computing and identify open issues to address in these emerging technologies.
This document discusses two ArcGIS applications deployed in the cloud by the Forest Health Technology Enterprise Team (FHTET). A public Forest Pest Conditions Viewer application allows users to explore forest pest impact data. A secured Disturbance Mapper application uses remote sensing data to identify disturbed forest areas and enable analysis of the causes and effects of disturbances. Both applications were built with ArcGIS Server 10 and deployed to Amazon Web Services, demonstrating how custom ArcGIS applications can be quickly deployed to the cloud.
1. The document discusses effective storage management strategies for cloud computing environments. It describes different cloud configurations like private, public, and hybrid clouds.
2. Key aspects of storage management in clouds are data protection and recovery, data lifecycle management, storage utilization and optimization, and storage resource management.
3. IBM software solutions help with data migration between storage tiers, automating storage policies, and providing visibility into cloud storage resources and asset utilization.
- Federated networked cloud allows for rapid and on-demand provisioning of cloud connectivity across the world by establishing and managing distributed cloud resources within and across providers' domains.
- It enables enterprise users to deploy infrastructure in different datacenters and connect them over wide area networks on demand.
- Service providers can collaborate by bundling their cloud and network services to gain new business opportunities and provide unified solutions to customers.
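The federated provisioning flow in the bullets above can be sketched as an orchestrator that reserves resources in each provider domain and then records the WAN links stitching the sites together. Provider names, capacities, and the all-or-nothing policy are assumptions of this toy model:

```python
# Hedged sketch of federated, on-demand cloud provisioning across
# provider domains. Names and capacities are hypothetical.
from dataclasses import dataclass

@dataclass
class ProviderDomain:
    name: str
    free_vms: int

    def provision(self, count):
        """Reserve VMs in this domain, or refuse if capacity is short."""
        if count > self.free_vms:
            return None
        self.free_vms -= count
        return {"domain": self.name, "vms": count}

def provision_federated(domains, count_per_site):
    """Provision in every domain, then record the WAN links joining the sites."""
    sites = [d.provision(count_per_site) for d in domains]
    if any(s is None for s in sites):
        return None  # all-or-nothing: a real system would roll back here
    links = [(a["domain"], b["domain"])
             for i, a in enumerate(sites) for b in sites[i + 1:]]
    return {"sites": sites, "wan_links": links}

eu = ProviderDomain("eu-dc", free_vms=10)
us = ProviderDomain("us-dc", free_vms=10)
plan = provision_federated([eu, us], count_per_site=4)
assert plan["wan_links"] == [("eu-dc", "us-dc")]
```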
SDN and NFV Value in Business Services: Innovations in Network Monetization a..., by Alan Sardella
White paper submitted to the Society of Cable Telecommunications Engineers (SCTE) by Mazen Khaddem of Cox Communications and Dr. Loukas Paraschis of Cisco Systems. Paper covers technical reference design in SDN including the role of open source, orchestration and control, and the importance of a hybrid control plane for legacy, multivendor networks.
A Journey to the Future of Cloud-native Media Microservices, by Washington Cabral
The impact of digital transformation on the media and entertainment (M&E) sector is reshaping traditional media services and the way solution vendors think, design, and market their products. As audience consumption habits shift from traditional, linear viewing hours to pervasive any-time, any-device, any-location patterns, broadcasters find themselves hard pressed to keep their infrastructure up with a consumption model that defies both their operational and business models.
The document discusses the evolution of cloud computing strategies from early portrayals focusing on centralized cloud services to more hybrid models. It describes four emerging strategies - leveraging local devices, browsers, on-premise applications, and platforms. The author introduces TCS iON's strategy of "The Fourth Game", which represents the next evolution by offering IT-as-a-Service and commoditizing a wide range of IT capabilities and services to optimize local devices and deliver maximum benefits to SMB customers.
The document proposes a cloud based query solver system featuring ontology for educational institutes. It provides infrastructure as a service (IaaS) allowing students to access virtual machines and software as a service (SaaS) with a query solving interface. An ontology collects and categorizes data from different departments to generate inferences for common student queries and allows domain-specific redirection of questions. The implemented system designs an interface for students and professors and installs Eucalyptus cloud to create operating system instances for access.
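The ontology-driven redirection described above amounts to matching a question against per-department vocabularies. A minimal sketch of that routing step (the keyword lists are invented for illustration, not taken from the paper) might be:

```python
# Toy "ontology": keywords per department, used to redirect a student query.
ontology = {
    "admissions": {"fees", "enrollment", "application"},
    "examinations": {"exam", "results", "grade"},
    "it-support": {"login", "vm", "password"},
}

def route_query(question):
    """Return the department whose keywords best match the question."""
    words = set(question.lower().split())
    scores = {dept: len(words & kws) for dept, kws in ontology.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general-office"

assert route_query("Where can I see my exam results?") == "examinations"
```

A real ontology would also encode relationships between concepts (not just flat keyword sets), which is what lets the system generate inferences for common queries rather than only routing them.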
The document summarizes key telecom trends including the growth of connected devices and machines, big data challenges, cloud computing advances, emerging applications, and the evolution of networks towards more intelligent, automated, and distributed architectures. Major technology directions include the internet of things, content-centric networking, heterogeneous networks, virtualization, and the changing role of telecom operators.
CUTTING THROUGH THE FOG: UNDERSTANDING THE COMPETITIVE DYNAMICS IN CLOUD COMP..., by HarshitParkar6677
This document provides an introduction to cloud computing by defining key terms and conceptualizing the competitive landscape.
It begins with an operating definition of cloud computing as on-demand network access to dynamically scalable computing resources delivered as a service via pay-as-you-go models. The document then maps the emerging "stacks" of competition, including cloud applications, platforms, and infrastructure resources. It analyzes the strategies of providers across these stacks and introduces concepts of access networks, delivery models, and how competition is unfolding.
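The pay-as-you-go element of that operating definition is just metered usage multiplied by unit rates. A minimal sketch (the meters and rates are invented for illustration):

```python
# Sketch of pay-as-you-go billing: cost follows metered usage rather
# than a fixed fee. Rates are invented for the example.
RATES = {"vm_hours": 0.10, "gb_stored": 0.02, "gb_egress": 0.05}

def monthly_bill(usage):
    """Multiply each metered quantity by its unit rate and sum."""
    return sum(RATES[item] * qty for item, qty in usage.items())

bill = monthly_bill({"vm_hours": 200, "gb_stored": 50, "gb_egress": 10})
assert abs(bill - 21.50) < 1e-9  # 20.00 + 1.00 + 0.50
```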
Addressing the Challenges of Tactical Information Management in Net-Centric S..., by Angelo Corsaro
This paper provides an overview of the advantages provided by the OMG Data Distribution Service for Real-Time Systems (DDS) for addressing the challenges associated with Tactical Information distribution.
The document discusses Ericsson's 3D visual communication demo showcased at Mobile World Congress 2013. The demo aims to improve video conferencing by adding depth perception through 3D technology. It allows for more realistic and immersive communication. While the demo currently uses glasses, the goal is to deploy glasses-free auto-stereoscopic displays. Further work is needed to standardize new 3D video codecs and implement them in real-time for high quality 3D visual communication products and services. The technology brings benefits for user experience, businesses, sustainability and society.
This document discusses virtualizing private cloud resources to maximize utilization. It begins by introducing cloud computing and its service models of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It then explains how virtualizing an existing private cloud using virtualization software like VMware can enable multiple virtual clouds to run on a single physical machine. This "cloud within a cloud" approach significantly improves CPU and memory utilization compared to a single private cloud.
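The utilization claim behind the "cloud within a cloud" approach can be made concrete with a back-of-the-envelope calculation. The host size and workload numbers below are invented for illustration:

```python
# Sketch: packing several virtual clouds onto one host raises the
# host's CPU utilization. All numbers are invented for the example.
HOST_CPU = 32  # cores on the physical machine

def utilization(vm_core_demands, host_cores=HOST_CPU):
    """Fraction of the host's cores actually demanded by its VMs."""
    return min(sum(vm_core_demands), host_cores) / host_cores

single_cloud = utilization([6])        # one private cloud on the box
virtualized = utilization([6, 8, 10])  # three virtual clouds sharing it

assert single_cloud == 6 / 32
assert virtualized == 24 / 32
```

The same arithmetic applies to memory; the paper's point is that the single-cloud configuration strands most of the machine's capacity.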
Agenda 120213 Future Media Distribution using Information Centric Networks, by SAIL
The document announces a workshop on future media distribution using information-centric networks. It will bring together two large research projects involving partners from vendors, operators, media industry and academia. The workshop will discuss business models and technical solutions for future media distribution and explore how information-centric networks can enable more efficient distribution of media content over the internet. The day-long event will include keynote presentations, panels, and workshops on related topics such as measurement of quality of experience, content demand patterns, regulatory aspects, and the status of open source NetInf software resources.
This document discusses the basics of cloud computing, including definitions, characteristics, deployment strategies, service models, opportunities, challenges, and issues. Specifically, it defines cloud computing as on-demand access to configurable computing resources via the internet. The key characteristics of cloud computing are on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. The document also outlines the public, private, and hybrid deployment strategies as well as software (SaaS), platform (PaaS), and infrastructure (IaaS) service models. Finally, it discusses opportunities for end users, businesses and developers, and examines challenges related to security, performance, availability, cost, and regulatory requirements.
IBM Global Financing can tailor financing solutions to your specific IT needs, with competitive rates, flexible payment plans and loans, and asset buyback and disposal options.
This document describes different concepts and methodologies related to endogenous, sustainable local development. It explains that the social economy studies the satisfaction of human needs through social production and distribution. It also describes development at the national, regional, and local levels, along with development indicators and theories. Finally, it details participatory methodologies such as community diagnosis, learning projects, and project formulation to promote development from the local level.
This document describes the key competencies of a leader in complex 21st-century organizations. It argues that a leader must have the capacity to inspire and influence others to achieve common goals. It identifies five main competencies: communication, interpersonal, emotional, translation, and strategic. It also explains that communication is an essential competency for steering an organization toward the future. Finally, it concludes that to succeed, leaders must manage the communicational role from a complex, transdisciplinary perspective.
This document argues that implementing e-government in Venezuela's prisons would be beneficial for improving control and monitoring of the inmate population. Currently, inmates use social networks and are able to communicate and commit crimes from prison. E-government could help register inmate information more effectively and efficiently.
Este documento trata sobre el conocimiento científico. Discute que el conocimiento científico comienza con problemas prácticos y teóricos y busca la verdad objetiva a través de la eliminación de errores, aunque nunca puede alcanzar la certeza absoluta. También distingue entre verdad y certeza, señalando que hay verdades inciertas pero no certezas inciertas. Finalmente, rechaza el relativismo y el cientismo dogmático, afirmando su compromiso con el conocimiento crítico y falible.
El documento habla sobre la economía social y sus principios. Explica que el ser humano es un ser social que vive en civilizaciones y culturas. La economía social se basa en la producción como una actividad social cooperativa más que competitiva, con el objetivo de lograr el bienestar de una manera sostenible y solidaria. También recomienda varios libros sobre estos temas, incluyendo Economía Política de Oskar Lange.
Este documento describe el método Delphi, un método para lograr consenso entre expertos a través de múltiples rondas de encuestas. Se originó en RAND Corporation para pronósticos a largo plazo. El método involucra recopilar opiniones anónimas de expertos sobre un tema, compartirlas de forma agregada, y luego permitir que los expertos reevalúen sus respuestas. Esto conduce a un consenso del grupo. El método se usa comúnmente para pronósticos tecnológicos y de educación.
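The Delphi rounds described above can be sketched as a toy simulation: experts submit anonymous estimates, see the aggregated group view (here, the median), and revise toward it. The revision weight of 0.5 is an illustrative assumption, not part of the method's definition.

```python
# Toy sketch of Delphi consensus rounds. The 0.5 revision weight is an
# assumption for illustration; real panels revise with free-form judgment.
import statistics

def delphi_round(estimates, weight=0.5):
    """Each expert moves part-way toward the anonymous group median."""
    median = statistics.median(estimates)
    return [e + weight * (median - e) for e in estimates]

estimates = [10.0, 30.0, 50.0, 90.0]   # round 0: initial expert forecasts
for _ in range(4):                     # a few survey rounds
    estimates = delphi_round(estimates)

spread = max(estimates) - min(estimates)
print(estimates, spread)               # opinions converge toward consensus
```

The spread of opinions shrinks each round, which is the convergence the method relies on.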
Patokorpi: The Logic of Sherlock Holmes in Technology-Enhanced Learning (article) — LILI
This document discusses abductive reasoning and its role in technology enhanced learning. It begins by defining abduction as a method of reasoning that allows people to form hypotheses to explain observations or facts. The document then examines different forms of abductive inference like selective, creative, non-sentential, and manipulative abduction. It argues that abductive reasoning is important in technology enhanced learning, especially when using mobile devices, as learners must often reason under uncertainty. The document concludes by discussing how abduction represents a semiotic paradigm of knowledge where interpretation allows for new knowledge to be created.
The Fire of Revolutions Is Lit from the Internet — internacional, elunive… — LILI
Social networks served as a catalyst for the recent protests in Middle Eastern and African countries. Young people used Facebook, Twitter, and blogs to organize demonstrations against authoritarian and corrupt regimes. The widespread use of the internet and social networks made it possible for protests to spread rapidly and challenge established governments.
This presentation covers concepts of Cloud Computing and their implementation on Windows Azure and SQL Azure.
Regards,
Ing. Eduardo Castro, PhD
http://ecastrom.blogspot.com
http://comunidadwindows.org
Venezuelan Cloud Computing Presentation CONUVEN — uzcateguidf
This document presents an introduction to cloud computing. It explains the advantages and disadvantages, the multi-layer architecture, and business models such as Software as a Service, Platform as a Service, and Infrastructure as a Service. It also analyzes the current state of cloud computing technology and future research challenges.
A Managerial View of Conflict Management and Resolution — LILI
The European Union has proposed a new package of sanctions against Russia that includes an embargo on Russian oil. The embargo would be phased in over six months for crude oil and eight months for refined products. This sanctions package requires unanimous approval by the 27 EU member states.
This document discusses emotional intelligence. It briefly describes its history and how the concept evolved from 1943 to the present. It explains that emotional intelligence is the capacity to feel, understand, control, and modify one's own and others' emotional states. Finally, it compares emotional intelligence with intellectual intelligence and explains some of their differences.
This document discusses the importance of transparency in public administration in Venezuela. It explains that transparency lets citizens see clearly how public affairs are being handled and gives them a means to hold elected officials accountable. It points out key elements of transparency such as the identification of rules and responsibilities, access to public information, accountability systems, and citizen participation. It also mentions some Venezuelan laws that govern transparency at the…
The document discusses the different paradigms and dimensions of e-government, including the delivery of electronic services and the promotion of democratic and public policy-making processes. It also analyzes concepts such as new public management and governance, and how neo-institutionalism can help explain the interaction between public and private actors in the development of e-government initiatives.
Practical Application of the Concept of Prevention in the Field of Crime — LILI
This document presents different types of crime prevention, including primary, secondary, and tertiary prevention. It defines primary prevention as measures to modify criminogenic conditions and improve quality of life. Secondary prevention focuses on high-risk groups, while tertiary prevention seeks to prevent recidivism through social rehabilitation. It also discusses situational, community, and police crime prevention.
Don't Get Lost in the Clouds When You Hear About 'Cloud Computing' — LILI
The article offers a plain-English guide to "cloud computing", which ranges from simply storing your files online to running entire applications that live on servers rather than on your PC or Mac. The cloud refers to the internet and the servers around the world. Storing files or using applications in the cloud means they reside on a server accessed over the internet. More and more products and services rely on the cloud to store, shar…
This document outlines three metaphors of learning: knowledge acquisition, participation, and knowledge creation. It argues that a third metaphor of knowledge creation is needed to conceptualize learning in a knowledge society. It reviews three models that represent knowledge creation: Bereiter's knowledge building approach, Engestrom's expansive learning theory, and Nonaka and Takeuchi's model of knowledge creation in organizations. These models emphasize innovative and collaborative processes of developing new knowledge and artifacts, representing a "trialogical" approach beyond individual and social conceptions of learning. The knowledge creation metaphor conceptualizes learning as the advancement of shared objects through mediated collaborative processes.
The Use of ICTs to Make Schools Disappear — LILI
This document examines whether the use of information and communication technologies (ICTs) will make schools disappear. It looks at how technology and education can coexist and complement each other, and argues that schools will not disappear because in-person learning, reading, writing, and the social interactions that school offers remain important. It also stresses the importance of a humanist pedagogy focused on the student's integral development, beyo…
The Versatile Leader, pp. 3-7, 18 April 2011 — Opinión — El Universal — LILI
This document describes the importance of versatility in leadership. It explains that successful teams are made up of people with different styles and skills, and that a versatile leader can adapt to those styles and bring out the best in each person to achieve outstanding results. Using the example of the samurai master Miyamoto Musashi, who could adapt his fighting style to each opponent, it emphasizes that versatility is a key skill for prevailing in challenging situations.
J Internet Serv Appl (2010) 1:7–18, DOI 10.1007/s13174-010-00….docx — priestmanmable
This document provides an overview of cloud computing, including its key concepts, architectural principles, and research challenges. It defines cloud computing as a model enabling on-demand access to configurable computing resources via the internet. The document outlines the layered architecture of cloud computing and different service models like IaaS, PaaS, and SaaS. It also discusses types of clouds including public, private, hybrid, and virtual private clouds. The document aims to provide a better understanding of cloud computing design challenges and identify important research directions in this area.
This document discusses the impact of cloud computing on the IT industry. It begins by defining cloud computing and the various types of cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It then reviews how cloud computing transforms the IT industry by making software available as an online utility rather than installed locally. The document also examines cloud computing's ability to improve flexibility and reduce costs for IT services compared to traditional computing models. Finally, it analyzes how cloud computing architecture works by provisioning resources on-demand from large pools of virtualized hardware and software.
A Virtualization Model for Cloud Computing — Souvik Pal
Cloud computing is an emerging field in both the IT industry and research, driven by the fast-growing use of the internet. It provides on-demand network access to a collection of physical resources that can be provisioned according to the cloud user's needs under the supervision of the cloud service provider. From a business perspective, the commercial success of cloud computing and recent developments in grid computing have brought virtualization technology into the era of high-performance computing. Virtualization, which uses computer resources to imitate other computer resources or whole computers, is widely applied in modern data centers. This paper proposes a virtualization model for cloud computing that may lead to faster access and better performance, combining self-service capabilities with ready-to-use facilities for computing resources.
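The core virtualization idea in this abstract can be sketched as a hypervisor that carves virtual machines out of a pool of physical resources and provisions them on demand. This is a minimal illustration of the concept, not the paper's model; the class and field names are assumptions.

```python
# Hedged sketch: a hypervisor provisioning VMs from a physical resource pool.
# Names and capacities are illustrative, not taken from the paper.

class Hypervisor:
    def __init__(self, total_cpus, total_mem_gb):
        self.free_cpus = total_cpus
        self.free_mem = total_mem_gb
        self.vms = {}

    def provision(self, name, cpus, mem_gb):
        """Allocate a VM if the physical pool can satisfy the request."""
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            return False                       # pool exhausted
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = (cpus, mem_gb)
        return True

    def release(self, name):
        """Return a VM's resources to the pool (self-service teardown)."""
        cpus, mem_gb = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_mem += mem_gb

hv = Hypervisor(total_cpus=16, total_mem_gb=64)
hv.provision("web", 4, 8)
hv.provision("db", 8, 32)
print(hv.free_cpus, hv.free_mem)   # resources remaining in the pool
```

The self-service provision/release pair is what lets many tenants share one physical machine, which is the economic basis of the model described above.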
How Should I Prepare Your Enterprise For The Increased… — Claudia Brown
The document discusses different types of processor architectures: RISC, CISC, mainframe, and vector processors. It notes that each architecture has evolved over the years for different applications and purposes. Network processors in particular have evolved for networking applications. The document suggests that a comparative study of these architectures would provide insights into their relative strengths and weaknesses for various uses.
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. It allows users to access technology-based services from the network cloud without knowledge of, expertise with, or control over the underlying technology infrastructure that supports them. Key benefits of cloud computing include lower costs, better scalability and flexibility.
This document discusses cloud computing and related concepts:
1. Cloud computing is a model for delivering computing resources such as hardware and software via a network. Users can access scalable resources from the cloud without knowing details of the infrastructure.
2. Technologies like virtualization, distributed storage, and broadband internet access enable cloud computing. This shifts processing to large remote data centers managed by cloud providers.
3. For service providers, cloud computing offers benefits like reduced infrastructure costs and improved efficiency. For users, it provides flexible access to resources without upfront investment or management overhead.
The document discusses cloud computing, defining it as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Some key points:
- Cloud computing builds on distributed computing approaches like grid computing by centralizing computation and storage in distributed data centers managed by third parties.
- It aims to provide IT services on-demand with flexibility, availability, reliability and scalability using a utility computing model.
- Cloud computing architectures involve multiple cloud components communicating over APIs, resembling the Unix philosophy of multiple programs working together over universal interfaces.
Introduction to Cloud Computing and Cloud Infrastructure — SANTHOSHKUMARKL1
Introduction, Cloud Infrastructure: Cloud computing, Cloud computing delivery models and services, Ethical issues, Cloud vulnerabilities, Cloud computing at Amazon, Cloud computing the Google perspective, Microsoft Windows Azure and online services, Open-source software platforms for private clouds.
Efficient architectural framework of cloud computing Souvik Pal
This document discusses an efficient architectural framework for cloud computing. It begins by providing background on cloud computing and discusses challenges such as security, privacy, and reliability. It then proposes a new architectural framework that separates infrastructure as a service (IaaS) into three sub-modules: IaaS itself, a hypervisor monitoring environment (HME), and resources as a service (RaaS). The HME acts as middleware between IaaS and physical resources, using a hypervisor to allocate resources from a pool managed by RaaS. This proposed framework is intended to improve performance and access speed for cloud computing.
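The layering this abstract proposes can be sketched as follows: an IaaS request passes through a hypervisor monitoring environment (HME) that allocates from a pool tracked by RaaS. The layer names mirror the abstract, but the selection policy (place on the host with the most free CPUs) is an assumption added for illustration.

```python
# Hedged sketch of the IaaS -> HME -> RaaS layering. The "most free CPUs
# first" placement policy is an illustrative assumption, not the paper's.

class RaaS:
    """Resources-as-a-Service: tracks the physical resource pool."""
    def __init__(self, hosts):
        self.hosts = dict(hosts)            # host name -> free CPU count

class HME:
    """Hypervisor monitoring environment: middleware between IaaS and hosts."""
    def __init__(self, raas):
        self.raas = raas

    def allocate(self, cpus):
        # Pick the host with the most free CPUs that can fit the request.
        candidates = [h for h, free in self.raas.hosts.items() if free >= cpus]
        if not candidates:
            return None
        host = max(candidates, key=lambda h: self.raas.hosts[h])
        self.raas.hosts[host] -= cpus
        return host

class IaaS:
    """Front-end service: delegates placement decisions to the HME."""
    def __init__(self, hme):
        self.hme = hme

    def request_vm(self, cpus):
        return self.hme.allocate(cpus)

iaas = IaaS(HME(RaaS({"host-a": 8, "host-b": 16})))
print(iaas.request_vm(4))   # placed on the host with the most headroom
```

Keeping placement logic in the middleware layer means the IaaS front end never needs to know about individual hosts, which is the separation the framework argues for.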
Nowadays, work is increasingly done by renting space and resources from cloud providers in order to operate effectively and at lower cost. This paper describes the cloud, its challenges, evolution, and attacks, along with the approaches required to handle data in the cloud. Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. This review paper aims to raise awareness of this emerging technology, which saves costs for users.
The document discusses billing technologies used in cloud computing. It begins by explaining how cloud computing allows users to access computing resources over the internet as needed rather than purchasing hardware. It then explores how billing technologies in cloud computing allow users to pay based on actual usage, similar to utility bills, rather than fixed costs. Finally, it examines some key attributes cloud computing providers consider for billing, such as subscriptions, metered usage, resources and bandwidth used, data transfer amounts, and storage capacity.
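The utility-style, pay-per-use billing described above can be sketched as a small meter-to-invoice calculation over several of the attributes the document lists (metered usage, bandwidth, storage) plus a flat subscription. The rates below are made-up illustrative numbers, not any provider's pricing.

```python
# Hedged sketch of metered cloud billing. All rates are illustrative
# assumptions, not real provider prices.

RATES = {
    "cpu_hours":        0.05,   # $ per instance-hour of compute
    "storage_gb_month": 0.02,   # $ per GB-month stored
    "egress_gb":        0.09,   # $ per GB of data transferred out
}
SUBSCRIPTION = 10.00            # flat monthly fee (assumption)

def monthly_bill(usage):
    """Compute a pay-per-use bill from a usage-meter reading."""
    metered = sum(RATES[k] * v for k, v in usage.items())
    return round(SUBSCRIPTION + metered, 2)

usage = {"cpu_hours": 720, "storage_gb_month": 100, "egress_gb": 50}
print(monthly_bill(usage))   # 720*0.05 + 100*0.02 + 50*0.09 + 10.00
```

The point of the structure is that the bill tracks actual consumption, like a utility bill, rather than a fixed hardware cost.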
Opportunistic Job Sharing for Mobile Cloud Computing — ijccsa
Cloud computing marks the evolution of a new business era encompassing many technologies. These technologies take advantage of economies of scale and multi-tenancy to decrease the cost of information technology resources. Many organizations are eager to reduce their computing costs through virtualization, and this demand for lower computing cost and time has led to the innovation of cloud computing, which enhances computing through improved deployment, infrastructure costs, and processing time. Mobile computing and its applications on smartphones enable a new, rich user experience, but heavy use of the limited resources in smartphones creates problems with battery life, memory space, and CPU. To solve this, we propose a dynamic mobile cloud computing architecture framework that uses global resources instead of local ones. In this framework, sharing the job workload at runtime reduces both the load on the local client and the throughput time of the job over Wi-Fi connectivity.
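The runtime job-sharing decision in this abstract can be sketched as a simple placement rule: offload a job to the cloud when the phone is resource-constrained and Wi-Fi is available, otherwise run it locally. The specific thresholds below are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of a mobile-cloud offloading decision. The 30% battery,
# 80% CPU, and memory thresholds are illustrative assumptions.

def place_job(battery_pct, free_mem_mb, cpu_load_pct, wifi_up, job_mem_mb):
    """Return 'cloud' or 'local' for a job, based on device state."""
    constrained = (battery_pct < 30
                   or free_mem_mb < job_mem_mb
                   or cpu_load_pct > 80)
    if constrained and wifi_up:
        return "cloud"       # share the workload over Wi-Fi
    return "local"           # enough local resources (or no connectivity)

print(place_job(battery_pct=15, free_mem_mb=900, cpu_load_pct=40,
                wifi_up=True, job_mem_mb=200))    # low battery -> cloud
print(place_job(battery_pct=80, free_mem_mb=900, cpu_load_pct=40,
                wifi_up=True, job_mem_mb=200))    # healthy device -> local
```

Note that without Wi-Fi the job stays local regardless of resource pressure, which matches the framework's reliance on connectivity for job sharing.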
Green Cloud Computing: Emerging Technology — IRJET Journal
This document discusses green cloud computing and how cloud infrastructure contributes to high energy consumption. It summarizes that while cloud computing provides cost and scalability benefits, the growing demand on data centers has increased energy usage and carbon emissions. However, the document also explains that cloud computing technologies like dynamic provisioning, multi-tenancy, high server utilization, and efficient data center design can help reduce the environmental impact and enable more sustainable "green" cloud computing through higher efficiency. Future research directions are needed to further optimize cloud resource usage and energy efficiency from a holistic perspective.
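The energy argument for high server utilization above can be made concrete with a back-of-the-envelope model: packing the same workload onto fewer, better-utilized servers cuts the idle-power overhead. The power figures (200 W idle, 300 W peak, linear in utilization) are common modeling assumptions, not measurements from the document.

```python
# Hedged sketch of the consolidation argument for green cloud computing.
# The linear power model and wattages are illustrative assumptions.

def server_power(utilization, idle_w=200.0, peak_w=300.0):
    """Linear power model: idle draw plus a utilization-proportional part."""
    return idle_w + (peak_w - idle_w) * utilization

def cluster_power(total_load, servers):
    """Spread total_load (in units of one server's capacity) evenly."""
    return servers * server_power(total_load / servers)

load = 4.0                                   # work equal to 4 busy servers
sprawled = cluster_power(load, servers=16)   # 25% average utilization
packed = cluster_power(load, servers=5)      # 80% average utilization
print(sprawled, packed)                      # consolidation saves energy
```

Because idle draw dominates at low utilization, the consolidated cluster uses far less power for the same work, which is the efficiency claim behind dynamic provisioning.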
This document discusses the core concepts of cloud computing. It begins by explaining how cloud computing evolved from earlier technologies like mainframe computing, client-server systems, virtualization, distributed computing, and internet technologies. It then defines the key aspects of cloud computing models, including service models (IaaS, PaaS, SaaS) and deployment models (private, public, hybrid cloud). The document also outlines some of the core desired features of cloud computing like self-service, elasticity, metering and billing, and customization. Finally, it discusses some challenges and risks of cloud computing including security, privacy, trust issues as well as dependency on the cloud infrastructure.
This document provides a comprehensive study of cloud computing. It discusses cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It explores the benefits of cloud computing such as scalability, flexibility, and reduced costs. However, it also notes adoption concerns such as security, privacy, internet dependency, and availability. The document concludes by stating that vertical scalability presents a technical challenge in cloud environments.
This document provides a review and comparison of cloud and grid computing. It begins with definitions of cloud computing and discusses the key characteristics of clouds such as scalability, on-demand access to resources, and utility-based pricing. It then describes the various cloud deployment models including public, private, hybrid and community clouds. The document outlines the architecture of cloud computing including virtual machines, physical machines, and a high-level market-oriented model. It provides Amazon's GrepTheWeb application as an example of a cloud-based architecture and discusses its scalability and cost advantages. In the conclusion, the document compares clouds to grids and their similarities and differences.
This document provides a 3 paragraph summary of a seminar report on cloud computing submitted by Rahul Gupta to his professor Shraddha Khenka. The report acknowledges those who contributed to advancements in internet and computing technologies that enable cloud computing. It includes an introduction to cloud computing, comparisons to other technologies, economics of cloud computing, architectural layers, key features, deployment models, and issues. The summary covers the essential topics and information presented in the seminar report on cloud computing.
Cloud computing is becoming an increasingly popular enterprise model in which computing resources are made available on demand to the user as needed. The unique value proposition of cloud computing creates new opportunities to align IT and business goals. Cloud computing uses internet technologies to deliver IT-enabled capabilities "as a service" to any required users; through cloud computing we can access anything we need from anywhere on any computer without worrying about storage, cost, or administration. In this paper, I give a comprehensive survey of the motivating factors for adopting cloud computing and review the several cloud deployment and service models. The paper also investigates certain advantages of cloud computing over traditional IT service environments: scalability, flexibility, reduced capital expenditure, and higher resource utilization are considered reasons for adopting a cloud computing environment. I also include security, privacy, internet dependency, and availability as avoidance issues, the latter encompassing vertical scalability as a technical challenge in the cloud environment.
A detailed study of cloud computing is presented. Starting from the basics, its characteristics and different modalities are examined, and the pros and cons of cloud computing are highlighted. The service models of cloud computing are also lucidly described.
Similar to Cloud Computing: The State of the Art and Research Challenges (20)
The Llanos are among the richest tropical savannas in the world, with more than 100 mammal species and 700 bird species. The Orinoco River, one of the largest on the planet by flow, runs through this region of dry forests, savannas, and seasonal floodplains before feeding swamp forests and mangroves at its mouth on the Atlantic Ocean.
The document discusses an experience investigating pedagogical support for academic communities of teachers, with the goal of understanding how to accompany teachers in navigating the complexities of teaching practice and education culture. The experience aims to promote ongoing dynamism, interactivity, dialogue, and active learning among teachers around topics like pedagogical work, teaching practices, citizenship education, scientific knowledge, and the link between theory and practice.
The second-semester bulletin board of Misión Ribas at the Liceo Bolívar in San Antonio de los Altos recognizes nine graduates, including Kevin Jesús Niño Castillo, Saidy Katterin Oropeza Ojeda, and Daniela del Carmen Frikke Espinoza. The board celebrates these students' success in July 2011.
This document summarizes a study of artificial lighting conditions at the Liceo Bolivariano Luis Eduardo Egui Arocha. The study found deficient lighting in the classrooms, bathrooms, and common areas. It concludes that lighting is generally insufficient and recommends better lamp maintenance, light colors on walls and furniture, and installing new lamps in bathrooms and common areas.
This project sought to restore the physical space of a classroom at the Liceo Luis Eduardo Egui Arocha in San Antonio de los Altos, Miranda. Students observed problems such as broken windows, leaks, missing paint, and damaged equipment. Their surveys showed that students consider an adequate space important. They carried out repairs that improved the classroom for the students' benefit.
This document describes a study of the physical maintenance of classroom A-2 at the Liceo Bolivariano "Luis Eduardo Egui Arocha" in Venezuela, where the fourth semester of Misión Ribas is held. The study found that the classroom's walls and windows have deteriorated due to lack of maintenance and care, creating a negative environment for students. It recommends improvements to provide a more pleasant study space.
The document compares two qualitative methods of social research: Alfred Schütz's phenomenology and participatory research. Schütz's phenomenology is based on analyzing everyday experience to understand the social world, while participatory research emerged from the need for social research to be more relevant and to be carried out collaboratively with communities. Both methods have influenced the social sciences and continue to evolve through critique and self-critique.
The document presents a theoretical model of effective management for the Instituto Nacional de Orientación Femenina (I.N.O.F.) in Los Teques, Venezuela. The I.N.O.F. currently suffers from overcrowding, drug trafficking, human rights violations, and poor health and hygiene conditions. The model proposes implementing transdisciplinary, flexible management to face the challenges of a changing environment, promoting learning and teamwork to improve conditions and achieve…
The document presents a critical analysis of a doctoral thesis in progress on a theoretical model of effective management at the Instituto Nacional de Orientación Femenina in Los Teques, Venezuela. The research will address the problems of women's prisons from a gender perspective through a qualitative approach, studying aspects such as the history of prisons, the penitentiary structure, the family environment, the institute's population, and human rights, applying phenomenology.
Critical Analysis of the Thesis in Progress (table + analysis) — LILI
This document summarizes three challenges facing the new head of Venezuela's Ministry of Penitentiary Services, Iris Varela. Specialists consider that Varela must address prison overcrowding, corruption in the prison system, and the idleness of the inmate population. The document comes from a news report published in the Venezuelan daily El Nacional.
The poem wishes good things for the person it addresses: that they love and be loved; have loyal friends and also fair enemies; be useful but not irreplaceable; be tolerant even of those who err often; experience the different ages of life in a balanced way; discover beauty in nature and in the beings around them; have money but also understand its true value; and, if they have a partner, that the partner be good company. The aim is to wish a life full of experi…
Bananas provide a great deal of energy and nutrients that benefit health in several ways. They contain sugars, fiber, vitamins, and minerals that help boost energy, improve mood, lower blood pressure, prevent anemia, and relieve problems such as heartburn, nausea, and insect bites. Eating bananas regularly may also help prevent conditions such as strokes and help maintain a healthy weight.
This document presents a summary of Alfred Schütz's phenomenological method. It briefly covers Schütz's biography and how he was inspired by thinkers such as Husserl, Bergson, and the Vienna school. It also defines the principles of phenomenology and describes how Schütz sought to understand the formation of meaning in social action through the study of the world of everyday life. Finally, it proposes applying this phenomenological method to the analysis of effective management in penit…
This document presents a summary of Alfred Schütz's phenomenological method. It briefly covers Schütz's biography and how he was inspired by thinkers such as Husserl, Bergson, and the Viennese school of economics. It also defines the principles of phenomenology and describes key Schützian concepts such as social reality, the lifeworld, and the biographical situation. The document analyzes Schütz's philosophy and his focus on understanding the meanings of social action in the everyday world.
This document summarizes taxpayers' duties under Venezuela's Organic Tax Code and includes a classification of taxes. It explains that municipal taxes are levies, fees, and contributions created to finance local services and improve infrastructure. Finally, it notes that one of Venezuela's goals is to have a modern, efficient tax administration that assesses, settles, and collects taxes with low costs and low evasion.
The document describes the Integrated Ballistic Identification System (IBIS), which extracts a unique "signature" from ballistic samples entered into it and compares them against a database to identify possible matches. IBIS uses cameras, microscopes, and sophisticated software to digitize images of the marks on bullets and spent cartridge cases, extract their mathematical "signatures", and run automatic correlation analyses. The examinations seek to identify weapons us…
The document describes the principles and methods of criminalistic identification. It explains that identification rests on the principles of identity and causality and applies the deductive scientific method together with four principles: exchange, correspondence, reconstruction, and probability. It also describes the procedures for protecting the crime scene and collecting evidence, and identification methods such as anthropometric, fingerprint, dental, and ophthalmological techniques.
Report on the Diploma Course in Crime Prevention and Citizen Security — LILI
This document summarizes the author's lessons on socialist and community leadership. It explains that a socialist leader puts collective interests before individual ones and seeks to serve others, unlike an individualistic, selfish leader. It also describes how Communal Councils allow communities to organize and manage projects to meet their needs in a participatory, democratic way. The author concludes that personal and talent development is key for the leading organizations of socialism of the…
This document presents several reported cases of people allegedly drugged with burundanga. It describes how burundanga can be administered in food, drinks, or other objects and cause loss of consciousness and will. It also includes opinions from expert toxicologists who question the severity of the effects attributed to burundanga and note the lack of scientific evidence for many of the reported cases.
A doctor warns about a new drug called Progesterex that is being used by rapists at parties to abuse their victims. The drug dissolves easily in drinks and causes permanent sterilization in women. It is also used together with Rohypnol to cause amnesia and facilitate organ theft.
Driving Business Innovation: Latest Generative AI Advancements & Success Story — Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
J Internet Serv Appl (2010) 1: 7–18

Easy access: Services hosted in the cloud are generally web-based. Therefore, they are easily accessible through a variety of devices with Internet connections. These devices not only include desktop and laptop computers, but also cell phones and PDAs.

Reducing business risks and maintenance expenses: By outsourcing the service infrastructure to the clouds, a service provider shifts its business risks (such as hardware failures) to infrastructure providers, who often have better expertise and are better equipped for managing these risks. In addition, a service provider can cut down the hardware maintenance and staff training costs.

However, although cloud computing has shown considerable opportunities to the IT industry, it also brings many unique challenges that need to be carefully addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementations as well as research challenges. Our aim is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this fascinating topic.

The remainder of this paper is organized as follows. In Sect. 2 we provide an overview of cloud computing and compare it with other related technologies. In Sect. 3, we describe the architecture of cloud computing and present its design principles. The key features and characteristics of cloud computing are detailed in Sect. 4. Section 5 surveys the commercial products as well as the current technologies used for cloud computing. In Sect. 6, we summarize the current research topics in cloud computing. Finally, the paper concludes in Sect. 7.

2 Overview of cloud computing

This section presents a general overview of cloud computing, including its definition and a comparison with related concepts.

2.1 Definitions

The main idea behind cloud computing is not a new one. John McCarthy in the 1960s already envisioned that computing facilities would be provided to the general public like a utility [39]. The term "cloud" has also been used in various contexts, such as describing large ATM networks in the 1990s. However, it was after Google's CEO Eric Schmidt used the word to describe the business model of providing services across the Internet in 2006 that the term really started to gain popularity. Since then, the term cloud computing has been used mainly as a marketing term in a variety of contexts to represent many different ideas. Certainly, the lack of a standard definition of cloud computing has generated not only market hype, but also a fair amount of skepticism and confusion. For this reason, there has recently been work on standardizing the definition of cloud computing. As an example, the work in [49] compared over 20 different definitions from a variety of sources to confirm a standard definition. In this paper, we adopt the definition of cloud computing provided by The National Institute of Standards and Technology (NIST) [36], as it covers, in our opinion, all the essential aspects of cloud computing:

NIST definition of cloud computing: Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

The main reason for the existence of different perceptions of cloud computing is that cloud computing, unlike other technical terms, is not a new technology, but rather a new operations model that brings together a set of existing technologies to run business in a different way. Indeed, most of the technologies used by cloud computing, such as virtualization and utility-based pricing, are not new. Instead, cloud computing leverages these existing technologies to meet the technological and economic requirements of today's demand for information technology.

2.2 Related technologies

Cloud computing is often compared to the following technologies, each of which shares certain aspects with cloud computing:

Grid Computing: Grid computing is a distributed computing paradigm that coordinates networked resources to achieve a common computational objective. The development of Grid computing was originally driven by scientific applications, which are usually computation-intensive. Cloud computing is similar to Grid computing in that it also employs distributed resources to achieve application-level objectives. However, cloud computing takes one step further by leveraging virtualization technologies at multiple levels (hardware and application platform) to realize resource sharing and dynamic resource provisioning.

Utility Computing: Utility computing represents the model of providing resources on-demand and charging customers based on usage rather than a flat rate. Cloud computing can be perceived as a realization of utility computing. It adopts a utility-based pricing scheme entirely for economic reasons. With on-demand resource provisioning and utility-based pricing, service providers can truly maximize resource utilization and minimize their operating costs.

Virtualization: Virtualization is a technology that abstracts away the details of physical hardware and provides virtualized resources for high-level applications. A virtualized server is commonly called a virtual machine (VM). Virtualization forms the foundation of cloud computing, as it provides the capability of pooling computing resources from clusters of servers and dynamically assigning or reassigning virtual resources to applications on-demand.

Autonomic Computing: Originally coined by IBM in 2001, autonomic computing aims at building computing systems capable of self-management, i.e. reacting to internal and external observations without human intervention. The goal of autonomic computing is to overcome the management complexity of today's computer systems. Although cloud computing exhibits certain autonomic features, such as automatic resource provisioning, its objective is to lower the resource cost rather than to reduce system complexity.

In summary, cloud computing leverages virtualization technology to achieve the goal of providing computing resources as a utility. It shares certain aspects with grid computing and autonomic computing but differs from them in other aspects. Therefore, it offers unique benefits and imposes distinctive challenges to meet its requirements.

3 Cloud computing architecture

This section describes the architectural, business and various operation models of cloud computing.

3.1 A layered model of cloud computing

Generally speaking, the architecture of a cloud computing environment can be divided into 4 layers: the hardware/datacenter layer, the infrastructure layer, the platform layer and the application layer, as shown in Fig. 1. We describe each of them in detail:

Fig. 1 Cloud computing architecture

The hardware layer: This layer is responsible for managing the physical resources of the cloud, including physical servers, routers, switches, and power and cooling systems. In practice, the hardware layer is typically implemented in data centers. A data center usually contains thousands of servers that are organized in racks and interconnected through switches, routers or other fabrics. Typical issues at the hardware layer include hardware configuration, fault-tolerance, traffic management, and power and cooling resource management.

The infrastructure layer: Also known as the virtualization layer, the infrastructure layer creates a pool of storage and computing resources by partitioning the physical resources using virtualization technologies such as Xen [55], KVM [30] and VMware [52]. The infrastructure layer is an essential component of cloud computing, since many key features, such as dynamic resource assignment, are only made available through virtualization technologies.

The platform layer: Built on top of the infrastructure layer, the platform layer consists of operating systems and application frameworks. The purpose of the platform layer is to minimize the burden of deploying applications directly into VM containers. For example, Google App Engine operates at the platform layer to provide API support for implementing the storage, database and business logic of typical web applications.

The application layer: At the highest level of the hierarchy, the application layer consists of the actual cloud applications. Different from traditional applications, cloud applications can leverage the automatic-scaling feature to achieve better performance, availability and lower operating cost.

Compared to traditional service hosting environments such as dedicated server farms, the architecture of cloud computing is more modular. Each layer is loosely coupled with the layers above and below, allowing each layer to evolve separately. This is similar to the design of the OSI model for network protocols. The architectural modularity allows cloud computing to support a wide range of application requirements while reducing management and maintenance overhead.

3.2 Business model

Cloud computing employs a service-driven business model. In other words, hardware and platform-level resources are provided as services on an on-demand basis. Conceptually, every layer of the architecture described in the previous section can be implemented as a service to the layer above. Conversely, every layer can be perceived as a customer of the layer below. However, in practice, clouds offer services that can be grouped into three categories: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).

1. Infrastructure as a Service: IaaS refers to on-demand provisioning of infrastructural resources, usually in terms of VMs. The cloud owner who offers IaaS is called an IaaS provider. Examples of IaaS providers include Amazon EC2 [2], GoGrid [15] and Flexiscale [18].
2. Platform as a Service: PaaS refers to providing platform-layer resources, including operating system support and software development frameworks. Examples of PaaS providers include Google App Engine [20], Microsoft Windows Azure [53] and Force.com [41].
3. Software as a Service: SaaS refers to providing on-demand applications over the Internet. Examples of SaaS providers include Salesforce.com [41], Rackspace [17] and SAP Business ByDesign [44].

The business model of cloud computing is depicted in Fig. 2. According to the layered architecture of cloud computing, it is entirely possible that a PaaS provider runs its cloud on top of an IaaS provider's cloud. However, in current practice, IaaS and PaaS providers are often parts of the same organization (e.g., Google and Salesforce). This is why PaaS and IaaS providers are often called the infrastructure providers or cloud providers [5].

Fig. 2 Business model of cloud computing

3.3 Types of clouds

There are many issues to consider when moving an enterprise application to the cloud environment. For example, some service providers are mostly interested in lowering operation cost, while others may prefer high reliability and security. Accordingly, there are different types of clouds, each with its own benefits and drawbacks:

Public clouds: A cloud in which service providers offer their resources as services to the general public. Public clouds offer several key benefits to service providers, including no initial capital investment on infrastructure and shifting of risks to infrastructure providers. However, public clouds lack fine-grained control over data, network and security settings, which hampers their effectiveness in many business scenarios.

Private clouds: Also known as internal clouds, private clouds are designed for exclusive use by a single organization. A private cloud may be built and managed by the organization or by external providers. A private cloud offers the highest degree of control over performance, reliability and security. However, private clouds are often criticized for being similar to traditional proprietary server farms and for not providing benefits such as no up-front capital costs.

Hybrid clouds: A hybrid cloud is a combination of public and private cloud models that tries to address the limitations of each approach. In a hybrid cloud, part of the service infrastructure runs in private clouds while the remaining part runs in public clouds. Hybrid clouds offer more flexibility than both public and private clouds. Specifically, they provide tighter control and security over application data compared to public clouds, while still facilitating on-demand service expansion and contraction. On the down side, designing a hybrid cloud requires carefully determining the best split between the public and private cloud components.

Virtual Private Cloud: An alternative solution to addressing the limitations of both public and private clouds is called a Virtual Private Cloud (VPC). A VPC is essentially a platform running on top of public clouds. The main difference is that a VPC leverages virtual private network (VPN) technology that allows service providers to design their own topology and security settings such as firewall rules. VPC is essentially a more holistic design, since it virtualizes not only servers and applications, but also the underlying communication network. Additionally, for most companies, VPC provides a seamless transition from a proprietary service infrastructure to a cloud-based infrastructure, owing to the virtualized network layer.

For most service providers, selecting the right cloud model is dependent on the business scenario. For example, computation-intensive scientific applications are best deployed on public clouds for cost-effectiveness. Arguably, certain types of clouds will be more popular than others.
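The "best split" question raised for hybrid clouds above can be made concrete with a toy cost model. The sketch below is illustrative only and not from the paper: the rates and the demand profile are hypothetical. It assumes private capacity is paid for around the clock whether used or not, while any demand above it bursts to a public cloud at a higher pay-per-use rate, and then searches for the private capacity that minimizes total cost.

```python
# Illustrative sketch (not from the paper): sizing the private part of a
# hybrid cloud under a simple cost model.
#
# Hypothetical assumptions:
#   - owned private capacity costs PRIVATE_RATE per server-hour, used or not
#   - demand above private capacity bursts to a public cloud at PUBLIC_RATE
#     per server-hour (utility-based pricing)

PRIVATE_RATE = 0.04   # $/server-hour, amortized cost of owned capacity
PUBLIC_RATE = 0.10    # $/server-hour, on-demand public cloud price

def total_cost(private_capacity, hourly_demand):
    """Cost of serving a demand profile (servers needed in each hour)."""
    private = PRIVATE_RATE * private_capacity * len(hourly_demand)
    burst = PUBLIC_RATE * sum(max(0, d - private_capacity) for d in hourly_demand)
    return private + burst

def best_split(hourly_demand):
    """Brute-force the private capacity that minimizes total cost."""
    return min(range(max(hourly_demand) + 1),
               key=lambda cap: total_cost(cap, hourly_demand))

# A bursty 24-hour demand profile: a steady base load plus an evening peak.
demand = [10] * 18 + [40] * 4 + [10] * 2

cap = best_split(demand)
print(cap, round(total_cost(cap, demand), 2))  # → 10 21.6
```

With this toy profile the minimizer keeps only the steady base load (10 servers) in the private cloud and bursts the short evening peak to the public cloud; a real decision would also have to weigh the data control and security concerns the paper raises, which this cost-only model ignores.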
In particular, it was predicted that hybrid clouds will be the dominant type for most organizations [14]. However, virtual private clouds have started to gain more popularity since their inception in 2009.

4 Cloud computing characteristics

Cloud computing provides several salient features that are different from traditional service computing, which we summarize below:

Multi-tenancy: In a cloud environment, services owned by multiple providers are co-located in a single data center. The performance and management issues of these services are shared among service providers and the infrastructure provider. The layered architecture of cloud computing provides a natural division of responsibilities: the owner of each layer only needs to focus on the specific objectives associated with this layer. However, multi-tenancy also introduces difficulties in understanding and managing the interactions among various stakeholders.

Shared resource pooling: The infrastructure provider offers a pool of computing resources that can be dynamically assigned to multiple resource consumers. Such dynamic resource assignment capability provides much flexibility to infrastructure providers for managing their own resource usage and operating costs. For instance, an IaaS provider can leverage VM migration technology to attain a high degree of server consolidation, hence maximizing resource utilization while minimizing costs such as power consumption and cooling.

Geo-distribution and ubiquitous network access: Clouds are generally accessible through the Internet and use the Internet as a service delivery network. Hence any device with Internet connectivity, be it a mobile phone, a PDA or a laptop, is able to access cloud services. Additionally, to achieve high network performance and localization, many of today's clouds consist of data centers located at many locations around the globe. A service provider can easily leverage geo-diversity to achieve maximum service utility.

Service oriented: As mentioned previously, cloud computing adopts a service-driven operating model. Hence it places a strong emphasis on service management. In a cloud, each IaaS, PaaS and SaaS provider offers its service according to the Service Level Agreement (SLA) negotiated with its customers. SLA assurance is therefore a critical objective of every provider.

Dynamic resource provisioning: One of the key features of cloud computing is that computing resources can be obtained and released on the fly. Compared to the traditional model that provisions resources according to peak demand, dynamic resource provisioning allows service providers to acquire resources based on the current demand, which can considerably lower the operating cost.

Self-organizing: Since resources can be allocated or deallocated on-demand, service providers are empowered to manage their resource consumption according to their own needs. Furthermore, the automated resource management feature yields high agility that enables service providers to respond quickly to rapid changes in service demand, such as the flash crowd effect.

Utility-based pricing: Cloud computing employs a pay-per-use pricing model. The exact pricing scheme may vary from service to service. For example, a SaaS provider may rent a virtual machine from an IaaS provider on a per-hour basis. On the other hand, a SaaS provider that provides on-demand customer relationship management (CRM) may charge its customers based on the number of clients it serves (e.g., Salesforce). Utility-based pricing lowers service operating cost as it charges customers on a per-use basis. However, it also introduces complexities in controlling the operating cost. In this perspective, companies like VKernel [51] provide software to help cloud customers understand, analyze and cut down unnecessary spending on resource consumption.

5 State-of-the-art

In this section, we present the state-of-the-art implementations of cloud computing. We first describe the key technologies currently used for cloud computing. Then, we survey the popular cloud computing products.

5.1 Cloud computing technologies

This section provides a review of technologies used in cloud computing environments.

5.1.1 Architectural design of data centers

A data center, which is home to the computation power and storage, is central to cloud computing and contains thousands of devices like servers, switches and routers. Proper planning of this network architecture is critical, as it will heavily influence application performance and throughput in such a distributed computing environment. Further, scalability and resiliency features need to be carefully considered.

Currently, a layered approach is the basic foundation of the network architecture design, which has been tested in some of the largest deployed data centers. The basic layers of a data center consist of the core, aggregation, and access layers, as shown in Fig. 3. The access layer is where the servers in racks physically connect to the network. There are typically 20 to 40 servers per rack, each connected to an access switch with a 1 Gbps link. Access switches usually connect to two aggregation switches for redundancy with 10 Gbps links. The aggregation layer usually provides important functions, such as domain service, location service, server load balancing, and more. The core layer provides connectivity to multiple aggregation switches and provides a resilient routed fabric with no single point of failure. The core routers manage traffic into and out of the data center.

Fig. 3 Basic layered design of data center network infrastructure

A popular practice is to leverage commodity Ethernet switches and routers to build the network infrastructure. In different business solutions, the layered network infrastructure can be elaborated to meet specific business challenges. Basically, the design of a data center network architecture should meet the following objectives [1, 21–23, 35]:

Uniform high capacity: The maximum rate of a server-to-server traffic flow should be limited only by the available capacity on the network-interface cards of the sending and receiving servers, and assigning servers to a service should be independent of the network topology. It should be possible for an arbitrary host in the data center to communicate with any other host in the network at the full bandwidth of the network.

Scalability: The network infrastructure must be able to scale to a large number of servers and allow for incremental expansion.

Backward compatibility: The network infrastructure should be backward compatible with switches and routers running Ethernet and IP. Because existing data centers have commonly leveraged commodity Ethernet and IP based devices, they should also be used in the new architecture without major modifications.

Another area of rapid innovation in the industry is the design and deployment of shipping-container based, modular data centers (MDC). In an MDC, normally up to a few thousand servers are interconnected via switches to form the network infrastructure. Highly interactive applications, which are sensitive to response time, are suitable for geo-diverse MDCs placed close to major population areas. The MDC also helps with redundancy, because not all areas are likely to lose power, experience an earthquake, or suffer riots at the same time. Rather than the three-layered approach discussed above, Guo et al. [22, 23] proposed server-centric, recursively defined network structures for MDCs.

5.1.2 Distributed file system over clouds

Google File System (GFS) [19] is a proprietary distributed file system developed by Google and specially designed to provide efficient, reliable access to data using large clusters of commodity servers. Files are divided into chunks of 64 megabytes, and are usually appended to or read, and only extremely rarely overwritten or shrunk. Compared with traditional file systems, GFS is designed and optimized to run on data centers to provide extremely high data throughputs and low latency, and to survive individual server failures.

Inspired by GFS, the open source Hadoop Distributed File System (HDFS) [24] stores large files across multiple machines. It achieves reliability by replicating the data across multiple servers. Similarly to GFS, data is stored on multiple geo-diverse nodes. The file system is built from a
its local network interface.
cluster of data nodes, each of which serves blocks of data
Free VM migration: Virtualization allows the entire VM
over the network using a block protocol specific to HDFS.
state to be transmitted across the network to migrate a VM
Data is also provided over HTTP, allowing access to all con-
from one physical machine to another. A cloud comput-
tent from a web browser or other types of clients. Data nodes
ing hosting service may migrate VMs for statistical multi-
can talk to each other to rebalance data distribution, to move
plexing or dynamically changing communication patterns
copies around, and to keep the replication of data high.
to achieve high bandwidth for tightly coupled hosts or to
achieve variable heat distribution and power availability in 5.1.3 Distributed application framework over clouds
the data center. The communication topology should be de-
signed so as to support rapid virtual machine migration. HTTP-based applications usually conform to some web ap-
Resiliency: Failures will be common at scale. The net- plication framework such as Java EE. In modern data center
work infrastructure must be fault-tolerant against various environments, clusters of servers are also used for computa-
types of server failures, link outages, or server-rack failures. tion and data-intensive jobs such as financial trend analysis,
Existing unicast and multicast communications should not or film animation.
be affected to the extent allowed by the underlying physical MapReduce [16] is a software framework introduced by
connectivity. Google to support distributed computing on large data sets
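The programming model can be illustrated with a toy, single-process word count in Python; this is only a sketch of the map/shuffle/reduce phases, not the distributed framework itself, which partitions these phases across many nodes:

```python
from collections import defaultdict

# Toy word count in the MapReduce style: the map phase emits
# (key, value) pairs, a shuffle step groups values by key, and
# the reduce phase folds each group into a final result.

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cloud", "the data center"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts == {"the": 2, "cloud": 1, "data": 1, "center": 1}
```

In the real framework, map and reduce tasks run on different machines, and the shuffle step moves intermediate data across the network, which is why task placement relative to the data matters so much.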
MapReduce consists of one Master, to which client applications submit MapReduce jobs. The Master pushes work out to available task nodes in the data center, striving to keep the tasks as close to the data as possible. The Master knows which node contains the data, and which other hosts are nearby. If the task cannot be hosted on the node where the data is stored, priority is given to nodes in the same rack. In this way, network traffic on the main backbone is reduced, which also helps to improve throughput, as the backbone is usually the bottleneck. If a task fails or times out, it is rescheduled. If the Master fails, all ongoing tasks are lost. The Master records what it is up to in the filesystem; when it starts up, it looks for any such data, so that it can restart work from where it left off.

The open source Hadoop MapReduce project [25] is inspired by Google's work. Currently, many organizations are using Hadoop MapReduce to run large data-intensive computations.

5.2 Commercial products

In this section, we provide a survey of some of the dominant cloud computing products.

5.2.1 Amazon EC2

Amazon Web Services (AWS) [3] is a set of cloud services providing cloud-based computation, storage and other functionality that enable organizations and individuals to deploy applications and services on an on-demand basis and at commodity prices. Amazon Web Services' offerings are accessible over HTTP, using REST and SOAP protocols.

Amazon Elastic Compute Cloud (Amazon EC2) enables cloud users to launch and manage server instances in data centers using APIs or available tools and utilities. EC2 instances are virtual machines running on top of the Xen virtualization engine [55]. After creating and starting an instance, users can upload software and make changes to it. When the changes are finished, they can be bundled as a new machine image, and an identical copy can then be launched at any time. Users have nearly full control of the entire software stack on the EC2 instances, which look like hardware to them. On the other hand, this feature makes it inherently difficult for Amazon to offer automatic scaling of resources.

EC2 provides the ability to place instances in multiple locations. EC2 locations are composed of Regions and Availability Zones. Regions, which consist of one or more Availability Zones, are geographically dispersed. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region.

EC2 machine images are stored in and retrieved from Amazon Simple Storage Service (Amazon S3). S3 stores data as "objects" that are grouped in "buckets." Each object contains from 1 byte to 5 gigabytes of data. Object names are essentially URI [6] pathnames. Buckets must be explicitly created before they can be used. A bucket can be stored in one of several Regions; users can choose a Region to optimize latency, minimize costs, or address regulatory requirements.

Amazon Virtual Private Cloud (VPC) is a secure and seamless bridge between a company's existing IT infrastructure and the AWS cloud. Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities, such as security services, firewalls, and intrusion detection systems, to include their AWS resources.

For cloud users, Amazon CloudWatch is a useful management tool which collects raw data from partnered AWS services such as Amazon EC2 and then processes the information into readable, near real-time metrics. The metrics for EC2 include, for example, CPU utilization, network in/out bytes, and disk read/write operations.

5.2.2 Microsoft Windows Azure platform

Microsoft's Windows Azure platform [53] consists of three components, each of which provides a specific set of services to cloud users. Windows Azure provides a Windows-based environment for running applications and storing data on servers in data centers; SQL Azure provides data services in the cloud based on SQL Server; and .NET Services offer distributed infrastructure services to cloud-based and local applications. The Windows Azure platform can be used both by applications running in the cloud and by applications running on local systems.

Windows Azure also supports applications built on the .NET Framework and other ordinary languages supported in Windows systems, like C#, Visual Basic, C++, and others. Windows Azure supports general-purpose programs, rather than a single class of computing. Developers can create web applications using technologies such as ASP.NET and Windows Communication Foundation (WCF), applications that run as independent background processes, or applications that combine the two. Windows Azure allows storing data in blobs, tables, and queues, all accessed in a RESTful style via HTTP or HTTPS.

The SQL Azure components are SQL Azure Database and "Huron" Data Sync. SQL Azure Database is built on Microsoft SQL Server, providing a database management system (DBMS) in the cloud. The data can be accessed using ADO.NET and other Windows data access interfaces. Users can also use on-premises software to work with this cloud-based information. "Huron" Data Sync synchronizes relational data across various on-premises DBMSs.
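The bucket-and-object storage model described above for S3 (explicitly created buckets, each tied to a Region, holding named objects of 1 byte to 5 GB) can be made concrete with a small in-memory sketch. The `ToyObjectStore` class, its method names, and the default region string are invented for illustration; this is not the AWS API:

```python
class ToyObjectStore:
    """In-memory sketch of an S3-style store: buckets must be
    created explicitly, each bucket lives in one region, and
    objects are addressed by bucket plus a path-like name."""

    MAX_OBJECT_SIZE = 5 * 1024 ** 3  # 5 GB per-object cap, as in S3

    def __init__(self):
        # bucket name -> {"region": ..., "objects": {name: bytes}}
        self.buckets = {}

    def create_bucket(self, name, region="us-east-1"):
        if name in self.buckets:
            raise ValueError("bucket already exists")
        self.buckets[name] = {"region": region, "objects": {}}

    def put(self, bucket, key, data: bytes):
        if bucket not in self.buckets:
            raise KeyError("bucket must be created before use")
        if not 1 <= len(data) <= self.MAX_OBJECT_SIZE:
            raise ValueError("object must hold 1 byte to 5 GB")
        self.buckets[bucket]["objects"][key] = data

    def get(self, bucket, key):
        return self.buckets[bucket]["objects"][key]

store = ToyObjectStore()
store.create_bucket("backups", region="eu-west-1")
store.put("backups", "images/vm1.img", b"machine image bytes")
```

The same flat namespace of container-plus-name also underlies Azure's blob storage, which is why both map naturally onto RESTful HTTP URLs.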
Table 1 A comparison of representative commercial products

Cloud provider | Amazon EC2 | Windows Azure | Google App Engine
Classes of utility computing | Infrastructure service | Platform service | Platform service
Target applications | General-purpose applications | General-purpose Windows applications | Traditional web applications with supported frameworks
Computation | OS level on a Xen virtual machine | Microsoft Common Language Runtime (CLR) VM; predefined roles of app. instances | Predefined web application frameworks
Storage | Elastic Block Store; Amazon Simple Storage Service (S3); Amazon SimpleDB | Azure storage service and SQL Data Services | BigTable and MegaStore
Auto scaling | Automatically changing the number of instances based on parameters that users specify | Automatic scaling based on application roles and a configuration file specified by users | Automatic scaling which is transparent to users
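The user-parameterized auto-scaling listed for EC2 in Table 1 is commonly realized as threshold rules over a monitored metric such as CPU utilization. The function below is a hypothetical sketch of one scaling round; the threshold values, step size, and fleet bounds are illustrative defaults, not actual provider settings:

```python
def scale_decision(cpu_utilization, instances,
                   high=0.75, low=0.25,
                   min_instances=1, max_instances=20):
    """Return the new instance count for one scaling round.

    Adds an instance when average CPU exceeds the high-water mark,
    removes one when it drops below the low-water mark, and keeps
    the result within the user-configured fleet bounds."""
    if cpu_utilization > high and instances < max_instances:
        return instances + 1
    if cpu_utilization < low and instances > min_instances:
        return instances - 1
    return instances

fleet = 4
fleet = scale_decision(0.90, fleet)  # overloaded: grow to 5
fleet = scale_decision(0.10, fleet)  # idle: shrink back to 4
```

Platform-level offerings hide exactly this loop from the user, which is the practical difference between the three auto-scaling entries in the table.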
The .NET Services facilitate the creation of distributed applications. The Access Control component provides a cloud-based implementation of single identity verification across applications and companies. The Service Bus helps an application expose web service endpoints that can be accessed by other applications, whether on-premises or in the cloud. Each exposed endpoint is assigned a URI, which clients can use to locate and access the service.

All of the physical resources, VMs and applications in the data center are monitored by software called the fabric controller. With each application, the user uploads a configuration file that provides an XML-based description of what the application needs. Based on this file, the fabric controller decides where new applications should run, choosing physical servers to optimize hardware utilization.

5.2.3 Google App Engine

Google App Engine [20] is a platform for traditional web applications in Google-managed data centers. Currently, the supported programming languages are Python and Java. Web frameworks that run on Google App Engine include Django, CherryPy, Pylons, and web2py, as well as a custom Google-written web application framework similar to JSP or ASP.NET. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. Current APIs support features such as storing and retrieving data from a BigTable [10] non-relational database, making HTTP requests, and caching. Developers have read-only access to the filesystem on App Engine.

Table 1 summarizes the three examples of popular cloud offerings in terms of the classes of utility computing, target types of application, and, more importantly, their models of computation, storage and auto-scaling. Apparently, these cloud offerings are based on different levels of abstraction and management of the resources. Users can choose one type, or a combination of several types, of cloud offerings to satisfy specific business requirements.

6 Research challenges

Although cloud computing has been widely adopted by the industry, research on cloud computing is still at an early stage. Many existing issues have not been fully addressed, while new challenges keep emerging from industry applications. In this section, we summarize some of the challenging research issues in cloud computing.

6.1 Automated service provisioning

One of the key features of cloud computing is the capability of acquiring and releasing resources on demand. The objective of a service provider in this case is to allocate and de-allocate resources from the cloud to satisfy its service level objectives (SLOs), while minimizing its operational cost. However, it is not obvious how a service provider can achieve this objective. In particular, it is not easy to determine how to map SLOs such as QoS requirements to low-level resource requirements such as CPU and memory requirements. Furthermore, to achieve high agility and respond to rapid demand fluctuations, such as the flash crowd effect, the resource provisioning decisions must be made online.

Automated service provisioning is not a new problem. Dynamic resource provisioning for Internet applications has been studied extensively in the past [47, 57]. These approaches typically involve: (1) Constructing an application performance model that predicts the number of application instances required to handle demand at each particular level,
in order to satisfy QoS requirements; (2) Periodically predicting future demand and determining resource requirements using the performance model; and (3) Automatically allocating resources using the predicted resource requirements. The application performance model can be constructed using various techniques, including queuing theory [47], control theory [28] and statistical machine learning [7].

Additionally, there is a distinction between proactive and reactive resource control. The proactive approach uses predicted demand to periodically allocate resources before they are needed. The reactive approach reacts to immediate demand fluctuations before periodic demand prediction is available. Both approaches are important and necessary for effective resource control in dynamic operating environments.

6.2 Virtual machine migration

Virtualization can provide significant benefits in cloud computing by enabling virtual machine migration to balance load across the data center. In addition, virtual machine migration enables robust and highly responsive provisioning in data centers.

Virtual machine migration has evolved from process migration techniques [37]. More recently, Xen [55] and VMWare [52] have implemented "live" migration of VMs that involves extremely short downtimes, ranging from tens of milliseconds to a second. Clark et al. [13] pointed out that migrating an entire OS and all of its applications as one unit makes it possible to avoid many of the difficulties faced by process-level migration approaches, and analyzed the benefits of live migration of VMs.

The major benefit of VM migration is the ability to avoid hotspots; however, this is not straightforward. Currently, detecting workload hotspots and initiating a migration lacks the agility to respond to sudden workload changes. Moreover, the in-memory state should be transferred consistently and efficiently, with integrated consideration of the resources of applications and physical servers.

6.3 Server consolidation

Server consolidation is an effective approach to maximizing resource utilization while minimizing energy consumption in a cloud computing environment. Live VM migration technology is often used to consolidate VMs residing on multiple under-utilized servers onto a single server, so that the remaining servers can be set to an energy-saving state. The problem of optimally consolidating servers in a data center is often formulated as a variant of the vector bin-packing problem [11], which is an NP-hard optimization problem. Various heuristics have been proposed for this problem [33, 46]. Additionally, dependencies among VMs, such as communication requirements, have also been considered recently [34].

However, server consolidation activities should not hurt application performance. It is known that the resource usage (also known as the footprint [45]) of individual VMs may vary over time [54]. For server resources that are shared among VMs, such as bandwidth, memory cache and disk I/O, maximally consolidating a server may result in resource congestion when a VM changes its footprint on the server [38]. Hence, it is sometimes important to observe the fluctuations of VM footprints and use this information for effective server consolidation. Finally, the system must quickly react to resource congestion when it occurs [54].

6.4 Energy management

Improving energy efficiency is another major issue in cloud computing. It has been estimated that the cost of powering and cooling accounts for 53% of the total operational expenditure of data centers [26]. In 2006, data centers in the US consumed more than 1.5% of the total energy generated in that year, and the percentage is projected to grow 18% annually [33]. Hence, infrastructure providers are under enormous pressure to reduce energy consumption. The goal is not only to cut down the energy cost in data centers, but also to meet government regulations and environmental standards.

Designing energy-efficient data centers has recently received considerable attention. This problem can be approached from several directions. For example, energy-efficient hardware architectures that enable slowing down CPU speeds and turning off partial hardware components [8] have become commonplace. Energy-aware job scheduling [50] and server consolidation [46] are two other ways to reduce power consumption by turning off unused machines. Recent research has also begun to study energy-efficient network protocols and infrastructures [27]. A key challenge in all of these methods is to achieve a good trade-off between energy savings and application performance. In this respect, a few researchers have recently started to investigate coordinated solutions for performance and power management in a dynamic cloud environment [32].

6.5 Traffic management and analysis

Analysis of data traffic is important for today's data centers. For example, many web applications rely on analysis of traffic data to optimize customer experiences. Network operators also need to know how traffic flows through the network in order to make many of their management and planning decisions.

However, several challenges arise when extending existing traffic measurement and analysis methods from Internet Service Provider (ISP) and enterprise networks to data
centers. Firstly, the density of links is much higher than that in ISP or enterprise networks, which creates a worst-case scenario for existing methods. Secondly, most existing methods can compute traffic matrices between a few hundred end hosts, but even a modular data center can have several thousand servers. Finally, existing methods usually assume flow patterns that are reasonable in Internet and enterprise networks, but the applications deployed on data centers, such as MapReduce jobs, significantly change the traffic pattern. Further, there is tighter coupling in an application's use of network, computing, and storage resources than what is seen in other settings.

Currently, there is not much work on measurement and analysis of data center traffic. Greenberg et al. [21] report data center traffic characteristics on flow sizes and concurrent flows, and use these to guide network infrastructure design. Benson et al. [16] perform a complementary study of traffic at the edges of a data center by examining SNMP traces from routers.

6.6 Data security

Data security is another important research topic in cloud computing. Since service providers typically do not have access to the physical security system of data centers, they must rely on the infrastructure provider to achieve full data security. Even for a virtual private cloud, the service provider can only specify the security settings remotely, without knowing whether they are fully implemented. The infrastructure provider, in this context, must achieve the following objectives: (1) confidentiality, for secure data access and transfer, and (2) auditability, for attesting whether the security settings of applications have been tampered with or not. Confidentiality is usually achieved using cryptographic protocols, whereas auditability can be achieved using remote attestation techniques. Remote attestation typically requires a trusted platform module (TPM) to generate a non-forgeable system summary (i.e. the system state encrypted using the TPM's private key) as the proof of system security. However, in a virtualized environment like the clouds, VMs can dynamically migrate from one location to another, so directly using remote attestation is not sufficient. In this case, it is critical to build trust mechanisms at every architectural layer of the cloud. Firstly, the hardware layer must be trusted using a hardware TPM. Secondly, the virtualization platform must be trusted using secure virtual machine monitors [43]. VM migration should only be allowed if both source and destination servers are trusted. Recent work has been devoted to designing efficient protocols for trust establishment and management [31, 43].

6.7 Software frameworks

Cloud computing provides a compelling platform for hosting large-scale data-intensive applications. Typically, these applications leverage MapReduce frameworks such as Hadoop for scalable and fault-tolerant data processing. Recent work has shown that the performance and resource consumption of a MapReduce job are highly dependent on the type of the application [29, 42, 56]. For instance, Hadoop tasks such as sort are I/O intensive, whereas grep requires significant CPU resources. Furthermore, the VMs allocated to Hadoop nodes may have heterogeneous characteristics. For example, the bandwidth available to a VM depends on the other VMs collocated on the same server. Hence, it is possible to optimize the performance and cost of a MapReduce application by carefully selecting its configuration parameter values [29] and designing more efficient scheduling algorithms [42, 56]. By mitigating bottleneck resources, the execution time of applications can be significantly improved. The key challenges include performance modeling of Hadoop jobs (either online or offline) and adaptive scheduling in dynamic conditions.

Another related approach argues for making MapReduce frameworks energy-aware [50]. The essential idea of this approach is to turn a Hadoop node into sleep mode when it has finished its job, while it waits for new assignments. To do so, both Hadoop and HDFS must be made energy-aware. Furthermore, there is often a trade-off between performance and energy-awareness. Depending on the objective, finding a desirable trade-off point is still an unexplored research topic.

6.8 Storage technologies and data management

Software frameworks such as MapReduce and its various implementations, such as Hadoop and Dryad, are designed for distributed processing of data-intensive tasks. As mentioned previously, these frameworks typically operate on Internet-scale file systems such as GFS and HDFS. These file systems are different from traditional distributed file systems in their storage structure, access pattern and application programming interface. In particular, they do not implement the standard POSIX interface, and therefore introduce compatibility issues with legacy file systems and applications. Several research efforts have studied this problem [4, 40]. For instance, the work in [4] proposed a method for supporting the MapReduce framework using cluster file systems such as IBM's GPFS. Patil et al. [40] proposed new API primitives for scalable and concurrent data access.

6.9 Novel cloud architectures

Currently, most of the commercial clouds are implemented in large data centers and operated in a centralized fashion. Although this design achieves economies of scale and high manageability, it also comes with limitations such as high energy expense and a high initial investment for constructing data centers. Recent work [12, 48] suggests that small-size data centers can be more advantageous than big data
centers in many cases: a small data center does not consume so much power, and hence does not require a powerful and yet expensive cooling system; and small data centers are cheaper to build and better geographically distributed than large data centers. Geo-diversity is often desirable for response-time-critical services such as content delivery and interactive gaming. For example, Valancius et al. [48] studied the feasibility of hosting video-streaming services using application gateways (a.k.a. nano data centers).

Another related research trend is the use of voluntary resources (i.e. resources donated by end-users) for hosting cloud applications [9]. Clouds built using voluntary resources, or a mixture of voluntary and dedicated resources, are much cheaper to operate and more suitable for non-profit applications such as scientific computing. However, this architecture also imposes challenges, such as managing heterogeneous resources and frequent churn events. Also, devising incentive schemes for such architectures is an open research problem.

7 Conclusion

Cloud computing has recently emerged as a compelling paradigm for managing and delivering services over the Internet. The rise of cloud computing is rapidly changing the landscape of information technology, and ultimately turning the long-held promise of utility computing into a reality. However, despite the significant benefits offered by cloud computing, the current technologies are not mature enough to realize its full potential. Many key challenges in this domain, including automatic resource provisioning, power management and security management, are only starting to receive attention from the research community. Therefore, we believe there is still tremendous opportunity for researchers to make groundbreaking contributions in this field, and to bring significant impact to its development in the industry.

In this paper, we have surveyed the state of the art of cloud computing, covering its essential concepts, architectural designs, prominent characteristics, and key technologies, as well as research directions. As the development of cloud computing technology is still at an early stage, we hope our work will provide a better understanding of the design challenges of cloud computing, and pave the way for further research in this area.

References

1. Al-Fares M et al (2008) A scalable, commodity data center network architecture. In: Proc SIGCOMM
2. Amazon Elastic Computing Cloud, aws.amazon.com/ec2
3. Amazon Web Services, aws.amazon.com
4. Ananthanarayanan R, Gupta K et al (2009) Cloud analytics: do we really need to reinvent the storage stack? In: Proc of HotCloud
5. Armbrust M et al (2009) Above the clouds: a Berkeley view of cloud computing. UC Berkeley Technical Report
6. Berners-Lee T, Fielding R, Masinter L (2005) RFC 3986: uniform resource identifier (URI): generic syntax, January 2005
7. Bodik P et al (2009) Statistical machine learning makes automatic control practical for Internet datacenters. In: Proc of HotCloud
8. Brooks D et al (2000) Power-aware microarchitecture: design and modeling challenges for the next-generation microprocessors. IEEE Micro
9. Chandra A et al (2009) Nebulas: using distributed voluntary resources to build clouds. In: Proc of HotCloud
10. Chang F, Dean J et al (2006) Bigtable: a distributed storage system for structured data. In: Proc of OSDI
11. Chekuri C, Khanna S (2004) On multi-dimensional packing problems. SIAM J Comput 33(4):837–851
12. Church K et al (2008) On delivering embarrassingly distributed cloud services. In: Proc of HotNets
13. Clark C, Fraser K, Hand S, Hansen JG, Jul E, Limpach C, Pratt I, Warfield A (2005) Live migration of virtual machines. In: Proc of NSDI
14. Cloud Computing on Wikipedia, en.wikipedia.org/wiki/Cloud_computing, 20 Dec 2009
15. Cloud Hosting, Cloud Computing and Hybrid Infrastructure from GoGrid, http://www.gogrid.com
16. Dean J, Ghemawat S (2004) MapReduce: simplified data processing on large clusters. In: Proc of OSDI
17. Dedicated Server, Managed Hosting, Web Hosting by Rackspace Hosting, http://www.rackspace.com
18. FlexiScale Cloud Comp and Hosting, www.flexiscale.com
19. Ghemawat S, Gobioff H, Leung S-T (2003) The Google file system. In: Proc of SOSP, October 2003
20. Google App Engine, http://code.google.com/appengine
21. Greenberg A, Jain N et al (2009) VL2: a scalable and flexible data center network. In: Proc SIGCOMM
22. Guo C et al (2008) DCell: a scalable and fault-tolerant network structure for data centers. In: Proc SIGCOMM
23. Guo C, Lu G, Li D et al (2009) BCube: a high performance, server-centric network architecture for modular data centers. In: Proc SIGCOMM
24. Hadoop Distributed File System, hadoop.apache.org/hdfs
25. Hadoop MapReduce, hadoop.apache.org/mapreduce
26. Hamilton J (2009) Cooperative expendable micro-slice servers (CEMS): low cost, low power servers for Internet-scale services. In: Proc of CIDR
27. IEEE P802.3az Energy Efficient Ethernet Task Force, www.ieee802.org/3/az
28. Kalyvianaki E et al (2009) Self-adaptive and self-configured CPU resource provisioning for virtualized servers using Kalman filters. In: Proc of international conference on autonomic computing
29. Kambatla K et al (2009) Towards optimizing Hadoop provisioning in the cloud. In: Proc of HotCloud
30. Kernel Based Virtual Machine, www.linux-kvm.org/page/Main_Page
31. Krautheim FJ (2009) Private virtual infrastructure for cloud computing. In: Proc of HotCloud
32. Kumar S et al (2009) vManage: loosely coupled platform and virtualization management in data centers. In: Proc of international conference on cloud computing
33. Li B et al (2009) EnaCloud: an energy-saving application live placement approach for cloud computing environments. In: Proc of international conf on cloud computing
34. Meng X et al (2010) Improving the scalability of data center networks with traffic-aware virtual machine placement. In: Proc INFOCOM
35. Mysore R et al (2009) PortLand: a scalable fault-tolerant layer 2 data center network fabric. In: Proc SIGCOMM
36. NIST Definition of Cloud Computing v15, csrc.nist.gov/groups/SNS/cloud-computing/cloud-def-v15.doc
37. Osman S, Subhraveti D et al (2002) The design and implementation of Zap: a system for migrating computing environments. In: Proc of OSDI
38. Padala P, Hou K-Y et al (2009) Automated control of multiple virtualized resources. In: Proc of EuroSys
39. Parkhill D (1966) The challenge of the computer utility. Addison-Wesley, Reading
40. Patil S et al (2009) In search of an API for scalable file systems: under the table or above it? In: Proc of HotCloud
41. Salesforce CRM, http://www.salesforce.com/platform
42. Sandholm T, Lai K (2009) MapReduce optimization using regulated dynamic prioritization. In: Proc of SIGMETRICS/Performance
43. Santos N, Gummadi K, Rodrigues R (2009) Towards trusted cloud computing. In: Proc of HotCloud
44. SAP Business ByDesign, www.sap.com/sme/solutions/businessmanagement/businessbydesign/index.epx
45. Sonnek J et al (2009) Virtual putty: reshaping the physical footprint of virtual machines. In: Proc of HotCloud
46. Srikantaiah S et al (2008) Energy aware consolidation for cloud computing. In: Proc of HotPower
47. Urgaonkar B et al (2005) Dynamic provisioning of multi-tier Internet applications. In: Proc of ICAC
48. Valancius V, Laoutaris N et al (2009) Greening the Internet with nano data centers. In: Proc of CoNEXT
49. Vaquero L, Rodero-Merino L, Caceres J, Lindner M (2009) A break in the clouds: towards a cloud definition. ACM SIGCOMM Computer Communication Review
50. Vasic N et al (2009) Making cluster applications energy-aware. In: Proc of automated ctrl for datacenters and clouds
51. Virtualization Resource Chargeback, www.vkernel.com/products/EnterpriseChargebackVirtualAppliance
52. VMWare ESX Server, www.vmware.com/products/esx
53. Windows Azure, www.microsoft.com/azure
54. Wood T et al (2007) Black-box and gray-box strategies for virtual machine migration. In: Proc of NSDI
55. XenSource Inc, Xen, www.xensource.com
56. Zaharia M et al (2009) Improving MapReduce performance in heterogeneous environments. In: Proc of HotCloud
57. Zhang Q et al (2007) A regression-based analytic model for dynamic resource provisioning of multi-tier applications. In: Proc ICAC