This document introduces cloud computing and its history. It discusses how John McCarthy first proposed the concept of computation being organized and delivered as a public utility in the 1960s. It also summarizes technologies related to cloud computing, including supercomputing, cluster computing, grid computing, and distributed computing.
Cloud computing is an emerging computing paradigm where large pools of systems are linked in a way that provides scalable and elastic IT capabilities. It abstracts physical infrastructure complexities and delivers resources over the internet. Key drivers of cloud computing include increased data growth, mobility, collaboration, and pressure on companies' data centers. Emerging trends include utility computing, software as a service, and platform as a service. The future of cloud computing involves both public and private clouds in a hybrid model to achieve standardization, customization, efficiency and other benefits.
This article introduces multimedia cloud computing and presents a novel framework. It addresses multimedia cloud computing from both a multimedia-aware cloud and cloud-aware multimedia perspective. First, it presents a multimedia-aware cloud architecture called a media-edge cloud that aims to provide distributed multimedia processing and quality of service provisioning at the edge of the cloud. It then discusses how multimedia applications can optimally utilize cloud computing resources to achieve high quality of experience for users.
1. The document discusses the emergence of cloud computing and how it transforms IT infrastructure by consolidating servers, storage, and networks across organizations into centralized clouds.
2. Key technologies like virtualization, advanced CPUs, network virtualization, improved storage, and software-defined networks enable the cloud by allowing resources to be dynamically allocated and scaled on demand.
3. There are different types of cloud models including private, public, hybrid and community clouds depending on who manages the infrastructure, and different service models including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
This document discusses the future of cloud computing. It begins with a brief history of computing evolution and the origins of the term "cloud computing." Key drivers that led to cloud computing include increased mobility, collaboration, interconnectivity, and pressure on companies' data centers. The document then discusses how cloud computing provides scalable IT resources as a service and how it will transform the IT industry. It provides several definitions of cloud computing and outlines trends like increased parallelism, virtualization, and outsourcing that enable and are driven by cloud computing. Finally, it outlines different cloud computing delivery models like Software as a Service, Infrastructure as a Service, and public versus private clouds.
21st Century Project Management to deliver a variety of Cloud Computing Proj... by VSR
Cloud computing provides affordable computing power to drive economic growth and improve industries and services like healthcare and education. It allows access to computing resources such as applications, platforms, and infrastructure through the internet. Project management needs to adapt to cloud computing challenges such as security, availability, and payment models. India aims to become a global leader in cloud computing through better delivery of related projects and programs. PMI commits to help through a council that will influence policies and build resources for managing cloud computing projects.
This presentation was created by Mayur Verma while pursuing an IT Security Diploma. It describes cloud computing and gives examples of some cloud operating systems.
Cloud computing provides on-demand access to shared computing resources like networks, servers, storage, applications and services over the internet. It has evolved from earlier concepts like grid computing, web services, and virtualization. Key characteristics of cloud computing include shared resources that are scalable and elastic, self-service access, and pay-per-use metering. While cloud computing provides opportunities for cost savings and innovation, concerns around security, lock-in, and reliability must be addressed for widespread adoption. Standards and interoperability between cloud platforms are still developing.
The document discusses cloud computing and its applications for multimedia content. It describes how cloud platforms can address scalability issues that plagued early video services like YouTube. The cloud enables on-demand access to storage, processing power, and delivery resources. This allows multimedia applications and services to easily scale up or down based on real-time usage. The document outlines several opportunities cloud computing provides for multimedia production, content management, consumption, and business models. It also describes achievements in developing a cloud-based multimedia platform and applications.
2019 Top IT Trends - Understanding the fundamentals of the next generation ... by Tony Pearson
This session covers six major IT trends for 2019: Internet of Things (IoT), Big Data Analytics, Artificial Intelligence (AI), Containers and Orchestration, Blockchain, and Hybrid Multicloud. Presented at IBM TechU in Johannesburg, South Africa September 2019
Virtualization plays a key role in cloud computing by allowing for the efficient sharing of hardware resources. It allows a single physical machine to run multiple virtual machines, maximizing resource utilization. Common forms of virtualization include server, storage, network, desktop, and memory virtualization. A hypervisor manages virtual machines and provides an abstraction layer between hardware and software. Virtualization provides benefits like cost effectiveness, flexibility, and isolation of applications and operating systems. It is an important technology enabling cloud computing services.
This document discusses security issues and challenges related to cloud computing. It begins with an introduction to cloud computing and its benefits and types of cloud deployments including private cloud, public cloud, hybrid cloud, and community cloud. Each cloud deployment model has different security considerations. The main security issues discussed for public clouds include multi-tenancy concerns and transferring data over the internet. Private clouds provide fewer security concerns but require a higher investment. Hybrid clouds offer flexibility but new operational processes are needed. Overall, the document examines the tradeoffs between different cloud deployment models in terms of security.
ITCamp 2012 - Dan Fizesan - Serving 10 million requests per day by ITCamp
Dan Fizesan presented on solving the architecture challenges of a high-traffic ASP.NET website serving 10 million requests per day. Key challenges included achieving high availability through redundancy, using load balancers for scalability, implementing distributed caching for performance, and designing for horizontal scalability. Fizesan also demonstrated approaches for parallelizing code to improve the performance of a web application processing 300 requests.
Cloud computing refers to a model of network computing where applications and services run on remote servers accessed over the internet rather than local devices. It allows users to access computing resources like servers, storage, databases, networking, software and analytics from anywhere. Key characteristics include on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service. Major cloud service models are software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS).
Cloud computing is the delivery of computing services over the Internet that are managed by third parties at remote locations. It allows users to access applications and store data on shared servers rather than local hardware. Some key advantages of cloud computing include reduced maintenance needs, continuous availability, scalability to pay only for resources used, and automatic backups of data. Major cloud providers like Amazon Web Services, Microsoft Azure, and Google App Engine offer infrastructure and platform services to help build and deploy cloud applications. Virtualization allows multiple operating systems to run simultaneously on the same host computer by simulating hardware resources in software.
Cloud computing allows users to access software and data storage over the Internet. It has evolved from earlier technologies like grid computing and utility computing. Major cloud providers like Amazon, Google, Microsoft and IBM offer infrastructure, platform and software services on massive networks of servers. While cloud computing provides benefits like reduced costs and improved scalability, concerns around security and data location must still be addressed. It is an emerging technology that is expected to continue growing rapidly in both business and personal use.
Micro Server Design - Open Compute Project by Hitesh Jani
The document discusses specifications for micro server design as outlined by the Open Compute Project (OCP). It describes how micro servers were developed to address changing workload needs like hyperscaling and reduce total cost of ownership compared to traditional servers. The OCP provides open specifications for micro server hardware designs based on system-on-chip technology. Key aspects covered include functional specifications for ARM-based multicore SOCs, memory, storage, interfaces, and power requirements. Mechanical specifications are also defined for the micro server module and chassis integration.
Cloud computing involves delivering computing resources as a service over the internet. It allows users to access software, storage, and computing power from any device with an internet connection. Major advantages include lower costs compared to owning software/hardware, access from any device, automatic updates, and not needing large internal storage on devices. However, security, privacy, and reliance on consistent internet access are potential disadvantages.
Why z/OS is a Great Platform for Developing and Hosting APIs by Teodoro Cipresso
z/OS Connect Enterprise Edition makes it possible to create new value by enabling the creation of APIs that bring together multiple, disparate, z subsystem assets.
Cloud computing refers to applications and services delivered over the Internet through shared computing resources that can be rapidly provisioned on-demand. Key aspects include virtualization which allows multiple virtual machines to run on shared hardware, infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Hadoop is a distributed system for large-scale data storage and processing of diverse data types across clustered systems.
The document discusses IBM's hybrid cloud storage solutions and how various IBM storage products integrate with OpenStack. It provides an overview of OpenStack and how IBM storage such as IBM Spectrum Virtualize, XIV, DS8000, A9000, Spectrum Scale and Spectrum Protect integrate with OpenStack. It also outlines Tony Pearson's speaking schedule for the week which includes topics on IBM Cloud Object Storage and IBM hybrid cloud storage solutions.
Cloud computing allows users to access software, storage, and other computing services over the Internet rather than maintaining these resources locally. It has grown from early concepts like distributed, cluster, and utility computing. Major cloud providers include Amazon Web Services, Google, and Microsoft Azure. There are three main types of cloud deployment models - public, private, and hybrid clouds. Cloud computing provides advantages like lower costs, universal access, and scalability but also risks like security vulnerabilities and reliance on internet connectivity.
Cloud computing allows for on-demand access to shared computing resources like networks, servers, storage, applications and services. It provides accessibility, agility and flexibility through rapid provisioning and releasing of resources with minimal management effort. Some key aspects of cloud computing include virtualization, multi-tenancy, broad network access, resource pooling and measured service. Cloud computing is changing the nature of IT by moving computing resources from local desktops and data centers to the internet.
Mobile digital platforms, grid and cloud computing, virtualization, and consumerization of IT are emerging as contemporary hardware trends, along with high-performance yet power-saving processors and green computing initiatives that aim to reduce energy usage through autonomic and self-managing systems.
Randy Bias, Co-Founder and CTO of Cloudscaling, speaks on open storage, fault tolerance and the concept of failure "blast radius" at the Open Storage Summit, hosted by Nexenta in May 2012.
MapReduce is a programming model and framework developed by Google for processing and generating large datasets in a distributed computing environment. It allows parallel processing of large datasets across clusters of computers using a simple programming model. It works by breaking the processing into many small fragments of work that can be executed in parallel by the different machines, and then combining the results at the end.
HBase is a distributed, scalable, big data store that is modeled after Google's BigTable. It uses HDFS for storage and is written in Java. HBase provides a key-value data model and allows for fast lookups by row keys. It does not support SQL queries or transactions. Clients can access HBase data via Java APIs, REST, Thrift or MapReduce. The architecture consists of a master server and multiple region servers that host regions and serve client requests.
This document provides guidelines for a 2010 programming contest preliminary round, including details on the preliminary topic, expected concepts, file system requirements, testing procedures, documentation and code review criteria, and judgment process. The preliminary round aims to test design completeness and creativity within defined restrictions. Main concepts like distributed computing and fault tolerance should be demonstrated, and designs must be clearly explained and implement the specified requirements. Installability, functionality, and passing test cases will be key judgment points.
The document discusses the history and concepts of cloud computing including distributed computing, virtualization, different cloud service models (IaaS, PaaS, SaaS), web 2.0, and major cloud platforms. It also describes Trend Micro's Smart Protection Network and how it utilizes the cloud and big data analytics to detect emerging threats.
Hadoop is an open-source software framework for distributed storage and processing of large datasets using the MapReduce programming model. It includes HDFS for data storage and MapReduce for data processing across clusters of compute nodes. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. It provides reliability through data replication and distributed architecture.
This document discusses the history and evolution of cloud computing, including its origins in concepts like virtualization, supercomputing, cluster computing, grid computing, and distributed computing. It traces the development of these technologies over time, from early concepts in the 1960s proposed by John McCarthy regarding computation as a public utility, to modern implementations like Google's large network of clustered commodity servers. Key people and technologies that advanced each concept are described, such as Seymour Cray's work in supercomputing and Ian Foster and Carl Kesselman's contributions to grid computing through projects like Globus Toolkit.
This document discusses cloud computing and provides examples of different cloud models. It defines cloud computing as data and applications existing on remote servers accessed over the internet. It outlines various cloud service models like software as a service. The document also cautions that while cloud computing offers benefits, it can also exacerbate organizational issues and conflict with outdated policies if not implemented carefully. It concludes by presenting different models for how organizations can leverage and own cloud resources.
This document provides an agenda for a presentation on cloud computing and big data. The presentation will include:
- An introduction of the presenter and a review of handouts on related topics.
- A discussion of cloud computing, including definitions of cloud computing, the history and types of clouds, implications, and the future of cloud computing.
- A break in the presentation.
- A discussion of big data, including the scale of big data, the importance of big data, how it differs from traditional data, how to deal with big data, and the future of big data.
- A wrap-up of the presentation and Q&A session.
TU_BCA_7TH_SEM_CC_INTRODUCTION TO CLOUD COMPUTING by Sujit Jha
This document provides an overview of cloud computing concepts and applications. It introduces cloud computing, describing its goal of providing flexible computing infrastructure. It then outlines the course content, which covers cloud service models, building cloud networks, and security. Various cloud computing concepts are defined, such as service models, deployment models, and applications. Benefits include cost savings and scalability, while challenges involve security and vendor lock-in. Overall the document provides a high-level introduction to cloud computing fundamentals.
The document traces the history of cloud computing from early precursors in the 1960s involving timesharing mainframe computers through the development of virtualization, web-based apps, and major cloud platforms from Amazon, Google, IBM and others in the 2000s and 2010s. Key developments include Project MAC in 1963, ARPANET in 1969, virtualization in the 1970s-90s, Amazon Web Services in 2006, Google Docs and Netflix streaming in 2007, and increased focus on security in cloud services from 2014 onward.
Virtualization allows multiple operating systems to run concurrently on a single physical machine by presenting each virtual operating system with a simulated hardware environment. A hypervisor manages shared access to physical resources and provides isolation between virtual machines. Virtualization provides benefits like server consolidation and containment to improve hardware utilization rates and reduce data center costs. It enables dynamic infrastructure that can rapidly scale resources up or down based on demand. Hyperconverged infrastructure further unifies storage, computing and networking resources to simplify and optimize data center management.
Cloud computing provides on-demand access to shared computing resources like networks, servers, storage, applications and services that can be provisioned with minimal management effort. It has characteristics like on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service. The cloud services models are Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS). The deployment models are private cloud, community cloud, public cloud and hybrid cloud.
The document discusses cloud computing and provides an overview of related topics:
- It defines computing and lists trends in computing such as distributed computing, grid computing, cluster computing, and utility computing that led to cloud computing.
- It describes cloud computing architecture including service models (IaaS, PaaS, SaaS), deployment models, and management of services, resources, data, security, and research trends in cloud computing.
Cloud computing provides on-demand access to shared computing resources like networks, servers, storage, applications and services. It has essential characteristics like on-demand self-service, broad network access, resource pooling and rapid elasticity. The cloud services models include Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The deployment models are private cloud, community cloud, public cloud and hybrid cloud.
Some popular education applications in cloud computing are:
- Google Classroom: Google Classroom is a free web service developed by Google for schools that aims to simplify creating, distributing, and grading assignments in a paperless way.
- Blackboard: Blackboard is a virtual learning environment and course management system designed to help educators create online courses and manage all aspects of teaching.
- Edmodo: Edmodo is a social learning platform that helps connect all learners with the people and resources needed
Cloud computing provides on-demand access to shared computing resources like networks, servers, storage, applications, and services. It has characteristics like on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. The document discusses various cloud service models like SaaS, PaaS, and IaaS and deployment models like private, community, and public clouds. It also covers distributed, grid, cluster, and utility computing concepts related to cloud.
The document provides an overview of the evolution of cloud computing from its roots in mainframe computing, distributed systems, grid computing, and cluster computing. It discusses how hardware virtualization, Internet technologies, distributed computing concepts, and systems management techniques enabled the development of cloud computing. The document then describes several early technologies and models such as time-shared mainframes, distributed systems, grid computing, and cluster computing that influenced the development of cloud computing.
The document discusses cloud computing, providing definitions and describing its evolution, service models (SaaS, PaaS, IaaS), deployment models (public, private, hybrid), and advantages/disadvantages. It defines cloud computing as on-demand access to shared pools of configurable computing resources via the internet. The three main service models are software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Public clouds are hosted by third parties, private clouds are operated solely by a single organization, and hybrid clouds combine public and private.
The document discusses different types of parallel computing, including process-level parallelism using distributed computers such as clusters, grids, and mainframe computers. It provides details on distributed computing systems, which are highly scalable, with processing elements connected via a network. Examples are given of distributed systems like computer networks and network applications. The document also summarizes cluster computing using groups of connected computers, massively parallel computing using specialized interconnect networks, and grid computing, which uses spare computing resources across administrative domains. Mainframe computers are described as being used for critical enterprise applications requiring high availability, security, and bulk data processing.
The document discusses different types of information system architectures including mainframes, client-server systems, distributed systems, cloud computing, and domain name servers. Mainframes are large, expensive computers capable of supporting hundreds of users simultaneously, while client-server systems have powerful servers managing resources for client computers. Distributed systems and cloud computing spread computing tasks across networked computers. Domain name servers translate human-readable website names to machine-readable IP addresses.
Sample of a workshop given at CloudAsia 2012. The full workshop is 700 slides, so this is just a small sample to give a feel for the content, depth, and independent approach.
This document provides an overview of cloud computing, including its history and origins dating back to mainframe computers in the 1950s and time sharing networks in the 1960s. It describes the types of cloud models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The key characteristics of cloud computing are also summarized such as resource pooling, broad network access, elasticity, measured service, and on-demand self-service.
Cloud computing, big data, and mobile are three major trends that will change the world. Cloud computing provides scalable and elastic IT resources as services over the internet. Big data involves large amounts of both structured and unstructured data that can generate business insights when analyzed. The Hadoop ecosystem, including components like HDFS, MapReduce, Pig, and Hive, provides an architecture for distributed storage and processing of big data across commodity hardware.
Cloud computing, big data, and mobile technologies are driving major changes in the IT world. Cloud computing provides scalable computing resources over the internet. Big data involves extremely large data sets that are analyzed to reveal business insights. Hadoop is an open-source software framework that allows distributed processing of big data across commodity hardware. It includes tools like HDFS for storage and MapReduce for distributed computing. The Hadoop ecosystem also includes additional tools for tasks like data integration, analytics, workflow management, and more. These emerging technologies are changing how businesses use and analyze data.
MapReduce is a framework for processing large datasets in a distributed manner. It involves two functions: map and reduce. The map function processes individual elements to generate intermediate key-value pairs, and the reduce function merges all intermediate values with the same key. Hadoop is an open-source implementation of MapReduce that uses HDFS for storage. A typical MapReduce job in Hadoop involves defining map and reduce functions, configuring the job, and submitting it to the JobTracker which schedules tasks across nodes and monitors execution.
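To make the map and reduce roles concrete, here is a minimal single-process word-count sketch in Python; the function names and sample documents are illustrative, and a real Hadoop job would define the equivalent pair of functions in Java while the framework shuffles intermediate pairs between cluster nodes.

```python
# Single-process sketch of the MapReduce word-count pattern. A dict
# simulates the shuffle/grouping phase the framework performs.
from collections import defaultdict

def map_fn(document):
    # Map: emit an intermediate (key, value) pair per word.
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(key, values):
    # Reduce: merge all intermediate values sharing the same key.
    return key, sum(values)

def run_job(documents):
    groups = defaultdict(list)          # simulated shuffle phase
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(run_job(["the quick brown fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```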
HBase is a distributed, scalable, big data store that provides fast lookup capabilities like Google BigTable. It uses a table-like data structure with rows indexed by a key and stores data in columns grouped by families. HBase is designed to operate on top of Hadoop HDFS for scalability and high availability. It allows for fast lookups, full table scans, and range scans across large datasets distributed across clusters of commodity servers.
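As a rough illustration of those lookup and scan patterns, the sketch below uses the third-party happybase Python client, which reaches HBase through a Thrift gateway; the host name and the 'users' table are hypothetical placeholders.

```python
# Hypothetical HBase access via the happybase client; assumes a Thrift
# gateway is running on 'hbase-host' and a 'users' table exists.
import happybase

connection = happybase.Connection('hbase-host')
table = connection.table('users')

# Write: columns are addressed as family:qualifier.
table.put(b'user-0042', {b'info:name': b'Alice'})

# Fast lookup by row key.
print(table.row(b'user-0042'))

# Range scan over a slice of consecutive row keys.
for key, data in table.scan(row_start=b'user-0000', row_stop=b'user-0100'):
    print(key, data)
```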
Hadoop is an open-source software framework for distributed storage and processing of large datasets across clusters of computers. It consists of Hadoop Distributed File System (HDFS) for storage, and MapReduce for distributed processing. HDFS stores large files across multiple machines, with automatic replication of data for fault tolerance. It has a master/slave architecture with a NameNode managing the file system namespace and DataNodes storing file data blocks.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf by Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Building Production Ready Search Pipelines with Spark and Milvus by Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data into vector representations and push the vectors to the Milvus vector database for search serving.
Introduction of Cybersecurity with OSS at Code Europe 2024 by Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
HCL Notes and Domino License Cost Reduction in the World of DLAU by panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Dandelion Hashtable: beyond billion requests per second on a commodity server by Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
Best 20 SEO Techniques To Improve Website Visibility In SERP by Pixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Taking AI to the Next Level in Manufacturing.pdf by ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Skybuffer SAM4U tool for SAP license adoption by Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Introduction to cloud computing
1. Introduction to Cloud Computing
Trend Micro Cloud Programming Contest
Trend Micro Research and Development Lab
2. Why Cloud?
• When drawing computer network diagrams, we used to represent the Internet as a "cloud".
• Connecting a computer to the cloud with a line means that the computer is connected to the Internet and accesses Internet services or content.
• Examples: web mail, instant messaging, web pages, online virus scanning.
4. • The underlying concept of "cloud computing" dates back to the 1960s and is attributed to the computer scientist John McCarthy.
• John McCarthy proposed the concept of "computation being organized and delivered as a public utility".
• The characteristics of the concept are similar to the "service bureaus" of the 1960s. (A service bureau is a company that provides business services for a fee.)
5. Super Computing
• Supercomputers were introduced in the 1960s and were primarily designed by Seymour Cray, called "the father of supercomputing".
• A supercomputer is a massive computer with hundreds or thousands of CPUs sharing common memory and I/O.
• IBM Roadrunner is currently the fastest supercomputer in the world, having achieved 1.026 petaflops on May 25, 2008.
6. Cluster Computing
• Another approach to building a supercomputer, as adopted by Google.
• A cluster (group) of hundreds of thousands of COTS low-cost computers interconnected via a fast LAN, configured so that they appear as a single machine.
• Achieves high availability and scalability.
• Cluster computing is typically much more cost-effective than a single supercomputer.
7. Cluster Computing – Programming Tools
• Message Passing Interface (MPI)
– An API for sending and receiving messages (a minimal sketch follows this list).
– Designed for high-performance computing on both massively parallel machines and workstation clusters.
– Group communication, scalable file I/O, dynamic process management, synchronization.
• Parallel Virtual Machine (PVM)
– Emulates a general-purpose heterogeneous computing framework on interconnected computers.
– Presents a view of virtual processing elements.
– Creates and manages processing tasks; basic message passing.
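A minimal sketch of the MPI send/receive pattern, shown here through the mpi4py Python bindings (MPI itself is specified as a C/Fortran API); the message contents and tag are arbitrary.

```python
# Minimal point-to-point MPI exchange via the mpi4py bindings.
# Launch two cooperating processes with: mpiexec -n 2 python mpi_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD   # communicator spanning all launched processes
rank = comm.Get_rank()  # this process's id within the communicator

if rank == 0:
    # Process 0 sends a (pickled) Python object to process 1.
    comm.send({'task': 'scan', 'chunk': 7}, dest=1, tag=0)
elif rank == 1:
    # Process 1 blocks until the message arrives.
    msg = comm.recv(source=0, tag=0)
    print('rank 1 received:', msg)
```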
8. Cluster Computing – Google example
• An estimated 450,000 low-cost commodity servers in 2006.
• A Google rack contains twenty 2U or forty 1U servers on each side.
• The servers on each side of a rack are interconnected via a 100-Mbps Ethernet switch.
• Each rack Ethernet switch has one or two Gigabit uplinks to a core Gigabit switch that connects all racks together.
9. Cluster Computing – Google example
10. Grid Computing
• In the early 1990s, Ian Foster and Carl Kesselman put forward the new concept of "The Grid: Blueprint for a New Computing Infrastructure".
• Grid computing expands the techniques of cluster computing: multiple independent computer clusters act like a grid because they are not located in a single administrative domain. It amounts to distributed, large-scale cluster computing, as well as a form of network-distributed parallel processing.
• What distinguishes grid computing from conventional cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed.
11. Grid Computing
• Applied to computationally intensive scientific, mathematical, and academic problems as volunteer computing.
• Used in commercial enterprises for diverse applications and commercial transactions, in support of e-commerce and web services, as utility computing.
12. Grid Computing
• SETI@home
– A grid computing project that uses Internet-connected computers for scientific experiments in the Search for Extraterrestrial Intelligence, hosted by the Space Sciences Laboratory at the University of California, Berkeley.
• BOINC
– A non-commercial middleware system for volunteer and grid computing.
– Lets researchers tap into the enormous processing power of personal computers around the world.
– About 550,000 active hosts worldwide, processing an average of 1.5 petaflops as of March 2009.
13. Grid Computing
• Globus Toolkit
– A fundamental enabling technology for the "Grid", letting people share computing power, databases, and other tools securely online across corporate, institutional, and geographic boundaries without sacrificing local autonomy.
– Open Grid Services Architecture
• GRIP (Grid Resource Information Protocol)
• GRAM (Grid Resource Allocation and Management)
• GridFTP
15. Distributed Computing
• Multiple CPUs across multiple computers, working together over a network to solve a common problem.
• A process is split into parts that run simultaneously on multiple computers communicating over the network.
• A form of parallel computing, although "parallel programming" usually refers to vector processing of data or multi-tasked programming on a single machine.
16. Distributed Computing
• Distributed problem domains:
– Large amounts of data
– Lots of computation
– Parallelizable applications
– Many computers
• Indexing the World Wide Web (Google)
• Analyzing Internet threats (Trend Micro)
17. Distributed Computing
• Distributed System
– An application that executes a collection of protocols to coordinate the actions of multiple processes on a network, so that all components cooperate to perform a single task or a small set of related tasks.
– Desirable properties:
– Fault tolerance
– High availability
– Recoverability
– Consistency
– Scalability
– Performance predictability
– Security
18. Distributed Computing
• Expect and handle failure in both hardware and software. (A retry sketch follows below.)
• The 8 fallacies of distributed computing:
– 1. The network is reliable.
– 2. Latency is zero.
– 3. Bandwidth is infinite.
– 4. The network is secure.
– 5. Topology does not change.
– 6. There is one administrator.
– 7. Transport cost is zero.
– 8. The network is homogeneous.
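Because the network is in fact unreliable and latency is not zero, distributed code has to anticipate failure. A minimal sketch of one common defensive pattern, retry with exponential backoff (the helper and the usage line below are illustrative, not from the slides):

    import random
    import time

    def call_with_retry(operation, max_attempts=5, base_delay=0.1):
        """Retry a flaky network operation with exponential backoff and jitter."""
        for attempt in range(max_attempts):
            try:
                return operation()
            except OSError:                    # e.g. dropped connection or timeout
                if attempt == max_attempts - 1:
                    raise                      # out of attempts: surface the failure
                # Sleep 0.1 s, 0.2 s, 0.4 s, ... plus jitter to avoid thundering herds.
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))

    # Hypothetical usage: call_with_retry(lambda: fetch_page("http://example.com/"))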
19. Distributed Computing
• One of the basic architectures in distributed computing: client-server.
• Clients request services from servers over a computer network.
• Each service is provided by a group of servers of a particular type.
• Examples: DNS services, Web services.
20. Distributed Computing
• Which server actually processes a client's request may be decided by locality or by load balancing.
• Locality
– Local machine -> Local server -> Remote server
• Load balancing (a round-robin sketch follows below)
– Spread requests among the service servers.
– Optimal resource utilization, maximum performance, minimum response time.
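Round robin is the simplest load-balancing policy: cycle through the server pool one request at a time. A minimal sketch (the server names are made up):

    import itertools

    class RoundRobinBalancer:
        """Hand out servers from a fixed pool in rotating order."""
        def __init__(self, servers):
            self._pool = itertools.cycle(servers)

        def pick(self):
            return next(self._pool)

    balancer = RoundRobinBalancer(["app1.example.com", "app2.example.com"])
    for _ in range(4):
        print(balancer.pick())   # app1, app2, app1, app2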
21. Distributed Computing
• Data replication
– Multiple copies of the service data at multiple servers.
– Permits access at multiple locations.
– Increases availability.
– Avoids a SPOF (single point of failure).
• Caching (a TTL-cache sketch follows below)
– A local copy of data for quick access.
– Validate cached data before use, commonly via a TTL (time to live).
– Browsers and web proxies cache frequently accessed web pages.
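To show what TTL validation looks like in code, a minimal in-memory cache whose entries expire (this helper is illustrative, not from the slides):

    import time

    class TTLCache:
        """In-memory cache whose entries expire ttl seconds after insertion."""
        def __init__(self, ttl=60.0):
            self._ttl = ttl
            self._store = {}                 # key -> (value, expiry timestamp)

        def put(self, key, value):
            self._store[key] = (value, time.time() + self._ttl)

        def get(self, key):
            entry = self._store.get(key)
            if entry is None:
                return None
            value, expires_at = entry
            if time.time() >= expires_at:    # stale entry: discard before use
                del self._store[key]
                return None
            return value

    cache = TTLCache(ttl=60.0)
    cache.put("http://example.com/", "<html>...</html>")
    print(cache.get("http://example.com/"))  # fresh for the next 60 seconds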
22. Distributed Computing
• Reliable communication via the TCP/IP protocols. (An echo-server sketch follows below.)
• Internet Protocol (IP)
– Protocol for communicating data across a packet-switched internetwork.
– Addressing methods: IPv4 and IPv6.
• Transmission Control Protocol (TCP)
– A reliable stream delivery service that guarantees in-order delivery of the data stream between sender and receiver.
– Socket programming: socket(), bind(), connect(), accept(), send(), recv(), ...
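The socket calls listed above map directly onto code. A minimal TCP echo server using Python's standard socket module (the address and port are arbitrary illustration choices):

    import socket

    # Minimal TCP echo server: accept one connection, echo back whatever it sends.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("127.0.0.1", 9000))   # bind(): claim a local address and port
        server.listen(1)                   # listen(): start accepting connections
        conn, addr = server.accept()       # accept(): block until a client connects
        with conn:
            while True:
                data = conn.recv(4096)     # recv(): read up to 4096 bytes
                if not data:               # an empty read means the client closed
                    break
                conn.sendall(data)         # echo the same bytes back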
23. Distributed Computing
• Application layer
– HTTP, FTP, RPC, SSH
• Transport layer
– TCP, UDP
• Internet layer
– IP, ICMP
• Link layer
– Physical transmission of data
– Ethernet, MAC
24. Distributed Computing
• Google BigTable
– A large-scale, distributed, structured storage system.
25. Distributed Computing
• Hadoop Distributed File System (HDFS)
– A distributed file system designed to run on COTS hardware.
– Detection of failures and automatic recovery.
– High-throughput streaming data access.
– Batch processing rather than interactive use.
– Large data sets; favors large files.
– Write-once-read-many coherency model.
– Moving computation is cheaper than moving data.
– Portability (implemented in Java).
27. MapReduce
• A simple programming model that applies to many large-scale
computing problems and processes large data sets.
• Automatic parallelization and distribution of large-scale
computations on a large cluster of commodity machines.
• It allows programmers without any experience with parallel
and distributed systems to easily utilize the resources of a
large distributed system.
28. MapReduce
• Automatic parallelization and distribution
• Fault tolerance via re-execution
• I/O scheduling
• Status and monitoring
(A word-count sketch in the MapReduce style follows below.)
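The canonical MapReduce example is word count. Below is a minimal sketch of the map and reduce functions plus a toy sequential driver that mimics the model; a real framework would run the same two functions across a cluster. (The function names and the driver are illustrative, not from any specific framework.)

    from collections import defaultdict

    def map_fn(document):
        """Map: emit a (word, 1) pair for every word in one input document."""
        for word in document.split():
            yield word.lower(), 1

    def reduce_fn(word, counts):
        """Reduce: sum all partial counts collected for one word."""
        return word, sum(counts)

    def run_mapreduce(documents):
        """Toy sequential driver: map, shuffle (group by key), then reduce."""
        intermediate = defaultdict(list)
        for doc in documents:                        # map phase
            for key, value in map_fn(doc):
                intermediate[key].append(value)      # shuffle: group by key
        return dict(reduce_fn(k, v) for k, v in intermediate.items())

    print(run_mapreduce(["the cat sat", "the dog sat"]))
    # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}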
31. Web 2.0
• The Web as a Platform
– Communication
– Information sharing
– Collaboration
– Interaction
– Content
• Delivery of web sites to users as services.
• Allows users to run software applications via browsers.
32. Web 2.0 Websites
(image-only slide)
33. Software as a Service
• A model of software deployment whereby a provider licenses an application to customers for use as a service on demand.
• Commercial software is accessed and managed globally and remotely via the Web.
• No client-side software/hardware installation, maintenance, or updates, reducing the total cost of ownership.
• Integration of interconnected software services: mashups.
35. Platform as a Service
• Pioneered by Salesforce.com: delivery of a platform as a service for building and hosting web applications.
• Covers the cloud-based web application lifecycle: development, testing, deployment, and hosting.
• http://www.youtube.com/watch?v=EbnCUqAL6UM
36. Platform-as-a-Service Providers
• Force.com (from Salesforce.com)
– http://www.salesforce.com/platform/
• Google App Engine
– http://code.google.com/appengine/
• Microsoft Azure Services Platform
– http://www.microsoft.com/azure/default.mspx
• Amazon Web Services
– http://aws.amazon.com/
37. Infrastructure as a Service
• (Originally Hardware as a Service, HaaS) Delivery of
computing infrastructure as a service.
• Platform / hardware virtualization environment.
• Servers, Network equipment, RAM, Disk, CPU, etc.
• Dynamic resource allocation based on the needs of your
applications.
• Only pay for what you use.
38. Infrastructure-as-a-Service Providers
• Amazon EC2 (Elastic Compute Cloud)
“a web service that provides resizable compute capacity in the cloud. ...”
– http://aws.amazon.com/ec2/
– Small Instance (default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform
(A sketch of launching an instance programmatically follows below.)
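EC2 capacity is requested through an API rather than a purchase order. A present-day sketch using boto3, the current AWS SDK for Python, which postdates these slides; the AMI id is a placeholder and credentials are assumed to be configured:

    import boto3

    ec2 = boto3.resource("ec2")

    # Request one small instance from a (placeholder) Amazon Machine Image.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI id
        InstanceType="t2.micro",           # a small, inexpensive instance type
        MinCount=1,
        MaxCount=1,
    )
    print("launched:", instances[0].id)    # billing accrues only while it runs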
40. Cloud Computing
• Integrates the concepts of IaaS, PaaS, SaaS, Web 2.0, and related technologies (e.g., MapReduce, Ajax, virtualization).
• Based on the Internet cloud, it satisfies users' computing needs on demand.
41. Cloud Computing
• A distributed computing model.
• In the Internet cloud, dynamic, scalable, and virtualized computing resources are used to provide web services.
• For complex computation on large-scale data, dynamic and virtualized computer clusters execute distributed and parallel computation in the cloud.
43. Cloud Architecture
• The architecture of the software systems involved in the delivery of cloud computing, [...]. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services.
44. Cloud Architecture
• Designs of software applications that use Internet-accessible on-demand services.
• Applications built on cloud architectures use the underlying computing infrastructure only when it is needed (for example, to process a user request): they draw the necessary resources on demand (such as compute servers or storage), perform a specific job, and then relinquish the unneeded resources. (A sketch of this acquire/use/release pattern follows below.)
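This acquire/use/release pattern maps naturally onto a context manager. A minimal, provider-agnostic sketch; the provision/release functions are hypothetical stand-ins for real cloud API calls:

    from contextlib import contextmanager

    def provision_server():
        print("provisioning server...")     # stand-in for a real allocation call
        return "server-1"

    def release_server(server):
        print("releasing", server)          # stand-in for the de-allocation call

    @contextmanager
    def on_demand_server():
        server = provision_server()         # draw the resource on demand
        try:
            yield server                    # perform the specific job
        finally:
            release_server(server)          # relinquish it, even on failure

    with on_demand_server() as server:
        print("handling user request on", server)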
45. Cloud Architecture
• Addresses key difficulties surrounding large-scale data processing:
– It is difficult to get as many machines as an application needs.
– Difficult to get the machines when one needs them.
– Difficult to distribute and coordinate a large-scale job across different machines, run processes on them, and provision another machine to recover if one fails.
– Difficult to auto-scale up and down based on dynamic workloads.
– Difficult to get rid of all those machines when the job is done.
47. Cloud Architecture - Client
• A cloud client relies on application services delivered from the cloud computing architecture.
• Browsers
– Microsoft Internet Explorer
– Mozilla Firefox
– Google Chrome
• Mobile devices
– Android
– iPhone
– Windows Mobile
48. Cloud Architecture – Platform
• A cloud platform for developing and hosting the application services.
• Google App Engine (a hello-world sketch follows below)
– Lets you run your web applications on Google infrastructure.
– Java and Python runtime environments.
– Full support for common web technologies and services.
– Distributed datastore (BigTable).
• Hadoop MapReduce
– An open-source distributed computing platform for running MapReduce applications.
– Batch processing.
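For flavor, the era-appropriate App Engine hello world in the Python webapp framework, following the standard example pattern from Google's documentation of the time (not code from these slides):

    # Classic (Python 2-era) Google App Engine handler using the webapp framework.
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Respond to GET / with a plain-text greeting.
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write('Hello from App Engine!')

    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    def main():
        run_wsgi_app(application)           # App Engine calls main() per request

    if __name__ == '__main__':
        main()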
49. Cloud Computing - Storage
• Distributed persistent data storage in the cloud.
• Unstructured data storage
– Traditional file system
• HDFS
– Key-object pairs (an S3 sketch follows below)
• Amazon S3 (Simple Storage Service)
• Structured data storage
– Amazon SimpleDB
– Google App Engine Datastore
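S3's key-object model shows directly in its API: a bucket plus a key identifies one object. A present-day sketch with boto3 (again postdating these slides; the bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Store an object: bucket + key uniquely identify the value.
    s3.put_object(Bucket="example-bucket", Key="docs/hello.txt",
                  Body=b"hello, cloud")

    # Retrieve it again by the same key.
    obj = s3.get_object(Bucket="example-bucket", Key="docs/hello.txt")
    print(obj["Body"].read())               # b'hello, cloud'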
50. Cloud Architecture - Infrastructure
• Infrastructure as a Service
– Pay for what you use
– On demand
• Virtualized computing infrastructure
– Computer Clusters
– Hardware abstraction
– Virtualization
• Xen, VMware
• User-configurable computing environment
– Operating system, networking
– Memory, disk, CPU