One does not simply explain "cloud". A continuum from virtual machines to the cloud, with a Star Trek bias. Holodeck, virtual machines, hypervisors, public cloud, private cloud, hybrid cloud, VirtualBox, Ubuntu, OpenStack, and finally, Make it so!
OpenStack on SmartOS allows running OpenStack on the SmartOS hypervisor platform. SmartOS provides an efficient and secure hypervisor through its use of zones, KVM, ZFS, and DTrace. The presentation outlines work to integrate OpenStack Nova compute and network services with SmartOS, with plans to integrate Quantum network virtualization and leverage ZFS and DTrace for monitoring. The goal is to provide an optimized OpenStack deployment on SmartOS' efficient and flexible virtualization architecture.
The document discusses using Python as an interactive computing platform for tasks like data analysis, visualization, workflow automation, and parallel computing. It provides examples of Python libraries and tools for tasks like interactive notebooks, 2D/3D plotting, data manipulation, command line parsing, concurrency, and connecting to cloud services. It also covers strategies for high performance computing with Python, leveraging C/C++ libraries, and approaches for building platforms with Python.
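The concurrency tooling the summary gestures at can be illustrated with the standard library alone; a minimal sketch, where `summarize` and the sample datasets are hypothetical stand-ins for real analysis code:

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def summarize(series):
    """Hypothetical per-dataset analysis step: basic descriptive statistics."""
    return {"n": len(series), "mean": mean(series), "max": max(series)}

datasets = [[1, 2, 3], [10, 20, 30, 40], [5, 5, 5]]

# Fan the work out across a thread pool, as one might for I/O-bound tasks
# (reading files, fetching from a cloud service) in an interactive session.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(summarize, datasets))

print(results[1])
```

The same `pool.map` shape carries over to `ProcessPoolExecutor` for CPU-bound work, which is one of the standard stepping stones toward the parallel-computing strategies the talk covers.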
The document discusses OpenStack, an open source cloud computing platform that aims to be simple to implement, massively scalable, and able to meet the needs of both public and private clouds regardless of size. It provides a high-level overview of OpenStack's core components, including modules for image storage (Glance), compute resources (Nova), networking (Quantum), and object storage (Swift). The networking module (Quantum) currently focuses on L2 connectivity but may expand to L2-L7 capabilities through network containers.
This document discusses Beowulf clusters, which are low-cost high-performance computing systems built from commodity off-the-shelf computers. It provides details on a specific Beowulf cluster built at Caltech in 1996 using 16 Pentium Pro processors that achieved a total of 1.25 billion floating-point operations per second (Gflops) at a much lower cost than conventional supercomputers. It also outlines the benefits of computational modeling and simulation using Beowulf clusters and provides steps for building and programming a basic Beowulf cluster.
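The appeal of a Beowulf cluster is data-parallel decomposition: split a numerical job into chunks, compute partial results on each node, and reduce. A single-machine sketch of that pattern, using process pools in place of cluster nodes (on a real cluster the same split would typically run over MPI; the sum-of-squares workload is arbitrary):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over [lo, hi) -- a stand-in for one node's share of work."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    # Split the range into one chunk per "node", then reduce the partial results.
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the serial loop, computed across worker processes.
    print(parallel_sum_squares(1000))
```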
This document discusses using Unix/BSD as a cloud computing platform. It notes that while large tech companies have pioneered cloud computing, their approaches are centralized and inefficient. Unix was not originally designed for cloud tasks like email, web hosting, and file storage. However, a case study demonstrates how Unix can be adapted to run tasks on 100-1000+ machines in a heterogeneous, fault-tolerant, and automated way like modern clouds. The document argues that Unix communities should embrace cloud ideology and sharing at the OS level to make BSD more useful for large organizations and seize new opportunities.
The document discusses Ceph storage deployments at IBM Research Zurich and on IBM's Softlayer cloud infrastructure. At IBM Research Zurich, an initial Ceph cluster using SSD storage reached capacity limits, so it was replaced with a larger cluster using HDD storage. IBM is also deploying Ceph on Softlayer for private managed cloud storage, using Ceph block storage with OpenStack. Future plans include improving multi-tenant access control and disaster recovery across data centers with Ceph.
Cleantech 3.0: Urbanization and Supply Chains Ontario and Masdar as Transfo... (MaRS Discovery District)
The document discusses how population growth, urbanization, and increasing resource demands will lead to 10 billion people and 400 megacities by 2050. It also notes that private capital investment in cleantech innovation has topped $100 billion. Finally, it proposes that Ontario and Masdar could be transformative partners in addressing these challenges through a cleantech accelerator that identifies client goals and barriers, recommends target partners, and accelerates cleantech solutions.
This document provides information about Sameer Verma's work with open source software at San Francisco State University. It discusses three significant open source projects implemented at the university: Moodle for learning management, Drupal for content management, and Mahara for e-portfolios. It also describes Verma's research on serial entrepreneurs and international startups. Additionally, it outlines Verma's involvement with the Drupal community and initiatives to expand open source projects beyond campus, such as with the One Laptop per Child program in Jamaica.
The vision of Masdar City (the world’s first zero-carbon city to be created before 2020) was shared by the Masdar City team at a September 16, 2009, business-to-business seminar held at MaRS.
The seminar attracted nearly 70 cleantech suppliers, green technology leaders, government policy makers and sector funders. This presentation is from Sustainable Development and Technology Canada, created for this seminar.
Bridging the Divide with Education, Technology and Outreach. Presentation at the School of Education, University of the West Indies, Mona Campus, Jamaica.
Innovation Across Borders - Session 8: Wang Rong, for Toronto conference (MaRS Discovery District)
The document discusses international cooperation programs between Shanghai business incubators and organizations in other countries and regions. It provides an overview of business incubation in Shanghai and four models of transnational bilateral programs. The programs are intended to encourage entrepreneurship and commercialization of technologies by helping companies establish connections and operations overseas.
Juju, LXC, OpenStack: Fun with Private Clouds (Sameer Verma)
Description: Private clouds fill an interesting space in the cloud roadmap. They can provide a scalable, reliable, fault-tolerant cloud platform on your own infrastructure, and can be balanced with public cloud offerings. We will look at three technologies. OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. Juju, a cloud orchestration platform from Ubuntu, enables you to build entire environments with only a few commands, on public clouds like Amazon Web Services and HP Cloud as well as on private clouds built on OpenStack. LXC is the userspace control package for Linux Containers, a lightweight virtual system mechanism sometimes described as "chroot on steroids". LXC builds up from chroot to implement complete virtual systems, adding resource management and isolation mechanisms to Linux's existing process management infrastructure. How cool would it be to walk around with a private cloud on your laptop?
An introduction to virtualization as a concept, its implementation in VirtualBox and an extension into an OpenStack private cloud. Done at SF State University. See more at http://commons.sfsu.edu/virtualization-and-cloud
Presented at NSA User Group. Steps through recent activities and technologies in use across NSA and the IC. Specifically mentions data ingress/egress with JBoss Messaging and MRG-M, storage of data with XFS and GFS, and data presentation capabilities with JBoss Enterprise Middleware Portfolio. 15-20min on Security Automation with SCAP.
The Lies We Tell Our Code (#seascale 2015-04-22) (Casey Bisson)
This document discusses various lies and forms of virtualization that are commonly used in computing. It begins by summarizing different virtualization technologies used at Joyent like zones, SmartOS, and Triton. It then discusses lies told at different layers of the stack, from virtual memory to network virtualization. Some key lies discussed include hyperthreading, paravirtualization, hardware virtual machines, Docker containers, filesystem virtualization techniques, and network virtualization. The document argues that many of these lies are practical choices that improve performance and workload density despite not perfectly representing the underlying hardware. It concludes by acknowledging the need to be mindful of security issues but also not to stop lying at the edge of the compute node.
The lies we tell our code, LinuxCon/CloudOpen 2015-08-18 (Casey Bisson)
As presented at LinuxCon/CloudOpen 2015: http://sched.co/3Y3v
We tell our code lies from development to deploy. The most common of these lies start with the simple act of launching a virtual machine. These lies are critical to our applications. Some of them protect applications from themselves and each other, some even improve performance. Some, however, decrease performance, and others create barriers to simply getting things done.
We lie about the systems, networks, storage, RAM, CPU and other resources our applications use, but how we tell those lies is critical to how the applications that depend on them perform. Joyent's Casey Bisson will explore the lies we tell our code and demonstrate examples of how they sometimes help and hurt us.
Machine Learning for Big Data Analytics: Scaling In with Containers while Sc... (Ian Lumb)
Watch On Demand Anytime via http://www.univa.com/resources/webinar-machine-learning.php
Armed with nothing more than an Apache Spark-toting laptop, you have everything required to prototype Machine Learning applications for your data-science needs. From programmability in Scala, Java or Python, to built-in support for Machine Learning via MLlib, Spark is an exceedingly effective enabler that lets you produce results rapidly.
Of course, as soon as your prototyping proves successful, you'll want to scale out to embrace the volume, variety and velocity that characterize today's Big Data demands... in production. Because Spark is as comfortable on an isolated laptop as it is in a distributed-computing environment, addressing Big Data requirements in production boils down to effectively and efficiently embracing containers and clusters for Big Data Analytics.
And this is where offerings from Univa shine - i.e., in making the transition from prototype to production completely seamless. For some use cases, it makes sense to scale in Spark-based applications within Docker containers via Univa Grid Engine Container Edition or Navops by Univa; in others, Spark is interfaced (as a Mesos-compliant framework) with Univa Universal Resource Broker to permit scaling out on a cluster. In both scenarios, your production Spark applications are scheduled alongside other classes of workload - without a need for dedicated resources.
Agenda:
• Overview of Apache Spark as a platform for Deep Learning - from Python-based Jupyter Notebooks to Spark's Machine Learning library MLlib
• Overview of prototyping Machine Learning via Apache Spark on a laptop - without and within Docker containers
• Introductions to Univa Grid Engine Container Edition and Univa Universal Resource Broker plus Navops by Univa
• Overview of production Big Data Analytics platforms for Machine Learning
• Docker-containerized Apache Spark and Univa Grid Engine Container Edition
• Docker-containerized Apache Spark and Navops by Univa
• Apache Spark plus Univa Universal Resource Broker
• Introducing support for GPUs without and within Docker containers
• Use case example - using Machine Learning to classify data from Twitter without and within Docker containers
• Summary and next steps
Watch On Demand Anytime via http://www.univa.com/resources/webinar-machine-learning.php
This document discusses Java Card, an open platform for smart cards that allows "write once, run anywhere" functionality. It provides background on Java Card, including that it allows multi-application smart cards and secure application loading. It then summarizes the Java Card architecture, including the hardware features, native functions, Java Card Virtual Machine (JCVM), Java Card Runtime Environment (JCRE), Java Card APIs, and card management. Finally, it provides an overview of the Java Card programming process.
This document provides an introduction to cloud computing, including definitions of cloud characteristics, infrastructure as a service using Amazon EC2, and platforms like Azure and AppEngine. It outlines some challenges of cloud computing like bandwidth, lack of standards, security issues, and limited SLAs. As a case study, it discusses developing a scalable cloud-based search service and benchmarks for response time and throughput.
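The response-time and throughput benchmarks described for the search-service case study can be sketched generically with the standard library; `fake_search` below is a hypothetical stand-in for a real request to the service:

```python
import time
from statistics import median

def fake_search(query):
    """Stand-in for a real request to the cloud-based search service."""
    return sorted(set(query.split()))

def benchmark(fn, requests):
    """Measure per-request latency and overall throughput for a workload."""
    latencies = []
    start = time.perf_counter()
    for req in requests:
        t0 = time.perf_counter()
        fn(req)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "median_latency_s": median(latencies),
        "throughput_rps": len(requests) / elapsed,
    }

stats = benchmark(fake_search, ["cloud computing basics"] * 1000)
print(f"{stats['throughput_rps']:.0f} requests/second")
```

Against a real deployment one would issue the requests over the network and vary concurrency, but the two metrics reported are the same ones the benchmarks in the document target.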
Accelerating Spark Genome Sequencing in Cloud—A Data Driven Approach, Case St... (Spark Summit)
Spark data processing is shifting from on-premises deployments to cloud services to take advantage of horizontal resource scalability, better data accessibility, and easier manageability. However, fully utilizing the computational power, fast storage, and networking offered by a cloud service can be challenging without a deep understanding of workload characteristics and software-optimization expertise. In this presentation, we use a Spark-based programming framework, Genome Analysis Toolkit version 4 (GATK4, under development), as an example to present a process for configuring and optimizing a proficient Spark cluster on Google Cloud to speed up genome data processing. We first introduce an in-house data profiling framework named PAT, and discuss how to use PAT to quickly establish the best combination of VM and Spark configurations to fully utilize cloud hardware resources and Spark's computational parallelism. In addition, we use PAT and other profiling tools to identify and fix software hotspots in the application. We show a case study in which we identify a thread-scalability issue with Java's instanceof operator; the fix, made in Scala, greatly improves the performance of GATK4 and other Spark-based workloads.
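PAT itself is described as in-house, so it is not shown here; as a generic analogue of the hotspot-hunting workflow, a built-in profiler can surface where time goes. A Python sketch using cProfile (the deliberately quadratic `hot` function is a contrived example, not GATK4 code):

```python
import cProfile
import io
import pstats

def hot(n):
    # Deliberate hotspot: quadratic membership tests against a list,
    # the kind of inefficiency a profile report makes obvious.
    seen = []
    for i in range(n):
        if i not in seen:
            seen.append(i)
    return len(seen)

profiler = cProfile.Profile()
profiler.enable()
hot(2000)
profiler.disable()

# Rank functions by cumulative time; the hotspot dominates the report.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

The fix in such a case is usually structural (here, replacing the list with a set), mirroring how the talk's `instanceof` hotspot was resolved at the source level once identified.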
Running your Java EE 6 applications in the Cloud @ Silicon Valley Code Camp 2010 (Arun Gupta)
Arun Gupta presented on running Java EE 6 applications in the cloud. He discussed Java EE 6 support on various cloud platforms including Amazon, RightScale, Elastra, and Joyent. He also compared features of different cloud vendors and how Java EE can evolve to better support cloud computing. Gupta concluded that Java EE 6 applications can easily be deployed to various clouds and GlassFish provides a feature-rich implementation of Java EE 6.
Running your Java EE 6 Applications in the Cloud (Arun Gupta)
This document discusses running Java EE 6 applications in the cloud. It provides an overview of Java EE 6 and demonstrates deploying applications to various cloud platforms including Amazon Web Services, RightScale, Microsoft Azure, and Joyent. It also compares these platforms and discusses how Java EE can evolve to better support cloud computing.
JFokus 2011 - Running your Java EE 6 apps in the Cloud (Arun Gupta)
Oracle provides Java EE 6 application servers and databases that can run on various cloud platforms including Amazon Web Services, RightScale, Microsoft Azure, and Joyent. These cloud platforms offer virtual servers, storage, databases and additional services that allow flexible deployment of Java EE 6 applications in public, private and hybrid cloud environments. Pricing models vary between platforms and include consumption-based or commitment-based options.
JavaOne 2014: Taming the Cloud Database with jclouds (zshoylev)
This document provides information and instructions for setting up a project using Apache jclouds to create a database in the cloud. It discusses initializing the necessary APIs from jclouds to interact with cloud database services, and provides code samples for creating a database user, database instance, and connecting to the database to test it. The document also discusses next steps like contributing to jclouds examples projects and documentation.
Apache Mesos is a cluster manager that provides efficient resource sharing for distributed applications across a shared pool of nodes. It allows organizations to run applications like Hadoop, Spark, and Storm on large clusters with high utilization. Mesos addresses issues with prior solutions that constrained everything as "jobs" or required static partitioning. It has been adopted by companies like Twitter, Airbnb, and Hubspot to improve efficiency and allow applications to dynamically scale resources.
Running Accurate, Scalable, and Reproducible Simulations of Distributed Syste... (Rafael Ferreira da Silva)
Scientific workflows are used routinely in numerous scientific domains, and Workflow Management Systems (WMSs) have been developed to orchestrate and optimize workflow executions on distributed platforms. WMSs are complex software systems that interact with complex software infrastructures. Most WMS research and development activities rely on empirical experiments conducted with full-fledged software stacks on actual hardware platforms. Such experiments, however, are limited to hardware and software infrastructures at hand and can be labor- and/or time-intensive. As a result, relying solely on real-world experiments impedes WMS research and development. An alternative is to conduct experiments in simulation.
In this work we present WRENCH, a WMS simulation framework, whose objectives are (i) accurate and scalable simulations; and (ii) easy simulation software development. WRENCH achieves its first objective by building on the SimGrid framework. While SimGrid is recognized for the accuracy and scalability of its simulation models, it only provides low-level simulation abstractions, and thus large software development efforts are required when implementing simulators of complex systems. WRENCH thus achieves its second objective by providing high-level and directly re-usable simulation abstractions on top of SimGrid. After describing and giving rationales for WRENCH's software architecture and APIs, we present a case study in which we apply WRENCH to simulate the Pegasus production WMS. We report on ease of implementation, simulation accuracy, and simulation scalability so as to determine to what extent WRENCH achieves its two above objectives. We also draw both qualitative and quantitative comparisons with a previously proposed workflow simulator.
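This is not WRENCH's API; purely as a toy illustration of the discrete-event idea underneath such simulators, here is a greedy list scheduler in virtual time (the task list, host names, and scheduling policy are all invented for the example):

```python
import heapq

def simulate(tasks, hosts):
    """Greedy scheduling of (name, runtime) tasks onto hosts, in virtual time.

    A toy discrete-event loop: the event queue holds (finish_time, host)
    pairs, and each task is placed on the earliest-available host. No real
    time passes; the "clock" advances only through the queue.
    """
    queue = [(0.0, h) for h in hosts]        # every host free at t = 0
    heapq.heapify(queue)
    finish = {}
    for name, runtime in tasks:
        t_free, host = heapq.heappop(queue)  # earliest-available host
        t_done = t_free + runtime
        finish[name] = (host, t_done)
        heapq.heappush(queue, (t_done, host))
    return finish

schedule = simulate([("t1", 4.0), ("t2", 1.0), ("t3", 2.0)],
                    ["hostA", "hostB"])
print(schedule)
```

Frameworks like SimGrid add validated models of network contention, I/O, and failures on top of this basic event-queue mechanism, which is what makes their simulations accurate as well as fast.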
Running Accurate, Scalable, and Reproducible Simulations of Distributed Syste...Rafael Ferreira da Silva
Scientific workflows are used routinely in numerous scientific domains, and Workflow Management Systems (WMSs) have been developed to orchestrate and optimize workflow executions on distributed platforms. WMSs are complex software systems that interact with complex software infrastructures. Most WMS research and development activities rely on empirical experiments conducted with full-fledged software stacks on actual hardware platforms. Such experiments, however, are limited to hardware and software infrastructures at hand and can be labor- and/or time-intensive. As a result, relying solely on real-world experiments impedes WMS research and development. An alternative is to conduct experiments in simulation.
In this work we present WRENCH, a WMS simulation framework, whose objectives are (i) accurate and scalable simulations; and (ii) easy simulation software development. WRENCH achieves its first objective by building on the SimGrid framework. While SimGrid is recognized for the accuracy and scalability of its simulation models, it only provides low-level simulation abstractions and thus large software development efforts are required when implementing simulators of complex systems. WRENCH thus achieves its second objective by providing high-level and directly re-usable simulation abstractions on top of SimGrid. After describing and giving rationales for WRENCH’s software architecture and APIs, we present a case study in which we apply WRENCH to simulate the Pegasus production WMS. We report on ease of implementation, simulation accuracy, and simulation scalability so as to determine to which extent WRENCH achieves its two above objectives. We also draw both qualitative and quantitative comparisons with a previously proposed workflow simulator.
Sharing High-Performance Interconnects Across Multiple Virtual Machinesinside-BigData.com
In this deck from the Stanford HPC Conference, Mohan Potheri from VMware presents: Sharing High-Performance Interconnects Across Multiple Virtual Machines.
"Virtualized devices offer maximum flexibility: sharing of hardware between virtual machines, the use of VMware vMotion to handle migration and take snapshots. However, when performance is the most critical requirement there are other options. VMware Direct Path I/O delivers excellent performance, but only for a single virtual machine. Single root I/O virtualization (SR-IOV), on the other hand, offers the performance of pass-through mode while allowing devices to be shared by multiple virtual machines.
This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations."
Watch the video: https://youtu.be/-iYYmsBw8SU
Learn more: https://www.vmware.com
and
http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Jam BW is a Java-based project created by Afsheen Khalid, a student at the University of Education in Okara, Pakistan. The document discusses how Java applications are platform independent and can run on any computer. It also covers some of the software problems with third-party applications, the services provided by networked computers, how laboratory devices could be programmed with Java, and how this could lead to the creation of virtual robots.
JavaOne India 2011 - Running your Java EE 6 Apps in the CloudArun Gupta
This document discusses running Java EE 6 applications in the cloud. It provides an overview of deploying Java EE 6 applications to various cloud platforms including Amazon Web Services, RightScale, Microsoft Azure, and Joyent. It also discusses the Java EE 7 specification and how it will further support cloud deployments with a focus on multi-tenancy and elasticity. Lastly, it outlines the GlassFish Server distributions for both open source and commercial use on private and public clouds.
"Computer, end program": Virtualization and the Cloud
1. “Computer, end program”
Making virtual worlds possible
Sameer Verma, Ph.D.
Professor, Information Systems Department
College of Business, San Francisco State University
San Francisco, CA 94132 USA
http://verma.sfsu.edu/
sverma@sfsu.edu
Unless noted otherwise
4. Final scene of Star Trek: Enterprise
http://youtu.be/pXotJu1CapU
5. As it was in the beginning
● Mainframe virtualization.
● IBM's CP-40 research system in 1967.
● Compartmentalize large processing capabilities.
● Run processes separately.
● Lease “slices” to different customers.
6. Too many servers?
● Data center challenges
● One physical server for one application
– Web
– Storage
– Authentication
– Network
Power, Cooling, Bandwidth...
= 4
7. Rise of Apache
● Apache VirtualHost.
● Multiple virtual web hosts on each physical server.
● Led to the adoption of Apache in server rooms.
● Eventually drove the adoption of Linux to run these websites.
● Still one underlying OS.
Single point of failure?
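The VirtualHost mechanism the slide refers to can be sketched with a minimal name-based configuration; the hostnames and document-root paths below are illustrative placeholders, not values from the presentation:

```apache
# Two sites share one Apache instance and one IP address.
# Apache picks the site by matching the request's Host: header
# against each ServerName.
<VirtualHost *:80>
    ServerName www.example-one.org
    DocumentRoot /var/www/example-one
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example-two.org
    DocumentRoot /var/www/example-two
</VirtualHost>
```

Note that both sites still run under a single Apache process tree on a single operating system, which is exactly the single point of failure the slide asks about.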
8. Hypervisor
This is not a hypervisor
...although it is a VISOR
http://en.wikipedia.org/wiki/Geordi_La_Forge#VISOR
http://startrek.asatem.cz/storage/laforge_geordi01.jpg
18. *aaS
● Software as a Service (SaaS)
– Salesforce.com, GoogleDocs
● Platform as a Service (PaaS)
– Google App Engine, Heroku, OpenShift
● Infrastructure as a Service (IaaS)
– OpenStack, Eucalyptus, CloudStack
● Metal as a Service (MaaS)
– Ubuntu MaaS