QTube is a new product from Quantel that enables global broadcast workflows by providing instant access to live assets and frame accurate editing from anywhere over the internet. QTube uses technologies like Microsoft SMB2, Silverlight, and IIS Smooth Streaming combined with Quantel's file virtualization and media management to deliver live and proxy media streams to remote users. This allows users to view, log, select, and edit content from other locations as if they were on a local network. QTube is expected to be delivered in Q2 2011 and will work with Quantel's Enterprise sQ production system.
QTube Global Media Workflow
Trevor Francis, Quantel
Quantel Whitepaper, Issue 1.0, Nov 2010

QTube is an exciting new development from Quantel that will enable truly global broadcast workflows to be created. QTube delivers instant access to live assets, with frame accurate editing, from anywhere over the internet. QTube products will be delivered in Q2 2011. Technology and application details are in this whitepaper.
Introduction
QTube is an exciting new development which adds global workflow to Quantel Enterprise sQ production systems over the internet. QTube enables users anywhere in the world with an internet connection to interact with media in other locations. The workflow remains the same independent of geography. With QTube, users can view, log, select and frame accurately edit content anywhere.
Beyond the corporate network
Fixed remote sites, such as regional stations or partner broadcasters in other cities or countries, may access content, create edits and initiate transfers between sites.

Temporary remote sites, like outside broadcast units or facilities at major sports, cultural or political events, are connected to the home base. Production tasks, such as logging, editing and approval, may be shared, with a reduced requirement for on-site staffing and better access to home content.

Mobile users include staff who need to work from home and roving ENG units creating finished stories for broadcast that combine locally-acquired and studio-based content.
QTube technology
QTube combines off-the-shelf technologies including Microsoft SMB2, Silverlight and IIS Smooth Streaming with Quantel-developed file virtualization and FrameMagic media management. Together they build into a powerful and versatile set of workflow tools which is scalable in many dimensions:

Number of sites
Quality of connection
Production tools

The first QTube products will be designed to work with the Quantel Enterprise sQ system, but the techniques may be applied to any storage and filing system in the future.

QTube technology was first shown at IBC 2010 and you can watch the demo here.
QTube Technology Components
Today QTube operates around an Enterprise sQ system, which comprises one or more sQ servers managed by an ISA database/management unit. The sQ server ingests and stores broadcast-quality content alongside an H.264 proxy copy. Quantel's frame-based Identity model is asserted on this content (see "Tracking media assets in a shared-storage production system"), binding together the two media resolutions and the associated metadata. Any clip in the system, whether a live or completed recording, shot selection or complex edit, is represented to the outside world via the QVFS, or Quantel Virtual Filing System.

The QVFS is central to the operation of the Enterprise sQ architecture. The standard server holds two qualities of media, distributed across many disks, each with its own filing system. The Virtual Filing System is an abstraction layer which presents all the media assets as a list of clips. The user has no regard for their file type or the actual storage location; the system delivers the media in the form required, automatically. For example, users of studio-based desktop editors will be delivered proxy copies; craft editors will see broadcast quality. [An unsatisfactory alternative would be to present two lists of media objects with different file extensions and require the user to select the one appropriate to their task.]

The management of media in the sQ server is based on a principle called 'FrameMagic', described in detail in the above referenced paper. Acquired media is stored and managed as a (typically large) number of files, each representing a video frame, with more files representing audio samples. The Identity model attaches a unique reference to each frame, based on a GUID (Globally Unique IDentifier) plus a frame offset.

In this example, the logical clip "Original" contains three scenes: red, green and blue. The green scene could be described in the form: 001A456B789C {Start 101; End 200}
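To make the Identity model concrete, here is a small Python sketch invented purely for illustration (it is not Quantel's actual data model), treating a scene as a GUID plus an inclusive frame range:

    # Illustrative only: a scene as a (GUID, start, end) frame-range
    # reference, per the Identity model described above.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FrameRange:
        guid: str     # identifies the acquired source material
        start: int    # first frame offset within that source
        end: int      # last frame offset, inclusive

        def __len__(self):
            return self.end - self.start + 1

    # The green scene of the "Original" clip from the example above:
    green = FrameRange("001A456B789C", 101, 200)
    print(f"{green.guid} {{Start {green.start}; End {green.end}}} "
          f"= {len(green)} frames")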
Editing may generate new frames, through keying, transitions or color-correction; these are stored as new 'Delta' (difference) frames, with a unique Identity of their own. A real clip may point to many sequences of frames derived from multiple acquired 'clips' as well as sequences of delta frames.

The Quantel VFS generates a 'manifest', in XML form, which is a list of the identities of the frames comprising a logical clip. In the example below we have generated an edit based on the 'Original' clip. The manifest shows the references to the original frames:

GUID:    019F 779E 631A (it has a GUID of its own)
Title:   "Edit"
Type:    .clip
Length:  200 frames
Source1: 001A 456B 789D {Start 101; End 200}
Source2: 001A 456B 789D {Start 201; End 300}

The manifest performs an essential function in QTube. It allows us to refer to a large media object, the original frames, using only a few lines of XML. We can generate a cuts-only edit by creating a new manifest describing the chosen frames and send this back to the home server. In this simple example, the data travelling the return path is extremely small.
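The sketch below, again hypothetical, illustrates why the return path is so light: a cuts-only edit reduces to a few lines of XML referencing frames that never leave the server. The element and attribute names are invented; Quantel's actual manifest schema is not published in this paper.

    # Hypothetical cuts-only manifest builder; schema names are invented.
    import uuid
    import xml.etree.ElementTree as ET

    def make_manifest(title, sources):
        # sources: list of (source_guid, start_frame, end_frame), inclusive.
        clip = ET.Element("clip", guid=str(uuid.uuid4()),
                          title=title, type=".clip")
        length = 0
        for guid, start, end in sources:
            ET.SubElement(clip, "source", guid=guid,
                          start=str(start), end=str(end))
            length += end - start + 1
        clip.set("length", str(length))
        return ET.tostring(clip, encoding="unicode")

    # A 200-frame edit is described in a few hundred bytes of XML, while
    # the frames themselves stay on the home server.
    print(make_manifest("Edit", [("001A456B789D", 101, 200),
                                 ("001A456B789D", 201, 300)]))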
Another problem to be solved is how to deliver the best viewing experience at a remote location. QTube uses the off-the-shelf internet streaming technology Microsoft Smooth Streaming combined with Quantel's unique live 'mutation' of virtualized files. The ability to service requests for a range of files containing different image sizes and compression quality levels, on demand, is at the heart of QTube.

Microsoft Smooth Streaming demands files of varying image quality according to the instantaneous performance of the internet path. It would be possible to create all of these and store them, but this would be expensive in resources, limited in scalability and could introduce unwanted latency. Quantel's solution is to effectively 'fool' Smooth Streaming into believing that the necessary files of different quality exist by presenting them as virtual files, while only actually creating them on demand. A request for a file of a given type is automatically translated into a live transcoding (or mutation) of the original data to populate the virtual file.

Building on the foundation of Quantel's Virtual Filing System means that QTube can work on live recordings, not just completed files. A live recording is represented as a complete set of video frames which are initially 'blank' and are gradually replaced with incoming video and audio data. News and Sports applications depend on the ability to access media during acquisition; with QTube this access is unrestricted by location. The sQ workflow has gone global.
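The following is a minimal sketch of that lazy pattern, with every class, method and file name invented for illustration: all quality variants are advertised as if they already existed, but each one is rendered only when a client first asks for it.

    # Illustrative sketch of virtual files populated by on-demand transcode.
    class VirtualClip:
        QUALITIES = ("240p", "480p", "720p", "1080p")

        def __init__(self, source_frames):
            self.source_frames = source_frames
            self._cache = {}          # populated lazily, per request

        def advertised_files(self, clip_id):
            # What the streaming server sees: one "file" per quality,
            # none of which exists yet.
            return [f"{clip_id}_{q}.ismv" for q in self.QUALITIES]

        def fetch(self, quality, start, end):
            # A request for a given quality triggers a live transcode
            # ('mutation') of just the frames asked for.
            key = (quality, start, end)
            if key not in self._cache:
                self._cache[key] = _transcode(
                    self.source_frames[start:end + 1], quality)
            return self._cache[key]

    def _transcode(frames, quality):
        # Stand-in for the real transcoder.
        return [f"{frame}@{quality}" for frame in frames]

    clip = VirtualClip([f"frame{n}" for n in range(300)])
    print(clip.advertised_files("Original")[0])   # Original_240p.ismv
    print(clip.fetch("720p", 0, 2))               # transcoded on first request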
QTube user experience
The user is offered two choices of interface, based on their operational requirements.

Passive viewing of content plus creation/editing of metadata is offered via a Silverlight-based viewer. This web client requires no local install of software and works on vanilla PCs. It can also be used to initiate transfers between different QTube-enabled locations.

[Figure: The prototype Quantel Silverlight viewer]

A pop-up search tool allows users to interrogate the database of remote servers and select the required content using a filtered search of the standard sQ metadata fields. Users are able to navigate randomly through the clip; the QVFS virtualization reacts immediately to create a view of the chosen frames from the source file at the resolution/quality demanded by Smooth Streaming to match the current performance of the internet connection.

A Quantel client provides frame accurate remote editing.

[Figure: QTube Remote Editor showing the server bin and clip viewer]
The Quantel client uses the standard, scalable Quantel interface and is key to preserving the integrity of the Quantel workflow beyond the confines of the studio. The editor looks and works in the same way at base or on the road. This has been achieved by re-engineering the server communications back-end to the editor while leaving the user interface and all the other processes totally unchanged.

Media caching techniques used in the studio-based editors are retained in the remote version. Once content has been viewed, it is stored locally on the client PC.

Later in the development cycle it is expected that the HTTP interface will become standardized for all workstations, whether on the production LAN or elsewhere.
FrameMagic
During the viewing and editing of remote content, no media is moved from the home servers to the remote clients.

[Figure: the virtualized blue home clips viewed in the remote editor]

Movement of clips between remote clients and studio-based sQ servers obeys Quantel's FrameMagic rules. These state that no individual frames are moved more than once.
In an ENG operation, selected sequences from locally-acquired content (the red frames) may be mixed with server content. On publish, only the red frames are moved, minimising the transfer of what could be 100 Mb/s essence. The published edit AAF file (and the manifest) will point to these selected red frames and the blue frames of the server-based footage.
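A rough sketch of that rule, with invented names: before publish, the client computes which frame identities the destination does not already hold and ships only those; everything else travels as manifest references.

    # Illustrative FrameMagic transfer rule: ship only frames whose
    # (GUID, offset) identity the destination does not already hold.
    def frames_to_publish(edit_frames, server_frames):
        # Both arguments are sets of (guid, frame_offset) identities.
        return edit_frames - server_frames

    server = {("001A456B789C", n) for n in range(1, 301)}    # blue frames
    local = {("BEEF00000001", n) for n in range(1, 51)}      # red frames
    edit = {("001A456B789C", n) for n in range(101, 201)} | local

    # Only the 50 locally-acquired red frames move; the manifest simply
    # references the blue frames already held by the server.
    print(len(frames_to_publish(edit, server)))   # -> 50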
To speed publish times still further, QTube will offer secondary compression of remote essence before it is sent over the internet. It is transcoded back to a supported server file format at the receiving station. The user will select the degree of additional compression based on the time available, moderated by the organisation's quality standards, of course. As part of the development, Quantel plans to include automatic calculation of the extra compression based on the quality of the media to be moved, the available network bandwidth and the time allowed. There will be an option, of course, to re-send later with less or no additional compression. Additional compression is never lossless, but the subjective quality is adequate to the needs of breaking news, where a perfect edit delivered late may be worthless.
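The automatic calculation is not specified in this paper, but the underlying arithmetic is straightforward, as the back-of-the-envelope sketch below shows (it is not Quantel's algorithm): the target bitrate is whatever lets the selected essence cross the available link within the time allowed.

    # Back-of-the-envelope sketch: pick the bitrate that fits the deadline.
    def target_bitrate_mbps(duration_s, deadline_s, link_mbps,
                            native_mbps=100.0):
        budget_mb = link_mbps * deadline_s    # megabits we can send in time
        return min(native_mbps, budget_mb / duration_s)

    # 90 s of 100 Mb/s essence over a 20 Mb/s link with 60 s allowed:
    # 20 * 60 / 90 = ~13.3 Mb/s, i.e. roughly 7.5x extra compression.
    print(round(target_bitrate_mbps(90, 60, 20), 1))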
QTube Media Mover
QTube will enable the automatic transfer of content at one QTube-enabled site to another. An additional metadata field will identify clips that need to be transferred between sites, and QTube will move the clips to the correct destination. System configuration will allow human-friendly destinations to be translated into the IP addresses needed by QTube. Remote publishes will also support this new metadata field, so a remote editor could easily publish back to its home base and flag the clip for transfer to another location.

There are many new workflows, including the accessing of content from unmanned stations out of normal working hours, or even when evacuated during a disaster-recovery event.
Anticipated use cases for QTube technology
1. Viewing of server-based content anywhere there is an internet connection
2. Editing clip metadata including logging via the internet
3. Remote frame accurate video/audio editing
4. Picture research
5. Review / quality control
6. Loading of content – example: upload of field-acquired media
7. Remote editing with only occasional server connection
8. Super User / Media Management
9. Inter-site media movements
Product timeline
Quantel plans to start shipping QTube products in Q2 2011. The first products will be:
Silverlight web browser with viewing of video and audio content plus Quantel 'rush-tag' metadata
Frame accurate remote editor
  o Viewing
  o Drag/drop and timeline editing
  o Appending and publishing of locally-ingested content
MediaMover to enable clips published to one location to be automatically transferred to any other
Detailed product specifications and pricing will be available by the end of 2010.