OSDC 2018 | Apache Ignite - the in-memory hammer for your data science toolkit (NETWAYS)
Machine learning is a method of data analysis that automates the building of analytical models. By using algorithms that iteratively learn from data, computers are able to find hidden insights without the help of explicit programming. These insights bring tremendous benefits into many different domains. For business users, in particular, these insights help organizations improve customer experience, become more competitive, and respond much faster to opportunities or threats. The availability of very powerful in-memory computing platforms, such as Apache Ignite, means that more organizations can benefit from machine learning today. In this presentation we will look at some of the main components of Apache Ignite, such as the Compute Grid, Data Grid and the Machine Learning Grid. Through examples, attendees will learn how Apache Ignite can be used for data analysis.
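As a toy illustration of "algorithms that iteratively learn from data", the sketch below fits a line by gradient descent in plain Python. It is not Ignite's ML Grid API; Ignite runs this kind of training distributed across the cluster's memory.

```python
# Fitting y = w*x + b by gradient descent: the model is learned from
# data rather than explicitly programmed. Plain Python, for illustration.

def fit_linear(points, lr=0.05, epochs=500):
    """Learn w and b by minimizing mean squared error over (x, y) pairs."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by y = 2x + 1; the fit recovers the hidden rule.
w, b = fit_linear([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```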
Cloud computing and OpenStack basic introduction. This presentation was given on November 13, 2014 at Universitat Politecnica de Catalunya. Barcelona, Spain.
OpenStack is an open source cloud project and community with broad commercial and developer support. OpenStack is currently developing two interrelated technologies: OpenStack Compute and OpenStack Object Storage. OpenStack Compute is the internal fabric of the cloud creating and managing large groups of virtual private servers and OpenStack Object Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data. In this tutorial, Bret Piatt will explain how to deploy OpenStack Compute and Object Storage, including an overview of the architecture and technology requirements.
In Pravega's first community meeting as a CNCF project, we gave an overview of Pravega's experimental features:
* Schema Registry - preserving the structure of data in an unstructured storage system and controlling for safe schema evolution
* Consumption-Based Retention - stream truncation based on subscriber positions
* Simplified Long-Term Storage (SLTS) - abstracting the distributed management of segments while removing complicated problems such as fencing
* SLTS Plugin for BookKeeper - an implementation of the SLTS interfaces for BlobIt! object stores on BookKeeper: https://github.com/diegosalvi/pravega-blobit-chunkmanager
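The consumption-based retention rule above can be sketched in a few lines: a stream may only be truncated up to the slowest subscriber's acknowledged position, so no subscriber loses unread events. The names below are illustrative, not Pravega's actual API.

```python
# Consumption-based retention, reduced to its core: truncation is
# bounded by the minimum acknowledged position across subscribers.

def safe_truncation_point(positions):
    """positions: {subscriber_name: acknowledged stream offset}.
    Returns the highest offset up to which the stream may be truncated."""
    return min(positions.values())

readers = {"analytics": 1200, "audit": 350, "replicator": 900}
print(safe_truncation_point(readers))  # 350: the audit reader lags behind
```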
State of the Stack v4 - OpenStack in All Its Glory (Randy Bias)
The almost annual State of the Stack, version 4, an end-to-end view of OpenStack. This edition focuses on what the challenges are within the community and how they can be addressed.
v1 of SOTS has over 90,000 views and is one of the most-viewed OpenStack presentations ever.
Do you think that Nova, Cinder, Heat, Ceilometer, and Neutron are all references to global warming and looming apocalypse? For all those who come to the OpenStack community and wonder what all the fuss is about, this quick introduction will answer your many questions. It includes a short history of the largest open source project in history and will touch on the basic OpenStack components, so you will be prepared the next time someone mentions Keystone, Nova, and Swift in the same sentence.
This session was presented by Beth Cohen at the OpenStack meetup on Feb 19th, 2014 in Boston. Beth works for Verizon developing cool Cloud based products that she can't talk about without a strict NDA. She is a technical leader with over 25 years of experience architecting leading-edge system infrastructures and managing complex projects in the telecom, manufacturing, financial services, government, and technology industries. She has been involved in building some of the world's largest OpenStack architectures and has way too much fun at OpenStack Summits!
[OpenStack Day in Korea 2015] Keynote 2 - Leveraging OpenStack to Realize the SKT Software-Defined Data Center (OpenStack Korea Community)
OpenStack Day in Korea 2015 - Keynote 2
Leveraging OpenStack to Realize the SKT Software-Defined Data Center
Jinsung Choi, Ph.D - CTO, Corporate R&D Center, SK Telecom
Are you overwhelmed by storage capacity requirements? Are you wondering how web giants are able to store large amounts of data at a fraction of your storage costs?
OpenStack is the fastest growing open-source project to date, and its community builds cloud software. Join us to learn about the two OpenStack storage projects and how your company can take advantage of them.
OpenStack storage allows the use of commodity hardware at massive scales that you can consume as a public, private, or hybrid cloud.
View the on-demand webinar. Special guest speaker Randy Bias, founder and CEO of Cloudscaling and member of the Board of Directors for OpenStack Foundation, and EVault big data expert Joey Yep will inform you about this fast-growing, open-source project: OpenStack.
• OpenStack Swift and Cinder storage projects
• High-level functionality and architecture
• Public, private, and hybrid use-cases
[OpenStack Day in Korea 2015] Keynote 5 - The evolution of OpenStack Networking (OpenStack Korea Community)
OpenStack Day in Korea 2015 - Keynote 5
The evolution of OpenStack Networking
Guido Appenzeller - Chief Technology Strategy Officer, Networking & Security, VMware
Latest (storage IO) patterns for cloud-native applications (OpenEBS)
Applying microservice patterns to storage gives each workload its own Container Attached Storage (CAS) system. This puts the DevOps persona in full control of the storage requirements and brings data agility to k8s persistent workloads. We will go over the concept and implementation of CAS, as well as its orchestration.
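A minimal sketch of the CAS idea, assuming nothing about OpenEBS internals: every workload gets its own dedicated storage controller with its own policy, instead of sharing one monolithic backend. All class and function names below are hypothetical.

```python
# Container Attached Storage in miniature: one storage system per workload.

class StorageController:
    """A dedicated storage controller serving exactly one workload."""
    def __init__(self, workload, replicas, capacity_gb):
        self.workload = workload        # the single workload it serves
        self.replicas = replicas        # per-workload replication factor
        self.capacity_gb = capacity_gb  # per-workload capacity

controllers = {}  # workload name -> its own storage system

def provision(workload, replicas=3, capacity_gb=10):
    """Spin up a dedicated storage system for one workload."""
    controllers[workload] = StorageController(workload, replicas, capacity_gb)
    return controllers[workload]

# Two workloads, two independent storage systems with different policies.
provision("postgres", replicas=3, capacity_gb=50)
provision("redis-cache", replicas=1, capacity_gb=5)
print(sorted(controllers))  # ['postgres', 'redis-cache']
```

The point of the sketch is the isolation: tuning the cache's replication factor cannot affect the database's storage system.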
Verizon's Beth Cohen explains the process of creating the OpenStack Architecture Guide, as delivered to the Boston OpenStack Meetup September 10, 2014.
RightScale Webinar: Designing Private & Hybrid Clouds, Hosted by Citrix (RightScale)
Do you want to turn your existing data center into a private cloud? Exploring how to integrate your private cloud with a public cloud? In this webinar, we will discuss key considerations when designing a private cloud from internal resources and best practices for architecture of private and hybrid clouds. The webinar will include a demonstration, plus real-world examples of customers running their private cloud implementations on Citrix CloudPlatform using RightScale.
Topics to be covered:
• When to use private clouds
• Hardware selection
• Reference architectures and design considerations
• Use cases and real-life scenarios
• Managing your cloud resources effectively
Project Onboarding gives attendees a chance to meet some of the project team and get to know the project. Attendees will learn about the project itself, the code structure and overall architecture, and the places where contribution is needed. Attendees will also get to know some of the core contributors and other established community members.
The Potential Impact of Software Defined Networking (SDN) on Security (Brent Salisbury)
The Potential Impact of Software Defined Networking (SDN) on Security, by Brent Salisbury. The video of the presentation is at http://networkstatic.net/the-potential-impact-of-software-defined-networking-sdn-on-security/. This is a first cut, so many of the example use cases are not in the deck yet.
Intel is developing an "ONP" (Open Network Platform), in other words an open switch offering the basic functions needed for SDN. If you want to know the hardware used, the software stacks employed, and compatibility, notably with orchestrators, this document is for you.
Security of Software Defined Networking (SDN) and Cognitive Radio Network (CRN) (Ameer Sameer)
Security of Software Defined Networking (SDN)
Overview
Definition of Software Defined Networking (SDN)
SDN Security & Security Challenges
SDN Attack Surface & Attack Examples
SDN Threat Model
Open Research Issues in SDN
Future Research Directions
Simulators for Software Defined Networking
Security of Cognitive Radio Networks (CRN)
Overview
Definition of a Cognitive Network
Security of Cognitive Radios & Threats
Security Issues in Cognitive Radio
Attacks and Proposed Defense Mechanisms
Open Research Issues in Cognitive Radio
Evaluation Methodologies for Cognitive Networking
Future Research Directions
Simulators for Cognitive Radio
The 2015 Guide to SDN and NFV: Part 2 – Network Functions Virtualization (NFV) (EMC)
The goals of The 2015 Guide to SDN and NFV are to eliminate the confusion and accelerate the analysis and potential adoption of these new architectural approaches. Part 2 focuses on Network Functions Virtualization.
Part 1 - http://www.slideshare.net/emcacademics/2015ebook-sdnnfvch1
In this document you will find the latest improvements made to OpenStack and how certain Intel technologies boost the performance and security of the cloud environment. A few examples:
How to create pools of secured VMs with geo-tagging capability (Intel technologies present in HP, DELL, IBM servers… + Folsom, Nova, Horizon, Open Attestation)
How to strengthen the security of OpenStack's new key management module (Intel technologies + Barbican)
How to benchmark Swift object storage with COSBench (which now supports Ceph, S3 and Amplidata)
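To make the benchmarking idea concrete, here is a toy version of what a COSBench-style tool measures: drive PUT operations against an object store and report throughput. The "store" below is an in-memory dict so the sketch stays self-contained; COSBench itself drives Swift, Ceph, S3, or Amplidata over the network, and all names here are illustrative.

```python
# Toy object-storage benchmark: issue n PUTs and compute operations/second.
import time

def bench(store_put, n_objects=1000, size=1024):
    """store_put(key, payload) writes one object; returns PUT ops/sec."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(n_objects):
        store_put(f"obj-{i}", payload)
    elapsed = time.perf_counter() - start
    return n_objects / elapsed

store = {}  # stand-in for a real object store
ops_per_sec = bench(store.__setitem__)
print(f"{ops_per_sec:.0f} PUT ops/s")
```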
Authors:
Girish Gopal - Strategic Planning, Intel Corporation
Malini Bhandaru - Security Architect, Intel Corporation
Radisys' CTO, Andrew Alleman, was one of the featured speakers at the OCP Telco Engineering Workshop during the 2017 Big Communications Event. Andrew discussed carrier-grade open rack architecture (CG-OpenRack-19), the future of open hardware standards and commercial products in the OCP pipeline during his presentation.
Red Hat multi-cluster management & what's new in OpenShift (Kangaroot)
More and more organisations are not only using container platforms but starting to run multiple clusters of containers. And with that comes new headaches of maintaining, securing, and updating those multiple clusters. In this session we'll look into how Red Hat has solved multi-cluster management, covering cluster lifecycle, app lifecycle, and governance/risk/compliance.
OpenStack at the speed of business with SolidFire & Red Hat (NetApp)
When it comes to OpenStack® and the enterprise, it’s critical that you can rapidly deploy a plug-and-play solution that delivers mixed workload capabilities on a shared infrastructure. Join Red Hat and SolidFire to see how Agile Infrastructure for OpenStack can help your cloud move at the speed of business.
Workday has built one of the largest OpenStack-based private clouds in the world, hosting a workload of over a million physical cores on over 16,000 compute nodes in 5 data centers for over ten years. However, there was a growing need for a newer, more maintainable deployment model that would closely follow the upstream community. We would like to share our new architecture and deployment approach as well as lessons learned from our experience.
We’ve converted many of our technologies in the process:
• Migrating from Mitaka to Victoria
• Converting from OpenContrail to pure L3 Calico with BGP on the host
• Deploying with Chef to deploying with Ansible
• Building home-grown container images to Kolla
• Monitoring with Sensu and Wavefront to Prometheus and Grafana
• CI/CD in Jenkins to Zuul
• CentOS 7 to CentOS 8 Stream
We'll also talk about some internal tools we wrote that, while Workday-specific, may inspire you to see what value-add you can make for your customers.
Introducing QuickStack, a converged cloud solution powered by The Canonical Distribution of Ubuntu OpenStack. QuickStack delivers the fastest and most reliable way to build an OpenStack cloud, with a verified and thoroughly tested architecture that dramatically reduces the time and risk associated with your OpenStack cloud projects. With QuickStack, building an OpenStack cloud is no longer complicated, but fast and easy.
Cozystack: Free PaaS platform and framework for building clouds (Andrei Kvapil)
With Cozystack, you can transform your bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
You can use Cozystack to build your own cloud or to provide cost-effective development environments.
The OpenStack Havana release had more than 910 contributors and delivers nearly 400 new features, including two new services: Orchestration and Metering.
Tips to deploy a production-grade Kubernetes cluster using SUSE CaaS Platform v3.
Created jointly with my colleague Martin Weiss to be used at SUSECON 2019.
Intel open stack-summit-session-nov13-final
1. Intel and OpenStack:
Contributions and Deployment
Das Kamhout, Principal Engineer, Intel IT
Dr. Malini Bhandaru, Open Source Technology Center, Intel SSG
OpenStack Summit, Hong Kong, Nov’13
2. Helping Fuel Innovation and Opportunities
#2 Linux contributor: improving performance, stability & efficiency
Across the stack: contributions span every layer of the stack
Proven components: building blocks simplify development, reduce costs and speed time-to-market
[Chart: top Linux kernel contributors: Red Hat 11.1%, Intel 9.3%, SUSE 4.9%, IBM 4.2%]
[Chart: code contributions to open source projects (0%-100%): Intel is the single largest contributor to QT, KVM, Ofono and Clutter]
Intel in open source, project contributor: X.org, GNU, Webkit, JQuery, Eclipse, OpenStack, Yocto Project, Hadoop
[Chart: KVM throughput, SPECvirt_sc2010* performance rising from MC-DP through WSM-EP and SNB-EP to WSM-EX platforms]
01.org | kernel.org
3. Intel Enables OpenStack Cloud Deployments
Contributions:
• Across OpenStack projects
• Open Source Tools
• Top contributor to the Grizzly and Havana releases1
• Optimizations, validation, and patches
Intel® IT Open Cloud:
• Intel IT Open Cloud with OpenStack
• Delivering Consumable Services
• Single Control Plane for all Infrastructure
• Collection of best practices
Intel® Cloud Builders:
• Intel IT Open Cloud Reference Arch
• Share best practices with IT and CSPs
• http://www.intel.com/cloudbuilders
1Source: www.stackalytics.com
4. Stress on Datacenter Operations
Network: 2-3 weeks to provision new services1
Storage: 40% data growth CAGR, 90% unstructured3
Server: average utilization <50% despite virtualization4
New challenges are coming…
1: Source: Intel IT internal estimate; 3: IDC's Digital Universe Study, sponsored by EMC, December 2012; 4: IDC Server Virtualization and The Cloud 2012
5. The Intel SDI Vision
Datacenter Today (time to provision a new service: months1):
Idea for service → IT scopes needs → Balance user demands → Manually configure devices → Set up service components, assemble software → Service running
Software-Defined Infrastructure (time to provision a new service: minutes1):
Idea for service → Self-service catalog & services orchestration → Automated composition of resources (private and public) → Software components assembled → Service running
Self-provisioning, automated orchestration, composable resource pools
1: Source: Intel IT internal estimate
6. Open Data Center Alliance Cloud Adoption Roadmap
[Five-year roadmap matrix (Year 1 through Year 5) for consumers, end users, app developers, app owners and IT ops, starting from legacy applications on dedicated infrastructure. Milestones include: simple SaaS; enterprise legacy apps; compute, storage, and network; simple and complex compute IaaS; cloud-aware apps; complex and hybrid SaaS; full private and hybrid IaaS; private and hybrid PaaS, culminating in a federated, interoperable, and open cloud.]
7. Intel IT Quick History
• Design Grid, since the 1990s: 60k servers across 60+ datacenters; "Cloud's uncle"
• Enterprise Private Cloud, 2010: 13k VMs across 10 datacenters; 75% of enterprise server requests; 80% virtualized
• Open Source Private Cloud, 2012: 1.5k VMs across 2 datacenters; running cloud-aware and some traditional apps
9. Top Challenges & Technical Responses
Challenges: Security & Compliance | Unit Cost Reduction | Business Uptime
Technical responses:
• Trusted Compute Pools
• Geo-tagging
• Key Management
• Enhanced Platform Awareness (crypto processing)
• Intelligent storage allocation in Cinder
• Multiple publisher support in Ceilometer
• Erasure code in the Icehouse release
• COSBench performance measurement tool
• Erasure Code (storage cost)
• Enhanced Platform Awareness (PCIe accelerators etc.)
• Intelligent workload & storage scheduling
• Live Migration, rack-level redundancies
• Intel® Virtualization Technology with FlexMigration
10. Intel Contributions* to OpenStack
*Note: a mixture of features that are completed, in development, or in planning
Compute
• Enhanced Platform Awareness: CPU feature detection, PCIe SR-IOV accelerators
• OVF meta-data import
• Trusted Compute Pools, with geo-tagging
• Key management
• Intelligent workload scheduling (metrics)
Networking
• Intel® DPDK vSwitch
• VPN-as-a-Service with Intel® QuickAssist acceleration
• Advanced services in VMs
Storage
• Filter scheduler
• Erasure code
• Object storage policies
[Architecture diagram mapping contributions onto OpenStack services: User Interface (Horizon): expose enhancements; Compute (Nova): Trusted Compute Pools (extended with geo-tagging), OVF meta-data import, Enhanced Platform Awareness, filter scheduler, intelligent workload scheduling metrics; Network Services (Neutron): Intel® DPDK vSwitch, advanced services in VMs, VPN-as-a-Service (with Intel® QuickAssist Technology); Object Store (Swift): erasure code, object storage policy; Block Storage (Cinder): filter scheduler; Image Store (Glance); Key Service (Barbican): key encryption & management; Monitoring/Metering (Ceilometer)]
11. Trusted Compute Pools (TCP)
Enhance visibility, control and compliance
TCP Solution
- Platform Trust - a new attribute for management
- Intel® TXT initiates Measured Boot - the basis for Platform Trust
- Open Attestation (OAT) SDK - remote attestation mechanism: https://github.com/OpenAttestation/OpenAttestation
- TCP-aware scheduler controls placement & migration of workloads in trusted pools
TCP is enabled in OpenStack (Folsom release)
1Source: McCann "What's holding the cloud back?" cloud security global IT survey, sponsored by Intel, May 2012.
No computer system can provide absolute security under all conditions. Intel® Trusted Execution Technology (Intel® TXT) requires a computer system with Intel® Virtualization Technology, an Intel TXT-enabled processor, chipset, BIOS, Authenticated Code Modules and an Intel TXT-compatible measured launched environment (MLE). The MLE could consist of a virtual machine monitor, an OS or an application. In addition, Intel TXT requires the system to contain a TPM v1.2, as defined by the Trusted Computing Group, and specific software for some uses. For more information, see here.
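As a sketch of how the TCP-aware scheduler is wired up in practice, the Nova TrustedFilter can be enabled in nova.conf and pointed at an Open Attestation server. The host name, certificate path and auth blob below are placeholders, not values from this deck:

```ini
# nova.conf - enable Trusted Compute Pools scheduling (illustrative values)
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter

[trusted_computing]
attestation_server = oat-server.example.com        ; OAT attestation host (placeholder)
attestation_port = 8443
attestation_server_ca_file = /etc/nova/oat-ca.crt  ; CA cert for the OAT server (placeholder)
attestation_api_url = /AttestationService/resources
attestation_auth_blob = i-am-openstack             ; shared secret (placeholder)
```

A flavor can then require trusted hosts via an extra spec, e.g. `nova flavor-key m1.trusted set trust:trusted_host=trusted`, so that VMs of that flavor are placed only in the trusted pool.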
12. Trusted Compute Pools with Geo-Tagging
Use a geo-location descriptor stored in the TPM on trusted servers to control workload placement & migration.
OpenStack* enhancements:
• Secure mechanism for provisioning geo certificates
• Dashboard: display VM/storage geo
• Nova flavor extra spec: geo
• Enhanced TCP scheduler filter
• Geo Attestation Service (OAT+)
• Geo-tagged storage: volumes and objects
Work in progress - provide feedback and use cases
13. Concept: Trusted Compute Pools (TCP) - VM Protection
Tenant-controlled, hardware-assisted VM protection in the cloud
[Flow diagram between a customer data center (MH client, encrypted VM image, policy, keys) and a cloud service provider data center: CSP portal, trust attestation (OAT/MTW), key management service, CSP image server (Glance), and a host running the VMM and DOM0 with an OAT/MH OVF plug-in on Intel® TXT + TPM. The numbered steps (1–9) cover: launch request (from anywhere), encrypted VM image upload, launch command, request for host trust attestation, response with trust status and BindPubKey, request for the encryption key (AIK, KeyID), delivery of the enveloped encryption key (SymKey), and launch of the encrypted VM.]
Concept demo in the Citrix booth
14. Key Management
Ease security adoption, new use cases, compliance
• Server-side encryption
• Data-at-rest security
• Random, high-quality keys
• Secure key storage
• Controlled key access via Keystone
• High availability
• Pluggable backend: HSM, TPM
• Barbican Key Manager: https://github.com/cloudkeep/barbican
Intel technologies: Intel® Secure Key, Intel® AES-NI
Prototype in Havana, incubate in Icehouse
15. Filter Scheduler (Cinder)
[Diagram: five volume services pass through the filters (AvailabilityZoneFilter, CapabilitiesFilter, JsonFilter, CapacityFilter, RetryFilter); the surviving services (2, 4 and 5) are then scored by the weighers (CapacityWeigher, AllocatedVolumesWeigher, AllocatedSpaceWeigher), e.g. weights 25, 20 and 41, and the highest-weighted service wins.]
Example use case: differentiated service with different storage back-ends
• CSP: 3 different storage systems, offering 4 levels of volume services
• Volume service criteria dictate which storage system can be used
• The filter scheduler allows the CSP to name storage services and allocate the correct volume
16. Data Collection for Efficiency: Intelligent Workload Scheduling
Enhanced usage statistics allow advanced scheduling decisions
• Pluggable metric data collecting framework
• Compute (Nova): new filters/weighers for utilization-based scheduling
Metering in the Havana release, scheduling in a future release
17. Enhanced Platform Awareness
Allows OpenStack* to have greater awareness of the capabilities of the hardware platforms
• Expose CPU & platform features to the OpenStack Nova scheduler
• Use the ComputeCapabilities filter to select hosts with required features
  - Intel® AES-NI or PCI Express accelerators for security and I/O workloads
  - Up to 10x encryption & 8x decryption performance improvement observed1
[Diagram: the processor encrypts unencrypted data and decrypts encrypted data in motion, with faster encryptions and faster decryptions]
Intel® AES-NI = Intel® Advanced Encryption Standard New Instructions
1See http://www.oracle.com/us/corporate/press/173758
Some features in Havana, more in future releases
18. SDN & NFV: Driving Architectural Transformation
From this:
• Traditional networking topology
• Monolithic, vertically integrated box
• TEM proprietary solutions (TEM/OEM proprietary OS; ASIC, DSP, FPGA, ASSP; fixed-function firewall, VPN, IDS/IPS)
To this:
• Networking within VMs (VM: firewall, VM: VPN, VM: IDS/IPS)
• Standard x86 COTS HW (IA CPU, chipset, acceleration, switch silicon, NIC silicon)
• Open SDN standard solutions (SDN/NFV, Wind River Linux + apps)
19. Intel® DPDK Accelerated Open vSwitch in Neutron
Open vSwitch ML2 driver/agent in development
[Diagram: the Neutron API with API extensions feeds the Neutron ML2 plug-in (DB), whose mechanism drivers target an external controller, a standard vSwitch with an L2 agent, and hosts running the Intel DPDK vSwitch with its own L2 agent, each serving multiple VMs.]
Unleashing Intel® DPDK vSwitch performance in Neutron: up to 10x
20. OpenStack* Swift With Erasure Code
[Diagram: clients use a RESTful API (similar to S3) against an access tier (concurrency, auth service); on upload, an encoder splits object A into fragments 1…N spread across zones 1–5 of the capacity tier (storage); on download, a decoder reassembles the fragments.]
• New storage policy capability
• Applications control policy
• EC can be inline or offline
• Supports multiple policies at the same time via container tag
• EC flexibility via plug-in
Detailed tutorial at: https://intel.activeevents.com/sf13/connect/sessionDetail.ww?SESSION_ID=1180&tclass=popup
Community collaboration: https://intel.activeevents.com/sf13/connect/sessionDetail.ww?SESSION_ID=1180&tclass=popup
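To make the upload/download flow above concrete, here is a toy illustration of the erasure-coding idea: instead of storing full replicas, an object is split into data fragments plus parity, and any single lost fragment can be rebuilt. Real Swift EC uses Reed-Solomon-style codes behind a pluggable library; this sketch uses one XOR parity fragment purely for illustration.

```python
def encode(data: bytes, k: int):
    """Split data into k equal fragments plus one XOR parity fragment."""
    pad = (-len(data)) % k
    data += b"\x00" * pad                       # pad so fragments divide evenly
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(size)                        # all-zero start for XOR
    for f in frags:
        parity = bytes(x ^ y for x, y in zip(parity, f))
    return frags, parity, pad

def recover(frags, parity, lost):
    """Rebuild the fragment at index `lost` by XOR-ing parity with the others."""
    rebuilt = bytes(parity)
    for i, f in enumerate(frags):
        if i != lost:
            rebuilt = bytes(x ^ y for x, y in zip(rebuilt, f))
    return rebuilt
```

With k = 4, an object costs 1.25x its size instead of the 3x of tri-replication, which is the storage-cost argument the slide is making.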
21. Intel actively contributing to OpenStack
Delivering interoperable, federated, efficient and secure Open Cloud solutions
Challenges: Security & Compliance | Unit Cost Reduction | Business Uptime
Technical responses:
• Trusted Compute Pools
• Geo-tagging
• Key Management
• Enhanced Platform Awareness (crypto processing)
• Intelligent storage allocation in Cinder
• Multiple publisher support in Ceilometer
• Erasure code in the Icehouse release
• COSBench performance measurement tool
• Erasure Code (storage cost)
• Enhanced Platform Awareness (PCIe accelerators etc.)
• Intelligent workload & storage scheduling
• Live Migration, rack-level redundancies
• Intel® Virtualization Technology with FlexMigration
24. Legal Disclaimers and Notices
Intel Trademark Notice: Celeron, Intel, Intel logo, Intel Core, Intel® Core™ i7, Intel® Core™ i5, Intel® Core™ i3, Intel® Atom™, Intel Inside, Intel Inside logo, Intel.
Leap ahead., Intel. Leap ahead. logo, Intel NetBurst, Intel SpeedStep, Intel XScale, Itanium, Pentium, Pentium Inside, VTune, Xeon, and Xeon Inside are trademarks or
registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Non-Intel Trademark Notice: *Other names and brands may be claimed as the property of others.
General Performance Disclaimer/"Your Mileage May Vary"/Benchmark: Software and workloads used in performance tests may have been optimized for
performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software,
operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you
in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel® products as measured
by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to
evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products,
visit http://www.intel.com/performance/resources/limits.htm or call (U.S.) 1-800-628-8686 or 1-916-356-3104.
Estimated Results Benchmark Disclaimer: Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference
in system hardware or software design or configuration may affect actual performance.
Pre-release Notice: This document contains information on products in the design phase of development.
Processor Numbering Notice: Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not
across different processor families: Go to: http://www.intel.com/products/processor_number
Roadmap Notice: All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change without notice.
Excerpted Product Roadmap Notice: Intel product plans in this presentation do not constitute Intel plan of record product roadmaps. Please contact your Intel
representative to obtain Intel's current plan of record product roadmaps.
Intel® AES-New Instructions (Intel® AES-NI): Intel® AES-NI requires a computer system with an AES-NI enabled processor, as well as non-Intel software to execute
the instructions in the correct sequence. AES-NI is available on select Intel® processors. For availability, consult your reseller or system manufacturer. For more
information, see http://software.intel.com/en-us/articles/intel-advanced-encryption-standard-instructions-aes-ni/
Enhanced Intel SpeedStep® Technology : See the Processor Spec Finder at http://ark.intel.com or contact your Intel representative for more information.
Intel® Hyper-Threading Technology (Intel® HT Technology): Available on select Intel® Core™ processors. Requires an Intel® HT Technology-enabled
system. Consult your PC manufacturer. Performance will vary depending on the specific hardware and software used. For more information including details on which
processors support HT Technology, visit http://www.intel.com/info/hyperthreading.
Intel® 64 architecture: Requires a system with a 64-bit enabled processor, chipset, BIOS and software. Performance will vary depending on the specific hardware and
software you use. Consult your PC manufacturer for more information. For more information, visit http://www.intel.com/info/em64t
Intel® Turbo Boost Technology: Requires a system with Intel® Turbo Boost Technology. Intel Turbo Boost Technology and Intel Turbo Boost Technology 2.0 are only
available on select Intel® processors. Consult your PC manufacturer. Performance varies depending on hardware, software, and system configuration. For more
information, visit http://www.intel.com/go/turbo
25. Intel IT Open Cloud Components
[Stack diagram with release cadence per layer: physical infrastructure (compute, storage, network) at a 12–18 month cadence; an open-source (OpenStack*) Infrastructure-as-a-Service foundation of Compute (Nova*), Block Storage (Cinder*), Object Storage (Swift*), Network (Neutron*), OS Images (Glance*) and Dashboard (Horizon*), plus Manageability delivered as Monitoring-as-a-Service via Watcher (Nagios*, Shinken*, Heat*), Decider (Heat), Collector (Hadoop*) and Actor (Puppet*, Cfengine*), each at a 6-month cadence; app-platform services (PaaS: analytics, messaging, data, web) and Interfaces (GUI and API) at a 3-month cadence.]
26. Benefits of Enhanced Platform Awareness
Enabler for enhanced cloud efficiency & deploying SDN/NFV workloads
• Intel® QuickAssist Accelerator
• Intel® Data Plane Development Kit
• Intel® AES New Instructions
• Intel® Advanced Vector Extensions 2 (AVX2)
• Intel® Secure Key
Some features enabled in Havana, more coming in future releases
28. Summary: Key Intel Contributions into OpenStack
Contribution                | Project    | Release         | Comments
Trusted Filter              | Nova       | Folsom          | Place VMs in Trusted Compute Pools
Trusted Filter UI           | Horizon    | Folsom          | GUI interface for Trusted Compute Pool management
Filter Scheduler            | Cinder     | Grizzly         | Intelligent storage allocation
Multiple Publisher Support  | Ceilometer | Havana          | Pipeline manager; pipelines of collectors, transformers, publishers
Open Attestation SDK        |            | To Open Source  | Remote attestation service for Trusted Compute Pools
COSBench                    |            | To Open Source  | Object store benchmarking tool
Enhanced Platform Awareness |            | Havana + future | Leverages advanced CPU and PCIe device features for increased performance
Key Manager                 |            | Icehouse+       | Makes data protection more readily available via server-side encryption with key management
Erasure Code                |            | Icehouse        | Augments the tri-replication algorithm in Swift, enabling application selection of alternate storage policies
29. Re-architect the Datacenter
Datacenter today (time to provision a new service: months1):
Idea for service → IT scopes needs → Balance user demands → Manually configure devices → Set up service components, assemble software → Service running
Software-defined infrastructure (time to provision a new service: minutes1):
Idea for service → Self-service catalog & services orchestration (private/public) → Software components assembled → Automated composition of resources → Service running
1: Source: Intel IT internal estimate
Intel actively contributes to a breadth of open-source projects across every layer of the solution stack.
Intel is proud to be a part of the open-source community. In fact, we’ve been there from the very beginning, long before it was a major force, and our high level of commitment has remained consistent from the start.
Intel employs thousands of software developers around the globe to ensure open-source software delivers top-notch performance, power-efficiency, scalability and security. We are the second-leading contributor to the Linux kernel, behind only Red Hat, whose business model is based on open source. Among silicon vendors, we are the leading contributor. Moreover, we lead even software companies in our contributions.
Intel is committed to helping enable our hardware for open-source software, and our commitment goes way beyond that, spanning every layer of the solution stack, including middleware and applications.
Our work has resulted in phenomenal performance enhancements and product-quality software, delivering exceptional developer and end-user experiences.
Das leads off talking about
We understand some of the constraints the datacenter is facing. You're required to increase your storage capacity almost without limit, taking away budget from what could allow you to provide new services such as cloud-like capabilities, or to improve efficiencies to reach parity with the best practices in the industry. Then let's talk about the time: that storage capacity you purchased isn't useful if it isn't installed and provisioned. How do you connect it? Getting the network connections provisioned is not automated; you have to touch each vendor-specific CLI. That's not cheap either: how many of you have an entire team whose primary expertise is working the ins and outs of a vendor's CLI? Intel IT has over a hundred in our shop… So storage is taking space, budget & power. Networking is taking your time. How do you possibly get out of this spiral?
Today, how do services get provisioned? Somebody has an idea for a service and then they have to call IT. A number of people in IT go ahead and scope their needs. IT sharpens their pencils; they look at what the requirements are for reliability, for capacity, how much web services access do they need? Then they balance that against the rest of the infrastructure and all of the user demands.
They're looking at profitability, at cost, and at the capacity they know they have based on their archives and databases, to give them a paper estimate of the capacity they really have installed. Once they've got that in place and have procured the needed equipment, they have to manually configure it.
Manually configuring a device means you touch every one. Whether it is having to physically plug in an Ethernet cable to make the connection between different boxes, or simply having to touch the command-line interface of every single box to configure and provision it appropriately, there is a human touch at every point along the way.
Once those are connected and configured, they actually have to set up the service. This pulls together the server, the storage and the data store so that the service is actually running, and allows the original service requestor to develop the software and services they had in mind in the first place.
Then and only then do you have the service running. The service is available and ready for customers to do business. The time to provision there is months; minimally speaking, about eight weeks according to Intel IT internal estimates.
What should we be moving to? The end state of the future, the vision of re-architecting the data center and the result is something called software-defined infrastructure.
Once there's an idea for a service, the LOB customer can pull together very quickly from private or public cloud services, or from their own internal capabilities, using a self-service portal that orchestrates the services they need from an online catalog. Things like location, security and online payments can be pulled together automatically; then the customer can assemble the software components from a list that's available to them, whether from their internal IT department or a repository like GitHub. The service-level agreement that the orchestrator creates tells the infrastructure orchestrator: what resources do I need? What kind of availability do I need? How much storage do I need? How fast a connection do I need between the compute and the storage? And how do I manage power and temperature demands if I'm running a particularly intense workload?
All of this happens automatically. After the services are orchestrated, the infrastructure is orchestrated, the service is running. The time to provision a new service is minutes. Depending on how quickly somebody can put the software together, it should be push button - done
Das leads off talking about
Malini to present high level roadmap showing Intel contributions in most of the OpenStack projects.
Intention is not to dwell long on this slide, but to highlight our strategy on compute, networking and storage.
Suggested Time budget (1 minute)
OpenStack Policy Engine / Console
Trust level of VM specified as Trusted
Compute (Nova) – Trust Filter
Dashboard (Horizon) – Trust Filter UI
Key Message: Intel TXT enables isolation and tamper detection in the boot process and provides verification that’s useful in compliance and by security and policy applications to control workloads.
Intel TXT is not new technology—it has been available on Intel vPro-branded clients for years. But it is now available for servers—and the use models there are quite compelling
Intel® TXT helps prevent software-based attacks on areas that are relatively unprotected today, such as
Attempts to insert non-trusted VMM (rootkit hypervisor)
Reset attacks designed to compromise platform secrets in memory
BIOS and firmware update attacks
Looking at it another way, Intel® TXT enforces control through measurement, memory locking and sealing secrets—essentially isolating the launch time environment. As such, it works cooperatively with Intel® Virtualization Technology (Intel® VT)
Intel® TXT is providing hardware-based protections in the processor, chipset and 3rd party Trusted Platform Modules (TPMs) that can better resist software attacks, making platforms more robust
This helps lower support costs, but also provides higher value capabilities such as enhanced control of workloads via security policy and reporting into security compliance dashboards—we’ll get into that in a moment.
Intel TXT provides high value by enabling trust in the platform—verifying launch time components and enforcing “known good” configurations of the critical software that will control the platform
The three key use models are:
Trusted launch – which is the basic verification of platform integrity, with lower risk from critical system malware and reducing support costs and data breach risks
Anecdote: we've heard from a number of EBOA customers that while they trust the hypervisor they use in their own datacenter, they trust that same hypervisor a lot less when it is run in another location, so verification of trust is a very useful assurance for them
Then we have 2 new use models that have even added benefits for virtual and cloud use models
2. Trusted pools – aggregation of multiple trusted systems and enabling platform trust status as a data point for security applications to enforce control of workload assignment – such as restricting sensitive VMs to only run on trusted systems
3. Compliance Support – using TXT hardware capabilities to establish and verify adherence to data protection and control standards—allowing hardware-based reporting of platform trust locally and remotely. This provides new visibility into their data protection capabilities
With these, we’ve really extended Intel’s leadership into server security and give customers more visibility and control they seek for their clouds
Built on Intel TXT:
Addresses concerns over limited visibility into capabilities of cloud infrastructure
Trust status usable by security and policy applications to control workloads to meet requirements
Hardware based trust and attestation provides verification useful in compliance
Visibility into security thanks to TXT
Control to decide how to act based on visibility
Connection to Compliance
Speaking points:
Geo controlled placement
VM placement, migration
Storage
objects and block/volume
Understand the value of enabling the true random number generator (Secure Key): true randomness of the keys/seeds in the crypto algorithms is incredibly important in terms of setting up the security of the system.
The Cinder Filter Scheduler, a new addition to the Grizzly release, intelligently allocates storage volumes based on the workload and the type of service required. This is achieved by applying a series of filters and weighers to each available volume service.
Filters:
AvailabilityZoneFilter
Filters volume services by availability zone. This filter must be enabled for the scheduler to respect availability zones in requests.
CapabilitiesFilter
Matches properties defined in a volume type's extra specs against the capabilities reported by the volume services. Each storage back-end reports its capabilities to the scheduler.
JsonFilter
The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format (the scheduler hint extension for Cinder is not yet merged).
CapacityFilter
Only schedules volumes on back-ends that have sufficient space available. If this filter is not set, the scheduler may overprovision a back-end based on capacity (i.e., the space allocated for volumes may exceed the physical storage capacity). Note that some storage back-ends support more advanced features such as thin provisioning and de-duplication, and may therefore report 'infinite' (meaning unlimited) or 'unknown' free space instead of firm numbers. The CapacityFilter handles these cases by simply letting 'infinite' or 'unknown' pass.
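The special-casing of 'infinite' and 'unknown' can be sketched as follows. This is a simplified stand-in for the pass/fail decision, not Cinder's actual source:

```python
# Simplified sketch of a CapacityFilter-style decision. Back-ends that
# report 'infinite' or 'unknown' free space (e.g. thin-provisioned or
# de-duplicating arrays) are allowed through; otherwise the requested
# volume size must fit within the reported free capacity.
def capacity_filter(backend_free_gb, requested_gb):
    if backend_free_gb in ('infinite', 'unknown'):
        return True                     # trust the back-end, let it pass
    return backend_free_gb >= requested_gb
```

So a back-end advertising 'infinite' passes regardless of the requested size, while a back-end reporting 40 GB free fails a 50 GB request.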
RetryFilter
Filters out volume services that have already been attempted for a given scheduling request. If the scheduler selects a volume service to respond to a request, and that volume service fails to complete it (e.g. it reported 'unknown' free space but was unable to allocate enough space for the requested volume), this filter prevents the scheduler from retrying that volume service for the same request.
This filter is only useful if the scheduler_max_attempts configuration option is set to a value greater than zero.
The Filter Scheduler takes the volume services that remain after the filters have been applied and applies one or more weighers to each of them to get a numerical score per volume service. Each score is multiplied by a weighting multiplier specified in the cinder.conf config file. If there are multiple weighers, the weighted scores are added together. The scheduler then selects the back-end with the maximum total weight. Cinder comes with three weighers:
CapacityWeigher
This weigher calculates scores by multiplying a volume back-end's free_capacity_gb value by capacity_weight_multiplier. The default value of capacity_weight_multiplier is 1.0, so the default behavior of this weigher is to select back-ends with the most available space. If capacity_weight_multiplier is changed to a negative value, say -1.0, the behavior changes to picking the back-end with the least free capacity.
AllocatedVolumesWeigher
This weigher sorts back-ends by the number of allocated volumes. The default value of allocated_volume_weight_multiplier is -1.0, which is equivalent to choosing back-ends with the fewest allocated volumes.
AllocatedSpaceWeigher
If the desired behavior is to consider the allocated space of back-ends, this is the right weigher. Its weight multiplier defaults to -1.0, which is equivalent to picking back-ends that have allocated the least space.
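The scoring step described above (multiply each weigher's score by its multiplier, sum, pick the maximum) can be sketched like this. It is a simplified model, not Cinder's actual weigher classes, and the stats keys are illustrative:

```python
# Simplified model of the Filter Scheduler's weighing stage: each weigher
# scores every surviving back-end, scores are scaled by per-weigher
# multipliers (cinder.conf in the real system), summed, and the back-end
# with the highest total weight wins.
def pick_backend(backends, weighers):
    """backends: dict name -> stats dict; weighers: list of (fn, multiplier)."""
    def total(stats):
        return sum(fn(stats) * mult for fn, mult in weighers)
    return max(backends, key=lambda name: total(backends[name]))

# Example weighers mirroring the defaults described above:
capacity_weigher = (lambda s: s['free_capacity_gb'], 1.0)          # most free space
allocated_vols_weigher = (lambda s: s['allocated_volumes'], -1.0)  # fewest volumes
```

For instance, a back-end with 100 GB free and 5 volumes scores 95, beating one with 120 GB free but 40 volumes (score 80), so the less-loaded back-end wins.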
Use Case A: Differentiated Service with Different Storage Back-ends
Cloud vendors usually provide different levels of volume service to address the varied needs of end users. They may implement their volume service using more than one type of storage back-end, each with different capabilities.
To simplify the case, assume our cloud vendor has 3 different storage systems:
low-cost storage system A, which has the lowest performance but plenty of space;
mid-range storage system B, equipped with faster spindles and thus better performance, more reliable than A, and supporting fast snapshot but not fast cloning;
high-end storage system C, which provides the best performance and reliability at the highest cost per GB, and has the most advanced feature support, such as fast cloning, fast snapshot, de-duplication, and 3 levels of QoS.
The cloud vendor would like to provide 4 different types of volume for end users:
standard volume - cheap, no performance guarantee;
Fast-n-Safe volume - better performance (best effort, no guarantee) and more reliable than the standard volume;
Premium volume X1 - better performance than the standard volume, on a best-effort basis (no guarantee), and supports fast snapshot;
Premium volume X2 - better performance than Fast-n-Safe, with a guaranteed minimum, and supports fast cloning/snapshot.
As we can see, the term "volume type" here is an abstraction of various properties of a volume. It is _NOT_ the type of the back-end storage. Remember this is true throughout the Cinder context.
So now we have (almost) everything in place; let's see how to map those requirements (volume types) to storage systems: the standard volume can be created on all 3 storage back-ends, Fast-n-Safe & Premium X1 volumes can only be created on Storage B and C, and the Premium X2 volume requires capabilities of Storage C.
Figure: mapping between volume types and storage back-ends
Once we have figured out the mapping between requirements and storage back-ends, we can connect them by creating Cinder volume types. Other considerations, such as cost effectiveness, should be taken into account in a real deployment. But again, to simplify this example, let's limit the standard volume to be created only on Storage A, Fast-n-Safe & Premium X1 volumes on Storage B, and Premium X2 on Storage C.
To achieve that we need to create 4 Cinder volume types. Here's one possible combination:
type 1: name 'standard', with extra specs {'volume_backend_name': 'Storage System A'}
type 2: name 'fast-n-safe', with extra specs {'volume_backend_name': 'Storage System B'}
type 3: name 'premium-x1', with extra specs {'QoS': 'false', 'fast snapshot': 'true'}
type 4: name 'premium-x2', with extra specs {'fast clone': 'true', 'fast snapshot': 'true', 'QoS:level': 'guarantee:200IOPS'}
Looking at the details of these 4 volume types, the first two have extra specs that explicitly specify the name of a storage system. This is pretty straightforward, because we know which storage back-ends satisfy their needs and we don't want Storage C to serve requests of these two types. The interesting part is the definition of the 'premium-x1' type: there we added two capabilities to the extra specs rather than explicitly specifying which storage back-end to use. The 'QoS': 'false' key-value pair will rule out Storage C when processed by the CapabilitiesFilter in the filter scheduler, and 'fast snapshot': 'true' will reject Storage A. This is actually the more generic/portable kind of definition we should use when creating volume types, rather than putting non-portable constraints (e.g. names of storage systems) in them. Notice there is a scoped key ('QoS:level') in the 'premium-x2' type's extra specs; it will be ignored by the CapabilitiesFilter but can be utilized by the back-end driver.
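How 'premium-x1''s extra specs rule out Storage A and C can be sketched as a much-simplified CapabilitiesFilter. The capability dictionaries below are hypothetical stand-ins for what each back-end might report, and scoped keys (those containing ':') are skipped, as described:

```python
# Simplified sketch of CapabilitiesFilter matching: every unscoped
# extra-spec key must appear in the back-end's reported capabilities with
# an equal value; scoped keys such as 'QoS:level' are ignored here and
# left for the back-end driver to interpret.
def matches(extra_specs, capabilities):
    for key, want in extra_specs.items():
        if ':' in key:                  # scoped key, skip
            continue
        if capabilities.get(key) != want:
            return False
    return True

# Hypothetical capability reports for the three storage systems:
storage_a = {'QoS': 'false', 'fast snapshot': 'false'}
storage_b = {'QoS': 'false', 'fast snapshot': 'true'}
storage_c = {'QoS': 'true', 'fast snapshot': 'true'}

premium_x1 = {'QoS': 'false', 'fast snapshot': 'true'}
```

With these reports, premium-x1 rejects Storage A (no fast snapshot) and Storage C (QoS is 'true'), so only Storage B survives the filter stage.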
Speaking points:
Collects data via plug-ins
Sends data to notification bus for use by other OpenStack* components
The blueprint is at https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
Basically you can introduce the current architecture and explain what we want to change.
Nova has the API, scheduler, compute and conductor services. When the scheduler gets a VM creation request from the API, it asks a compute node to spawn the VM according to the filters and the weighers. The compute service calls the hypervisor to create the VM and asks the conductor to sync the information with the Nova DB. However, the current mechanism is very simple. What we want to do is reuse the resource tracker(s) in the compute node(s) to retrieve information about the host machine(s), especially utilization information (CPU utilization, network traffic, and so on), and then send that data to the DB; later on the scheduler can take advantage of the data for future scheduling with the new filter and the new weigher we define.
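The filter-then-weigh flow described above can be sketched as follows. This is a minimal illustration, not Nova's actual scheduler code: the host dictionaries, filter and weigher names are all hypothetical.

```python
# Toy sketch of the filter scheduler flow: filters prune the host list,
# weighers rank the survivors, and the best host wins.

def schedule(hosts, filters, weighers):
    candidates = [h for h in hosts if all(f(h) for f in filters)]
    if not candidates:
        raise RuntimeError('No valid host found')
    # Higher total weight wins; a utilization-aware weigher could favor
    # the host with the most idle CPU, as proposed in the blueprint.
    return max(candidates, key=lambda h: sum(w(h) for w in weighers))

hosts = [
    {'name': 'compute-1', 'free_ram_mb': 4096, 'cpu_util': 0.8},
    {'name': 'compute-2', 'free_ram_mb': 8192, 'cpu_util': 0.2},
]
enough_ram = lambda h: h['free_ram_mb'] >= 2048   # example filter
idle_cpu_weigher = lambda h: 1.0 - h['cpu_util']  # example weigher
best = schedule(hosts, [enough_ram], [idle_cpu_weigher])
print(best['name'])  # compute-2 is less utilized
```

The point of the blueprint is precisely to feed live utilization data (like `cpu_util` here) into this loop instead of scheduling on static capacity alone.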
What are we asking developers to do regarding UBS?
1) The pluggable framework: the framework allows any user to create a plugin to retrieve data about utilization. Note that the resource tracker mentioned above will call the plugins to get the data, and the framework allows sending the data to the message bus so that other OpenStack components (not only Nova) can use it, e.g. Ceilometer to monitor the hosts. 2) New filter and new weigher: these will be implemented for the Nova scheduler to do intelligent scheduling. With the pluggable framework, developers at other companies, e.g. Cisco, can add their own plugins to monitor the network. Also, we can add node manager monitoring in an NM plugin in the future.
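A monitoring plugin in such a framework might look roughly like this. The base class, method names and fake metric source are all hypothetical, invented for illustration; they are not the actual Nova plugin API.

```python
# Hypothetical sketch of a utilization-monitoring plugin in the proposed
# pluggable framework. Class and method names are illustrative only.
import abc
import random

class UtilizationMonitorPlugin(abc.ABC):
    """Called by the resource tracker to collect one utilization metric."""

    @abc.abstractmethod
    def get_metric(self):
        """Return a (metric_name, value) pair for the host."""

class CPUUtilizationPlugin(UtilizationMonitorPlugin):
    def get_metric(self):
        # A real plugin would read /proc/stat or a hypervisor API;
        # here we fake a reading for illustration.
        return ('cpu.percent', random.uniform(0, 100))

# The resource tracker would iterate over the registered plugins, persist
# the metrics to the DB, and publish them on the notification bus so other
# components (e.g. Ceilometer) can consume them.
plugins = [CPUUtilizationPlugin()]
metrics = dict(p.get_metric() for p in plugins)
print(metrics)
```

A network-monitoring plugin (the Cisco example above) would just be another subclass returning, say, a bandwidth metric; the tracker and bus code would not change.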
Enables premium flavors
Enhanced capabilities for cloud customers
Enhanced revenue for cloud providers
Adrian’s key points:
SDN/NFV driving a change in how appliances are developed, deployed and managed.
CapEX
Reduce dependency on proprietary hardware
Virtualize network functions on COTS HW
OpEX
Power
Ease of Maintenance via uniformity of the physical network
Automation
Running production, test, service upgrades on same infrastructure
Service Revenue
New Services
Broader ecosystem for faster innovation
Faster TTM to deploy new services
Targeted services per Geo/Customer Type
Das leads off talking about
Has Intel’s work on Decider moved to the open source version (Heat), and/or did we make a contribution into Heat itself?
Adrian’s Key Points:
Incredible efficiencies are possible when you leverage advanced capabilities in Intel platforms.
Don’t spend much time on any of the data points.
SDN & NFV workloads are sensitive to performance and latency characteristics and need to leverage platform capabilities.
Expose CPU & platform features to OpenStack Nova scheduler
Use ComputeCapabilities filter to select hosts with required features
Intel® AES-NI or PCI Express accelerators for security and I/O workloads
Mention Neutron extensions for DPDK vSwitch and VPN-as-a-Service optimized with Intel QuickAssist Accelerator.
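The host-selection idea on this slide can be sketched as follows. This is illustrative only, not real Nova code: the host capability dictionaries and the matching function are hypothetical, and the real ComputeCapabilities filter has more operators and scopes than shown here.

```python
# Toy sketch of scheduling a flavor onto hosts that expose a CPU feature
# such as AES-NI, using scoped 'capabilities:...' extra specs.

def host_passes(host_caps, flavor_extra_specs):
    for key, want in flavor_extra_specs.items():
        scope = key.split(':')
        if scope[0] != 'capabilities':
            continue  # other scopes are handled by other filters
        value = host_caps
        for part in scope[1:]:  # walk the nested capability dict
            value = value.get(part) if isinstance(value, dict) else None
        if isinstance(value, (list, set, tuple)):
            if want not in value:
                return False
        elif value != want:
            return False
    return True

hosts = {
    'node-1': {'cpu_info': {'features': ['aes', 'avx']}},
    'node-2': {'cpu_info': {'features': ['sse4.2']}},
}
# A "crypto" flavor that requires hosts advertising the 'aes' CPU flag.
crypto_flavor = {'capabilities:cpu_info:features': 'aes'}
eligible = [name for name, caps in hosts.items()
            if host_passes(caps, crypto_flavor)]
print(eligible)  # only node-1 advertises AES-NI
```

This is the mechanism behind "premium flavors": security- or I/O-sensitive workloads land only on hosts with the accelerating hardware.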
References:
Top Left: Quick Assist Ref: http://www.intel.ie/content/dam/www/public/us/en/documents/articles/itj-cryptographic-security-article.pdf
Top Right: DPDK Ref: http://www.intel.com/content/www/us/en/communications/communications-packet-processing-brief.html
Bottom Left: AES-NI & IPSec Ref: http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/aes-ipsec-performance-linux-paper.pdf
Bottom Middle: Intel Secure Key Ref: https://intel.activeevents.com/sf13/connect/fileDownload/session/F5D69EE8DC6A4A29309B97176C1121F0/SF13_SECS002_100.pdf
Bottom Right: AVX Ref: https://intel.activeevents.com/sf13/connect/sessionDetail.ww?SESSION_ID=1164 & https://intel.activeevents.com/sf13/connect/fileDownload/session/9E8861F1BA9547056E03A017ACCD9D3F/SF13_SECS005_100.pdf
Enable industry leading manageability by exposing health, state, resource availability for optimal workload placement and configuration
Enables premium flavors
Enhanced capabilities for cloud customers
Enhanced revenue for cloud providers
Today, how do services get provisioned? Somebody has an idea for a service and then they have to call IT. A number of people in IT go ahead and scope their needs. IT sharpens their pencils, they look at what the requirements are for reliability, for capacity, how much web services access do they need? Then they balance that against the rest of the infrastructure and all of the user demands.
They're having to look at profitability, they're looking at cost, they're looking at the capacity that they know they have based on their archives and databases, to give them a paper estimate of the capacity they really have installed. Once they've got that in place, and procure the needed equipment, they have to manually configure it.
Manually configuring a device means you touch every one. Whether it is having to physically plug in an Ethernet cable and make the connection between different boxes, or simply having to touch the command-line interface of every single box to configure and provision it appropriately, there is a human touch at every point along the way.
Once those are connected and configured, then they actually have to set up the service. This pulls together the server, the storage and the data store so that the service is actually running, and allows the original service requestor to develop the software and services they had in mind in the first place.
Then and only then do you have the service running, available and ready for customers to do business. The time to provision here is months: minimally, about eight weeks according to Intel IT internal estimates.
What should we be moving to? The end state of the future, the vision of re-architecting the data center, is something called software-defined infrastructure.
Once there's an idea for a service, the LOB customer can pull together very quickly from private or, public, cloud services, or from their own internal capabilities, using a self-service portal that orchestrates the services they need from an online catalog. Things like location, security, online payments can be pulled together automatically –then the customer can assemble the software components from a list that's available to them, whether from their internal IT department or a repository like GitHub. The service level agreement that the orchestrator creates tells the infrastructure orchestrator what resources do I need? What kind of availability do I need? How much storage do I need? How fast of a connection do I need between the compute and the storage? Then, how do I manage power and how do I manage temperature demands if I'm in a particularly intense workload?
All of this happens automatically. After the services are orchestrated, the infrastructure is orchestrated and the service is running. The time to provision a new service is minutes. Depending on how quickly somebody can put the software together, it should be push-button, done.
There are many different versions and levels of the Intel SDI vision. At the highest, most abstract level, Intel's SDI vision is simple: the customer has a self-service portal, and the service level agreements are driven automatically. Beneath that, datacenter operations are orchestrated, automated and intelligent about real-time workload health and utilization. Beneath the operations is the actual infrastructure itself: storage, network and servers. Finally, it all connects seamlessly to the power, cooling and location data, provided in an automated, ongoing manner, whether in a virtualized or non-virtualized environment.
The benefits are automated provisioning: the resources needed to meet the requirements of an SLA are assigned and provisioned automatically, based on the orchestration layer's ongoing intelligence about the available and required capacity across the entire datacenter. The orchestrator worries about policy: security, data governance, workload placement, power, energy use, etc. Facilitating all the automation and agility are pools of composable resources: flexible, defined and managed via software on standard high-volume servers.