A key reason for using dynamic tiering for mainframe storage is performance. This session will focus on dynamic tiering in mainframe environments and how to configure and control tiering. The session ends with a detailed discussion of performance considerations when using Hitachi Dynamic Tiering. By viewing this webcast, you will: Understand Hitachi Dynamic Tiering and the options for configuring and controlling tiering. Understand the performance considerations and the type of performance improvements you might experience when you implement Hitachi Dynamic Tiering. For more information on Hitachi Dynamic Tiering please visit: http://www.hds.com/products/storage-software/hitachi-dynamic-tiering.html?WT.ac=us_mg_pro_dyntir
Why Networked FICON Storage Is Better Than Direct Attached Storage – Hitachi Vantara
With the many enhancements and improvements in mainframe I/O technology in the past 5 years, the question "Do I need FICON switching technology, or should I go with direct attached storage?" is frequently asked. This webcast explores both technical and business reasons for implementing a switched FICON architecture instead of a direct attached storage FICON architecture for mainframe attached storage. The discussion will also include an overview of the Hitachi Data Systems and Brocade solutions for mainframe environments. By viewing this webcast, you’ll learn: The business and technical value of networking FICON attached storage instead of direct attached. The business and technical value of Hitachi mainframe storage capabilities. The offerings from Hitachi Data Systems and Brocade that can help you achieve the benefits of networked FICON storage. For more information on our mainframe solutions please read: http://www.hds.com/solutions/infrastructure/mainframe/?WT.ac=us_mg_sol_mnfr
Jean Thomas Acquaviva from DDN presents this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Thanks to the arrival of SSDs, the performance of storage systems can be boosted by orders of magnitude. While a considerable amount of software engineering has been invested in the past to circumvent the limitations of rotating media, there is a misbelief that a lightweight software approach may be sufficient for taking advantage of solid state media. Taking data protection as an example, this talk will present some of the limitations of current storage software stacks. We will then discuss how this unfolds into a more radical re-design of the software architecture, and ultimately makes a case for an I/O interception layer."
Learn more: http://ddn.com
Watch the video presentation: http://wp.me/p3RLHQ-f7J
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Attendees of Red Hat Storage Day New York on 1/19/16 heard from Red Hat's Ross Turk why software-defined storage matters and how it can help solve data challenges at the petabyte scale and beyond.
The Importance of Fast, Scalable Storage for Today’s HPC – Intel IT Center
Today, data drives discovery. And discoveries are key to creating sustained advantages. The better your critical workflows are able to create and access data, the better you’ll be able to discover new, innovative solutions to important problems, or to create entirely new products. More than ever before, data intensive applications need the sustained performance and virtually unlimited scalability that only parallel storage software delivers.
Designed for maximum performance and scale, storage solutions powered by Lustre software deliver the performance at scale to meet today’s storage requirements. As the most widely used parallel storage system for HPC, Lustre-powered storage is the ideal storage foundation.
But scalable performance storage by itself only solves half the problem. Today’s users expect storage solutions that deliver sustained performance, scale upward to near limitless capacities, and are simple to install and manage. Intel® Enterprise Edition for Lustre* software combines the straight-line speed and scale of Lustre with the bottom-line need for lowered management complexity and cost.
As the recognized leaders in the development and support of the Lustre file system, Intel has the expertise to make storage solutions for data intensive applications faster, smarter and easier.
In this video from the DDN User Group at SC14, Robert Triendl presents: Optimizing Lustre and GPFS Solutions with DDN.
Learn more: http://www.ddn.com/hpc-matters/
Highlights
• Complete: combining compute, interconnect, storage and system software
• Modular and Extensible: match the right combination of components and configurations to meet your workload
• Integrated: racked and tested in IBM manufacturing to reduce time to compute
Overview of the architecture and benefits of Dell HPC Storage with Intel EE Lustre in High Performance Computing and Big Science workloads.
Presented by Andrew Underwood at the Melbourne Big Data User Group - January 2016.
Lustre is a trademark of Seagate Technology.
In this presentation from the DDN User Meeting at SC13, Jeff Denworth provides a product update.
Watch the video presentation: http://insidehpc.com/2013/11/13/ddn-user-meeting-coming-sc13-nov-18/
IBM Data Engine for Hadoop and Spark - POWER System Edition, ver. 1, March 2016 – Anand Haridass
This document describes the IBM Data Engine for Hadoop and Spark (IDEHS) - Power Systems Edition, an IBM integrated solution. This solution features a technical-computing architecture that supports running Big Data-related workloads more easily and with higher performance. It includes the servers, network switches, and software needed to run MapReduce and Spark-based workloads.
A lecture on Apache Spark, the well-known open source cluster computing framework. The course consisted of three parts: a) installing the environment through Docker, b) an introduction to Spark as well as advanced features, and c) hands-on training on three (out of five) of its APIs, namely Core, SQL/DataFrames, and MLlib.
DDN GS7K - Easy-to-deploy, High Performance Scale-Out Parallel File System Appliance – inside-BigData.com
In this deck, Uday Mohan from DataDirect Networks presents: DDN GS7K - Easy-to-deploy, High Performance Scale-Out Parallel File System Appliance.
High performance computing is critical in commercial markets, spanning a wide range of applications across multiple industries, and this trend is only growing. The GS7K from DDN will help bring the latest high-performance storage technologies to more of these markets, connecting companies to their next innovations faster while satisfying their enterprise standards.
Watch the video presentation: http://wp.me/p3RLHQ-d99
Key trends in Big Data and new reference architecture from Hewlett Packard Enterprise – Ontico
The rapid development of Big Data processing tools is giving rise to new approaches to improving performance. Key new technologies in Hadoop 2.0, such as YARN labeling and storage tiering, are already in use at Yahoo and eBay. These technologies open the way to serious efficiency gains in Hadoop IT infrastructure, achieving performance improvements of several tens of percent while simultaneously reducing memory and power consumption.
HP’s reference architecture for Hadoop, the HP Big Data Reference Architecture, proposes using specialized HP Moonshot "microservers" together with high-density HP Apollo storage nodes to achieve today’s best hardware utilization for Hadoop.
Hierarchical Data Management with SUSE Enterprise Storage and HPE DMF – SUSE Italy
In this session, HPE and SUSE use real-world cases to show how HPE Data Management Framework and SUSE Enterprise Storage solve the problem of managing exponential data growth, by building a flexible, scalable and cost-effective software-defined architecture. (Alberto Galli, HPE Italia, and SUSE)
Webinar: Three Reasons Why NAS is No Good for AI and Machine Learning – Storage Switzerland
Artificial Intelligence (AI) and Machine Learning (ML) are becoming mainstream initiatives at many organizations. Data is at the heart of AI and ML. Immediate access to large data sets is pivotal to successful ML outcomes. Without data, there is no learning. The goal of AI and ML is to try to simulate human thinking and understanding. AI and ML initiatives cannot however be realized unless the data processing layer has immediate access to, and a constant supply of, data.
The problem is that most organizations try to leverage NAS solutions, often those designed for HPC environments, as their AI/ML storage architecture. Legacy storage systems, like NAS, cannot support AI and ML workloads, because they were architected when spinning disk and slower networking technologies were the industry standard.
Join Storage Switzerland and WekaIO for our on demand webinar to learn the three reasons why NAS is no good for AI and ML:
* NAS wasn’t architected to leverage today’s flash technology and can’t keep pace with the I/O demands, leaving GPUs starved for data
* NAS has no or very rudimentary Cloud Integration. Tiering to the cloud can play an integral role in AI and ML workloads
* NAS data protection schemes are expensive given the amount of data required to feed an AI/ML environment
Lessons learned processing 70 billion data points a day using the hybrid cloud – DataWorks Summit
NetApp receives 70 billion data points of telemetry information each day from its customers’ storage systems. This telemetry data contains configuration information, performance counters, and logs. All of this data is processed using multiple Hadoop clusters, and feeds a machine learning pipeline and a data serving infrastructure that produces insights for customers via an application called Active IQ. We describe the evolution of our Hadoop infrastructure from a traditional on-premises architecture to the hybrid cloud, and lessons learned.
We’ll discuss the insights we are able to produce for our customers, and the techniques used. Finally, we describe the data management challenges with our multi-petabyte Hadoop data lake. We solved these problems by building a unified data lake on-premises and using the NetApp Data Fabric to seamlessly connect to public clouds for data science and machine learning compute resources.
Architecting a truly hybrid cloud implementation allowed NetApp to free up our data scientists to use any software on any cloud, kept customer log data safe on NetApp Private Storage in Equinix, accelerated our ability to innovate and release new code, and provided the flexibility to use any public cloud while the data stays on NetApp in Equinix.
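For scale, the quoted ingest figure is easy to sanity-check: 70 billion data points per day is a sustained rate of roughly 810,000 points per second that the Hadoop pipeline must absorb. A quick back-of-the-envelope check (illustrative arithmetic only, not from the talk):

```python
# Back-of-the-envelope check on the quoted telemetry ingest rate:
# 70 billion data points/day, spread evenly over the 86,400 seconds in a day.
POINTS_PER_DAY = 70_000_000_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

sustained_rate = POINTS_PER_DAY / SECONDS_PER_DAY
print(f"~{sustained_rate:,.0f} data points/second sustained")  # ~810,185/s
```

Real ingest is bursty rather than evenly spread, so peak rates would be higher still.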
Speaker
Pranoop Erasani, NetApp, Senior Technical Director, ONTAP
Shankar Pasupathy, NetApp, Technical Director, ACE Engineering
Hear first about the new innovative technology coming from HP to help SMB IT professionals like you. See how this new technology can drive innovation in your own business, while delivering breakthrough energy, space and cost savings.
Speaker Bio: Julian has 30 years’ experience at HP, working with HP’s channel partners and customers through various hardware product sets ranging from the initial Digital PCs through PDPs, MicroServers and VAXes to HP ProLiants, where he has been working for the past 10 years overseeing Generation 1 through to the current Generation 8 models.
Over the past two decades, the Big Data stack has reshaped and evolved quickly, with numerous innovations driven by the rise of many different open source projects and communities. In this meetup, speakers from Uber, Alibaba, and Alluxio share best practices for addressing the challenges and opportunities in developing data architectures using new and emerging open source building blocks. Topics include data format optimization (ORC), storage security (HDFS), data format layers (Parquet), and unified data access layers (Alluxio).
Accelerate Analytics and ML in the Hybrid Cloud Era – Alluxio, Inc.
Alluxio Webinar
April 6, 2021
For more Alluxio events: https://www.alluxio.io/events/
Speakers:
Alex Ma, Alluxio
Peter Behrakis, Alluxio
Many companies we talk to have on-premises data lakes and use the cloud(s) to burst compute. Many are now establishing new object data lakes as well. As a result, analytics frameworks such as Hive, Spark and Presto, as well as machine learning workloads, experience sluggish response times when data and compute sit in multiple locations. We also know there is an immense and growing data management burden to support these workflows.
In this talk, we will walk through what Alluxio’s Data Orchestration for the hybrid cloud era is and how it solves the performance and data management challenges we see.
In this tech talk, we'll go over:
- What is Alluxio Data Orchestration?
- How does it work?
- Alluxio customer results
Workload Centric Scale-Out Storage for Next Generation Datacenter – Cloudian
For performance workloads, SolidFire provides a scale-out all-flash storage platform designed to deliver guaranteed storage performance to thousands of application workloads side by side, allowing performance workloads to be consolidated onto a single storage platform. SolidFire systems can be combined over standard networking technologies into clusters ranging from 4 to 100 nodes, providing high-performance capacity from 35TB to 3.4PB, and can deliver between 200,000 and 7.5M guaranteed IOPS to more than 100,000 volumes/applications within a single cluster.
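The cluster ranges quoted above imply roughly linear per-node contributions. As a sketch (assuming capacity and IOPS are spread evenly across nodes, which is our illustration rather than a vendor specification):

```python
# Implied per-node contribution at each end of the quoted SolidFire ranges:
# 4 nodes / 35 TB / 200,000 IOPS up to 100 nodes / 3.4 PB / 7.5M IOPS.
# Assumes even distribution across nodes (illustrative only).

def per_node(total_small, total_large, nodes_small=4, nodes_large=100):
    """Implied per-node share at the small and large end of a range."""
    return total_small / nodes_small, total_large / nodes_large

tb_small, tb_large = per_node(35, 3.4 * 1000)          # TB per node
iops_small, iops_large = per_node(200_000, 7_500_000)  # IOPS per node

print(f"capacity: ~{tb_small:.2f} TB/node (4 nodes) to ~{tb_large:.1f} TB/node (100 nodes)")
print(f"IOPS: ~{iops_small:,.0f}/node (4 nodes) to ~{iops_large:,.0f}/node (100 nodes)")
```

The per-node figures grow with cluster size in the quoted ranges, consistent with denser nodes being available at the high end rather than pure linear scaling of one node type.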
New Ceph capabilities and Reference Architectures – Kamesh Pemmaraju
Have you heard about Inktank Ceph and are interested to learn some tips and tricks for getting started quickly and efficiently with Ceph? Then this is the session for you!
In this two part session you learn details of:
• the very latest enhancements and capabilities delivered in Inktank Ceph Enterprise such as a new erasure coded storage back-end, support for tiering, and the introduction of user quotas.
• best practices, lessons learned and architecture considerations founded in real customer deployments of Dell and Inktank Ceph solutions that will help accelerate your Ceph deployment.
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About? – Red_Hat_Storage
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About? By Kamesh Pemmaraju and Neil Levine
New Generation of IBM Power Systems Delivering Value with Red Hat Enterprise Linux – Filipe Miranda
New Generation of IBM Power Systems Delivering Value with Red Hat Enterprise Linux: learn about the new IBM POWER8 architecture, about Red Hat Enterprise Linux 7 for Power Systems, and additional information from EnterpriseDB on how to migrate from Oracle to PostgreSQL.
Data is the fuel for the idea economy, and being data-driven is essential for businesses to be competitive. HPE works with all the Hadoop partners to deliver packaged solutions that help you become data-driven. Join us in this session and you’ll hear about HPE’s enterprise-grade Hadoop solution, which encompasses the following:
-Infrastructure – Two industrialized solutions optimized for Hadoop; a standard solution with co-located storage and compute and an elastic solution which lets you scale storage and compute independently to enable data sharing and prevent Hadoop cluster sprawl.
-Software – A choice of all popular Hadoop distributions, and Hadoop ecosystem components like Spark and more. And a comprehensive utility to manage your Hadoop cluster infrastructure.
-Services – HPE’s data center experts have designed some of the largest Hadoop clusters in the world and can help you design the right Hadoop infrastructure to avoid performance issues and future proof you against Hadoop cluster sprawl.
-Add-on solutions – Hadoop needs more to fill in the gaps. HPE partners with the right ecosystem partners to bring you solutions such as industrial-grade SQL on Hadoop with Vertica, data encryption with SecureData, the SAP ecosystem with SAP HANA Vora, multitenancy with BlueData, object storage with Scality, and more.
Artem Bykovets: Why people don’t suddenly become cross-functional even though we have Agile (UA) – Lviv Startup Club
Kyiv PMDay 2024 Summer
Website – www.pmday.org
Youtube – https://www.youtube.com/startuplviv
FB – https://www.facebook.com/pmdayconference
Natalia Renska & Roman Astafiev: Narcissists and psychopaths in organizations, and how they affect product development and the delivery of innovative solutions (UA) – Lviv Startup Club
Igor Protsenko: Difference between outsourcing and product companies for product managers and related challenges (UA) – Lviv Startup Club
Kseniya Leshchenko: Shared development support service model as the way to make small projects with small budgets profitable for the company (UA) – Lviv Startup Club
Anna Kompanets: Project implementation problems you would never have thought of (UA) – Lviv Startup Club
Anton Hlazkov: Is change implementation a process or a project? Why understanding the difference matters and how it affects the outcome (UA) – Lviv Startup Club
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Macroeconomics – Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
Executive Directors Chat: Leveraging AI for Diversity, Equity, and Inclusion – TechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
How to Add Chatter in the Odoo 17 ERP Module – Celine George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
2024.06.01 Introducing a competency framework for language learning materials ... – Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
MATATAG CURRICULUM: ASSESSING THE READINESS OF ELEM. PUBLIC SCHOOL TEACHERS I... – NelTorrente
This research concludes that while the readiness of teachers in Caloocan City to implement the MATATAG Curriculum is generally positive, targeted efforts in professional development, resource distribution, support networks, and comprehensive preparation can address the existing gaps and ensure successful curriculum implementation.
How to Build a Module in Odoo 17 Using the Scaffold Method – Celine George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
4. IMPORTANT DATES IN HP(E) HISTORY
1939 – A new company; HP invents its first product
1959 – Going global
1966 – HP enters the computer industry; HP Labs opens
1972 – Replacing the slide rule: HP invents the pocket calculator
1980 – Our first PCs
1984 – A print revolution: HP introduces both the ThinkJet and the LaserJet
1994 – Planet Partners program launched
2003 – Cooler servers
2005 – Halo Collaboration Studio
2008 – Commitment to cloud computing
5. Hewlett Packard Enterprise: At a glance
People
Values
Quality
Market leadership
Living Progress
Revenue
Servers1
1. IDC Worldwide Quarterly Server Tracker 2Q18, Sept 2018. Market share on a global level for
HPE includes New H3C Group . All data points are worldwide.
2. IDC Worldwide Quarterly Enterprise Storage Tracker 2Q18, Sept 2018. Market share on a global level for HPE includes
New H3C Group All data points are worldwide
3. Hyperion Research HPC Qview for 2Q18, September 2018. TOP500 List of Supercomputer sites, Nov 2017
#1 x86 blade server revenue
#1 Modular server revenue
#1 Four-socket x86 server revenue
#1 Mid-Range Enterprise x86 server
Storage2
#1 Product brand worldwide midrange SAN revenue :
HPE 3PAR StoreServ
#1 Worldwide Internal OEM storage revenue
High Performance Compute3
#1 HPC Server Revenue3
Provider of Top500 energy efficient supercomputers
Hyperconverged Infrastructure4
Fastest growing HCI systems vendor of the top 3,
growing YoY and faster than overall market
HPE Market Leadership
Enterprise WLAN5
#2 Worldwide Enterprise WLAN Vendor
Campus Switching6
#2 Worldwide Enterprise WLAN Vendor
Gartner:
• 2018 Magic Quadrant for Wired and Wireless LAN access
• 2018 Magic Quadrant for Operations Support Systems
• 2018 Magic Quadrant for Hyperconverged Infrastructure
• Highest Scores in 5 out of 6 Gartner use cases for
Critical Capabilities for Wired and Wireless LAN Access
Infrastructure
Forrester:
• Q3-18 The Forrester Wave: Hyperconverged Infrastructure
IDC:
• IDC MarketScape for Wireless LAN
InfoTech Research Group:
• HPE Aruba "Champion" in Wired and Wireless LAN Vendor Landscape
HPE Named Leader7
4. IDC Worldwide Converged Systems Tracker for 2Q18, Sept 25, 2018
5. Worldwide Quarterly IDC Enterprise WLAN Tracker 4Q1
6. 650 Group 2QCY18, September 2018
7. Sources provided via hyperlinks
6. Hewlett Packard Enterprise: At a glance
Partnership first
We believe in the power of
collaboration – building long
term relationships with our
customers, our partners and
each other.
Bias for action
We never sit still – we take
advantage of every opportunity.
Innovators at heart
We are driven to innovate –
creating both practical and
breakthrough advancements.
7. Together, shaping and leading the next generation of High Performance
Computing (HPC) and Artificial Intelligence (AI)
8. HPC Solutions Business Unit Solution Areas
High Performance Computing
- Oil & Gas computations
- Meteo/weather forecast
- Manufacturing CAE
- Life sciences (Bio, Chem, …)
Big Data applications
- Hadoop & Spark
- Content delivery
- Rendering
- In-memory compute & DB
Scale-Out Storage
- Scale-out digital archive
- Media asset archives
- Geo-distributed storage
- Video surveillance archive
Performance Optimized Datacenters
- Modular datacenters
- Mobile datacenters
- EMI/EMR protected DC
- Portable mini DC
10. HPE Data Management Framework
• Efficient storage utilization and cost management
• Streamline data workflows
• Data assurance and protection
Tier targets: Tape, Zero Watt Storage, Object Storage & Cloud
Data Management | Fast & Slow Tier Models
Aggregated Storage-in-Compute
[Diagram: four HPE Compute Nodes with file system access, connected over Ethernet / InfiniBand / Slingshot to two Flash Tier Storage Servers, each holding 10 NVMe devices]
• All nodes have full POSIX access to the flash tier and parallel file system
• Aggregated Storage-in-Compute model is where multiple NVMe devices are placed
in dense compute nodes (e.g. 1U nodes with 10 NVMe devices)
• Flash configuration provides burst buffer capabilities and persistent shareable
POSIX file system functionality in a single layer
• For expanded tiered data management capabilities, DMF can tier data from/to this
layer into object & cloud storage, Zero Watt buffer storage or tape in order to
deliver virtually infinite capacity as well as integrated backup, archive and disaster
recovery capabilities
SOLUTION ATTRIBUTES
Lustre
HDFS
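The tiering idea on this slide can be pictured as a small policy engine: data that goes cold on the flash tier migrates down to cheaper media and is recalled on access. The sketch below is a toy illustration of that concept, not DMF's actual policy language; all names (`FileState`, `place`, the thresholds) are invented for the example.

```python
import time
from dataclasses import dataclass

# Toy tiering policy in the spirit of a DMF-style data mover: hot files
# stay on flash, files idle past a threshold migrate toward Zero Watt
# disk, object/cloud storage, or tape. All names and thresholds here are
# invented for illustration; real DMF policies are far richer.

@dataclass
class FileState:
    size_bytes: int
    last_access: float          # epoch seconds
    tier: str = "flash"

def place(f: FileState, now: float) -> str:
    """Pick a target tier from idle time: the colder, the cheaper."""
    idle = now - f.last_access
    if idle < 3600:             # accessed within the hour: keep on flash
        return "flash"
    if idle < 86400:            # idle up to a day: spun-down disk
        return "zero_watt"
    if idle < 30 * 86400:       # idle up to a month: object/cloud
        return "object"
    return "tape"               # archival

def migrate(files: dict[str, FileState], now: float) -> list[tuple[str, str, str]]:
    """Return (name, from_tier, to_tier) moves the policy would make."""
    moves = []
    for name, f in files.items():
        target = place(f, now)
        if target != f.tier:
            moves.append((name, f.tier, target))
            f.tier = target
    return moves

now = time.time()
files = {
    "checkpoint.h5": FileState(10**9, now - 60),           # hot
    "run_42.log":    FileState(10**6, now - 2 * 86400),    # cool
    "raw_2019.tar":  FileState(10**12, now - 90 * 86400),  # cold
}
print(migrate(files, now))
# → [('run_42.log', 'flash', 'object'), ('raw_2019.tar', 'flash', 'tape')]
```

In this sketch the hot checkpoint stays put while idle files cascade down; DMF's value is doing the equivalent transparently under a POSIX namespace.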
11. Apollo 4000 Cluster
More data from the edge means more storage in the core
Apollo 4200 Gen9: 2U platform; 28 LFF HDD or 54 SFF HDD
Apollo 4510 Gen10: 4U platform; 60 LFF HDD
JBOD option (D8000): 4U, 106 LFF HDD
Data lake tiers: Hot / Warm / Cold
Tiered storage for Big Data analytics; Process and Train stages; data storage for AI workflows
12. Zero Watt Storage
HPE Data Management Framework
High Performance
Power Optimized
Extended Drive Lifespan
• Near 20 GB/s per JBOD performance
provides ‘fast’ hard disk tier to stream data
to active ‘hot’ storage
• Each drive is individually managed by DMF
to track data activity and data layout
• Drives can be spun down when not in use
to significantly reduce power and cooling
costs and increase drive lifespan
• HPE D6020 5U 70 bay JBOD is qualified
today
Software-based DMF warm tier storage
option with minimized power utilization
paired with the HPE D6020 JBOD
13. HPE Scalable Object Storage – Scality
Object storage (and some file)
• Key attributes
− Scalable software-defined storage for object (S3) and
file access (SMB/NFS) at the same time
− erasure coding (variable) and replication (small files)
− Native data protection in a shared-nothing, distributed
architecture with no single point of failure
− Multi-node, multi-site, multi-geo data distribution for
extreme data durability (up to 13x 9s)
− Connectors for multiple file and cloud access protocols
to easily support various business applications
− Easy and proven growth path
− Large (reference) customer installed base
• Tight collaboration with HPE; HW encryption
• Certified as Cloud Bank Storage target
• Various whitepapers and reference architectures
available
• Architecture/building blocks
• Sweet spot 500TB – 100s of PBs (scales to Exabytes)
• Minimum: 6 nodes with 10 HDDs/node
3-node min. support (200TB+) – single/2-site only
• Connector nodes need to be configured separately
14. Object store resilience – through Geo-distributed Erasure Coding
Drive failures, node failures, zone and region failures
[Diagram: compute and storage nodes distributed across Data Centers A, B, and C]
• Component and network failures are to be expected, and thus considered a normal state
• The system functions properly in spite of multiple failures
15. How does erasure coding work?
Example: ARC(9,3): a 9 MB original file is split into nine 1 MB data chunks, from which three 1 MB parity chunks are computed.
Provides three-disk failure protection with ~33% overhead.
RING Erasure Coding
- Reed-Solomon EC algorithm (custom XOR acceleration library)
- Dynamically configurable schema: up to 64 data + parity chunks to protect against a variable number of failures
Flexible & Efficient
- Configurable replication or erasure coding per connector
- Great for large objects: avoids replication overhead
- Data chunks stored in the clear to avoid read performance penalties
- Scales easily: more cost savings and less overhead with multiple sites
Erasure coding is a cost-effective way to store big files
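The chunk-plus-parity idea can be shown in a few lines. The real RING uses Reed-Solomon coding (e.g. ARC(9,3), surviving three lost chunks); the toy sketch below uses a single XOR parity chunk, which tolerates only one lost chunk, purely to make the mechanism concrete. All function names are invented for the example.

```python
# Toy erasure-coding sketch: split data into k chunks, add XOR parity,
# and rebuild one lost chunk. Real systems (e.g. Scality RING) use
# Reed-Solomon codes with multiple parity chunks; this is a 1-parity
# simplification to illustrate the principle.

def split_chunks(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal-size chunks (zero-padded at the end)."""
    size = -(-len(data) // k)               # ceiling division
    padded = data.ljust(k * size, b"\0")
    return [padded[i * size:(i + 1) * size] for i in range(k)]

def xor_parity(chunks: list[bytes]) -> bytes:
    """Compute a single parity chunk as the XOR of all data chunks."""
    parity = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            parity[i] ^= b
    return bytes(parity)

def recover(chunks: list, parity: bytes) -> list[bytes]:
    """Rebuild the one missing chunk (None) from survivors + parity."""
    missing = chunks.index(None)
    rebuilt = bytearray(parity)
    for j, c in enumerate(chunks):
        if j != missing:
            for i, b in enumerate(c):
                rebuilt[i] ^= b
    chunks[missing] = bytes(rebuilt)
    return chunks

data = b"nine megabytes of data, in miniature"
chunks = split_chunks(data, 9)              # nine data chunks
parity = xor_parity(chunks)                 # one parity chunk
chunks[4] = None                            # simulate a failed drive
restored = recover(chunks, parity)
assert b"".join(restored).rstrip(b"\0") == data
```

With Reed-Solomon, the same structure generalizes: m parity chunks let any m of the k+m chunks fail, which is where the ~33% overhead of ARC(9,3) comes from (3 parity / 9 data).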
16. WekaIO Parallel File System for All-Flash Environments
Applications and storage share the compute & fabric infrastructure
Cold data tier; unified namespace
Nodes can be FS clients, FS servers, or both
Ethernet or InfiniBand network
Apollo 2000
Apollo 6000
Apollo 6500
SGI 8600
DL 360/380
Option for Apollo 2000-based
storage server model with 4
nodes per 2U chassis loaded
with NVMe storage
17. WekaIO delivers faster performance than local disk
– Problem: could not achieve the bandwidth required to keep the GPU cluster saturated
– Pain point: wasted cycle time ($$$$) on very expensive GPU clusters
– Test platform: 10-node HPE Apollo 2000 server vs. local disk and Pure Storage FlashBlade
– Result: WekaIO 42% faster than local disk; 4.4x faster than FlashBlade
[Chart: 1 MB read performance to a single GPU client, in MB/second (higher is better); WekaIO 3.0 vs. local disk SSD vs. Pure Storage. Analytics cluster results; actual measured data at an autonomous vehicle training installation]
18. AI data node using Apollo 4200 Gen10
For tiered and hybrid solutions
Apollo 2000 Gen10
DL360 Gen10
Scale-out S3-compatible archive
Petabytes of geo-dispersed data
AI Workloads
GPU-driven data training, recognition, visualization, and simulation
High-performance File
All-NVMe flash storage
Apollo 4200 Gen10
Scality
Apollo 4200 Gen10
Scality
RING
AI data node
NVMe-optimized storage
Scale-out HDD bulk storage
Combined in one storage platform
Current Approach: Tiered Storage New Approach: Hybrid Storage
HPE validated solution
https://www.hpe.com/h20195/v2/Getdocument.aspx?docname=a00065979enw
19. CLUSTERSTOR E1000 HARDWARE
“Zero Bottleneck” End-to-End PCIe 4.0 Design
Up to 24 x 2.5” NVMe
PCIe 4.0 SSD in
2 rack units
2 embedded
storage servers each
with 1 AMD “Rome”
socket and PCIe 4.0
Up to 6 x 100/200
Gbps PCIe 4.0 NICs
(Slingshot, GbE, IB)
Up to 230 TB usable
in 2 rack units
Lustre Flash Optimized
Metadata Servers,
Object Storage Servers
and
All Flash Arrays
60 GB/sec Write
80 GB/sec Read
20. CLUSTERSTOR E1000 DISK ARRAY
Ultra-dense for fewer enclosures, racks, and floor space
Up to 106 x 3.5” SAS
HDD in 4 rack units
Usable capacity points
• 1.07 PB (14 TB HDD) in 2019
• 1.22 PB (16 TB HDD) in 2020
• 1.53 PB (20 TB HDD) in 2021
Separate Disk Server with:
• 2 embedded storage servers
each with 1 AMD “Rome” socket
and PCIe 4.0
• Up to 4 x 100/200 Gbps
PCIe 4.0 NICs
(GbE, IB, OPA)
• 2 or 4 SSDs for WIBs,
Journals and NXD
21. Flexible modularity AND extreme scale for HPC & AI workloads
HPE Superdome Flex - Advanced SMP
5U, 4-socket
chassis
Scales up to 8 chassis and 32
sockets as a single system in a
single rack
Unparalleled Scale
– Modular scale-up architecture
– Scales seamlessly from 4 to 32 sockets as a single system
with both Gold and Platinum processors
– Designed to provide 768GB-48TB of shared memory
– High bandwidth (13.3 GB/s bi-directional per link), low latency (<400 ns)
HPE Flex Grid, ~1TB/s total aggregation bandwidth
– Intel ® Xeon® Scalable processors, 1st and 2nd generation, with up to 28 cores
Unbounded I/O
– Up to 128 PCIe standup cards, LP/FH PCIe
Optimum Flexibility
– 4-socket chassis building blocks, low entry cost; HPE nPars
– NVIDIA GPUs, Intel SDVis
– 1/10/25 GbE, 16/32 Gb FC, IB EDR/100 GbE, IB HDR, Omni-Path
– SAS, Multi-Rail LNet for Lustre; NVMe SSD
– MPI, OpenMP
Extreme Availability
– Advanced memory resilience, Firmware First, diagnostic engine,
self-healing
– HPE Serviceguard for Linux
Simplified User Experience
– HPE OneView, IRS, OpenStack, Redfish API
– HPE Datacenter Care, HPE Proactive Care
23. HPE’s architecture innovation addresses declining system ratios despite
improvements in processing performance
[Chart, logarithmic scale, 2010-2022: memory capacity/flops, memory bandwidth/flops, injection bandwidth/flops, and bisection bandwidth/flops (weighted averages) declining across Hopper, Sequoia, Titan, Edison, Cori Hsw, Trinity KNL, Aurora, and Summit]
- Balanced system architecture
- Memory-driven programming model
- Energy efficiency from chip to cooling tower
- Open architecture, open ecosystem
HPE is developing advanced system architecture for more balanced systems at scale
24. Memory Bandwidth
- Embrace the co-packaged memory transition (HBM, HMC …)
- Minimize latency for Gen-Z attached memory
Memory Capacity
- Drive co-packaged memory cost as low as possible
- Enable Gen-Z attached memory as a second memory tier (DRAM or NVM)
Fabric Injection Rate
- Embed the HCA in the CPU, leveraging SerDes generalization thanks to Gen-Z
- Integrated switches close to compute for a multiple-rails option
Fabric Bisection Bandwidth
- Design high-radix switches
- Integrate and optimize optical technologies (VCSEL -> SiP) for cost and usability
HPE's technological innovation includes new memory, photonics and fabric technology for data-intensive workloads
25. Here is Edward Bear, coming downstairs
now, bump, bump, bump, on the back of
his head, behind Christopher Robin. It is,
as far as he knows, the only way of
coming downstairs, but sometimes he
feels that there really is another way, if
only he could stop bumping for a moment
and think of it.
A. A. Milne, Winnie-the-Pooh
26. For the highest possible level of performance, applications must change
Evolving the Software Ecosystem for Persistent Memory
- Traditional path (SSD/HDD): application (objects, interpreters, libraries) -> operating system (file system, I/O buffers, drivers) -> controller, cache -> media. ~25k instructions and 3+ data copies; the OS storage stack is the bottleneck.
- Persistent memory path: application (objects, interpreters, libraries) -> media. ~3 instructions and 0 data copies; is there still a bottleneck?
27. The Traditional Memory/Storage Hierarchy
- Processor registers and cache (L1/L2/L3): super fast, super expensive, tiny capacity (hot tier)
- Physical memory (random access memory): faster, expensive, small capacity
- Solid state storage (non-volatile flash-based memory; NVMe and SAS/SATA SSD): fast to average speed, reasonably priced, average capacity
- Magnetic storage (SAS/SATA HDD, file-based): slow, inexpensive, large capacity (cold tier)
Moving down the hierarchy from CPU and DRAM, speed drops and capacity grows.
28. Redefining the Memory/Storage Hierarchy
Persistent memory slots in between memory (CPU, DRAM) and storage (NVMe SSD, SAS/SATA SSD, SAS/SATA HDD), and can:
- Work as DRAM: data is volatile; system DRAM is used as a cache
- Work as SSD: data is persistent; system DRAM is used as main memory
29. Storage Device Access Modes: I/O Stack Comparison
- Traditional (HDD/SSD): App -> file system -> Volsnap -> Volmgr/Partmgr -> Disk/ClassPnP -> StorPort -> MiniPort -> HDD/SSD. Cached, non-cached, or memory-mapped I/O crossing from user mode to kernel mode; read(fileptr, offset) / write(fileptr, offset) OS calls take 4-10 μs.
- PMM Block Mode: App -> file system -> Volsnap -> Volmgr/Partmgr -> PMM disk driver -> PMM bus driver -> PMM. 1-3 μs.
- PMM Direct Access (DAX): App -> PMM-aware file system -> Volmgr/Partmgr -> PMM bus driver -> PMM. load(address) / store(address) CPU opcodes: 0.3 μs.
30. Storage over App Direct and Direct Access Applications
[Diagram: Storage over App Direct applications reach persistent memory devices through read/write syscalls, the page cache, and a file system + DAX over PMM drivers; Direct Access applications use load/store via mmap and PMDK APIs, spanning user space, kernel, and FW/HW]
- Volatile DRAM is used for system memory; persistent memory devices are provided by the PMM
- PMEM device(s) presented to the OS can be formatted and mounted as a filesystem in fsdax mode: ext4, xfs, NTFS
- Storage over App Direct (SToAD): applications access through the storage software layer (legacy path, no application change): open(), read(), write()
- Direct Access: NVM programming model load/store: mmap(), memcpy(), PMDK
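The two access models on this slide can be contrasted in a few lines. This is a hedged sketch: an ordinary temporary file stands in for a fsdax-mounted PMEM device, so only the programming model is illustrated, not the hardware path. On real DAX, the mapped stores below would hit persistent media directly with no page cache.

```python
# Sketch: legacy SToAD-style syscall access vs. DAX-style mapped access.
# A plain temp file substitutes for a fsdax mount; the contrast is in the
# programming model (syscalls vs. load/store into mapped memory).
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_stand_in")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)                   # one 4 KB "page"

fd = os.open(path, os.O_RDWR)

# Storage over App Direct path: unchanged applications keep issuing
# open()/read()/write() syscalls through the storage software layer.
os.pwrite(fd, b"via syscall", 0)
assert os.pread(fd, 11, 0) == b"via syscall"

# Direct Access path: map the file and touch bytes directly; on DAX this
# is a CPU load/store with no read()/write() and no page cache.
with mmap.mmap(fd, 4096) as m:
    m[0:9] = b"via load/"                   # a plain store into mapped memory
    m.flush()                               # analogous to persisting to media
    assert bytes(m[0:9]) == b"via load/"
os.close(fd)
```

PMDK's libpmem wraps exactly this mmap/flush pattern with cache-line-granular persistence primitives.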
32. Now "slow" (non-DAX) access to the Persistent Memory via standard OS I/O system calls
The same single 4 KB read, but: logical I/O only (green), NO physical I/O (yellow) anymore. NO changes for any DB/app required! Total 4 KB read time <2 μs.
0.000003 cpu=13 pid=5979 tgid=0 pread64 [17] entry fd=3 *buf=0x55b883ff2000 count=4096 offset=0x15c145d000
0.000004 cpu=13 pid=5979 tgid=0 pread64 [17] ret=0x1000 syscallbeg= 0.000002 fd=3 *buf=0x55b883ff2000 count=4096 offset=0x15c145d000
33. Fastest ever access via Direct Access (DAX)
Total 4 KB read time? NO read! Latencies 350 ns or less!
NO physical I/O, NO logical I/O, NO block device layer, NO buffers, NO queues!!
Nothing beyond! App Direct!!
34. Simplest log-writer-like workload, run with fio pinned to one CPU:
linux-tg7k:/home/anton # numactl --physcpubind=55 --membind=1 fio --filename=/mnt2/file --rw=randwrite --ioengine=psync --direct=1 --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --group_reporting --name=perf_test
linux-tg7k:/home/anton # numactl --physcpubind=55 --membind=1 fio --filename=/mnt1/file --rw=randwrite --ioengine=psync --direct=1 --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --group_reporting --name=perf_test
NVMe: real CPU utilization is 617 MHz:
CPU Avg_MHz Busy% Bzy_MHz TSC_MHz IRQ POLL C1 C1E C6 POLL% C1% C1E% C6%
55 617 33.54 1843 2694 120930 50017 58378 27884 29561 6.01 36.41 15.46 15.90
PMEM: real CPU utilization is 3786 MHz, Intel Turbo Boost activated!!
CPU Avg_MHz Busy% Bzy_MHz TSC_MHz IRQ POLL C1 C1E C6 POLL% C1% C1E% C6%
55 3786 99.97 3796 2694 1792 0 0 0 0 0.00 0.00 0.00 0.00
And let me explain why: POLL is not a true CPU idle state, nor a true CPU work state; it is a "do nothing" CPU state, and you shouldn't pay money for that!
35. Gen-Z: new open interconnect protocol
Key enabler of the Memory-Driven Computing open architecture
[Diagram: CPUs (SoC), accelerators (FPGA, GPU, neuro, ASIC), memory technologies (DRAM, NVM) and I/O (network, storage) connected in a direct-attach, switched, or fabric topology]
– High bandwidth, low latency
– Advanced workloads & technologies
– Scalable from IoT to exascale
– Compatible and economical
– Supports electrical or optical interconnects
– Open standard
– Security built in at the hardware level
37. DZNE discovered HPE’s Memory-Driven
Computing — and saw unprecedented
computational speed improvements that hold
new promise in the race against Alzheimer’s
- 60% power reduction cuts research costs
- 101x increase in analytics speed blasts research bottlenecks, leading to shorter processing time: from 22 minutes to 13 seconds
Memory-Driven Computing helps outpace the global time bomb of neurodegenerative disease.