The document summarizes emerging computing trends in data centers, including:
1) The shift to multi-core CPU designs after Dennard scaling broke down, driven by the need for energy-efficient designs for cloud computing.
2) The rise of heterogeneous computing using domain-specific accelerators such as GPUs and FPGAs to improve efficiency for targeted workloads like machine learning.
3) How technologies developed for mobile and edge computing like ARM cores can improve data center server efficiency through typical-use optimization rather than just peak performance.
HKG18-500K1 - Keynote: Dileep Bhandarkar - Emerging Computing Trends in the Datacenter (Linaro)
Session ID: HKG18-500K1
Session Name: HKG18-500K1 - Keynote: Dileep Bhandarkar - Emerging Computing Trends in the Datacenter
Speaker: Dileep Bhandarkar
Track: Keynote
★ Session Summary ★
For decades we have been able to take advantage of Moore’s Law to improve single-thread performance and reduce power and cost with each generation of semiconductor technology. While technology has continued to advance since Dennard scaling ended more than 10 years ago, those advances have slowed. Server performance increases have instead relied on growing core counts and power budgets.
At the same time, workloads have changed in the era of cloud computing. Scale out is becoming more important than scale up. Domain specific architectures have started to emerge to improve the energy efficiency of emerging workloads like deep learning.
This talk will provide a historical perspective and discuss emerging trends driving the development of modern server processors.
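The tension the abstract describes, between scaling up a single server with more cores and scaling out across machines, can be made concrete with Amdahl's law. The sketch below is illustrative only; the formula is standard, but the parallel fraction used is a made-up example value, not a figure from the talk.

```python
# Illustrative: Amdahl's law shows why adding cores to one server yields
# diminishing returns once single-thread performance stalls, which is one
# reason cloud workloads favor scale-out designs.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Speedup of a workload where `parallel_fraction` of the work can be
    spread across `n_cores`; the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

if __name__ == "__main__":
    p = 0.95  # assumed: 95% of the workload parallelizes (example value)
    for cores in (1, 4, 16, 64, 256):
        print(f"{cores:4d} cores -> {amdahl_speedup(p, cores):6.2f}x")
    # Even with 95% parallel work, speedup saturates near 1/(1-p) = 20x,
    # no matter how many cores a single server adds.
```

Even this toy model shows the saturation point that pushes designers toward scale-out and domain-specific accelerators.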
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-500k1/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-500k1.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-500k1.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Keynote
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
Simplify Data Management and Go Green with Supermicro & Qumulo (Rebekah Rodriguez)
Data is growing faster than existing systems are designed to ingest and analyze it. As a result, storage sprawl, wasted resources, and time-consuming complexity are holding employees and customers back from making better business decisions. Supermicro and Qumulo have teamed up to create a simple, sustainable, and fast system to store and manage massive amounts of unstructured data.
Join this webinar to learn how to deploy a high-performance, dense infrastructure platform that meets business requirements by taming unstructured-data management challenges with Qumulo and Supermicro.
Watch the webinar: https://www.brighttalk.com/webcast/17278/513928
As the industry strives toward immersive VR experiences, we are guided by the extreme requirements associated with intuitive interactions, visual quality, and sound quality, in order to achieve the ultimate mobile VR experience. Precise, low-latency motion tracking of head movements is crucial for intuitive interactions with the virtual world, and visual-inertial odometry (VIO) is the ideal complementary subsystem to achieve this goal. VIO allows for six-degrees of freedom (6 DoF) in VR experiences, reduces latency, and cuts the cord. In this presentation, you will learn about:
• The enhanced user experiences that 6 DoF provides over 3 DoF
• The evolution of motion tracking
• How Qualcomm’s on-device VIO implementation provides a precise head pose at a high frequency yet at low latency and power
• The impact of 6 DoF on VR content development
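To give a feel for the inertial half of motion tracking that the VIO abstract refers to, here is a toy complementary filter for a single tilt axis. This is emphatically not Qualcomm's implementation; real VIO fuses camera features with a full 6 DoF IMU state, typically in an extended Kalman filter, and all sensor values below are fabricated for illustration.

```python
# Toy complementary filter: blends the gyroscope's integrated angle
# (smooth but drifting) with the accelerometer's absolute tilt estimate
# (noisy but drift-free). A minimal sketch of inertial orientation
# tracking; real VIO systems are far more sophisticated.

def complementary_filter(angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Return the new angle estimate after one sensor sample.

    angle:       previous estimate (degrees)
    gyro_rate:   angular velocity from the gyro (degrees/second)
    accel_angle: absolute tilt derived from the accelerometer (degrees)
    dt:          sample interval (seconds)
    alpha:       trust placed in the gyro path (example value)
    """
    gyro_angle = angle + gyro_rate * dt  # integrate angular velocity
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

if __name__ == "__main__":
    angle = 0.0
    # Fabricated samples: device tilting toward ~10 degrees.
    samples = [(5.0, 2.0), (5.0, 4.0), (5.0, 6.0)]  # (deg/s, accel deg)
    for gyro_rate, accel_angle in samples:
        angle = complementary_filter(angle, gyro_rate, accel_angle, 0.01)
    print(f"estimated tilt: {angle:.3f} degrees")
```

The low-latency requirement in the talk is visible even here: the gyro path responds immediately each sample, while the accelerometer term only slowly corrects accumulated drift.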
Kevin Shaw at AI Frontiers: AI on the Edge: Bringing Intelligence to Small Devices (AI Frontiers)
The edge is the domain of the Internet of Things, of personal medical devices, of cars that understand the world, of machines that self-regulate, and more. These devices share a common constraint: they can't send full data to the cloud for processing. This talk will review the changing needs for AI at the edge, the demands that learning networks place on small cores, and the changing hardware being provided to meet those demands.
The next evolution in cloud computing is a smarter application outside the cloud. As the cloud has evolved, the applications that use it have taken on more and more of its capabilities. This presentation will show how to push logic and machine learning from the cloud to an edge application; afterward, creating edge applications that draw on the cloud's intelligence should be effortless.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/xilinx/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Nick Ni, Director of Product Marketing at Xilinx, presents the "Xilinx AI Engine: High Performance with Future-proof Architecture Adaptability" tutorial at the May 2019 Embedded Vision Summit.
AI inference demands orders of magnitude more compute capacity than what today’s SoCs offer. At the same time, neural network topologies are changing too quickly to be addressed by ASICs that take years to go from architecture to production. In this talk, Ni introduces the Xilinx AI Engine, which complements the dynamically programmable FPGA fabric to enable ASIC-like performance via custom data flows and a flexible memory hierarchy. This combination provides an orders-of-magnitude boost in AI performance along with the hardware architecture flexibility needed to quickly adapt to rapidly evolving neural network topologies.
Open Source Edge Computing Platforms - Overview (Krishna-Kumar)
IEEE 11th International Conference - COMSNETS 2019 - Last Mile Talk - Jan 2019. This talk is aimed at beginner and intermediate levels only. Kubernetes and related edge platforms are discussed.
Dell Networking’s Unified Network Architecture enables customers to build campus networks in a new way. The C9010 and C1048P convert your entire Enterprise network into a single switching entity, simplifying initial configuration and on-going operational aspects. Learn more: http://dell.to/1WtTO33
Virtualization and Migration in Cloud - Edge Computing models using OpenStack... (Sai Praveen Seva)
The main goal of this project/thesis is to leverage the OpenStack cloud platform and make it efficient at facilitating cloud-edge computing models for IoT devices, termed “edge nodes,” in the context of a smart city, as part of the SmartME initiative by the University of Messina, Italy.
The Software-Based Data Center: Is It For You? (Dell World)
As more and more IT organizations experience the benefits of server virtualization, expanding to a broader implementation that encompasses networking and storage seems like the next logical step. But is it for you? In this session, discover the benefits of tomorrow's software-based data centers and learn how they can help maximize the delivery of IT services.
Your customers have an insatiable appetite for video content. But they expect a captivating video experience on every screen. Until now, this meant you had to manually handle all the complexity – from optimal encoding, re-framing, resizing, and enhancing to tying up your creative, marketing, and technical teams. Thanks to AI and cutting-edge technologies, the tedious, manual workload of managing videos can be eliminated.
Listen in to a discussion on how to leverage a dynamic media platform for managing, optimizing, and delivering engaging video experiences along with a product demo that will cover:
- Automating transcoding and quality compression
- Adapting video content for mobile devices and social platforms
- Auto-generating previews and subtitles
- Providing a custom viewing experience
https://info.cloudinary.com/Delivering-Compelling-Video-Experiences-at-Scale.html
Rightscale Webinar: Building Blocks for Private and Hybrid Clouds (RightScale)
Looking for some solid guidance to help build your private or hybrid cloud? Want to turn your existing data center into a private cloud? Or perhaps you want to integrate your private cloud with a public cloud, but you’re not sure where to get started.
In this webinar you'll learn the key considerations for building a private or hybrid cloud, presented by the pros at RightScale who help our customers do this every single day.
We’ll discuss:
- Selecting hardware: How to decide which compute, networking and storage options to select.
- Private cloud considerations such as workload and infrastructure interaction, security, latency, user experience, and cost.
- Reference architectures and design considerations such as the location of physical hardware and configuration for availability and redundancy.
- Use cases and real-life scenarios: Private and hybrid clouds are especially well-suited for scalable applications with uncertain demand, disaster recovery and self-service IT portals.
- How to select the cloud solution provider that’s right for you, and how to manage your cloud resources effectively.
You’ll leave this webinar with a thorough understanding of building blocks for private and hybrid clouds.
Building Blocks for Private and Hybrid Clouds (RightScale)
Learn key considerations about building a private or hybrid cloud, including selecting hardware, cloud infrastructure software, hosting vendors, systems integrators, and reference architectures.
The Evolving Data Center Network: Open and Software-Defined (Dell World)
Can the network be managed as easily and cost-effectively as a server or PC? We think so. That is why Dell is working to make this possible through our Open Networking initiative. In this session, learn how Dell can help you move to a software-defined network and make your data center more agile and efficient. We will discuss new open networking platforms ranging from 1GbE to 100GbE with next-generation, multi-rate architectures and a choice of operating systems. We will also explore how these new network solutions from Dell help enable private/hybrid cloud, Hadoop, convergence, and VDI implementations.
In this slidecast, Bill Mannel from SGI presents an update on the company's innovative HPC solutions.
Learn more at: http://sgi.com
Watch the presentation video: http://insidehpc.com/2013/07/01/slidecast-sgi-product-update-for-june-2013/
Tailoring Converged Solutions To Fit Your Business Needs, Not The Other Way Around (Dell World)
Data center modernization, simplified management and cost reduction has led to an industry shift towards converged infrastructure. Yet, most converged infrastructure solutions are sold as 'one-size-fits-all' and typically don't align well to your business objectives. At Dell, we don't believe that you should have to change the way you run your business to fit your technology solution. In this session, you will learn about our broad and innovative portfolio of converged platforms and unified management tools that allow you to tailor fit your infrastructure to meet your unique requirements—and achieve the benefits of convergence on your terms.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/cadence/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-desai
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Pulin Desai, Vision Product Marketing Director at Cadence, presents the "Highly Efficient, Scalable Vision and AI Processors IP for the Edge" tutorial at the May 2019 Embedded Vision Summit.
This presentation describes the architecture of the latest Tensilica-based vision and AI processor family, and illustrates how easily vision algorithms (e.g., SLAM, 3D capture) and AI inference can be implemented on these processors. See how this low-power architecture simplifies development of a scalable vision and AI solution from low to high end for mobile, AR/VR, surveillance and automotive markets.
Introducing the new On-Demand Private Cloud. Supermicro and InMotion Hosting joined forces to design a Data Center POD solution that allows data centers to take control of their cloud costs by lowering the total cost per VM. This all-in-one solution, consisting of small hyper-converged building blocks, is built for your business to achieve significant on-demand flexibility and scalability. A consumption-based model eliminates multiple vendors by consolidating hardware, software, networking, management, and administration, enabling your data center to grow and shrink based on your business’s needs. This streamlined model removes inflated, high-cost licensing fees, enabling you to increase profits while reducing overhead costs. The On-Demand Private Cloud solution is built with Supermicro’s green, power-efficient, high-density compute servers; OpenStack’s open-source software; and Ceph’s object, file, and block storage.
Join this webinar to hear industry experts from InMotion Hosting and Supermicro discuss:
- Building your next-generation data center infrastructure
- Enabling private data centers with validated solutions from Supermicro and InMotion Hosting to improve operational efficiency
- Reaching new segments with Kubernetes, machine learning, and artificial intelligence
- Achieving on-demand cloud computing, high availability, data redundancy, and flexibility
- Taking control of your cloud costs and lowering your total cost of ownership
- Using hyper-dense hardware to enable economies of scale for power, cooling, and physical space
Madhu Rangarajan will provide an overview of networking trends in the cloud, various network topologies and their tradeoffs, and trends in the acceleration of packet-processing workloads. They will also talk about some of the work going on at Intel to address these trends, including FPGAs in the datacenter.
The number of internet-connected devices is growing exponentially, enabling an increasing number of edge applications in environments such as smart cities, retail, and Industry 4.0. These intelligent solutions often require processing large amounts of data, running models to enable image recognition, predictive analytics, autonomous systems, and more. Increasing system workloads and data processing capacity at the edge is essential to minimize latency, improve responsiveness, and reduce network traffic back to data centers. Purpose-built systems such as Supermicro’s short-depth, multi-node SuperEdge, powered by 3rd Gen Intel® Xeon® Scalable processors, increase compute and I/O density at the edge and enable businesses to further accelerate innovation.
Join this webinar to discover new insights in edge-to-cloud infrastructures and learn how Supermicro SuperEdge multi-node solutions leverage data center scale, performance, and efficiency for 5G, IoT, and Edge applications.
Supermicro AI Pod that’s Super Simple, Super Scalable, and Super Affordable (Rebekah Rodriguez)
The worlds of HPC and AI are evolving at a tremendous rate. The demands of modern-day applications put immense pressure on local IT teams and resources. More often than not, this pressure comes from requiring an AI strategy to speed up mission-critical applications, but at a cost that can hinder adoption. In this webinar, Supermicro, together with International Computer Concepts (ICC) and Define Tech, will demonstrate their AI Super Pod that delivers on AI strategy needs without breaking the bank.
HKG15-The Machine: A new kind of computer - Keynote by Dejan Milojicic (Linaro)
HKG15-The Machine: A new kind of computer- Keynote by Dejan Milojicic
---------------------------------------------------
Speaker: Dejan Milojicic
Date: February 10, 2015
---------------------------------------------------
★ Session Summary ★
The Machine is a new system from HP, based on Memristor Non-Volatile Memory (NVM) and photonic interconnects, enabling new innovative solutions and applications. This talk will discuss the changes we are introducing to the system software stack to leverage The Machine.
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250777
Video:
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
Get ready to dive into the exciting world of IoT data processing! 🌐📊
Join us for a thought-provoking webinar on "Processing: Turning IoT Data into Intelligence" hosted by industry visionary Deepak Shankar, founder of Mirabilis Design. Discover how to harness the potential of IoT devices by strategically choosing processors that optimize power, performance, and space.
In this engaging session, you'll explore key insights:
✅ Impact of processor architecture on Power-Performance-Area optimization
✅ Enabling AI and ML algorithms through precise compute and storage requirements
✅ Future trends in IoT hardware innovation
✅ Strategies for extending battery life and cost prediction through system design
Don't miss the chance to learn how to leverage a single IoT Edge processor for multiple applications and much more. This is your opportunity to gain a competitive edge in the evolving IoT landscape.
Accelerated adoption of Internet of Things (IoT) with In-network computing an...Infosys
In-network computing gives you the ability to compute at a particular point in the network where it can deliver maximum value. This opens new avenues of how applications and services are conceptualized or implemented, harvesting the benefits of distributed computing. In-network computing has significant benefits for the network infrastructure as it improves latency for end user/ devices while it also reduces the network traffic to a great extent. Emerging technologies like IoT and its application can immensely benefit by using In-network computing technology in conjunction with cloud technologies.
Micro Server Design - Open Compute ProjectHitesh Jani
The Micro Server is a cluster of Low Power, High Density Servers which can be applied in growing workloads such as Distributed Computing, Cloud Computing, Internet of Things (IoT) and Big Data.
Intel® Ethernet Series Delivering Real-World Value. As computing and networking scale in performance, interconnect technologies play a critical role in ensuring systems reach their full potential in the speed at which they move data. Intel has been at the forefront of research and development into interconnect technologies since the dawn of the PC era. Today in the data center, Intel is working to deliver greater levels of intelligence within its connectivity solutions to overcome network bottlenecks and accelerate applications. Between PC and peripherals, Intel is heavily involved with the industry as it brings the latest technologies to market for the best user experiences. At the chip level, Intel is leading the industry in advanced packaging with technologies that connect chiplets and modules in order to deliver Moore’s Law advances, while also working to reduce latency between memory and CPU. From “Microns to Miles,” Intel’s investments in interconnect technologies are among the broadest in the industry.
Delivering Carrier Grade OCP for Virtualized Data CentersRadisys Corporation
This webinar explores the requirements for carrier grade Open Compute Project (OCP) infrastructure for virtualized telecom data centers delivering SDN and NFV for digital services.
Performance Characterization of the Pentium Pro ProcessorDileep Bhandarkar
HPCA 3 Paper
In this paper, we characterize the performance of several business and technical benchmarks on a Pentium Pro processor based system. Various architectural data are collected using a performance monitoring counter tool. Results show that the Pentium Pro processor achieves significantly lower cycles per instruction than the Pentium processor due to its out of order and speculative execution, and non-blocking cache and memory system. Its higher clock frequency also contributes to even higher performance.
Performance from Architecture: Comparing a RISC and a CISC with Similar Hardw...Dileep Bhandarkar
This is the paper that Dave Patterson referred to in his Turing Lecture.
Performance comparisons across different computer architectures cannot usually separate the architectural contribution from various implementation and technology contributions to performance. This paper compares an example implementation from the RISC and CISC architectural schools
(a MIPS M/2000 and a Digital VAX 8700) on nine of the ten
SPEC benchmarks. The organizational similarity of these
machines provides an opportunity to examine the purely
architect ural advantages of RISC. The RISC approach offers,
compared with VAX, many fewer cycles per instruction but somewhat more instructions per program. Using results from a software monitor on the MIPS machine and a hardware monitor on the VAX, this paper shows that the esulting advantage in cycles per program ranges from slightly
under a factor of 2 to almost a factor of 4, with a geometric
mean of 2,7. It also demonstrates the correlation between
cycles per instruction and relative instruction count.
Qualcomm centriq 2400 hot chips final submission correctedDileep Bhandarkar
World's 1st 10 nm Server Chip
QDT-designed custom core powering Qualcomm Centriq2400 Processor
5thgeneration custom core design
Designed from the ground up to meet the needs of cloud service providers
Fully ARMv8-compliant
AArch64 only
Supports EL3 (TrustZone) and EL2 (hypervisor)
•
Includes optional cryptography acceleration instructions
AES, SHA1, SHA2-256
Designed for performance, optimized for power
The Yellow Brick Road of Semiconductor Technology
The talk provides a historical perspective on how the computer industry has taken advantage of Moore's Law and how we got to the era of multi-core processors. The talk will also address some of the challenges facing the industry in the future.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Linaro Connect 2018 Keynote
1. Qualcomm Datacenter Technologies, Inc.
Emerging Computing Trends in the Datacenter
Dileep Bhandarkar, Ph.D.
Vice President, Technology
Linaro Connect Keynote – 23 March 2018, Hong Kong
Created using DilEEP Neural Network
2. Outline
• Historical Perspective on 40 Years of Moore's Law
– Single Core Era enabled by Dennard Scaling
• Post-Dennard Scaling Drives the Multi-Core Era
• The Shift to Energy Efficient Multi-Core Designs for the Cloud
• Heterogeneous Computing Era with Application-Specific Accelerators
3. The First 50 Years after Shockley's Transistor Invention
4. 1958: Jack Kilby's Integrated Circuit
Bob Noyce's Integrated Circuit
My 40+ Year Journey From Mainframes to Smartphones: https://www.youtube.com/watch?v=7ptXpNFY3XM
5. From 2,300 to >1 Billion Transistors
Moore’s Law video at http://www.cs.ucr.edu/~gupta/hpca9/HPCA-PDFs/Moores_Law_Video_HPCA9.wmv
6. Dennard Scaling

Device or Circuit Parameter      Expression   Scaling Factor
Device dimension                 tox, L, W    1/K
Doping concentration             Na           K
Voltage                          V            1/K
Current                          I            1/K
Capacitance                      εA/t         1/K
Delay time per circuit           VC/I         1/K
Power dissipation per circuit    VI           1/K²
Power density                    VI/A         1

The benefits of scaling: as transistors get smaller, they can switch faster and use less power. Each new generation of process technology was expected to reduce minimum feature size by approximately 0.7x (K ≈ 1.4). A 0.7x reduction in linear feature size provided roughly a 2x increase in transistor density.
Dennard scaling broke down around 2004 with unscaled interconnect delays and our inability to scale the voltage and current due to reliability concerns.
But increasing transistor density (Moore's Law) has continued to enable multicore designs.
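The scaling rules above can be sketched numerically. A minimal Python illustration, where only the linear scale factor K ≈ 1.4 per generation comes from the slide and the rest is arithmetic:

```python
# A minimal numeric sketch of the Dennard scaling rules in the table
# above. K is the linear scale factor per process generation (~1.4,
# per the slide); everything below is arithmetic on that one number.
K = 1.4

rules = {
    "device dimension (tox, L, W)": 1 / K,
    "doping concentration (Na)":    K,
    "voltage (V)":                  1 / K,
    "current (I)":                  1 / K,
    "capacitance (eA/t)":           1 / K,
    "delay per circuit (VC/I)":     1 / K,
    "power per circuit (VI)":       1 / K ** 2,
}

# Transistor density rises as K^2 (~2x per generation) while power per
# circuit falls as 1/K^2, so power density (power/area) stays constant.
density_gain = K ** 2
power_density = rules["power per circuit (VI)"] * density_gain

for name, factor in rules.items():
    print(f"{name:30s} scales by {factor:.2f}")
print(f"density gain: {density_gain:.2f}x, power density: {power_density:.2f}")
```

The constant power density is the whole trick: when voltage and current stopped scaling, power density began rising with each shrink, which is exactly the breakdown the slide describes.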
7. THE MULTICORE ERA
SINGLE THREAD PERFORMANCE IMPROVEMENT SLOWING DOWN
PERFORMANCE DRIVEN BY HIGHER CORE COUNT
Post Dennard Scaling
9. The Last 5 Generations of ~135W Xeon Processors
Slow improvement in IPC, but per-thread performance constrained by power
Performance data from www.spec.org
8 cores – Mar 2012
10 cores – Sep 2013
12 cores – Sep 2014
14 cores – Apr 2016
18 cores – Jul 2017
10. No Improvement in Perf/Watt per Core, even with higher power
Performance data from www.spec.org
14. Disruptions Come from Below!
Mainframes → Minicomputers → RISC Systems → Desktop PCs → Notebooks → Smart Phones
(Chart axes: Volume vs. Performance)
Bell's Law: hardware technology, networks, and interfaces allow new, smaller, more specialized computing devices to be introduced to serve a computing need.
15. Mobile Technology Disrupting the Cloud Datacenter
Qualcomm Datacenter Technologies: uniquely positioned to leverage mobile growth and drive datacenter process leadership.
Then: fab process tech driven by the PC (45nm, 32nm, 22nm, 14nm, 10nm; ~256M PC units).
Now: fab process tech driven by mobile phones (65nm, 45nm, 28nm, 20nm, 14nm, 10nm – 1st in the industry; ~1.5B smartphone units), 2008-2018.
A new world in datacenter: manufacturing process.
16. Qualcomm Centriq™ 2400
What Cloud means for Processor Architecture:
• Throughput performance
• Thread Density
• Quality of Service
• Energy Efficiency
Key metrics:
• Perf / thread
• Perf / Watt
• Perf / mm²
The future requires a new approach to CPU design
17. Computational + server growth fuel datacenter energy efficiency considerations
• 2014: US datacenters consumed 70 billion kilowatt-hours of electricity
• Datacenters can cost between $10M and $20M per megawatt
• Unused datacenter capacity can be expensive
• 1W of server power can cost $1 per year in energy costs at 10 cents per kWh
• Server power related costs can be 30-50% of overall datacenter operating costs
• Servers need to be designed for average power consumption (not just max peak output)
• Hyper-efficient designs are necessary to improve server energy efficiency
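The $1 per watt-year rule of thumb above is easy to check with back-of-the-envelope arithmetic. In this sketch the electricity price comes from the slide; the PUE overhead factor is an illustrative assumption, not a number from the talk:

```python
# Back-of-the-envelope check of the rule of thumb above: 1 W of server
# power costs about $1 per year at 10 cents per kWh.
watts = 1.0
hours_per_year = 24 * 365                        # 8760 hours
price_per_kwh = 0.10                             # $0.10/kWh, per the slide

kwh_per_year = watts * hours_per_year / 1000.0   # 8.76 kWh per watt-year
direct_cost = kwh_per_year * price_per_kwh       # ~$0.88 per watt-year

# Cooling and power-distribution overhead (PUE, assumed ~1.15 here)
# pushes the all-in figure to roughly the slide's $1/W/year.
pue = 1.15
all_in_cost = direct_cost * pue

print(f"direct energy cost: ${direct_cost:.2f} per watt-year")
print(f"all-in (PUE {pue}): ${all_in_cost:.2f} per watt-year")
```

At datacenter scale this is why average power, not just TDP, dominates operating cost: every watt shaved from typical draw is roughly a dollar per server per year.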
19. Qualcomm Centriq 2400 Drives Perf/W and Perf/Thread Leadership

SKU             Cores   TDP     SPECint_rate2006 (est.)   Price
QDF 2460        48      120 W   657                       $1,995
Platinum 8180   28      205 W   775                       $10,009 (top-bin E7 price)
Gold 6138       20      125 W   504                       $2,612
Platinum 8160   24      150 W   612                       $4,702 (top-bin E5 price)
Platinum 8170   26      165 W   653                       $7,405

Normalized to QDF 2460 = 1 (IsoPower / IsoPerf comparison):

Metric             QDF 2460   Platinum 8180   Gold 6138   Platinum 8160   Platinum 8170
Power              1          1.71            1.04        1.25            1.38
SPECint_rate2006   1          1.18            0.77        0.93            0.99
Perf/Watt          1          0.69            0.74        0.75            0.72
Perf/Core          1          2.02            1.84        1.86            1.70
Perf/Thread        1          1.01            0.92        0.93            0.85
Perf/$             1          0.24            0.59        0.40            0.27

Performance based on internal tests for SPECintrate2006 (SIR) estimates using gcc -O2
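The normalized bars can be re-derived from the listed SKU data. A short sketch using only the slide's own TDPs, SPECint_rate2006 estimates (gcc -O2, per the footnote), and list prices:

```python
# Re-deriving the slide's normalized comparison from its SKU data.
# These are the slide's own gcc -O2 SPECint_rate2006 estimates, not
# independently measured numbers.
skus = {
    # name: (cores, tdp_watts, sir2006, price_usd)
    "QDF 2460":      (48, 120, 657,  1995),
    "Platinum 8180": (28, 205, 775, 10009),
    "Gold 6138":     (20, 125, 504,  2612),
    "Platinum 8160": (24, 150, 612,  4702),
    "Platinum 8170": (26, 165, 653,  7405),
}

_, base_tdp, base_sir, base_price = skus["QDF 2460"]
for name, (cores, tdp, sir, price) in skus.items():
    rel_power = tdp / base_tdp
    rel_perf = sir / base_sir
    rel_perf_per_watt = rel_perf / rel_power
    rel_perf_per_dollar = (sir / price) / (base_sir / base_price)
    print(f"{name:14s} power={rel_power:.2f} perf={rel_perf:.2f} "
          f"perf/W={rel_perf_per_watt:.2f} perf/$={rel_perf_per_dollar:.2f}")
```

Running this reproduces the chart's ratios (e.g. Platinum 8180 at ~1.71x the power, ~1.18x the throughput, ~0.69x the perf/W, ~0.24x the perf/$ of the QDF 2460), which is the slide's argument in a dozen lines.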
20. Qualcomm Centriq 2460 Lowers Average and Idle Power to Improve Cloud Server Density in Datacenters
[Chart: average power (watts) across the SPECint®_rate2006 subtests 400.perlbench, 401.bzip2, 403.gcc, 429.mcf, 445.gobmk, 456.hmmer, 458.sjeng, 462.libquantum, 464.h264ref, 471.omnetpp, 473.astar, 483.xalancbmk]
8W idle power; 120W TDP; median = 65W
21. Many Questions to Ponder
• Are we really serious about energy efficiency?
• What should the cost and power constraints be?
• How many instruction sets is too many? x86, Arm, MIPS, Power, RISC-V
• Have we reached the limit of high core count? SW scalability?
• Do we need to improve single-thread general-purpose performance?
• What should the power limit be for a single socket?
• How much performance are we willing to sacrifice for better security?
• Is there a fundamental conflict between multi-tenancy and security?
• Cost and convenience vs. extreme security?
• When does device scaling end? Will there be a sub-nm era?
23. Lessons from Mobile Computing
• Energy efficiency must be an implicit design target
• Desktop PC CPU cores are too power hungry and not energy efficient
• Wimpy cores are not good enough for servers
• Servers can be designed by scaling up the energy efficient mobile core design philosophy
• Many workloads run best on different kinds of specialized processing engines
• Each processing engine has its own strengths
24. The Age of Application-Specific Accelerators
• Order-of-magnitude higher computational efficiency than general-purpose processors
• Can accept inefficient implementation to reduce time to market
• Many potential applications:
– Machine Learning
– Encryption
– Data Compression
– Video processing
• Need reasonable volume for a business case
• Algorithms need to be stable
• Can they be programmable? Where do FPGAs fit?
25. The Emergence of Deep Neural Networks
Before the emergence of DNNs, algorithms and rule-based systems were laboriously hand-coded. But by 2012, the ingredients for change were available:
• Sufficiently powerful GPUs
• Readily available large data sets on the internet
Deep Neural Networks are becoming pervasive. The turning point was the ImageNet Competition 2012: "ImageNet Classification with Deep Convolutional Neural Networks", Neural Information Processing Systems Conference (NIPS 2012). A deep neural net enabled a performance breakthrough.
Now DNNs are simpler to develop and deploy, ushering in radical change in many fields and entire industries.
26. Deep Learning is Growing Exponentially
Source: Google
29. Where does compute need to be and why?
Devices → Edge Cloud → Central Cloud
• Bandwidth / backhaul traffic
• Compute resources
• Power/thermal envelope
• Privacy & security
• Latency
• Reliability
30. What is "Edge"?
Customer devices
◦ Smartphones, connected cars, drones, IoT sensors/devices
◦ < 2 ms latency; millions of devices
Customer premises
◦ Enterprises, homes, stadiums, cars
◦ < 5 ms latency; 1000s of devices
Cloudlets / edge nodes / edge gateways
◦ 5-20 ms latency
◦ Optionally co-located with access networks
◦ A few server racks per site
Centralized clouds
◦ > 100 ms latency
◦ 5-100 sites per operator or cloud service provider
◦ 100s-1000s of server racks per site
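The latency tiers above imply a simple placement rule: run each workload at the farthest (largest, cheapest) tier whose worst-case latency still fits the application's budget. A toy sketch of that rule, where the tier latencies come from the slide but the place() function and example budgets are hypothetical illustrations:

```python
# Toy compute-placement rule for the edge tiers above. Tier worst-case
# latencies are from the slide; the function and budgets are illustrative.
TIERS = [
    ("customer device",    2),    # < 2 ms
    ("customer premises",  5),    # < 5 ms
    ("edge cloud",        20),    # 5-20 ms
    ("central cloud",    200),    # > 100 ms (capped for this sketch)
]

def place(latency_budget_ms: float) -> str:
    """Pick the farthest tier whose worst-case latency fits the budget."""
    chosen = TIERS[0][0]          # fall back to on-device compute
    for name, worst_case_ms in TIERS:
        if worst_case_ms <= latency_budget_ms:
            chosen = name
    return chosen

print(place(1))     # tight AR/VR-style budget -> customer device
print(place(500))   # batch analytics -> central cloud
```

The same shape of trade-off applies to the other slide criteria (backhaul bandwidth, privacy, reliability); latency is simply the easiest one to express as a single number.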
32. CPU
• Free cycles available
• ISA enhancements
• Complementary with other accelerators
GPU
• Over-designed (cost, power) for AI
FPGA
• Offers flexibility
• Typically hard to program & expensive
ASIC
• Purpose-built
• Energy and cost efficient
• Expensive to design
• Least flexible
33. Training tends toward concentrated, centralized computation; inference tends toward wide distribution
Training: GPUs, large DPUs (higher cost)
Inference: CPUs, small DPUs (low cost)
34. Thoughts on Future Silicon for Deep Learning
• CPUs are not powerful enough for training, but have free cycles available for inference – an opportunity for add-in accelerator cards. Instruction set enhancements can improve performance.
• GPUs have too much "extra baggage" that adds cost and power for features not needed for AI – an opportunity for domain-specific accelerators.
• FPGAs offer more flexibility, but are difficult to program and expensive.
• ASICs are energy and product-cost efficient, but less flexible.
• Deep neural networks are making significant strides in many areas: speech, vision, language, search, robotics, medical imaging & treatment, drug discovery …
• We have an opportunity to dramatically reshape our computing devices to better serve this emerging and growing market.
• Expect to see lots of innovation and excitement in the years to come.
35. Concluding Remarks
• Single-thread general-purpose performance improvement is slowing down
• Energy efficiency is extremely important in datacenters
• The ARM architecture enables energy efficient designs with good performance
• Typical-use efficiency is becoming more important than peak-output efficiency in enterprise data centers
• Idle-mode power will become more important for servers
• Smart power management can dynamically optimize server operation to improve efficiency in normal use
• Security improvements are needed even if they cost performance
• There is plenty of opportunity for innovation on new application-specific architectures targeted for specific workloads
Speculation Can Lead to a Meltdown!