Christian Kniep presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"With Docker v1.9 a new networking system was introduced, which allows multi-host networking to work out-of-the-box in any Docker environment. This talk provides an introduction on what Docker networking provides, followed by a demo that spins up a full SLURM cluster across multiple machines. The demo is based on QNIBTerminal, a Consul-backed set of Docker images to spin up a broad set of software stacks."
Watch the video presentation:
http://wp.me/p3RLHQ-f7G
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
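The overlay-network demo described above is driven from the Docker CLI in the talk; as a rough sketch of the same idea, the Docker SDK for Python can create an overlay network and attach a container to it. This is illustrative only: the image name `qnib/slurm` and network name `slurm-net` are assumptions, and actually running it requires a Docker engine configured for multi-host networking.

```python
def launch_slurm_demo(client, image="qnib/slurm", net_name="slurm-net"):
    """Create an overlay network and start a head-node container on it.

    The overlay driver is what Docker 1.9's libnetwork introduced for
    out-of-the-box multi-host networking; containers attached to the
    same overlay network can reach each other by name across hosts.
    """
    client.networks.create(net_name, driver="overlay")
    return client.containers.run(image, name="slurmctld",
                                 network=net_name, detach=True)

if __name__ == "__main__":
    # Requires the Docker SDK for Python (pip install docker)
    # and a running Docker engine.
    import docker
    launch_slurm_demo(docker.from_env())
```

Passing the client in as a parameter keeps the sketch testable without a daemon; any object exposing `networks.create` and `containers.run` will do.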
[May 2015 Seminar] Network Bottlenecks Multiply with NFV: Don't Forget Performance ... - OpenStack Korea Community
Key characteristics of the 6WIND solution:
- Delivers best-in-class packet processing performance on Linux, both bare metal and virtualized.
- Provides an optimized, high-performance L2/L3/L4 network protocol stack for a variety of multicore processors (Intel, Cavium, Broadcom, EZchip/Tilera, and others).
- Operates transparently with the Linux OS, hypervisors, OVS, OpenFlow, OpenStack, and more.
- Cuts costs by shortening development time.
Depending on their needs, customers can license either the source code (product name: 6WINDGate) or binary solutions,
and use them to upgrade the performance of existing telecom, network, security, and cloud solutions or to develop new high-performance ones.
Cloud operators can use the virtual switch acceleration solution (Virtual Accelerator) to increase the number of virtual machines that can run on each server
and to give each VM higher network bandwidth. This enables higher-quality services and stronger competitiveness, lowers TCO, and maximizes ROI.
Some cloud operators license the source code (6WINDGate) and develop the solutions their services need in-house.
- Source code solution (6WINDGate)
  - Consists of roughly 76 source code modules organized by function; customers license only the modules they need.
  - Can be used to boost the performance of telecom, network, security, and cloud solutions or to develop new high-performance solutions.
- Binary solutions: built on 6WINDGate and DPDK.
  - Virtual Accelerator: a networking acceleration solution for the KVM hypervisor in virtualized environments. It delivers far higher throughput than Linux-based OVS
    and includes fast-path IP forwarding, VRF, filtering, NAT, VXLAN, GRE, and other features.
  - Turbo Router: a high-performance software router (vRouter) for Linux bare-metal and virtualized environments.
  - Turbo IPsec Gateway: a high-performance software IPsec gateway (vIPsec GW) for Linux bare-metal and virtualized environments; it includes Turbo Router.
HPC Best Practices: Application Performance Optimization - inside-BigData.com
Pak Lui from the HPC Advisory Council presented this deck at the Switzerland HPC Conference.
"To achieve good scalability performance on the HPC scientific applications typically involves good understanding of the workload through performing profile analysis, and comparing behaviors of using different hardware which pinpoint bottlenecks in different areas of the HPC cluster. In this session, a selection of HPC applications will be shown to demonstrate various methods of profiling and analysis to determine the bottleneck, and the effectiveness of the tuning to improve on the application performance."
Watch the video presentation: http://wp.me/p3RLHQ-f8h
Learn more: http://www.hpcadvisorycouncil.com/best_practices.php
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
[OpenStack Day in Korea 2015] Keynote 5 - The evolution of OpenStack Networking - OpenStack Korea Community
OpenStack Day in Korea 2015 - Keynote 5
The evolution of OpenStack Networking
Guido Appenzeller - Chief Technology Strategy Officer, Networking & Security, VMWare
Unified Underlay and Overlay SDNs for OpenStack Clouds - PLUMgrid
Slides from the SFBay OpenStack Meetup
TOPIC: Unified Underlay and Overlay SDNs for OpenStack Clouds
ABSTRACT: With unified underlay and overlay SDNs, IT and operators can leverage the best of both technologies to build service-rich SDNs for OpenStack clouds. At this meetup, PLUMgrid will discuss an overlay SDN architecture for service-rich SDNs with service function chaining for third-party VNFs, and demonstrate how to build it using the Cisco Nexus 9K as the underlay to leverage the power and throughput of the Nexus fabric.
VMware NSX provides a platform for deployment of software-defined network (SDN) and network function virtualization (NFV) services across physical network devices in a way that is analogous to server virtualization.
I work at Red Hat, the world's leading provider of open source software solutions and the company ranked #23 Best Place to Work in 2014 by Glassdoor.com. I'm part of the Solution Engineering Team, responsible for developing innovative IT solutions that drive business value focusing on DevOps and Platform as a Service.
For the past 20 years, Red Hat's open source software development model has produced high-performing, cost-effective solutions. Our model mirrors the highly interconnected world we live in—where ideas and information can be shared worldwide in seconds. Today, more than 90% of Fortune 500 companies rely on Red Hat. We offer the only fully open technology stack, from operating system to middleware, storage to cloud and virtualization solutions. We also provide a variety of services, including award-winning support, consulting, and training.
Introduction to the Helium release of OpenDaylight - SDN Hub
"Helium" is the second release of OpenDaylight, made on Oct 2, 2014. This release brings expanded support for YANG modeling and autogeneration of REST APIs, improved performance of the MD-SAL datastore using tree-based Akka storage, better integration with the OpenStack Neutron API, support for Group-Based Policy, and support for Service Function Chaining.
VMware NSX + Cumulus Networks: Software Defined Networking - Cumulus Networks
Witness the enablement of a true integration of a virtual network platform and an underlay physical network for a scalable data center orchestration, automation and multi-tenancy solution over high-capacity IP fabrics. With the integration of VMware NSX Layer 2 gateway services on networking hardware running Cumulus Linux, customers can now connect virtual workloads to physical workloads with no performance impact.
Building Resilient Applications with Cloudflare DNS - DevOps.com
DNS is a mission-critical component for any online business. Yet this component is often overlooked and forgotten until something breaks.
As DNS attacks become more prevalent, businesses are starting to realize that the lack of a resilient DNS creates a weak link in their security strategy. Adopting the right DNS posture is also important for achieving 100% uptime and ensuring uninterrupted, superior performance. This becomes even more important in a crisis environment, when your online presence is the only bridge connecting your business to customers and prospects.
Join this webinar to learn more about:
Risks posed by a weak DNS strategy,
Different ways to accomplish a redundant DNS setup,
How Cloudflare makes it easy to deploy a secure and resilient DNS.
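The redundancy idea in the bullets above can be made concrete with a small client-side sketch. This is an analogy only, not Cloudflare's mechanism: real DNS redundancy lives on the server side, with multiple authoritative providers answering for the same zone. The helper below simply tries a list of candidate hostnames and returns the first one that resolves, the way a redundant setup keeps a service reachable when one name or provider fails.

```python
import socket

def first_resolvable(hostnames):
    """Return (hostname, address) for the first name that resolves.

    A crude client-side analogy to redundant DNS: if the primary
    name fails to resolve, fall back to the next candidate.
    """
    for name in hostnames:
        try:
            return name, socket.gethostbyname(name)
        except socket.gaierror:
            # Resolution failed for this candidate; try the next one.
            continue
    raise RuntimeError("no candidate hostname resolved")
```

In practice clients never need logic like this when the zone itself is served redundantly; the sketch only illustrates the failover principle.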
Containers are becoming part of mainstream DevOps architectures and cloud deployments. Application owners and data center infrastructure teams are both aiming to shorten the development life cycle and reduce operational cost and complexity by deploying containers. This session will provide an overview of container ecosystems and architectures, including Docker, Linux Containers, and rkt/CoreOS. Join us and learn about the options for networking containers. Projects including Docker Bridge, Contiv, Calico, and Magnum/Kuryr will be highlighted, and demos of containers on OpenStack will also be featured. Finally, the audience will learn the advantages that Cisco UCS and Nexus platforms provide in building a cloud platform for containers, virtual machines, and bare metal.
Get a technical understanding of the components of NSX, including how switching, routing, firewalling, load-balancing and other services work within NSX.
Satyajit Tripathi presented and evangelized OpenSolaris and its advanced technologies at the MSC OS Conference 2009 in KL, Malaysia. He also blogs at http://blogs.sun.com/stripathi.
SDN Service Provider use cases Network Function Virtualization (NFV) - Brent Salisbury
SDN for Service Providers as Defined by Service Providers. This was from the Software Defined Networking Summit | 13-14 November 2012. Thoughts at http://networkstatic.net/sdn-use-cases-for-service-providers/
SDN Scale-out Testing at OpenStack Innovation Center (OSIC) - PLUMgrid
The OpenStack Innovation Center (OSIC), established by Intel and Rackspace, was created to accelerate adoption of the open source cloud operating system while supporting open source principles. OSIC provides ready-to-use data center facilities to the OpenStack community for development and test. This case study presentation highlights a scale-out test performed within a three-week period using community OpenStack-Ansible based on Liberty, with an SDN overlay network connecting 131 nodes running over 1,000 VMs. Tempest and Rally tests were conducted to validate functions, including high-availability failure scenarios. Join this session to find out more about OSIC and the SDN scale-out test configuration, scenarios, and results.
Customers are using NSX to drive business benefits as shown in the figure below. The main themes for NSX deployments are Security, IT Automation, and Application Continuity.
Figure 3: NSX Use Cases
• Security:
NSX can be used to create a secure infrastructure based on a zero-trust security model. Every virtualized workload can be protected with a full stateful firewall engine at a very granular level. Security can be based on constructs such as MAC addresses, IPs, ports, vCenter objects and tags, Active Directory groups, and more. Intelligent dynamic security grouping can drive the security posture within the infrastructure.
NSX can be used in conjunction with third-party security vendors such as Palo Alto Networks, Check Point, Fortinet, or McAfee to provide a complete DMZ-like security solution within a cloud infrastructure.
NSX has been widely deployed to secure virtual desktops, some of the most vulnerable workloads in the data center, and to prohibit desktop-to-desktop hacking.
• Automation:
VMware NSX provides a full RESTful API to consume networking, security and services, which can be used to drive automation within the infrastructure. IT admins can reduce the tasks and cycles required to provision workloads within the datacenter using NSX.
NSX is integrated out of the box with automation tools such as vRealize Automation, which can provide customers with a one-click deployment option for an entire application, including compute, storage, network, security, and L4-L7 services.
Developers can use NSX with the OpenStack platform. NSX provides a Neutron plugin that can be used to deploy applications and topologies via OpenStack.
• Application Continuity:
NSX provides a way to easily extend networking and security across up to eight vCenters, either within or across data centers. In conjunction with vSphere 6.0, customers can easily vMotion a virtual machine across long distances, and NSX will ensure that the network and the firewall rules are consistent across the sites, essentially maintaining the same view everywhere.
NSX Cross-vCenter Networking can help build active-active data centers. Customers are using NSX today with VMware Site Recovery Manager to provide disaster recovery solutions. NSX can extend the network across data centers, and even to the cloud, to enable seamless networking and security.
Guido Appenzeller
CEO
Big Switch Networks
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
DPDK IPSec performance benchmark ~ Georgii Tkachuk - Intel
DPDK IPSec performance benchmark ~ Georgii Tkachuk
IPsec and cryptodev overview and performance numbers from the Intel benchmarking team.
Part of a two-day SDN/NFV/DPDK dev lab:
https://www.meetup.com/Out-Of-The-Box-Network-Developers/events/237028223/
OpenStack: Everything You Need to Know To Get Started - All Things Open
All Things Open 2014 - Day 2
Thursday, October 23rd, 2014
Mark Voelker
Technical Leader with Cisco
Cloud/OpenStack
OpenStack: Everything You Need to Know To Get Started
Find more by Mark here: http://www.slideshare.net/markvoelker
Calista Redmond from IBM presented this deck at the Switzerland HPC Conference.
“The OpenPOWER Foundation was founded in 2013 as an open technical membership organization that will enable data centers to rethink their approach to technology. Today, nearly 200 member companies are enabled to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. These innovations include custom systems for large or warehouse-scale data centers, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for SW appliances, or advanced hardware technology exploitation. OpenPOWER members are actively pursuing all of these innovations and more and welcome all parties to join in moving the state of the art of OpenPOWER systems design forward.”
Watch the video presentation: http://insidehpc.com/2016/03/openpower-foundation/
See more talks in the Swiss Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Better performance and cost effectiveness empower better results in the cognitive era. For more information, visit: http://www.ibm.com/systems/power/hardware/linux-lc.html
Oracle Solaris Simple, Flexible, Fast: Virtualization in 11.3 - OTN Systems Hub
Oracle Solaris
Simple, Flexible, Fast:
Virtualization in 11.3
Duncan Hardie – Principal Product Manager
Edward Pilatowicz – Senior Principal Software Engineer
Oracle Solaris
June 14, 2016
Dror Goldenberg from Mellanox presented this deck at the HPC Advisory Council Switzerland Conference.
“High performance computing has begun scaling beyond Petaflop performance towards the Exaflop mark. One of the major concerns throughout the development toward such performance capability is scalability – at the component level, system level, middleware and the application level. A Co-Design approach between the development of the software libraries and the underlying hardware can help to overcome those scalability issues and to enable a more efficient design approach towards the Exascale goal.”
Watch the video presentation: http://wp.me/p3RLHQ-f7s
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
Oracle Solaris 11.2 - Engineered for Cloud
Oracle Solaris provides an efficient, secure and compliant, simple, open, and affordable solution for deploying your enterprise-grade clouds. More than just an operating system, Oracle Solaris 11.2 includes features and enhancements that deliver no-compromise virtualization, application-driven software-defined networking, and a complete OpenStack distribution for creating and managing an enterprise cloud, enabling you to meet IT demands and redefine your business.
For more information: http://www.oracle.com/technetwork/server-storage/solaris11/overview/beta-2182985.html
Accelerating Business Intelligence Solutions with Microsoft Azure (PASS) - Jason Strate
Business Intelligence (BI) solutions need to move at the speed of business. Unfortunately, roadblocks related to availability of resources and deployment often present an issue. What if you could accelerate the deployment of an entire BI infrastructure to just a couple of hours and start loading data into it by the end of the day? In this session, we'll demonstrate how to leverage Microsoft tools and the Azure cloud environment to build out a BI solution and begin providing analytics to your team with tools such as Power BI. By the end of the session, you'll gain an understanding of the capabilities of Azure and how you can start building an end-to-end BI proof-of-concept today.
Klaus Gottschalk from IBM presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Last year IBM together with partners out of the OpenPOWER Foundation won two of the multi-year contracts of the US CORAL program. Within these contracts IBM develops an accelerated HPC infrastructure and software development ecosystem that will be a major step towards Exascale Computing. We believe that the CORAL roadmap will enable a massive pull for transformation of HPC codes for accelerated systems. The talk will discuss the IBM HPC strategy, explain the OpenPOWER Foundation, and show the IBM OpenPOWER roadmap for CORAL and beyond."
Watch the video presentation: http://wp.me/p3RLHQ-f9x
Learn more: http://e.huawei.com/us/solutions/business-needs/data-center/high-performance-computing
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
We cover the IBM solution for HPC. In addition to the hardware and software stack, we show how a rational choice of compilation and run-time parameters helps to significantly improve the performance of technical computing applications.
Puppet and Nano Server provide an amazing mix when it comes to automated cloud deployments. This slide deck is from my session at PuppetCamp NYC and Boston.
Oracle Solaris Build and Run Applications Better on 11.3 - OTN Systems Hub
Build and Run Applications Better on Oracle Solaris 11.3
Tech Day, NYC
Liane Praza, Senior Principal Software Engineer
Ikroop Dhillon, Principal Product Manager
June, 2016
In this video from the 2017 HPC Advisory Council Stanford Conference, Christian Kniep from Gaikai presents: Best Practices: State of Linux Containers.
"Linux Containers gain more and more momentum in all IT ecosystems. This talk provides an overview about what happened in the container landscape (in particular Docker) during the course of the last year and how it impacts datacenter operations, HPC and High-Performance Big Data. Furthermore, Christian will give an update on and extend the ‘things to explore’ list he presented in the last Lugano workshop, applying what he learned and came across during 2016."
Watch the video: http://wp.me/p3RLHQ-glP
Learn more: http://qnib.org
and
http://www.hpcadvisorycouncil.com/events/2017/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
A walkthrough of how Docker is used, from level 0 ("I play with it on my laptop") to the Docker Hero level ("I run it in production").
This talk follows @dgageot's intro and therefore does not include the "What is Docker?" introduction.
Containers and Nutanix - Acropolis Container Services - NEXTtour
This presentation was given at the London Nutanix user group (NUG) on Oct 26 by Denis Guyadeen. If you would like to join a NUG, you can find more information here http://bit.ly/NTNXUG - Hope to see you at a community meeting!
Accelerate your software development with Docker - Andrey Hristov
Docker is in all the news and this talk presents you the technology and shows you how to leverage it to build your applications according to the 12 factor application model.
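One concrete piece of the twelve-factor model mentioned above is factor III, "store config in the environment." A minimal sketch of that factor follows; the setting names (DATABASE_URL, PORT, DEBUG) and their defaults are illustrative assumptions, not taken from the talk.

```python
import os

class Config:
    """Twelve-factor style settings: read configuration from the
    environment rather than baking values into the image, so the same
    build runs unchanged in dev, staging, and production."""

    def __init__(self, env=None):
        env = os.environ if env is None else env
        self.database_url = env.get("DATABASE_URL", "sqlite:///dev.db")
        self.port = int(env.get("PORT", "8080"))
        self.debug = env.get("DEBUG", "false").lower() == "true"
```

This pattern is what makes containers portable across environments: the image stays identical, and Docker injects the per-environment values at run time (for example via `docker run -e PORT=9000`).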
Docker moves very fast, with an edge channel released every month and a stable release every 3 months. Patrick will talk about how Docker introduced Docker EE and a certification program for containers and plugins with Docker CE and EE 17.03 (from March), the announcements from DockerCon (April), and the many new features planned for Docker CE 17.05 in May.
This talk will be about what's new in Docker and what's next on the roadmap
Docker 1.11 Meetup: Containerd and runc, by Arnaud Porterie and Michael Crosby - Michelle Antebi
In this talk, Michael Crosby will present on runC and containerd, their internals and how they work together to start and manage containers in Docker. Afterwards, Arnaud Porterie will touch on what was shipped in 1.11 and how it will enable some of the things we are working on for 1.12.
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.
History and basics of containers, LXC, Docker, and Kubernetes. This presentation was given to engineering college students at VIT DevFest 2018. Beginner to intermediate level.
In this deck from the Stanford HPC Conference, Shahin Khan from OrionX describes major market Shifts in IT.
"We will discuss the digital infrastructure of the future enterprise and the state of these trends."
"We work with clients on the impact of Digital Transformation (DX) on them, their customers, and their messages. Generally, they want to track, in one place, trends like IoT, 5G, AI, Blockchain, and Quantum Computing. And they want to know what these trends mean, how they affect each other, when they demand action, and how to formulate and execute an effective plan. If that describes you, we can help."
Watch the video: https://wp.me/p3RLHQ-lPP
Learn more: http://orionx.net
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Preparing to program Aurora at Exascale - Early experiences and future directions (inside-BigData.com)
In this deck from IWOCL / SYCLcon 2020, Hal Finkel from Argonne National Laboratory presents: Preparing to program Aurora at Exascale - Early experiences and future directions.
"Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. Aurora promises to take scientific computing to a whole new level, and scientists and engineers from many different fields will take advantage of Aurora’s unprecedented computational capabilities to push the boundaries of human knowledge. In addition, Aurora’s support for advanced machine-learning and big-data computations will enable scientific workflows incorporating these techniques along with traditional HPC algorithms. Programming the state-of-the-art hardware in Aurora will be accomplished using state-of-the-art programming models. Some of these models, such as OpenMP, are long-established in the HPC ecosystem. Other models, such as Intel’s oneAPI, based on SYCL, are relatively-new models constructed with the benefit of significant experience. Many applications will not use these models directly, but rather, will use C++ abstraction libraries such as Kokkos or RAJA. Python will also be a common entry point to high-performance capabilities. As we look toward the future, features in the C++ standard itself will become increasingly relevant for accessing the extreme parallelism of exascale platforms.
This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date. oneAPI/SYCL and OpenMP are both critical models in these efforts, and while the ecosystem for Aurora has yet to mature, we’ve already had a great deal of success. Importantly, we are not passive recipients of programming models developed by others. Our team works not only with vendor-provided compilers and tools, but also develops improved open-source LLVM-based technologies that feed both open-source and vendor-provided capabilities. In addition, we actively participate in the standardization of OpenMP, SYCL, and C++. To conclude, I’ll share our thoughts on how these models can best develop in the future to support exascale-class systems."
Watch the video: https://wp.me/p3RLHQ-lPT
Learn more: https://www.iwocl.org/iwocl-2020/conference-program/
and
https://www.anl.gov/topic/aurora
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Greg Wahl from Advantech presents: Transforming Private 5G Networks.
Advantech Networks & Communications Group is driving innovation in next-generation network solutions with their High Performance Servers. We provide business critical hardware to the world's leading telecom and networking equipment manufacturers with both standard and customized products. Our High Performance Servers are highly configurable platforms designed to balance the best in x86 server-class processing performance with maximum I/O and offload density. The systems are cost effective, highly available and optimized to meet next generation networking and media processing needs.
“Advantech’s Networks and Communication Group has been both an innovator and trusted enabling partner in the telecommunications and network security markets for over a decade, designing and manufacturing products for OEMs that accelerate their network platform evolution and time to market,” said Ween Niu, Advantech Vice President of Networks & Communications Group. “In the new IP Infrastructure era, we will be expanding our expertise in Software Defined Networking (SDN) and Network Function Virtualization (NFV), two of the essential conduits to 5G infrastructure agility, making networks easier to install, secure, automate and manage in a cloud-based infrastructure.”
In addition to innovation in air interface technologies and architecture extensions, 5G will also need a new generation of network computing platforms to run the emerging software defined infrastructure, one that provides greater topology flexibility, essential to deliver on the promises of high availability, high coverage, low latency and high bandwidth connections. This will open up new parallel industry opportunities through dedicated 5G network slices reserved for specific industries dedicated to video traffic, augmented reality, IoT, connected cars etc. 5G unlocks many new doors and one of the keys to its enablement lies in the elasticity and flexibility of the underlying infrastructure.
Advantech’s corporate vision is to enable an intelligent planet. The company is a global leader in the fields of IoT intelligent systems and embedded platforms. To embrace the trends of IoT, big data, and artificial intelligence, Advantech promotes IoT hardware and software solutions with the Edge Intelligence WISE-PaaS core to assist business partners and clients in connecting their industrial chains. Advantech is also working with business partners to co-create business ecosystems that accelerate the goal of industrial intelligence.
Watch the video: https://wp.me/p3RLHQ-lPQ
* Company website: https://www.advantech.com/
* Solution page: https://www2.advantech.com/nc/newsletter/NCG/SKY/benefits.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory (inside-BigData.com)
In this deck from the Stanford HPC Conference, Katie Lewis from Lawrence Livermore National Laboratory presents: The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory.
"Scientific simulations have driven computing at Lawrence Livermore National Laboratory (LLNL) for decades. During that time, we have seen significant changes in hardware, tools, and algorithms. Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness."
Watch the video: https://youtu.be/NVwmvCWpZ6Y
Learn more: https://computing.llnl.gov/research-area/machine-learning
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems? (inside-BigData.com)
In this deck from the Stanford HPC Conference, DK Panda from Ohio State University presents: How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems?
"This talk will start with an overview of challenges being faced by the AI community to achieve high-performance, scalable and distributed DNN training on Modern HPC systems with both scale-up and scale-out strategies. After that, the talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented."
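The data-parallel half of the hybrid strategy mentioned above boils down to an allreduce that averages per-rank gradients. Here is a toy single-process Python sketch of that averaging step; the simulated ranks and values are illustrative only, and real MPI-driven training would use an MPI allreduce across processes rather than a loop in one process:

```python
# Conceptual sketch of the gradient-averaging (allreduce) step in
# data-parallel DNN training; the "ranks" are simulated in one process.
# This is an illustration of the idea, not the speaker's code.

def allreduce_average(grads_per_rank):
    """Average gradient vectors across simulated ranks.

    This computes what an MPI allreduce with SUM, followed by division
    by the number of ranks, produces in real distributed training.
    """
    n_ranks = len(grads_per_rank)
    n_params = len(grads_per_rank[0])
    return [sum(g[i] for g in grads_per_rank) / n_ranks
            for i in range(n_params)]

# Each of four simulated ranks computed a gradient on its own data shard.
local_grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(allreduce_average(local_grads))  # [4.0, 5.0]
```

After the averaged gradient is applied by every rank, all model replicas stay in sync, which is why this one collective dominates the communication cost of data parallelism.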
Watch the video: https://youtu.be/LeUNoKZVuwQ
Learn more: http://web.cse.ohio-state.edu/~panda.2/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Ryan Quick from Providentia Worldwide describes how DNNs can be used to improve EDA simulation runs.
"Systems Intelligence relies on a variety of methods for providing insight into the core mechanisms for driving automated behavioral changes in self-healing command and control platforms. This talk reports on initial efforts with leveraging Semiconductor Electronic Design Automation (EDA) telemetry data from cross-domain sources including power, network, storage, nodes, and applications in neural networks as a driving method for insight into SI automation systems."
Watch the video: https://youtu.be/2WbR8tq-XbM
Learn more: http://www.providentiaworldwide.com/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nicole Xu from Stanford University describes how she transformed a common jellyfish into a bionic creature that is part animal and part machine.
"Animal locomotion and bioinspiration have the potential to expand the performance capabilities of robots, but current implementations are limited. Mechanical soft robots leverage engineered materials and are highly controllable, but these biomimetic robots consume more power than corresponding animal counterparts. Biological soft robots from a bottom-up approach offer advantages such as speed and controllability but are limited to survival in cell media. Instead, biohybrid robots that comprise live animals and self-contained microelectronic systems leverage the animals’ own metabolism to reduce power constraints and the animal’s body as a natural scaffold with damage tolerance. We demonstrate that by integrating onboard microelectronics into live jellyfish, we can enhance propulsion up to threefold, using only 10 mW of external power input to the microelectronics and at only a twofold increase in cost of transport to the animal. This robotic system uses 10 to 1000 times less external power per mass than existing swimming robots in literature and can be used in future applications for ocean monitoring to track environmental changes."
Watch the video: https://youtu.be/HrmJFyvInj8
Learn more: https://sanfrancisco.cbslocal.com/2020/02/05/stanford-research-project-common-jellyfish-bionic-sea-creatures/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Peter Dueben from the European Centre for Medium-Range Weather Forecasts (ECMWF) presents: Machine Learning for Weather Forecasts.
"I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future."
Peter is contributing to the development and optimization of weather and climate models for modern supercomputers. He is focusing on a better understanding of model error and model uncertainty, on the use of reduced numerical precision that is optimised for a given level of model error, on global cloud-resolving simulations with ECMWF's forecast model, and on the use of machine learning, in particular deep learning, to improve the workflow and predictions. Peter graduated in Physics and wrote his PhD thesis at the Max Planck Institute for Meteorology in Germany. He worked as a postdoc with Tim Palmer at the University of Oxford and took up a position as University Research Fellow of the Royal Society at the European Centre for Medium-Range Weather Forecasts (ECMWF) in 2017.
Watch the video: https://youtu.be/ks3fkRj8Iqc
Learn more: https://www.ecmwf.int/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Gilad Shainer from the HPC AI Advisory Council describes how this organization fosters innovation in the high performance computing community.
"The HPC-AI Advisory Council’s mission is to bridge the gap between high-performance computing (HPC) and Artificial Intelligence (AI) use and its potential, bring the beneficial capabilities of HPC and AI to new users for better research, education, innovation and product manufacturing, bring users the expertise needed to operate HPC and AI systems, provide application designers with the tools needed to enable parallel computing, and to strengthen the qualification and integration of HPC and AI system products."
Watch the video: https://wp.me/p3RLHQ-lNz
Learn more: http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Today RIKEN in Japan announced that the Fugaku supercomputer will be made available for research projects aimed to combat COVID-19.
"Fugaku is currently being installed and is scheduled to be available to the public in 2021. However, faced with the devastating disaster unfolding before our eyes, RIKEN and MEXT decided to make a portion of the computational resources of Fugaku available for COVID-19-related projects ahead of schedule while continuing the installation process.
Fugaku is being developed not only for the progress of science, but also to help build the society dubbed “Society 5.0” by the Japanese government, in which all people will live safe and comfortable lives. The current initiative to fight against the novel coronavirus is driven by the philosophy behind the development of Fugaku."
Initial Projects
Exploring new drug candidates for COVID-19 by "Fugaku"
Yasushi Okuno, RIKEN / Kyoto University
Prediction of conformational dynamics of proteins on the surface of SARS-Cov-2 using Fugaku
Yuji Sugita, RIKEN
Simulation analysis of pandemic phenomena
Nobuyasu Ito, RIKEN
Fragment molecular orbital calculations for COVID-19 proteins
Yuji Mochizuki, Rikkyo University
In this deck from the Performance Optimisation and Productivity group, Lubomir Riha from IT4Innovations presents: Energy Efficient Computing using Dynamic Tuning.
"We now live in a world of power-constrained architectures and systems, and power consumption represents a significant cost factor in the overall HPC system economy. For these reasons, in recent years researchers, supercomputing centers and major vendors have developed new tools and methodologies to measure and optimize the energy consumption of large-scale high performance system installations. Due to the link between energy consumption, power consumption and execution time of an application executed by the final user, it is important for these tools and the methodology used to consider all these aspects, empowering the final user and the system administrator with the capability of finding the best configuration given different high level objectives.
This webinar focused on tools designed to improve the energy-efficiency of HPC applications using a methodology of dynamic tuning of HPC applications, developed under the H2020 READEX project. The READEX methodology has been designed for exploiting the dynamic behaviour of software. At design time, different runtime situations (RTS) are detected and optimized system configurations are determined. RTSs with the same configuration are grouped into scenarios, forming the tuning model. At runtime, the tuning model is used to switch system configurations dynamically.
The MERIC tool, which implements the READEX methodology, is presented. It supports manual or binary instrumentation of the analysed applications to simplify the analysis. This instrumentation is used to identify and annotate the significant regions in the HPC application. Automatic binary instrumentation annotates regions with significant runtime. Manual instrumentation, which can be combined with automatic instrumentation, allows the code developer to annotate regions of particular interest."
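The design-time/runtime split described above can be caricatured in a few lines: a tuning model maps annotated regions (grouped into scenarios) to the best system configuration found at design time, and entering a region looks that configuration up. This is a hypothetical sketch of the idea, not the MERIC API; the region names and settings are invented:

```python
# Hypothetical sketch of the READEX design-time/runtime split
# (invented region names and values; not the MERIC API).

# Design time: each scenario (a group of runtime situations sharing the
# same optimum) is mapped to the best configuration that was found.
tuning_model = {
    "dense_solver": {"cpu_freq_ghz": 2.4, "threads": 24},
    "io_phase":     {"cpu_freq_ghz": 1.2, "threads": 4},
}
DEFAULT_CONFIG = {"cpu_freq_ghz": 2.0, "threads": 12}

def enter_region(region_name):
    """Runtime: return the configuration to apply when an annotated
    (instrumented) region is entered; unknown regions keep the default."""
    return tuning_model.get(region_name, DEFAULT_CONFIG)

print(enter_region("io_phase"))        # tuned: low frequency for I/O-bound work
print(enter_region("unknown_kernel"))  # falls back to the default configuration
```

A real implementation would apply the returned settings via hardware knobs (core frequency, uncore frequency, thread count) at each region boundary; the point here is only the table-lookup structure of the tuning model.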
Watch the video: https://wp.me/p3RLHQ-lJP
Learn more: https://pop-coe.eu/blog/14th-pop-webinar-energy-efficient-computing-using-dynamic-tuning
and
https://code.it4i.cz/vys0053/meric
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from GTC Digital, William Beaudin from DDN presents: HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD.
Enabling high performance computing through the use of GPUs requires an incredible amount of IO to sustain application performance. We'll cover architectures that enable extremely scalable applications through the use of NVIDIA’s SuperPOD and DDN’s A3I systems.
The NVIDIA DGX SuperPOD is a first-of-its-kind artificial intelligence (AI) supercomputing infrastructure. DDN A³I with the EXA5 parallel file system is a turnkey, AI data storage infrastructure for rapid deployment, featuring faster performance, effortless scale, and simplified operations through deeper integration. The combined solution delivers groundbreaking performance, deploys in weeks as a fully integrated system, and is designed to solve the world's most challenging AI problems.
Watch the video: https://wp.me/p3RLHQ-lIV
Learn more: https://www.ddn.com/download/nvidia-superpod-ddn-a3i-ai400-appliance-with-the-exa5-filesystem/
and
https://www.nvidia.com/en-us/gtc/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Paul Isaacs from Linaro presents: State of ARM-based HPC. This talk provides an overview of applications and infrastructure services successfully ported to Aarch64 and benefiting from scale.
"With its debut on the TOP500, the 125,000-core Astra supercomputer at New Mexico’s Sandia Labs uses Cavium ThunderX2 chips to mark Arm’s entry into the petascale world. In Japan, the Fujitsu A64FX Arm-based CPU in the pending Fugaku supercomputer has been optimized to achieve high-level, real-world application performance, anticipating up to one hundred times the application execution performance of the K computer. K was the first computer to top 10 petaflops in 2011."
Watch the video: https://wp.me/p3RLHQ-lIT
Learn more: https://www.linaro.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Versal Premium ACAP for Network and Cloud Acceleration (inside-BigData.com)
Today Xilinx announced Versal Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimized cores and the industry’s highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.
Versal is the industry’s first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC’s 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market. The Versal Premium series delivers up to 3X higher throughput compared to current generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.
Learn more: https://insidehpc.com/2020/03/xilinx-announces-versal-premium-acap-for-network-and-cloud-acceleration/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently (inside-BigData.com)
In this video from the Rice Oil & Gas Conference, Chin Fang from Zettar presents: Moving Massive Amounts of Data across Any Distance Efficiently.
The objective of this talk is to present two on-going projects aiming at improving and ensuring highly efficient bulk transferring or streaming of massive amounts of data over digital connections across any distance. It examines the current state of the art, a few very common misconceptions, the differences among the three major types of data movement solutions, a current initiative attempting to improve data movement efficiency from the ground up, and another multi-stage project that shows how to conduct long distance, large scale data movement at speed and scale internationally. Both projects have real world motivations, e.g. the ambitious data transfer requirements of Linac Coherent Light Source II (LCLS-II) [1], a premier preparation project of the U.S. DOE Exascale Computing Initiative (ECI) [2]. Their immediate goals are described and explained, together with the solution used for each. Findings and early results are reported. Possible future works are outlined.
Watch the video: https://wp.me/p3RLHQ-lBX
Learn more: https://www.zettar.com/
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Rice Oil & Gas Conference, Bradley McCredie from AMD presents: Scaling TCO in a Post Moore's Law Era.
"While foundries bravely drive forward to overcome the technical and economic challenges posed by scaling to 5nm and beyond, Moore’s law alone can provide only a fraction of the performance / watt and performance / dollar gains needed to satisfy the demands of today’s high performance computing and artificial intelligence applications. To close the gap, multiple strategies are required. First, new levels of innovation and design efficiency will supplement technology gains to continue to deliver meaningful improvements in SoC performance. Second, heterogenous compute architectures will create x-factor increases of performance efficiency for the most critical applications. Finally, open software frameworks, APIs, and toolsets will enable broad ecosystems of application level innovation."
Watch the video:
Learn more: http://amd.com
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CUDA-Python and RAPIDS for blazing fast scientific computing (inside-BigData.com)
In this deck from the ECSS Symposium, Abe Stern from NVIDIA presents: CUDA-Python and RAPIDS for blazing fast scientific computing.
"We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started. Finally, we will briefly highlight several other relevant libraries for GPU programming."
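To illustrate the decorator pattern Numba uses, here is a minimal CPU-side sketch (the CUDA path uses `numba.cuda.jit` in the same decorator style). The numerical example is mine, not from the talk, and the code falls back to plain Python if Numba is not installed:

```python
# Minimal CPU-side illustration of Numba's JIT decorator pattern
# (illustrative example, not from the talk). Falls back to plain
# Python when Numba is not installed, so the sketch runs anywhere.
try:
    from numba import njit
except ImportError:
    def njit(func):
        return func

@njit
def trapezoid_x2(n):
    """Integrate x**2 over [0, 1] with n trapezoids.

    With Numba present, this loop is compiled to machine code the
    first time the function is called; the source stays ordinary Python.
    """
    h = 1.0 / n
    total = 0.5 * (0.0 + 1.0)  # endpoint terms f(0) and f(1)
    for i in range(1, n):
        x = i * h
        total += x * x
    return total * h

print(trapezoid_x2(1000))  # close to 1/3
```

The appeal described in the abstract is exactly this: numeric loops written in a high-level language get compiled performance without leaving Python.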
Watch the video: https://wp.me/p3RLHQ-lvu
Learn more: https://developer.nvidia.com/rapids
and
https://www.xsede.org/for-users/ecss/ecss-symposium
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from FOSDEM 2020, Colin Sauze from Aberystwyth University describes the development of a RaspberryPi cluster for teaching an introduction to HPC.
"The motivation for this was to overcome four key problems faced by new HPC users:
* The availability of a real HPC system and the effect that running training courses can have on it; conversely, the availability of spare resources on the real system can cause problems for the training course.
* A fear of using a large and expensive HPC system for the first time and worries that doing something wrong might damage the system.
* That HPC systems are very abstract systems sitting in data centres that users never see, making it difficult for them to understand exactly what it is they are using.
* That new users fail to understand resource limitations, in part because the vast resources in modern HPC systems allow a lot of mistakes to be made before running out of resources. A more resource-constrained system makes this easier to understand.
The talk will also discuss some of the technical challenges in deploying an HPC environment to a Raspberry Pi and the attempts to keep that environment as close to a "real" HPC system as possible. The issues in trying to automate the installation process will also be covered."
Learn more: https://github.com/colinsauze/pi_cluster
and
https://fosdem.org/2020/schedule/events/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from ATPESC 2019, Ken Raffenetti from Argonne presents an overview of HPC interconnects.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-luc
Learn more: https://extremecomputingtraining.anl.gov/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas struggle to keep up with the competition. However, fostering a culture of innovation takes work: it requires vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl ... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
4. 1. “Linux Container” / “Docker Ecosystem” in a Nutshell
2. Confusion about Ecosystem / Vision to tackle it
3. Docker -> SWARM -> SLURM -> BigData
4. Discussion of Opportunities and Problems
4
Agenda
6. [Diagram: Traditional Virtualisation vs Containerisation. Left: SERVER, HOST KERNEL, HYPERVISOR, then per-VM KERNEL, Userland (OS) and SERVICE. Right: SERVER, HOST KERNEL, then independent userlands (Ubuntu:14.04, Ubuntu:15.10, RHEL7.2, Tiny Core Linux), each running its SERVICE]
Containers do not spin up a distinct kernel
all containers & the host share the same kernel
user-lands are independent
they are separated by Kernel Namespaces
6
Linux Containers
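The shared-kernel point is easy to check directly. A quick illustration (not from the deck; assumes a Linux host with a running Docker engine and the images pulled):

```shell
# All userlands report the *host's* kernel release
uname -r
docker run --rm ubuntu:14.04 uname -r
docker run --rm centos:7 uname -r
# The three commands print the same release string,
# even though the userlands differ.
```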
7. Containers are ‘grouped processes’
isolated by Kernel Namespaces
resource restrictions applicable through CGroups (disk/netIO)
[Diagram: a HOST running container1 (bash, ls -l, consul), container2 (apache, consul), container3 (mysqld, consul) and container4 (slurmd, ssh, consul), separated by Kernel Namespaces: PID, Network, Mount, IPC, UTS]
7
Kernel Namespaces
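The ‘grouped processes’ idea can be sketched without Docker at all, using the util-linux `unshare` tool (illustrative, not from the deck; requires root on a Linux host):

```shell
# Start a shell in fresh PID + mount namespaces; --mount-proc
# remounts /proc so ps only sees the namespaced processes.
sudo unshare --pid --fork --mount-proc /bin/bash -c 'ps aux'
# The listing contains only bash (as PID 1) and ps itself,
# although the host is running many more processes.
```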
8. Container Runtime Daemon
creates/…/removes containers, exposes REST API
handles Namespaces, CGroups, bind-mounts, etc.
IP connectivity by default via ‘host-only’ network bridge
[Diagram: a SERVER with eth0, the Docker-Engine, and the docker0 bridge connecting container1 and container2]
8
Docker Engine
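The REST API mentioned above can be poked at directly on the engine's default unix socket (illustrative, not from the deck; needs curl 7.40+ for `--unix-socket` and a running engine):

```shell
# List running containers through the engine's REST API;
# this is the same JSON the `docker ps` CLI consumes.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```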
9. Describes stack of container configurations
instead of writing a small bash script…
…it holds the runtime configuration as a YAML file.
9
Docker Compose
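A minimal compose file in the v1 format of that era might look like this (service names and images are illustrative, not the QNIBTerminal stack):

```yaml
# docker-compose.yml — a two-container sketch
consul:
  image: progrium/consul
  command: -server -bootstrap
  ports:
    - "8500:8500"
web:
  image: nginx
  links:
    - consul
```

`docker-compose up` then replaces the equivalent series of `docker run` commands a bash script would hold.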
10. Docker Networking spans networks across engines
KV-store to synchronise (Zookeeper, etcd, Consul)
VXLAN to pass messages along
[Diagram: SERVER0, SERVER1, … SERVER<n>, each running Consul and a Docker-Engine, joined in a Consul DC; container0, container1, … containerN attached to a ‘global’ network spanning all engines]
10
Docker Networking
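With a KV store in place, creating such a network follows the Docker 1.9+ multi-host workflow. A hedged sketch (the Consul address and all names are placeholders):

```shell
# Each engine must be started pointing at the shared KV store, e.g.:
#   docker daemon --cluster-store=consul://consul.example:8500 \
#                 --cluster-advertise=eth0:2376
# Any engine can then create a VXLAN-backed overlay network...
docker network create -d overlay --subnet=10.0.9.0/24 global
# ...and containers on any host join it by name
docker run -d --net=global --name=container0 nginx
```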
11. Docker Swarm proxies docker-engines
serves an API endpoint in front of multiple docker-engines
makes placement decisions.
[Diagram: a swarm-master on :2375 in front of SERVER0, SERVER1, … SERVER<n>, each running a Docker-Engine with a swarm-client on :2376; container1 placed via ‘-e constraint:node==SERVER0’]
11
Docker Swarm
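In standalone Swarm (the pre-1.12 tooling shown here), placement constraints are passed as environment variables to the scheduler. A sketch, assuming a swarm-master reachable under that hostname:

```shell
# Point the client at the swarm-master instead of a single engine
export DOCKER_HOST=tcp://swarm-master:2375
# Pin the container to a specific node via a scheduling constraint
docker run -d -e constraint:node==SERVER0 --name=container1 nginx
```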
17. 1. No special distributions
useful for certain use-cases, such as elasticity and green-field deployment
not so much for an on-premise datacenter w/ legacy in it.
2. Leverage existing processes/resources
install workflow, syslog, monitoring
security (ssh infrastructure), user auth.
3. Keep up with the docker ecosystem
incorporate new features of engine, swarm, compose
networking, volumes, user-namespaces
17
Vision
19. Hardware (courtesy of )
8x Sun Fire x2250 (2x 4core XEON, 32GB, Mellanox ConnectX-2)
Software
Base installation
CentOS 7.2 base installation (updated from 7-alpha)
Ansible
consul, sensu
docker v1.10, docker-compose
docker SWARM
19
Testbed
30. 1. What to base images on?
Ubuntu/Fedora: ~200MB
Debian: ~100MB
Alpine Linux: 5MB (musl-libc)
2. Trim the images down at all cost?
How about debugging tools? Possibility to run tools on the host and ‘inspect’ namespaced processes inside of a container.
If PID-sharing arrives, carving out (e.g.) monitoring could be a thing.
30
Small vs. Big
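Inspecting a container's processes from the host, as suggested above, can be sketched with `docker inspect` and `nsenter` (the container name is illustrative; requires root):

```shell
# Find the container's init PID as seen from the host...
PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
# ...then run a host-installed tool inside its network namespace,
# so the image itself needs no debugging tools.
sudo nsenter --target "$PID" --net ss -tlnp
```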
31. 1. In an ideal world…
a container only runs one process, e.g. the HPC solver.
2. In reality…
MPI wants to connect to a sshd within the job-peers
monitoring, syslog, service discovery should be present as well.
3. How fast / aggressive to break traditional approaches?
31
One vs. Many Processes
33. Running OpenFOAM on small scale is cumbersome
manually install OpenFOAM on a workstation
be confident that the installation works correctly
A containerised OpenFOAM installation tackles both
33
Reproducibility / Downscaling
http://qnib.org/immutable
http://qnib.org/immutable-paper
34. 1. Since the environments are rather dynamic…
how do the containers discover services?
external registry as part of the framework?
discovery service as part of the container stacks?
34
Service Discovery
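One common answer at the time was Consul's HTTP and DNS interfaces. A sketch, assuming a local Consul agent (service name and port are illustrative):

```shell
# A container registers its service with the local Consul agent...
curl -X PUT -d '{"Name": "slurmctld", "Port": 6817}' \
  http://127.0.0.1:8500/v1/agent/service/register
# ...and peers discover it via Consul's DNS interface on port 8600
dig @127.0.0.1 -p 8600 slurmctld.service.consul SRV
```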
35. With Docker Swarm it is rather easy
to spin up a Kubernetes or Mesos cluster within Swarm.
35
Orchestration Frameworks
[Diagram: SERVER0, SERVER1, … SERVER<n>, each running a Docker-Engine with a swarm-client behind a swarm-master; etcd and kubelet containers on every node, scheduler and apiserver on one]
36. 1. Containers should be controlled via ENV or flags
External access/change of a running container is discouraged
2. Configuration management
Downgraded to bootstrap a host?
36
Immutable vs. Config Mgmt
37. If containers are immutable within the pipeline
testing/deployment should be automated
developers should have a production replica
37
Continuous Dev./Integration
38. [Chart: Docker momentum over time, from IT Tinkering (Hello World) through Continuous Dev/Int/Dep and Microservices / hyper scale to Big Data and High Performance Computing (HPC), contrasting Software Dev with DatacenterOps]
Disclaimer: subjective exaggeration
38
Docker Momentum
39. Spinning up a production-like environment is great
MongoDB, PostgreSQL, memcached as separate containers
python2.7, python3.4
39
Docker in Software Development
Like python’s virtualenv on steroids,
iteration speedup through reproducibility
40. Spinning up a production-like environment is…
…not that easy
focus more on engineer/scientist, not the software-developer
1. For development it might work
close to non-HPC software dev
2. But is that the iteration-focus?
rather job settings / input data?
40
Docker in HPC development
41. Split input iteration / development from operation
non-distributed stays vanilla
transition to HPC cluster using tech to foster operation
41
Separation of Concerns?
http://gmkurtzer.github.io/singularity
Input/Dev
42. Docker-Engine 1.11 will not be the parent of containers
runC usage under the hood
42
containerd Integration
43. 1. Separate Dev and Ops
don't block the momentum fostering iteration speed in Development
2. Using vanilla docker-tech
keep up with the ecosystem and prevent vendor/ecosystem lock-in
3. 80/20 rule
have caveats on the radar but don’t bother too much
everything is so fast moving - it’s hard to predict
43
Recap aka. IMHO