This document provides performance benchmarks for cryptographic algorithms, including AES, RSA, and DES, implemented on PowerPC processors such as the Freescale 7457. The benchmarks show that these algorithms execute very efficiently on PowerPC, with cycle counts in the hundreds for AES and throughput rates above 300 Mbps. The document encourages developing algorithms tailored to PowerPC's capabilities to maximize performance.
DPDK Summit 2015 in San Francisco.
Intel's presentation by Keith Wiles.
For additional details and the video recording please visit www.dpdksummit.com.
This was presented by Yong LU at the OpenPOWER Summit EU 2019. The original deck is uploaded at:
https://static.sched.com/hosted_files/opeu19/16/OpenCAPI%20Acceleration%20Framework_YongLu_ver2.pdf
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B... - Odinot Stanislas
An excellent document that explains, step by step, how to install, monitor, and, above all, correctly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key topic: how do you analyze the I/O load of real applications? How many read and write IOPS, at what block size and bandwidth, and, most importantly, what is the impact on SSD endurance and lifetime? A must-read, and a big thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Architecture - Jim St. Leger
Venky Venkatesan presents information on the Data Plane Development Kit (DPDK) including an overview, background, methodology, and future direction and developments.
DPDK Summit 2015 - Intro - Tim O'Driscoll - Jim St. Leger
DPDK Summit 2015 in San Francisco.
Introductory comments and kick-off by Tim O'Driscoll, Intel.
For additional details and the video recording please visit www.dpdksummit.com.
The Open Coherent Accelerator Processor Interface (OpenCAPI) is an industry-standard architecture targeted at emerging accelerator solutions and workloads. This session will address the following areas: 1) the latest technology advancements surrounding OpenCAPI; 2) the OpenCAPI strategy as it relates to other industry acceleration standards, i.e., Intel's CXL, Gen-Z, and CCIX; 3) the open initiatives surrounding OMI, OpenCAPI 3.0, and GitHub; 4) industry open-source initiatives around OpenCAPI; 5) OC-Accel, our new FPGA programming framework, supporting OpenCAPI 3.0 and targeting higher-level programming languages such as C and C++; and 6) interesting use cases.
What are the latest features that DPDK brings into 2018? - Michelle Holley
We will provide an overview of the new features of the latest DPDK release, including source-code browsing and an API listing for the top two new features. On top of that, there will be a hands-on lab on Intel® architecture servers showing how getting started with DPDK has become much simpler and more powerful.
Abstract: Explore the packet I/O data path from a NIC across PCI-Express to cache/memory and understand how to build efficient CPU code for networked applications.
Speaker: Venky Venkatesan, Intel Fellow, Chief Architect – Packet Processing and Networking Applications
In the design of electronics and semiconductors, challenges are compounded by the integration of AI, multi-core, real-time software, network, connectivity, diagnostics, and security. Performance limits, battery life, and cost are adoption barriers. It is extremely important to have tools and processes that deliver efficiency throughout the design cycle.
Continuous verification from planning through development addresses the multi-discipline needs of hardware, software, and networks. This unique approach accelerates the design phase, defines the test effort, and finds defects during specification. Architecture modeling is required to meet timing deadlines, achieve the lowest power consumption, attain the highest quality of service, optimize the overall electronic system design, and guide the design of custom components.
Automated Out-of-Band Management with Ansible and Redfish - Jose De La Rosa
Ansible is an open source automation engine that automates complex IT tasks such as cloud provisioning, application deployment and a wide variety of system administration tasks. It is a one-to-many agentless mechanism where complex deployment tasks can be controlled and monitored from a central control machine.
Redfish is an open industry-standard specification and schema designed for modern and secure management of platform hardware. On Dell EMC PowerEdge servers the Redfish management APIs are available via the integrated Dell Remote Access Controller (iDRAC), which can be used by IT administrators to easily monitor and manage at scale their entire infrastructure using a wide array of clients on devices such as laptops, tablets and smart phones.
Together, Ansible and Redfish can be used by system administrators to fully automate server monitoring, provisioning, and update tasks at large scale from one central location, significantly reducing complexity and improving the productivity and efficiency of IT administrators.
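The Redfish side of this workflow can be sketched with nothing but the Python standard library. This is an illustrative sketch, not Dell's or Ansible's implementation: the host name, credentials, and resource path below are hypothetical placeholders, while the URL layout (`/redfish/v1/...`) and the fields parsed (`PowerState`, `Status.Health`) follow the DMTF Redfish ComputerSystem schema.

```python
import json
import base64
import urllib.request

REDFISH_ROOT = "/redfish/v1"  # service root defined by the Redfish spec

def make_redfish_request(host, path, user, password):
    """Build an authenticated GET request for a Redfish endpoint.

    Redfish supports HTTP Basic auth; production code should prefer
    session tokens and verified TLS.
    """
    url = f"https://{host}{REDFISH_ROOT}{path}"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req

def summarize_system(payload):
    """Extract a few common fields from a Redfish ComputerSystem resource."""
    data = json.loads(payload)
    return {
        "name": data.get("Name"),
        "power": data.get("PowerState"),
        "health": data.get("Status", {}).get("Health"),
    }

# Sample response shaped like a Redfish ComputerSystem resource
# (field names follow the DMTF Redfish schema).
sample = '{"Name": "System1", "PowerState": "On", "Status": {"Health": "OK"}}'
print(summarize_system(sample))
```

Ansible's Redfish modules wrap these same HTTP endpoints; the sketch only shows what travels over the wire between the control machine and the iDRAC.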
DPDK Summit 2015 - NTT - Yoshihiro Nakajima - Jim St. Leger
DPDK Summit 2015 in San Francisco.
NTT presentation by Yoshihiro Nakajima.
For additional details and the video recording please visit www.dpdksummit.com.
PLNOG14: Architecture and Troubleshooting on IOS-XE Routers - Piot... - PROIDEA
Piotr Kupisiewicz - Cisco Systems
Language: Polish
The IOS-XE architecture is implemented in every modern Cisco router, including the ASR1000 as well as the 43xx and 44xx series.
Since IOS and IOS-XE "look" the same, what is the difference between them?
How do you effectively troubleshoot traffic flowing through an IOS-XE-based router?
A session covering the architecture and the troubleshooting approach (with a real live demo). These topics can be very helpful for network engineers as well as network architects.
Register for the next edition of PLNOG: krakow.plnog.pl
DPDK Summit 2015 - Aspera - Charles Shiflett - Jim St. Leger
DPDK Summit 2015 in San Francisco.
Presentation by Charles Shiflett, Aspera.
For additional details and the video recording please visit www.dpdksummit.com.
OpenPOWER Solutions Overview Session from IBM TechU Rome - April 2016 - Mandie Quartly
An OpenPOWER Solutions overview session written for the IBM TechU in Rome in April 2016.
Presented by Mandie Quartly, OpenPOWER European Alliances, IBM and David Spurway, Power Product Manager, IBM UK.
Note that information in this deck may have changed since it was created (such is life when talking about solutions that haven't been released yet!), so please check the details with the OpenPOWER website, manufacturer, or seller.
http://openpowerfoundation.org/
In this deck from the Argonne Training Program on Extreme-Scale Computing 2019, Howard Pritchard from LANL and Simon Hammond from Sandia present: NNSA Explorations: ARM for Supercomputing.
"The Arm-based Astra system at Sandia will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science.
"By introducing Arm processors with the HPE Apollo 70, a purpose-built HPC architecture, we are bringing powerful elements, like optimal memory performance and greater density, to supercomputers that existing technologies in the market cannot match,” said Mike Vildibill, vice president, Advanced Technologies Group, HPE. “Sandia National Laboratories has been an active partner in leveraging our Arm-based platform since its early design, and featuring it in the deployment of the world’s largest Arm-based supercomputer, is a strategic investment for the DOE and the industry as a whole as we race toward achieving exascale computing.”
Watch the video: https://wp.me/p3RLHQ-l29
Learn more: https://insidehpc.com/2018/06/arm-goes-big-hpe-builds-petaflop-supercomputer-sandia/
and
https://extremecomputingtraining.anl.gov/agenda-2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the UK HPC Conference, Gunter Roeth from NVIDIA presents: Hardware & Software Platforms for HPC, AI and ML.
"Data is driving the transformation of industries around the world and a new generation of AI applications are effectively becoming programs that write software, powered by data, vs by computer programmers. Today, NVIDIA’s tensor core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application, from CUDA and libraries like cuDNN and NCCL embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud to reference architectures designed to streamline the deployment of large scale infrastructures."
Watch the video: https://wp.me/p3RLHQ-l2Y
Learn more: http://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Compare Performance/Power of Arm Cortex vs. RISC-V for AI Applications (Oct 2021) - Deepak Shankar
Abstract: In this webinar, we will show you how to construct, simulate, analyze, validate, and optimize an architecture model using pre-built components. We will compare micro- and application benchmarks on system SoC models containing clusters of Arm Cortex-A53, SiFive U74, Arm Cortex-A77, and other vendor cores. The system will be built around custom switches, ingress/egress buffers, credit flow control, AI accelerators, NoC and AMBA AXI buses with multi-level caches, DDR4 DRAM, and DMA. The evaluation and optimization criteria will be task latency, dCache hit ratio, power consumed per task, and memory bandwidth. The parameters to be modified are bus topology, cache size, processor clock speed, custom arbiters, task thread allocation, and the processor pipeline.
Selection of cores is a combination of financial and technical considerations. Technical comparison of processor cores requires an understanding of the workload, task partitioning, and cache-memory structure. A core must be evaluated in the context of the target application. To evaluate these selections, architecture simulation software must be fortified with a library of intellectual property for power- and timing-accurate processor cores, a simulator running at 100 million events per second, peripherals, and all relevant traffic distributions.
Key Takeaways:
1. Validating architecture models using mathematical calculus and hardware traces
2. Construct custom policies, arbitrations and configure processor cores
3. Select the right combination of statistics to detect bottlenecks and optimize the architecture
4. Identify the right use of stochastic, transaction, cycle-accurate and traces to construct the model
Speaker Bio:
Alex Su is an FPGA solution architect at E-Elements Technology, Hsinchu, Taiwan. He has been an FPGA solution architect and Xilinx FPGA trainer for a number of years, supporting companies, research centers, and universities in China and Taiwan. Prior to that, Mr. Su worked at ARM Ltd for 5 years in technical support for Arm CPU and System IP. Alex has also worked on a variety of FPGA-based hardware emulation systems and has over ten years of experience as an ASIC/SoC design and verification engineer.
Deepak Shankar is the founder of Mirabilis Design and has been involved in the architecture exploration of over 250 SoCs and processors. Mr. Shankar started Mirabilis Design because of a vacuum in the systems engineering and modeling space as the focus shifted to network design and early software development. Deepak has published over 50 articles and presented at over 30 conferences in EDA, semiconductors, and embedded computing. Mr. Shankar has an MBA from UC Berkeley, an MS from Clemson University, and a BS from Coimbatore Institute of Technology, both in Electronics and Communication.
Creating Meaningful Digital Experiences - Mark Badger
Do relationships between brands and consumers arise naturally, or are they designed? If the goal of design is to influence, modify, and drive behavior, shouldn’t we explore the meaning of those underlying interactions?
The idea that designed elements must be easy to interpret has greater potential for consumers and brands than just ensuring that a website or web application is usable. Designing interfaces that are usable is important, but what if there is more to designing a successful user experience than merely ensuring that it is usable?
At FutureM in Boston, on October 25th, 2012, Roundarch Isobar hosted a panel discussion titled “Creating Meaningful Digital Experiences: The Semiotics of UX,” which explored these ideas along with questions such as:
- Do relationships between brands and consumers arise naturally, or are they designed?
- If the goal of design is to influence, modify, and drive behavior, shouldn’t we explore the meaning generated by those underlying interactions?
- How do evolving paradigms such as gestural and natural user interfaces (NUIs), wearable tech, and pervasive social media affect the relationship between consumers and brands?
Dal prodotto all'esperienza. Verso un design sistemico (From Product to Experience: Toward a Systemic Design) - Luca Rosati
World Information Architecture Day - Pescara, February 20, 2016.
* Summary: http://lucarosati.it/blog/design-sistemico
* Video: https://www.youtube.com/watch?v=ROU9WPg_m3w (minutes 13-38).
How e-tailers should prepare to serve the new Internet user.
For more information on how Exceda is accelerating the pace of innovation in a hyperconnected world, visit www.exceda.com/es and follow @exceda on Twitter.
Webinar "Consolidating Oracle DB on systems with M7 processors, including migration from competing server platforms"
Presented by Josef Šlahůnek, Oracle
March 9, 2016
Slide deck for talk at IETF#92 (Dallas, March 2015) at the IETF Light-Weight Implementation Guidance (lwig) working group about the performance of cryptographic algorithms on ARM processors.
Oracle hardware includes a full suite of scalable engineered systems, servers, and storage that enable enterprises to optimize application and database performance, protect crucial data, and lower costs.
With Oracle, customers have freedom from the complexity of having multiple databases, analytics tools, and machine learning environments. Oracle's data management platform makes it easier and faster for application developers to create microservices-based applications with multiple data types.
Data today is extremely important and intrinsically valuable to every organization. That is why, when we talk about the Oracle Database, we are talking about the capital of our company, whether public or private. To exploit the full potential of the Oracle database, however, you need an infrastructure that facilitates access, simplifies management, and delivers the required level of performance, with the scalability needed to maintain these conditions over time. Constant change in society pushes companies to keep up to date and, over time, this process brings growth in the data stored in our databases and a corresponding increase in their criticality. Oracle Database Appliance is the engineered system created by Oracle to manage its databases efficiently, minimizing the effort required to maintain them and thereby allowing you to focus your efforts on activities directly related to the core business. During the webinar we will analyze practical use cases demonstrating how, today, you can take advantage of the Oracle Database Appliance to meet the various needs that managing a complex, high-performance IT infrastructure may entail.
Unleashing Data Intelligence with Intel and Apache Spark with Michael Greene - Databricks
Organizations are developing deep learning applications to derive new insights, identify new opportunities and uncover new efficiencies. However, deep learning application development often means tapping into multiple frameworks, libraries, and clusters—a complex, time-consuming, and costly effort. This keynote will discuss what the newly released BigDL (open source distributed deep learning framework for Apache Spark and Intel® Xeon® clusters) can offer to developers and what solutions Intel has enabled for customers and partners. In addition, plans for expanding BigDL ecosystem will also be highlighted.
“Quantum” Performance Effects: Beyond the Core - C4Media
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2Sbd5Ws.
Sergey Kuksenko talks about how (and how much) CPU microarchitecture details can influence application performance. Is it visible to end users? How do you avoid misjudgment when estimating code performance? The CPU is a huge topic, which is why the talk is limited to the parts located outside the computational core (mostly caches and memory access). Filmed at qconsf.com.
Sergey Kuksenko works as a Java performance engineer at Oracle. His primary goal is making the Oracle JVM faster, digging into the JVM runtime, JIT compilers, class libraries, and more. His favorite area is the interaction of Java with modern hardware, which he has been working on since 2005, when he worked at Intel on the Apache Harmony performance team.
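A taste of the memory-access effects the talk covers can be glimpsed even from high-level code. The sketch below is ours, not from the talk: it sums the same matrix in row-major and column-major order. The row-major walk touches memory sequentially and is typically faster, though Python's interpreter overhead mutes the gap compared to native code.

```python
import time

N = 1000
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m):
    # Sequential access: each inner list is walked front to back.
    return sum(x for row in m for x in row)

def sum_col_major(m):
    # Strided access: hops to a different row on every element.
    return sum(m[i][j] for j in range(N) for i in range(N))

for fn in (sum_row_major, sum_col_major):
    start = time.perf_counter()
    total = fn(matrix)
    print(f"{fn.__name__}: {total} in {time.perf_counter() - start:.3f}s")
```

Both traversals compute the same total; only the order in which memory is touched differs, which is exactly the kind of "invisible" microarchitectural detail the talk is about.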
Unfortunately, libmotovec is currently a library without many “books.” However, we believe that the functions we do have are important for applications that do a lot of data movement through the processor’s register files. The wider registers and improved bandwidth to cache of the AltiVec register set allow AltiVec implementations of functions like memcpy or memset to be faster than the same functions using the general-purpose register file.
Some data-movement functions are very similar to memcpy but are called by a different name, such as __copy_tofrom_user, which is found in the Linux kernel and copies data from kernel to user space and vice versa. Our AltiVec-enabled memcpy can easily be modified to speed up these functions as well. More importantly, if there is other work to be done on the data being moved, like calculating a checksum, that work can largely be hidden under the memory latency of memcpy. Many applications already know this and provide functions like the Linux checksum-calculation or checksum-while-copying functions.
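The checksum-while-copying idea can be sketched as follows. This is an illustrative Python version (the function name is ours, not from libmotovec): a single fused loop copies the data and accumulates a 16-bit ones'-complement sum, the same folding used by the Internet checksum that the Linux helpers compute. The sketch shows only the loop structure, not the AltiVec performance behavior.

```python
def copy_and_checksum(src: bytes):
    """Copy src while accumulating a 16-bit ones'-complement sum
    in the same pass, so checksum work overlaps the data movement."""
    dst = bytearray(len(src))
    total = 0
    for i in range(0, len(src) - 1, 2):
        dst[i] = src[i]
        dst[i + 1] = src[i + 1]
        total += (src[i] << 8) | src[i + 1]   # big-endian 16-bit word
    if len(src) % 2:                          # trailing odd byte
        dst[-1] = src[-1]
        total += src[-1] << 8
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return dst, (~total) & 0xFFFF

copied, csum = copy_and_checksum(b"\x45\x00\x00\x3c")
print(hex(csum))  # ones'-complement checksum of the copied bytes
```

In the vectorized library, the copy would move 16-byte AltiVec registers at a time and the summation would be done with vector adds, but the structure, one pass doing both jobs, is the same.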
Page_copy is a really trivial example of copying 4 KB pages of memory from one location to another.
The library has proven very easy to use. A customer’s existing object modules can be linked with the AltiVec-enabled library by inserting the library on the linker command line ahead of the compiler’s libc library. The customer’s object modules then link to the symbols in the AltiVec library instead of the compiler-supplied functions.