The document discusses the COSMOS supercomputing facility at the University of Cambridge, which uses Intel Xeon and Xeon Phi processors. COSMOS recently acquired an SGI Altix UV2000 with 1856 Intel Xeon cores and 31 Intel Xeon Phi coprocessors. The document focuses on optimizing the "Walls" cosmology simulation code for this new system. Through vectorization, memory and cache improvements, and other optimizations, performance was increased by up to 18x compared to the original version of the code. This demonstrates how modernization techniques can significantly improve performance of scientific applications on parallel systems.
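The summary above attributes the gains mainly to vectorization and memory/cache improvements. As a hedged illustration of the loop-to-vector rewrite involved (the Walls code itself is not shown here; this 1-D stencil kernel is purely hypothetical), the same transformation looks like this in NumPy:

```python
import numpy as np

# Toy field-update kernel, first as a scalar loop, then vectorized.
# This is NOT the Walls code, only a sketch of the kind of rewrite
# the speedup figures refer to.

def update_scalar(phi, dt):
    out = np.empty_like(phi)
    for i in range(1, len(phi) - 1):
        # simple 1-D Laplacian relaxation step
        out[i] = phi[i] + dt * (phi[i - 1] - 2.0 * phi[i] + phi[i + 1])
    out[0], out[-1] = phi[0], phi[-1]   # fixed boundaries
    return out

def update_vectorized(phi, dt):
    out = phi.copy()
    # whole-array arithmetic maps onto SIMD units and removes the
    # interpreter loop, analogous to compiler auto-vectorization in C
    out[1:-1] = phi[1:-1] + dt * (phi[:-2] - 2.0 * phi[1:-1] + phi[2:])
    return out
```

Both versions compute identical results; the vectorized form simply exposes the arithmetic to the hardware's vector units.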
This session showcases the integration between the Unity* game engine and the recently released Intel® Open Image Denoise library for CPU-based lightmap denoising. Learn how the library significantly improves fidelity over bilateral blur by using an AI-based denoiser, which greatly improves time-to-convergence for lightmap rendering.
Embree Ray Tracing Kernels | Overview and New Features | SIGGRAPH 2018 Tech S... | Intel® Software
Overview of the new Embree 3 ray tracing framework, including how to use the new API, supported geometry types, and ray intersection methods. Includes a look at new features like normal-oriented curves, vertex grids, etc.
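At bottom, an Embree intersection kernel answers "does this ray hit this primitive, and where?". As a sketch of that core operation, here is the standard Möller-Trumbore ray/triangle test; note this is not Embree's actual C API (which works through devices, scenes, and calls like rtcIntersect1), just the math such a kernel performs:

```python
import numpy as np

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore: return hit distance t along dirn, or None on miss."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(dirn, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:              # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:          # outside first barycentric bound
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(dirn, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:      # outside second barycentric bound
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t > eps else None   # hit must be in front of the ray
```

Embree's contribution is doing this (and BVH traversal) with heavily vectorized, architecture-tuned kernels rather than one ray at a time.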
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/acceleration-of-deep-learning-using-openvino-3d-seismic-case-study-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Manas Pathak, Global AI Lead for Oil and Gas at Intel, presents the “Acceleration of Deep Learning Using OpenVINO: 3D Seismic Case Study” tutorial at the September 2020 Embedded Vision Summit.
The use of deep learning for automatic seismic data interpretation is gaining the attention of many researchers across the oil and gas industry. The integration of high-performance computing (HPC) AI workflows in seismic data interpretation brings the challenge of moving and processing large amounts of data from HPC to AI computing solutions and vice versa.
In this presentation, Pathak illustrates this challenge via a case study using a public deep learning model for salt identification applied on a 3D seismic survey from the F3 Dutch block in the North Sea. He presents a workflow to address this challenge and perform accelerated AI on seismic data. The Intel Distribution of OpenVINO toolkit was used to increase the inference performance of a pre-trained model on an Intel CPU. OpenVINO allows CPU users to get significant improvement in AI inference performance for high memory capacity deep learning models used on large datasets without any significant loss in accuracy.
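The data-movement challenge described above can be made concrete with a patch-wise inference loop: rather than moving an entire 3D seismic cube into the inference solution at once, the volume is streamed block by block. In this sketch, `fake_model` is a hypothetical stand-in for the OpenVINO-compiled salt-identification network (the real toolkit compiles a pretrained model for the target Intel CPU):

```python
import numpy as np

def infer_volume(volume, model, patch=16):
    """Run `model` over a 3-D volume one patch at a time."""
    out = np.zeros_like(volume, dtype=float)
    nz, ny, nx = volume.shape
    for z in range(0, nz, patch):
        for y in range(0, ny, patch):
            for x in range(0, nx, patch):
                block = volume[z:z+patch, y:y+patch, x:x+patch]
                # only `block` (not the whole cube) crosses the
                # HPC-to-AI boundary on each step
                out[z:z+patch, y:y+patch, x:x+patch] = model(block)
    return out

def fake_model(block):
    # placeholder "salt probability": thresholded local amplitude;
    # stands in for the real salt-identification network
    return (np.abs(block) > 0.5).astype(float)
```

For a pointwise model like this stand-in, streaming by patches gives the same answer as whole-cube inference while bounding working-set size.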
Intel TCE Seth Schneider provides a technical overview, outlines the benefits for game optimization, and answers questions regarding Intel® Graphics Performance Analyzers.
Gary Brown (Movidius, Intel): Deep Learning in AR: the 3 Year Horizon | AugmentedWorldExpo
A talk from the Develop Track at AWE USA 2017 - the largest conference for AR+VR in Santa Clara, California May 31- June 2, 2017.
Gary Brown (Movidius, Intel): Deep Learning in AR: the 3 Year Horizon
Deep learning techniques are gaining in popularity in many facets of embedded vision, and this holds true for AR and VR. Will they soon dominate every facet of vision processing? This talk explores this question by examining the theory and practice of applying deep learning to real-world problems in augmented reality, with real examples describing how this shift is happening today: quickly in some areas and more slowly in others.
http://AugmentedWorldExpo.com
At Intel Labs Day 2020, Intel spotlighted research initiatives across multiple domains where its researchers are striving for orders-of-magnitude advancements to shape the next decade of computing. Themed “In Pursuit of 1000X: Disruptive Research for the Next Decade in Computing,” the event featured several emerging areas including integrated photonics, neuromorphic computing, quantum computing, confidential computing, and machine programming. Together, these domains represent pioneering efforts to address critical challenges in the future of computing and reflect Intel’s leadership role in pursuing breakthroughs to address them. Rich Uhlig, Intel senior fellow, vice president, and director of Intel Labs, was joined by several domain experts across the research organization to share perspectives on the industry and societal impact of these technologies.
IT@Intel: Creating Smart Spaces with All-in-Ones | IT@Intel
Intel IT explains how it used all-in-one devices as collaboration tools in both office and lab spaces. By providing efficient collaboration solutions, Intel IT helps employees be more productive and have greater job satisfaction.
Unlock Hidden Potential through Big Data and Analytics | IT@Intel
A presentation by Intel CIO Kim Stevenson (@kimsstevenson), "Unlock Hidden Potential through Big Data and Analytics." Includes the drivers behind big data and SMAC (social, mobile, analytics, and cloud), how business value is being created at Intel through advanced analytics, and how BI is used as a competitive advantage.
Venue: NOAA Feb. 24, 2014
Achieve Unconstrained Collaboration in a Digital World | Intel IT Center
Technology is at the center of every digitally savvy workplace, yet organizations struggle to bridge current tools to more modern solutions. This session from the Gartner Digital Workplace Summit covers a new way to facilitate employee collaboration that is easy and engaging and gives IT an uncompromised security and management experience.
OIT to Volumetric Shadow Mapping, 101 Uses for Raster-Ordered Views using Dir... | Gael Hofemeier
One of the new features of DirectX 12 is Raster-Ordered Views. This adds ordering back into Unordered Access Views, removing race conditions within a pixel shader when multiple in-flight pixels write to the same XY screen coordinates. It allows algorithms that previously required linked lists of pixel data to be processed efficiently in bounded memory. The talk shows how everything from order-independent transparency to volumetric shadow mapping, and even post-processing, can benefit from using Raster-Ordered Views to provide efficient and, more importantly, robust solutions suitable for real-time games. The session uses a mixture of real-world examples, where these algorithms have already been implemented in games, and forward-looking research to show some of the exciting possibilities that open up with this new capability coming to DirectX.
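The compositing problem that Raster-Ordered Views solve can be modeled in scalar code: transparency is only correct if each pixel's fragments are blended in depth order, which is what a ROV-protected per-pixel read-modify-write makes possible inside a shader. A minimal sketch of the front-to-back blend itself (plain Python, not shader code):

```python
def composite(fragments):
    """Front-to-back alpha compositing for one pixel.

    fragments: list of (depth, color, alpha) tuples, arriving in any
    order -- exactly the situation a ROV lets a pixel shader handle
    safely. Returns the final blended color (scalar for simplicity).
    """
    color, transmittance = 0.0, 1.0
    for depth, c, a in sorted(fragments):       # nearest fragment first
        color += transmittance * a * c          # this fragment's contribution
        transmittance *= (1.0 - a)              # light remaining behind it
    return color
```

Because the blend is applied in sorted depth order, the result is independent of the order in which fragments were produced; without that ordering guarantee, racing writes would give nondeterministic output.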
Intel Xeon Processor E5 Family: Making the Business Case | Intel IT Center
This presentation highlights cloud computing advantages of the Intel® Xeon® processor E5 family and helps you make the business case for investing. Includes access to an ROI calculator.
What is the NIST Cybersecurity Framework?
Why YOU should care?
How would I apply it?
Would you drive BLINDFOLDED?
A false sense of security?
Without a Security Framework…
Why Cyber Security Framework?
How would I measure my effectiveness?
Cyber Security 101: Training, awareness, strategies for small to medium sized... | Stephen Cobb
I developed "Cyber Security 101: Training, awareness, strategies for small to medium sized business" for the second annual Small Business Summit on Security, Privacy, and Trust, co-hosted by ADP in New Jersey, October 2013.
BigDL: A Distributed Deep Learning Library on Spark: Spark Summit East talk b... | Spark Summit
BigDL is a distributed deep learning framework for big data platforms, built using Apache Spark. It combines the benefits of high-performance computing and big data architectures, providing native support for deep learning functionality in Spark, orders-of-magnitude speedups over out-of-the-box open source DL frameworks (e.g., Caffe/Torch) with respect to single-node performance (by leveraging Intel MKL), and scale-out of deep learning workloads based on the Spark architecture. We’ll also share how our users adopt BigDL for their deep learning applications (such as image recognition, object detection, NLP, etc.), which allows them to use their big data (e.g., Apache Hadoop and Spark) platform as a unified data analytics platform for data storage, data processing and mining, feature engineering, traditional (non-deep) machine learning, and deep learning workloads.
Learn how Intel worked with Pixar Animation Studios* and Sony Imageworks* to realize dynamic SIMD code generation of Open Shading Language shader networks, achieving 3-9x speedups with Intel® AVX-512.
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit... | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
Preparing the Data Center for the Internet of Things | Intel IoT
Intel’s Mark Skarpness provides an overview of the Internet of Things and discusses how the data center is essential for the IoT.
For more information go to www.intel.com/iot
What are the latest features that DPDK brings in 2018? | Michelle Holley
We will provide an overview of the new features of the latest DPDK release, including source-code browsing and API listings for its top two new features. On top of that, there will be a hands-on lab on Intel® microarchitecture servers to show how getting started with DPDK has become much simpler and more powerful.
Spring Hill (NNP-I 1000): Intel's Data Center Inference Chip | inside-BigData.com
Today at Hot Chips 2019, Intel revealed new details of upcoming high-performance AI accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O.
“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources,” said Naveen Rao, Intel vice president and GM, Artificial Intelligence Products Group. “Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”
Learn more: https://www.intel.ai/accelerating-for-ai/?elq_cid=1192980
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci | Intel® Software
Preprocess, visualize, and build AI faster at scale on Intel Architecture. Develop end-to-end AI pipelines for inference, including data ingestion, preprocessing, and model inference with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform and see a live demonstration of the Intel Distribution of Modin and OmniSci.
This issue’s feature article, Tuning Autonomous Driving Using Intel® System Studio, illustrates how the tools in Intel System Studio give embedded systems and connected device developers an integrated development environment to build, debug, and tune performance and power usage. Continuing the theme of tuning edge applications, Building Fast Data Compression Code for Cloud and Edge Applications shows how to use the Intel® Integrated Performance Primitives to speed data compression.
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Architecture | Jim St. Leger
Venky Venkatesan presents information on the Data Plane Development Kit (DPDK) including an overview, background, methodology, and future direction and developments.
Deep Learning to Big Data Analytics on Apache Spark Using BigDL with Xianyan ... | Databricks
With the continued success of deep learning techniques, there’s been a rapid growth in applications for perception in many modalities, such as image classification, object detection and speech recognition. In response, Intel’s BigDL is an open source distributed deep learning framework for Apache Spark that includes rich deep learning support and Intel Math Kernel Library acceleration, allowing users to quickly develop deep learning applications with extremely high performance on their existing Hadoop ecosystems.
This session will explore several key deep learning applications that Intel successfully built on top of Apache Spark with BigDL. Hear about the technologies they developed and what they learned from building such applications, including: the tool stack in the system and design considerations; an application on image recognition and object detection (Faster R-CNN using VGG and PVANET); and an application on speech recognition with Deep Speech and acoustic feature transformers. The speakers will also share other insights and experiences Intel gained while building a unified data analytics platform with Apache Spark MLlib and BigDL.
Open Source Interactive CPU Preview Rendering with Pixar's Universal Scene De... | Intel® Software
Universal Scene Description* (USD) is an open source initiative developed by Pixar for fast, large scale, and universal asset management across multiple programs including Maya, Houdini, and others.
This session will describe and demo methods to connect the Intel Edison to Amazon AWS in order to create a versatile IoT structure. The Intel Edison is a compact system-on-chip module, the size of a postage stamp, with powerful on-board processing. It can be used as a sensor hub to gather data, a control board for actuators, and a gateway to connect to the cloud. When combined with the powerful services offered by AWS, it can form the basis for many IoT solutions.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Martin Kronberg, Intel IoT Evangelist
Bring Intelligence to the Edge with Intel® Movidius™ Neural Compute Stick | DESMOND YUEN
Motivation to move intelligence to the edge
Edge compute use cases
Barriers to moving intelligence to the edge
Deep learning algorithms – can they run on an edge device?
Movidius Neural Compute Stick (architecture, usage, etc.)
Similar to Unveiling the Early Universe with Intel Xeon Processors and Intel Xeon Phi at COSMOS (University of Cambridge)
Disrupt Hackers With Robust User Authentication | Intel IT Center
Hacks are constantly in the headlines, and a clear-cut strategy is needed to proactively secure large enterprises from intrusions before they happen. This session reveals a new approach to user authentication. Attendees will learn how to 1) leverage hardware for authentication, 2) utilize existing network environments to better protect user credentials and authentication policies and 3) provide an intuitive experience for end users.
Strengthen Your Enterprise Arsenal Against Cyber Attacks With Hardware-Enhanc... | Intel IT Center
With new “Hacked!” headlines happening every day, modernizing your company’s endpoint security strategy has never been more important. Software alone is not enough. For cybersecurity, the way forward requires help from the hardware. This session will equip you with an understanding of one of the most promising approaches to enterprise security: hardware-enhanced identity protection and data protection at the core of your fleet of endpoint devices.
Harness Digital Disruption to Create 2022’s Workplace Today | Intel IT Center
As the modern workplace evolves, modern devices play a critical role in simplifying work and creating an immersive, seamless experience. This session offers guidance on things to consider as you update your workplace into the secure, managed, collaborative environment employees demand and you require.
Don't Rely on Software Alone. Protect Endpoints with Hardware-Enhanced Security. | Intel IT Center
Learn how security solutions built into Intel® Core™ vPro™ processors address top threat vectors. Our comprehensive approach to hardware-enhanced security starts with identity protection, with Intel® Authenticate delivering customizable multi-factor authentication options, and supports remote remediation with Intel® Active Management Technology.
Intel® Xeon® Scalable Processors Enabled Applications Marketing Guide | Intel IT Center
The future-ready data center platform is here. Whether you operate in the high performance computing, enterprise, cloud, or communications spheres, you will find an Intel® Xeon® processor that is ready to power your data center now and well into the future. An innovative approach to platform design in the Intel® Xeon® Scalable processor platform unlocks the power of scalable performance for today’s data centers—from the smallest workloads to your most mission-critical applications. Powerful convergence and capabilities across compute, storage, memory, network, and security deliver unprecedented scale and highly optimized performance across a broad range of workloads—from high performance computing (HPC) and network functions virtualization to advanced analytics and artificial intelligence (AI). Many examples here show how our software partner ecosystem has optimized their applications and/or taken advantage of inherent platform enhancements to deliver dramatic performance gains that can translate into tangible business benefits.
#NABshow: National Association of Broadcasters 2017 Super Session Presentatio... | Intel IT Center
At NAB, this session covered how technology will transform the way content is created and distributed and accelerate the rate of innovation in the industry. Intel, a revolutionary leader in technology and in transforming industries since 1968, works with other industry partners to enable the transition to new paradigms, infrastructures and technologies.
Join Jim Blakley, General Manager of Intel's Visual Cloud Division, and guests including Dave Ward (Chief Technology Officer, Cisco), AR Rahman (two-time Academy and Grammy Award winner), and Dave Andersen (School of Computer Science, Carnegie Mellon University) to learn more about how this revolution will make amazing visual cloud experiences possible for every person on Earth.
Making the digital workplace a reality requires a modern and strategic approach to identity protection. You will discover ways to build an IAM program that moves you from defense to offense. This presentation will offer practical guidance on how a hardware-based multi-factor authentication strategy is the future for identity protection.
Three Steps to Making a Digital Workplace a Reality | Intel IT Center
The workplace is undergoing a dramatic evolution. Work styles are more mobile, changing the way we collaborate and share information, while a more mobile workforce means a greater need to thwart cyber-attacks. You'll learn about Intel's three-part approach to help IT leaders sustainably embrace mobility and increase your security posture.
Three Steps to Making The Digital Workplace a Reality - by Intel’s Chad Const... | Intel IT Center
The workplace is undergoing a dramatic evolution. Work styles are more mobile, changing the way we collaborate and share information, while a more mobile workforce means a greater need to thwart cyber-attacks. In this presentation, you'll learn about Intel's three-part approach to help IT leaders sustainably embrace mobility and increase your security posture.
Intel® Xeon® Processor E7-8800/4800 v4 EAMG 2.0 | Intel IT Center
This set of Intel® Xeon® processor E7-8800/4800 v4 family proof points spans several key business segments. The Intel® Xeon® processor E7-8800/4800 v4 product family delivers the horsepower for real-time, high-capacity data analysis that can help businesses derive rapid actionable insights to deliver innovative new services and customer experiences. With high performance, industry’s largest memory, robust reliability, and hardware-enhanced security features, the E7-8800/4800 v4 is optimal for scale-up platforms, delivering rapid in-memory computing for today’s most demanding real-time data and transaction-intensive workloads.
Intel® Xeon® Processor E5-2600 v4 Enterprise Database Applications Showcase | Intel IT Center
The Intel Xeon processor E5-2600 v4 product family delivers the high performance, increased memory, and I/O bandwidth required for all forms of enterprise databases, is ideal for next-generation application workloads, and is the powerhouse for software-defined infrastructure (SDI) environments where automation and orchestration capabilities are foundational. See how database solutions deployed on the Intel® Xeon® processor E5 v4 product family can deliver increased performance and throughput, as demonstrated by key software partners.
Intel® Xeon® Processor E5-2600 v4 Core Business Applications Showcase | Intel IT Center
Designed for architecting next-generation, software-defined data centers, the Intel® Xeon® processor E5-2600 v4 product family is supercharged for efficiency, performance, and agile services delivery across cloud-native and traditional applications. Intel® Intelligent Power Technology automatically regulates power consumption to combine industry-leading energy efficiency with intelligent performance that adapts to your workloads.
Intel® Xeon® Processor E5-2600 v4 Financial Security Applications Showcase | Intel IT Center
The Intel® Xeon® processor E5-2600 v4 product family delivers efficient resource utilization, service tiering, and optimal quality of service (QoS) levels for financial applications by processing transactions faster, delivering exceptional uptime and availability, and reducing latency, providing a high-performing, highly scalable system for your most demanding workloads. Enhanced cryptographic speed, with two new instructions for Intel® AES-NI, improves security, and the Intel® SSD Data Center Family for NVMe offers optimized management for future software-defined data centers with industry-standard software and drivers.
Intel® Xeon® Processor E5-2600 v4 Telco Cloud Digital Applications Showcase | Intel IT Center
Cloud and telecommunication companies can deliver better end user experiences while improving cost models across their data centers with the Intel® Xeon® processor E5-2600 v4 product family. See how innovative technologies can deliver high throughput, low latency, and more agile delivery of network services to the software-defined data center. The family also offers unparalleled versatility across diverse workloads, such as 4K video processing, editing, and decoding and encoding, where improved bandwidth and reduced latency provide noticeable performance improvements.
Intel® Xeon® Processor E5-2600 v4 Tech Computing Applications Showcase | Intel IT Center
Where breakthrough performance is expected, the Intel® Xeon® processor E5-2600 v4 product family, a key ingredient of the Intel® Scalable System Framework and the software-defined data center, is designed to deliver better performance and performance per watt than ever before. The combination of Intel Xeon processors, Intel® Omni-Path Architecture, Intel Solutions for Lustre* software, and storage technologies improves bandwidth and reduces latency, providing a high-performing, highly scalable system for your most demanding workloads.
Intel® Xeon® Processor E5-2600 v4 Big Data Analytics Applications Showcase | Intel IT Center
Deeper insights in less time at lower costs are made possible by the Intel® Xeon® processor E5-2600 v4 product family, delivering critical performance enhancements through key platform technologies that benefit the software-defined data center. See how leading software vendors are leveraging these for optimum performance.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Unveiling the Early Universe with Intel Xeon Processors and Intel Xeon Phi at COSMOS (University of Cambridge)
1. Unveiling the Early Universe with Intel® Xeon® Processors and Intel® Xeon Phi™ Coprocessors
James Briggs, Carlos Martins, Paul Shellard (COSMOS, University of Cambridge)
John Fu, Karl Feind, Mike Woodacre (SGI)
John Pennycook, Jim Jeffers (Intel)
2. COSMOS @ DiRAC - University of Cambridge
Supercomputing facility originally dedicated to cosmology research:
- Founded in January 1997, now part of the UK DiRAC facility
- Consortium brought together by Prof. Stephen Hawking
- History of large, shared-memory Intel® Architecture machines
COSMOS-IX:
- SGI Altix UV2000, delivered in July 2012
- 1856 Intel® Xeon® processor cores (Sandy Bridge)
- 14.5 TB of globally shared memory
- 31 Intel® Xeon Phi™ coprocessors introduced in December 2012
3. What is the SGI UV?
The most flexible compute platform in the industry:
- SSI "SuperPC": huge coherent memory; scalable IO/coprocessors
- PGAS (UPC, SHMEM, etc.): 8 PB global address space; optimized for small transfers; rich synchronization
- Cluster (MPI): connectionless; small-message optimized
4. The Code - "Walls"
Simulates the evolution of domain wall networks in the early universe:
- Adjacent regions misalign over time (Higgs)
- Domain "energy" walls form to separate them
- "Observable" analogy is ferromagnet domains
- Press-Ryden-Spergel algorithm [1]
- Compiled for 3D or 4D simulations
Used at COSMOS for over 10 years, developed for complex hybrid networks:
- Stencil code, targeting SMP with OpenMP
- Benchmark for acceptance testing on previous machines
To find out more about domain walls, see the Hawking Cosmology Centre public pages:
www.ctc.cam.ac.uk/outreach/origins/cosmic_structures_two.php
Video courtesy of Dr. Carlos Martins, University of Porto.
6. Baseline Performance
"Out-of-the-Box" comparison: the processor is ~2x faster than the coprocessor!
Why?
- Poor vectorization
- Poor memory behavior
- etc.
Experimental setup:
- 480 x 480 x 480 problem
- 2 x Intel® Xeon® E5-4650L processor
- 1 x Intel® Xeon Phi™ 5110P coprocessor
- icc version 14.0.0
[Bar chart: relative performance of 2 x processor vs. 1 x coprocessor, normalized scale 0 to 1.2]
7. Optimization and Modernization (1/3)
The Strategy:
- Use straightforward parallel tuning techniques: vectorize, scale, memory
- Use tools/compiler guidance features
- Maintain readability and platform portability
The Result:
- Significant performance improvements in ~3-4 weeks
- Single, clear, readable code-base
- "Template" stencil code transferable to other simulations
8. Optimization and Modernization (2/3)
Optimizations:
- Improve auto-vectorization (using -vec-report3 and -guide-vec)
Before: int ip1 = (i+1) % Nx;
    "loop was not vectorized: operator unsuited for vectorization."
After: int ip1 = (i < Nx-1) ? i+1 : 0;
    "LOOP WAS VECTORIZED"
9. Optimization and Modernization (2/3)
Optimizations:
- Improve auto-vectorization (using -vec-report3 and -guide-vec)
- Introduce halo regions, to improve cache behavior (and remove gathers)
With halos: int ip1 = i+1;
Data from i = 0 is replicated at i = Nx; all loads become contiguous.
10. Optimization and Modernization (2/3)
Optimizations:
- Improve auto-vectorization (using -vec-report3 and -guide-vec)
- Introduce halo regions, to improve cache behavior (and remove gathers)
- Swap division by constants for multiplication by pre-computed reciprocals
Before: P2[i][j][k][l] = … / (1-delta);   (one division per stencil point)
After:  P2[i][j][k][l] = … * i1mdelta;    (one division reused for all stencil points)
12. Optimization and Modernization (3/3)
Modernizations:
- Reduce memory footprint (2x) by removing redundant arrays
- Remove unnecessary 4D calculations and array lookups from 3D simulations
Before (4D): Lphi = P[i-1][j][k][l] + P[i+1][j][k][l] + P[i][j-1][k][l] + … - 8 * P[i][j][k][l];
After (3D):  Lphi = P[i][j-1][k][l] + P[i][j+1][k][l] + … - 6 * P[i][j][k][l];
Saves two reads/additions per stencil point.
13. Optimization and Modernization (3/3)
Modernizations:
- Reduce memory footprint (2x) by removing redundant arrays
- Remove unnecessary 4D calculations and array lookups from 3D simulations
- Combine three algorithmic stages into a single loop
Before: solve(t), timestep(), area(t+1)
After:  solve(t), area(t), timestep()
Allows for one pass through the data each timestep.
16. Future Work
- Exploration of other optimizations (e.g. cache blocking)
- Stream larger problems through coprocessors
- Work sharing between multiple processors and coprocessors
- Incorporate stencil template into other key codes
17. Conclusions
Modernizing code for parallelism works!
- Straightforward code changes -> dramatic performance impact
- Dual-tuning advantage -> single source
- Processor ~6x, coprocessor ~18x over baseline
- Future benefits -> ready to take advantage of increasing parallelism
Welcome to the Parallel Universe!
18. Intel® Xeon Phi™ Coprocessor Starter Kits
3120A or 5110P
software.intel.com/xeon-phi-starter-kit
Other brands and names are the property of their respective owners.
*Pricing and starter kit configurations will vary. See software.intel.com/xeon-phi-starter-kit and provider websites for full details and disclaimers. Stated currency is US Dollars.
19. References
[1] W.H. Press, B.S. Ryden and D.N. Spergel, "Dynamical Evolution of Domain Walls in an Expanding Universe", Astrophys. J. 347 (1989)
[2] A.M.M. Leite and C.J.A.P. Martins, "Scaling Properties of Domain Wall Networks", Physical Review D 84 (2011)
[3] A.M.M. Leite, C.J.A.P. Martins and E.P.S. Shellard, "Accurate Calibration of the Velocity-Dependent One-Scale Model for Domain Walls", Physics Letters B 718 (2013)
Editor's Notes
Background for the "Walls" code. The discovery of the Higgs particle at the LHC confirms that the Universe has a complicated underlying structure with broken symmetries. A more familiar type of broken symmetry is seen in a ferromagnet with its magnetised domains; the boundaries between these misaligned regions carry additional energy and are called domain walls. The same can happen in our Universe: Higgs-like fields in different regions become aligned in different directions, with domain walls separating them (or indeed other defects, such as cosmic strings). Walls have potentially important implications in the very early universe, as well as in the late universe today. The Planck satellite recently observed asymmetries (or anomalies) in the cosmic microwave sky; one possibility is that these are due to low-energy walls stretching across the Universe.
The Walls code creates random initial conditions on a 3D grid for a network of domain walls in the early universe. It then solves dynamical field equations to find out how they evolve, counting up the wall area. Typically walls 'scale' as the Universe expands, with a fixed number of them stretching across the observable universe at any one time. The largest defect simulations to date have been performed on COSMOS using the Walls code, which has been developed into a number of variants. For example, complex hybrid networks with up to 100 different types of walls and strings have been investigated, as well as wall evolution in four and even five spatial dimensions inspired by fundamental theory; this is only possible because of the large available memory on COSMOS. The scalable OpenMP Walls code has also been used to benchmark shared-memory machines on several occasions.
The Laplacian stencil is a standard 3D stencil. In 4D you have two extra additions (for l-1, l+1) and the 6 becomes an 8. This is a good example to refer to when discussing the "3D specialization" optimization; the original code relied on the periodic boundary condition to simplify this down to 3D (i.e. stencil = 6 useful directions + ijkl + ijkl - 8ijkl = 6 useful directions - 6ijkl).
The single-loop optimization depends on a rearrangement of Eq. 2: we can rewrite it to find n-1/2 from n and n-1. Also useful when discussing division hoisting: (1+delta) is a constant that we divide by every iteration.
Eq. 3 is weird-looking, but what the code basically does is look for a change of sign between two adjacent grid points in all dimensions (e.g. ijkl = +1, ijkl-1 = -1) to detect walls, then computes their area.
We use KMP_AFFINITY=compact,granularity=fine. This ensures the 4 threads on the same KNC core work on the same area of the stencil, and encourages cache re-use.
If somebody asks about the reason for 480^3, it's to fairly divide the same way on both pieces of hardware (i.e. 480 / 16 and 480 / 240 are both "good" decompositions). Other problem sizes would be unfair to the coprocessor. The coprocessor can't ever have an advantage, because 240 / 16 = 15.
Early work with #pragma omp collapse suggests we can actually get the same performance benefits for any problem size, by increasing task granularity. A few threads having one or two extra grid points each has much less impact on performance than a few threads having one or two extra tiles.
"Optimizations" are largely hardware- or compiler-driven; improving an algorithm's implementation on IA. "Modernizations" are code changes that are algorithmic; not specific to IA. Halo regions help because then reading "one to the left" or "one to the right" is a contiguous chunk. The original code used modulo division to determine the correct index (and then pulled the value from the other side of the array).
This is not the only thing we did to improve auto-vectorization, but it's a good example of where the tools helped. We had two % operations (for +1 and -1) in each of the four dimensions. We also had to change some variable types (the code tried to add doubles to a long) and help the compiler realise that the variable it thought had a dependency issue was actually being used for a reduction.
Again, not the only instance of this, just an example. We think we actually hoisted a few other divisions out.
Just an example (but an easy one to follow). There's some similar stuff in the area calculation.
This shows the speed-up for each version of the code running on the processor and coprocessor, relative to the original code running on the processor. A few interesting points: 1) auto-vectorization alone was sufficient to make the coprocessor beat the processor; 2) the performance difference between the two is pretty striking; 3) because of this investigation, a single coprocessor runs the code 8x faster than the dual-socket processor ran the baseline code, and 1.38x faster than the processor runs the final code.
None of the optimizations employed to date are really very complicated; they’re all either pragmas or simple code transformations. The code is readable and looks much like it did before.Future optimization work is likely to be a little more destructive, but we’re not interested in going to intrinsics or inline assembly.Bulk of future work is actually in moving away from a single coprocessor in order to run much larger problems.