This document surveys methods for virtualizing I/O in virtual machines, covering device emulation, paravirtual I/O (virtio/vhost), PCI passthrough, and SR-IOV. It explains the role of the VMM/hypervisor in managing I/O between VMs and physical devices using techniques such as Intel VT-d and Open vSwitch, discusses emerging standards for virtual switching such as Edge Virtual Bridging, and closes with performance measurements on virtualized HPC clusters.
11. KVM: Kernel-based Virtual Machine
• Implemented as a Linux kernel module, relying on VT-x rather than the ring aliasing used by Xen's paravirtualization.
• KVM virtualizes the CPU; QEMU provides device emulation, including the BIOS.
[Architecture diagram: QEMU runs as a user process (Ring 3) in VMX root mode, handling device emulation and memory management; the KVM module in the Linux kernel (Ring 0) drives VM Entry/VM Exit transitions through the VMCS; the guest OS kernel and its processes run in VMX non-root mode.]
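As an aside (not on the slide): before KVM can place a guest in VMX non-root mode, the host CPU must expose hardware virtualization and the KVM module must be loaded. A minimal check on a Linux host:

$ egrep -c 'vmx|svm' /proc/cpuinfo    (non-zero means VT-x/AMD-V is present)
$ lsmod | grep kvm                    (kvm_intel or kvm_amd should be listed)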
12. CPU scheduling: Xen vs. KVM
[Diagram: in Xen, each VM (Dom0 and DomU) has VCPUs that the Xen hypervisor's domain scheduler maps onto physical CPUs; in KVM, each VM is a QEMU process whose VCPUs are ordinary threads placed onto physical CPUs by the Linux process scheduler, while guest processes are scheduled by the guest OS inside the VM.]
13. Memory virtualization
[Diagram: natively, the OS maps virtual addresses (VA) to physical addresses (PA) through the MMU page tables referenced by CR3. Under virtualization there are two levels: the guest OS maps guest virtual addresses (GVA) to guest physical addresses (GPA), and the VMM maps GPA to host physical addresses (HPA).]
14. Address translation: PVM, HVM with shadow page tables, HVM with EPT
[Diagram: in a paravirtualized VM (PVM), the guest's page tables map GVA directly to HPA. In an HVM using shadow page tables (SPT), the guest maintains GVA-to-GPA tables while the VMM keeps shadow tables mapping GVA to HPA, and the hardware MMU (CR3) walks the shadow tables. In an HVM using EPT, the guest's tables map GVA to GPA and the EPT hardware maps GPA to HPA, keeping the VMM off the common path.]
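Whether a KVM host is actually using EPT rather than shadow page tables is exposed as a kvm_intel module parameter; a quick check, assuming the module is loaded:

$ cat /sys/module/kvm_intel/parameters/ept    (Y means hardware GPA-to-HPA translation is in use)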
20. Approaches to VM I/O
• Device emulation: QEMU emulates real NICs such as the ne2000, rtl8139, and e1000; all I/O passes through the VMM and its virtual switch (e.g., Open vSwitch).
• Paravirtual I/O: the guest runs a VMM-aware driver; examples are Xen's split driver model, virtio and its in-kernel backend vhost, and VMware's VMXNET3.
• Direct assignment (VMM-bypass I/O): the guest owns the hardware, either a whole device via PCI passthrough (made safe by VT-d) or a per-VM virtual function via SR-IOV.
A command-line sketch contrasting the first two approaches follows.
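The sketch below (disk image, memory size, and host networking details are placeholders) shows the qemu-kvm invocation of this era selecting an emulated NIC versus a paravirtual one:

$ qemu-system-x86_64 -m 1024 disk.img -net nic,model=e1000 -net tap     (full emulation)
$ qemu-system-x86_64 -m 1024 disk.img -net nic,model=virtio -net tap    (paravirtual virtio-net)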
21. VM I/O data paths compared
[Diagram: three configurations. I/O emulation: each VM's guest driver talks to the VMM, which switches traffic in a vSwitch and sends it out through the host's physical driver and NIC. PCI passthrough: each VM runs the physical driver and owns its NIC outright, bypassing the VMM, with an external switch connecting the NICs. SR-IOV: each VM's driver binds to its own virtual function on a shared NIC whose embedded switch (VEB) forwards between VFs and the wire.]
22. Edge Virtual Bridging (IEEE 802.1Qbg)
• Standardizes where traffic between VMs is switched.
[Diagram: (a) Software VEB: the VMM's vSwitch bridges the VNICs in software. (b) Hardware VEB: the NIC's embedded bridge switches between VNICs. (c) VEPA / VN-Tag: all VM traffic is hairpinned through the external switch. VEB: Virtual Ethernet Bridging; VEPA: Virtual Ethernet Port Aggregator.]
23. I/O emulation
• The guest OS runs an unmodified driver for an emulated device, so no guest changes are needed.
• Every device register access traps to the emulator, so VM Exits are frequent.
[Diagram: the guest's e1000 driver (VMX non-root mode) traps into QEMU's e1000 model (Ring 3, VMX root mode); packets are copied between the guest buffer and a tap device, then flow through the vSwitch and physical driver in the Linux kernel/KVM (Ring 0).]
24. virtio
• Far fewer VM Exits than full device emulation.
• virtio_ring: a shared-memory ring buffer that batches I/O requests between guest and host.
[Diagram: the guest's virtio_net driver (VMX non-root) exchanges buffers with QEMU's virtio_net backend (Ring 3, VMX root); packets are copied to a tap device and pass through the vSwitch and physical driver in the Linux kernel/KVM.]
25. vhost
• Moves the virtio backend into the kernel (vhost_net), removing the tap-to-QEMU copy.
• Pairs with macvlan/macvtap to shorten the host-side path further; a configuration sketch follows.
[Diagram: the guest's virtio_net driver talks to vhost_net inside the Linux kernel/KVM, which forwards buffers through macvtap/macvlan to the physical driver; QEMU (Ring 3) stays out of the data path.]
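A sketch of one way to wire this up, assuming eth0 is the physical NIC and all names are illustrative: create a macvtap device on eth0, then hand its character device to QEMU with vhost enabled.

# ip link add link eth0 name macvtap0 type macvtap mode bridge
# ip link set macvtap0 up
# IFINDEX=$(cat /sys/class/net/macvtap0/ifindex)
# qemu-system-x86_64 -m 1024 disk.img \
    -netdev tap,id=net0,fd=3,vhost=on \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
    3<>/dev/tap$IFINDEX

(macvtap0 is exposed as /dev/tap$IFINDEX; file descriptor 3 is opened on it and passed to -netdev.)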
26. Direct I/O: PCI passthrough / SR-IOV
• The guest driver programs the device directly, and device DMA reaches guest memory through VT-d remapping, so the VMM is off the data path.
• The VMM still intervenes in interrupt handling (e.g., EOI).
[Diagram: the guest OS kernel (VMX non-root) drives the physical hardware directly; DMA is translated by VT-d into the guest buffer; QEMU and the host's physical driver are bypassed.]
27. Direct I/O: interrupt and DMA flow
[Diagram: a device interrupt from the NIC forces a VM Exit into the VMM, which injects a virtual interrupt into the guest OS on VM Entry, with guest state held in the VMCS; device DMA is translated by the IOMMU. VMCS: Virtual Machine Control Structure.]
30. VT-d: DMA remapping (from the Intel VT-d specification)
• Every DMA request carries a source-id; for PCI Express devices this is the requester identifier from the transaction layer, composed of the Bus number (8 bits), Device number (5 bits), and Function number (3 bits) assigned by configuration software, which uniquely identifies the requesting hardware function.
• The remapping hardware indexes a 4 KB root-entry table (256 entries, one per PCI bus number) with the upper 8 bits of the source-id; each present root entry points to a context-entry table. If the root entry for a DMA request is not present, the request is blocked and a translation fault results.
• Each context-entry table holds 256 entries, one per device/function on that bus, mapping the device to its assigned domain and to that domain's address-translation structures.
• The translation structures are multi-level page tables (for example, a 3-level table for 4 KB mappings, or a 2-level table using 2 MB super pages) walked per DMA request, with the upper address bits validated to be zero; a request that hits a page-table entry lacking the needed Read or Write permission is blocked. An IOTLB caches translations.
[Figures 3-6 to 3-9 of the specification: the requester-id format, device-to-domain mapping via root- and context-entry tables, and example multi-level page-table walks.]
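Host-side, whether the DMA-remapping hardware described above was detected and enabled shows up in the kernel log; a quick check, assuming intel_iommu=on was passed at boot:

$ dmesg | grep -e DMAR -e IOMMU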
32. ELI: Exit-Less Interrupt
• "ELI: Bare-Metal Performance for I/O Virtualization", A. Gordon et al. (Technion, Israel Institute of Technology), ASPLOS 2012.
• Problem: even with device assignment, every physical interrupt forces a VM Exit so the host can deliver it through its IDT, and signaling completion from the guest exits again; this guest/host context switching dominates I/O-intensive workloads.
• ELI delivers the assigned device's physical interrupts directly to the guest through a shadow IDT, with non-assigned entries marked not-present so unrelated interrupts still exit to the host; interrupt completion is likewise handled in the guest, with x2APIC accesses controlled through the MSR bitmap.
• Result: netperf, Apache, and memcached reach 97-100% of bare-metal (BMM) performance.
[Figures 1 and 2 of the paper: exits during baseline interrupt handling versus the ELI delivery and completion flow.]
33. PCI-SIG I/O Virtualization
• I/O virtualization standards built on PCIe Gen2:
  – SR-IOV (Single Root I/O Virtualization): one device exposes multiple virtual functions to the VMs on a single host; NIC implementations are common.
  – MR-IOV (Multi Root I/O Virtualization): shares one device among multiple hosts; little hardware exists (NEC's ExpEther is a related approach).
• VMM support for SR-IOV is broad:
  – KVM, Xen, VMware, Hyper-V.
  – On Linux, device assignment is moving to VFIO.
34. SR-IOV NIC
• A single physical NIC presents itself to VMs as multiple virtual NICs.
  – Each vNIC corresponds to one VF (Virtual Function).
[Diagram: VM1 to VM3 each attach a vNIC to their own Virtual Function; inside the NIC, an L2 classifier/sorter demultiplexes RX/TX traffic between the VFs and the shared MAC/PHY.]
35. SR-IOV NIC: PF and VF
• Physical Function (PF)
  – The full-featured function; the VMM runs the PF driver to manage the device.
• Virtual Function (VF)
  – A lightweight function assigned to a VM; the guest OS runs a VF driver.
  – VFs are created and controlled through the PF (a sketch of creating VFs follows).
  – The Intel 82576 provides 8 VFs; the SR-IOV specification allows up to 256 functions per device.
[Diagram: the guest sees its VF's configuration space as a virtual NIC, while the VMM's PF driver owns the PF configuration space and the remaining VFs of the physical NIC.]
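A hedged sketch of instantiating VFs on an igb-driven 82576 (interface names are illustrative): drivers of this era took a max_vfs module parameter, while later kernels expose a sysfs knob.

# modprobe igb max_vfs=2
# lspci | grep -i "virtual function"
# echo 2 > /sys/class/net/eth0/device/sriov_numvfs    (newer sysfs interface, kernel 3.8+)

Each VF then appears as an ordinary PCI function that can be assigned to a VM.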
36. Connecting VMs to the host network
1. Bridge + tap: each VM's tap device attaches to a software bridge (the Linux bridge or Open vSwitch) on top of the physical NIC.
2. macvlan/macvtap: a MAC-address-demultiplexing device stacked directly on the physical NIC backs each tap, with no bridge in the path.
[Diagram: in setup 1, tap0 and tap1 connect the VMs' eth0 to a bridge over the host's eth0; in setup 2, tap0 and tap1 are backed by macvlan0 and macvlan1 bound directly to eth0.]
37. Open vSwitch
• A software switch for Linux virtualization environments.
• OvS highlights:
  – OpenFlow support.
  – Merged into Linux kernel 3.3.
  – Also embedded in hardware switches such as Pica8's Pronto.
http://openvswitch.org/
38. Open vSwitch example: per-VM VLANs
• The guest OS needs no VLAN configuration.
• Each VM's port carries one VLAN ID, tagged at the switch:
# ovs-vsctl add-br br0
# ovs-vsctl add-port br0 tap0 tag=101
# ovs-vsctl add-port br0 tap1 tag=102
# ovs-vsctl add-port br0 eth0
[Diagram: VM1 and VM2 attach to vSwitch br0 through tap0 (VLAN ID 101) and tap1 (VLAN ID 102); eth0 carries the tagged traffic. The Linux-bridge equivalent of the VLAN path would be tap0 <-> br0_101 <-> eth0.101.]
44. Attaching QEMU/KVM tap devices to Open vSwitch
$ cat /etc/ovs-ifup
#!/bin/sh
switch='br0'
/sbin/ip link set mtu 9000 dev $1 up
/opt/bin/ovs-vsctl add-port ${switch} $1

$ cat /etc/ovs-ifdown
#!/bin/sh
switch='br0'
/sbin/ip link set $1 down
/opt/bin/ovs-vsctl del-port ${switch} $1

The tap device QEMU/KVM creates is attached with ovs-vsctl where a Linux bridge setup would use brctl.
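For reference, this is how such scripts are typically passed to QEMU/KVM (image name and MAC address are placeholders); QEMU runs each script with the tap device name as $1:

$ qemu-system-x86_64 -m 1024 disk.img \
    -net nic,macaddr=52:54:00:12:34:56 \
    -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown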
45. PCI passthrough setup
1. Enable Intel VT and VT-d in the BIOS.
2. Enable VT-d in Linux: boot with intel_iommu=on.
3. Detach the PCI device from its host driver.
4. Assign the device to the guest.
5. The guest OS drives the device with its native driver.
Steps 3-5 are sketched below; for details see "How to assign devices with VT-d in KVM,"
http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM
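A sketch of steps 3-5 for a device at PCI address 01:00.0 with vendor:device ID 8086:10b9 (both values are placeholders): bind the device to pci-stub so the host driver releases it, then assign it to the guest.

# echo "8086 10b9" > /sys/bus/pci/drivers/pci-stub/new_id
# echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# echo 0000:01:00.0 > /sys/bus/pci/drivers/pci-stub/bind
# qemu-system-x86_64 -m 1024 disk.img -device pci-assign,host=01:00.0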
50. SR-IOV: per-VF transmit rate control
• Transmit rate caps are set per VF on the host NIC.
• No cooperation from the guest OS is required.
# ip link set dev eth5 vf 0 rate 200
# ip link set dev eth5 vf 1 rate 400
# ip link show dev eth5
42: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 00:1b:21:81:55:3e brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:16:3e:1d:ee:01, tx rate 200 (Mbps), spoof checking on
    vf 1 MAC 00:16:3e:1d:ee:02, tx rate 400 (Mbps), spoof checking on
51. SR-IOV TIPS
• Setting a VF's MAC address:
# ip link set dev eth5 vf 0 mac 00:16:3e:1d:ee:01
• Setting a VF's VLAN ID:
# ip link set dev eth5 vf 0 vlan 101
• SR-IOV-capable Intel NICs include the 82576 (GbE) and the 82599 and X540 (10GbE).
  – http://www.intel.com/content/www/us/en/ethernet-controllers/ethernet-controllers.html
52. Live migration of a VM with an SR-IOV VF
• A VM with a passthrough device attached cannot be live-migrated as-is.
• Workaround: PCI hotplug plus bonding.
  – Inside the guest, bond a virtio NIC and the VF NIC in active-standby so they appear as one logical NIC (a configuration sketch follows).
  – Hot-remove the VF before migrating; traffic fails over to the virtio (PV) path, and the VF can be re-attached afterwards.
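Inside the guest, the active-standby bond might be configured like this (addresses are illustrative; eth0 is the virtio NIC and eth1 the VF, matching the slides that follow):

# modprobe bonding mode=active-backup miimon=100 primary=eth1
# ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
# ifenslave bond0 eth0 eth1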
53. SR-IOV migration, step 1: initial state
[Diagram: the guest OS bonds eth0 (virtio) and eth1 (igbvf) into bond0. On the source host, tap0 attaches to bridge br0 over eth0 (igb) on an SR-IOV NIC; the destination host has the same br0 / eth0 (igb) / SR-IOV NIC stack waiting.]
54. SR-IOV migration, step 2: detach the VF
The VF is hot-removed through the QEMU monitor, leaving the virtio path carrying traffic:
(qemu) device_del vf0
[Diagram: as in step 1, but eth1 (igbvf) is gone from the guest; bond0 now runs over eth0 (virtio) alone.]
55. SR-IOV migration, step 3: migrate
(qemu) migrate -d tcp:x.x.x.x:y
On the destination host, QEMU is started in listening mode beforehand:
$ qemu -incoming tcp:0:y ...
[Diagram: the guest, running on eth0 (virtio) under bond0, is live-migrated from the source host to the destination host; both hosts bridge tap0 into br0 over eth0 (igb) on SR-IOV NICs.]
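Once migration completes, the VF would be hot-plugged back on the destination from the QEMU monitor, after which the bond can fail back to it; a sketch with a placeholder VF address:

(qemu) device_add pci-assign,host=01:10.0,id=vf0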
64. Experimental setup: AIST Green Cloud (AGC)
A 16-node virtualized HPC cluster; one VM runs per node.
Compute node: Dell PowerEdge M610
  CPU: Intel quad-core Xeon E5540/2.53GHz x2
  Chipset: Intel 5520
  Memory: 48 GB DDR3
  InfiniBand: Mellanox ConnectX (MT26428)
Blade switch:
  InfiniBand: Mellanox M3601Q (QDR, 16 ports)
Host machine environment:
  OS: Debian 6.0.1
  Linux kernel: 2.6.32-5-amd64
  KVM: 0.12.50
  Compiler: gcc/gfortran 4.4.5
  MPI: Open MPI 1.4.2
VM environment:
  VCPU: 8
  Memory: 45 GB
65. MPI point-to-point bandwidth (qperf, higher is better)
[Plot: bandwidth in MB/sec versus message size from 1 byte to 1 GB for Bare Metal and KVM with PCI passthrough. Peak bandwidth is 3.2 GB/s on bare metal versus 2.4 GB/s on KVM.]
66. NPB BT-MZ (higher is better)
[Plot: performance in Gop/s (total) and parallel efficiency (PE) in % versus number of nodes (1 to 16) for Bare Metal, KVM, and Amazon EC2 Cluster Compute Instances (CCI). Degradation of PE: KVM 2%, EC2 CCI 14%.]
67. Bloss: parallel efficiency
[Diagram: Bloss is a hybrid MPI + OpenMP application with coarse-grained MPI communication; rank 0 broadcasts 760 MB to the N ranks, the linear solver (requiring 10 GB of memory) is followed by a 1 GB reduce, a 1 GB broadcast feeds the eigenvector calculation, and a gather collects 350 MB.]
[Plot: parallel efficiency in % (higher is better) versus number of nodes (1 to 16) for Bare Metal, KVM, Amazon EC2, and the ideal case. Degradation of PE: KVM 8%, EC2 CCI 22%.]
68. VMware ESXi storage I/O testbed
• Dell PowerEdge T410
  – CPU: Intel hexa-core Xeon X5650, single socket
  – Memory: 6 GB DDR3-1333
  – HBA: QLogic QLE2460 (single-port 4 Gbps Fibre Channel)
• Storage: IBM DS3400 FC SAN
• VMM: VMware ESXi 5.0 (the T410 connects to the DS3400 over Fibre Channel; management runs out-of-band over Ethernet)
• Guest OS: Windows Server 2008 R2
• VM configuration:
  – 8 vCPUs
  – 3840 MB of memory
• Benchmark: IOMeter 2006.07.27 (http://www.iometer.org/)
69. Storage access paths compared
[Diagram: three I/O stacks. Bare Metal Machine (BMM): Windows with NTFS, volume manager, disk class driver, and Storport/FC HBA driver directly on the LUN. Raw Device Mapping (RDM): the same guest stack ends in a Storport/SCSI driver, with the FC HBA driver in the VMkernel beneath it. VMDirectPath I/O (FPT): the guest runs the Storport/FC HBA driver and reaches the LUN directly, with the VMkernel only passing the device through.]