Martin Děcký presented on microkernels in the era of data-centric computing. He discussed how emerging memory technologies and near-data processing can break away from the von Neumann architecture. Near-data processing provides benefits like reduced latency, increased throughput, and lower energy consumption. This leads to more distributed and heterogeneous systems that are well-matched to multi-microkernel architectures, with microkernels providing isolation and message passing between cores. The talk outlined an incremental approach from initial workload offloading to developing a full multi-microkernel system.
Hardware/Software Co-Design for Efficient Microkernel Execution (Martin Děcký)
While the performance overhead of IPC in microkernel multiserver operating systems is no longer considered a blocker for their practical deployment (thanks to the many optimization ideas that have been proposed and implemented over the years), the overhead still exists, and the more fine-grained the architecture of the operating system is (which is desirable from the reliability, dependability, safety and security point of view), the more severe the performance penalties it suffers due to IPC. A closely related issue is the overhead of handling hardware interrupts in user space device drivers. This talk discusses some specific hardware/software co-design ideas to improve the performance of microkernel multiserver operating systems.
One reason for the IPC overhead is that current hardware and CPUs were never designed with microkernel multiserver operating systems in mind; rather, they were fitted to traditional monolithic operating systems. This calls for out-of-the-box thinking when designing instruction set architecture (ISA) extensions and other hardware features that would support (a) efficient communication between isolated virtual address spaces using synchronous and asynchronous IPC primitives, and (b) treating object references (e.g. capability references) as first-class entities at the hardware level. A good testbed for evaluating such approaches (with the potential to be eventually adopted as an industry standard) is the still unspecified RV128 ISA (the 128-bit variant of RISC-V).
Lessons Learned from Porting HelenOS to RISC-V (Martin Děcký)
HelenOS is an open source operating system based on the microkernel multiserver design principles. One of its goals is to provide excellent target platform portability. HelenOS supported four different hardware platforms from the time of its inception, and it currently supports platforms as diverse as x86, SPARCv9 and ARM. This talk presents practical experiences and lessons learned from porting HelenOS to RISC-V.
While the unprivileged (user space) instruction set architecture of RISC-V was declared stable in 2014, the privileged instruction set architecture is technically still allowed to change in the future. Likewise, many major design features and building blocks of HelenOS are already in place, but no official commitment to ABI or API stability has been made yet. This gives an interesting perspective on the pros and cons of both HelenOS and RISC-V. The talk also points to some possible research directions with respect to hardware/software co-design.
LO-PHI: Low-Observable Physical Host Instrumentation for Malware Analysis (Pietro De Nicolao)
Presentation of the paper "LO-PHI: Low-Observable Physical Host Instrumentation for Malware Analysis" for the Advanced Topics in Computer Security course of Prof. Stefano Zanero.
Source and further information: https://github.com/pietrodn/lo-phi
Hardware Implementation of Algorithm for Cryptanalysis (ijcisjournal)
Cryptanalysis of block ciphers involves massive computations which are independent of each other and can be instantiated simultaneously so that the solution space is explored at a faster rate. With the advent of low-cost Field Programmable Gate Arrays (FPGAs), building special-purpose hardware for computationally intensive applications has become possible. Here the Data Encryption Standard (DES) is used as a proof of concept. This paper presents a design for the hardware implementation of DES cryptanalysis on an FPGA using exhaustive key search. Two architectures, rolled and unrolled DES, are compared, and based on experimental results the rolled architecture is implemented on the FPGA. The aim of this work is to make cryptanalysis faster and better.
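The exhaustive key search described above can be sketched in software. This is a minimal illustrative example, not the paper's FPGA design: a single-byte repeating-XOR "cipher" stands in for DES, and the key space is 256 keys rather than 2**56. The point it shows is that each candidate key trial is fully independent, which is exactly what makes the search easy to replicate across parallel FPGA cores.

```python
# Hedged sketch: known-plaintext exhaustive key search against a
# toy XOR cipher standing in for DES (illustrative only).

def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    """Single-byte repeating-XOR 'cipher' (not DES)."""
    return bytes(b ^ key for b in plaintext)

def exhaustive_search(plaintext: bytes, ciphertext: bytes):
    """Try every key; each trial is independent, hence parallelizable."""
    for key in range(256):  # DES would iterate over 2**56 keys
        if toy_encrypt(plaintext, key) == ciphertext:
            return key
    return None

known_pt = b"known plaintext"
ct = toy_encrypt(known_pt, 0x5A)
print(exhaustive_search(known_pt, ct))  # -> 90 (0x5A)
```

In the rolled FPGA architecture of the paper, the inner trial would be a DES core iterated over its rounds; the unrolled variant pipelines all rounds in space instead.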
An Efficient PDP Scheme for Distributed Cloud Storage (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
NEW ALGORITHM FOR WIRELESS NETWORK COMMUNICATION SECURITY (ijcisjournal)
This paper evaluates the security of wireless communication networks based on fuzzy logic in MATLAB. A new hybrid algorithm is proposed and evaluated. We highlight the valuable assets in designing a wireless network communication system based on the network simulator NS2, which is crucial for protecting the security of such systems. Block cipher algorithms are evaluated using fuzzy logic, and a hybrid algorithm is proposed. Both algorithms are evaluated in terms of security level. Logical AND is used in the modelling rules, and the Mamdani style is used for the evaluations.
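The Mamdani-style AND evaluation mentioned above can be sketched briefly. This is a minimal illustration of the general technique only: the membership functions, inputs (normalized key length and round count), and rule set below are invented for the example and are not the paper's actual model.

```python
# Hedged sketch of a Mamdani-type fuzzy evaluation: AND is the min
# of memberships, and the output is defuzzified by weighted average.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def security_score(key_length_norm, rounds_norm):
    # Rule 1: IF key_length is high AND rounds is high THEN secure (1.0)
    w1 = min(tri(key_length_norm, 0.5, 1.0, 1.5),
             tri(rounds_norm,     0.5, 1.0, 1.5))
    # Rule 2: IF key_length is low AND rounds is low THEN weak (0.0)
    w2 = min(tri(key_length_norm, -0.5, 0.0, 0.5),
             tri(rounds_norm,     -0.5, 0.0, 0.5))
    if w1 + w2 == 0:
        return 0.5  # no rule fires: neutral score
    return (w1 * 1.0 + w2 * 0.0) / (w1 + w2)  # weighted-average defuzzification

print(security_score(0.9, 0.8))  # -> 1.0 (only the "secure" rule fires)
print(security_score(0.1, 0.1))  # -> 0.0 (only the "weak" rule fires)
```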
This talk is about the architecture and programming of the Epiphany processor on the Parallella board, discussing step by step how to improve and optimize software kernels on such distributed DSP systems. It was held at the "Softwarekonferenz für Parallel Programming, Concurrency und Multicore-Systeme" in Karlsruhe, Germany, in 2014.
Reactive Data Centric Architectures with Vortex, Spark and ReactiveX (Angelo Corsaro)
An increasing number of software architects are realising that data is the most important asset of a system and are starting to embrace the Data-Centric revolution (datacentricmanifesto.org), setting data at the center of their architecture and modelling applications as “visitors” to the data. At the same time, architects have also realised that reactive architectures (reactivemanifesto.org) facilitate the design of scalable, fault-tolerant and high-performance systems.
Yet few architects have realised that reactive and data-centric architectures are two sides of the same coin and should always go hand in hand.
This presentation shows how reactive data-centric systems can be designed and built taking advantage of Vortex data sharing capabilities along with its integration with reactive and data-centric processing technologies such as Apache Spark and ReactiveX.
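The combination of the two ideas can be sketched in a few lines: applications subscribe to a shared data topic rather than to each other (data-centric), and derived streams react to published samples (reactive). This is a pure-Python illustration; neither the Vortex/DDS API nor RxPY is used, and the `Topic`, `rx_map`, and `rx_filter` names are invented for the sketch.

```python
# Hedged sketch: a minimal ReactiveX-style pipeline over a
# data-centric "topic" hub, in plain Python.

class Topic:
    """Data-centric hub: applications couple to data, not to each other."""
    def __init__(self):
        self._subscribers = []
    def subscribe(self, fn):
        self._subscribers.append(fn)
    def publish(self, sample):
        for fn in self._subscribers:
            fn(sample)

def rx_map(topic, fn):
    out = Topic()
    topic.subscribe(lambda s: out.publish(fn(s)))  # transform each sample
    return out

def rx_filter(topic, pred):
    out = Topic()
    # forward the sample only when the predicate holds
    topic.subscribe(lambda s: out.publish(s) if pred(s) else None)
    return out

temps = Topic()
alerts = rx_filter(rx_map(temps, lambda c: c * 9 / 5 + 32),  # Celsius -> Fahrenheit
                   lambda f: f > 100)                        # hot readings only
received = []
alerts.subscribe(received.append)
for c in (20, 45, 60):
    temps.publish(c)
print(received)  # -> [113.0, 140.0]
```

The design point is that the publisher never knows who consumes the data; new reactive pipelines can be attached to the topic without touching existing code.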
Final Year Project Synopsis: Post Quantum Encryption using Neural Networks (JPC Hanson)
A synopsis of my final year project at Brunel University exploring the possibilities of using Neural Networks as a method of encryption immune to Shor's algorithm. i.e. a secure, 'post quantum' alternative to the NTRU algorithms.
Edge computing brings cloud services closer to the edge of the network, where data originates, and dramatically reduces the network latency of the cloud. It is a bridge linking clouds and users, forming the foundation for novel interconnected applications. However, edge computing still faces many challenges, such as remote configuration, a well-defined native application model, and limited node capacity. It lacks geo-organization and a clear separation of concerns. As such, edge computing is hard to offer as a service for future real-time user-centric applications. This paper presents the dynamic organization of geo-distributed edge nodes into micro data-centers to cover any arbitrary area and expand capacity, availability, and reliability. A cloud organization is used as an influence, with adaptations for a different environment, and a model for edge applications utilizing these adaptations is presented. It is argued that the presented model can be integrated into existing solutions or used as a base for the development of future systems. Furthermore, a clear separation of concerns is given for the proposed model. With the separation of concerns, the edge-native application model, and a unified node organization in place, we move towards the idea of edge computing as a service, like any other utility in cloud computing.
PERFORMANCE EVALUATION OF PARALLEL INTERNATIONAL DATA ENCRYPTION ALGORITHM ON... (IJNSA Journal)
Distributed security is an evolving sub-domain of information and network security. Security applications play a serious role in data exchange, where different volumes of data must be transferred from one site to another safely and at high speed. In this paper, the parallel International Data Encryption Algorithm (IDEA), one such security application, is implemented and evaluated in terms of running time, speedup, and efficiency. The parallel IDEA has been implemented using the message passing interface (MPI) library, and the results have been obtained on the IMAN1 supercomputer, where a set of simulation runs was carried out on different data sizes to determine the best number of processors for manipulating each data size and to build intuition about the number of processors to use as the data size increases. The experimental results show good performance, reducing the running time and increasing the speedup of the encryption and decryption processes for parallel IDEA when the number of processors ranges from 2 to 8, with achieved efficiency of 97% to 83%, respectively.
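The metrics used in the abstract above have standard definitions: speedup S = T1/Tp and efficiency E = S/p for p processors. A minimal worked example, with made-up timings chosen only to illustrate the 97%-to-83% efficiency range reported (these are not the IMAN1 measurements):

```python
# Hedged sketch: speedup and parallel efficiency from timings.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel          # S = T1 / Tp

def efficiency(t_serial, t_parallel, p):
    return speedup(t_serial, t_parallel) / p  # E = S / p

t1 = 100.0  # serial encryption time (arbitrary illustrative units)
for p, tp in [(2, 51.5), (4, 27.0), (8, 15.1)]:
    print(p, round(speedup(t1, tp), 2), round(efficiency(t1, tp, p), 2))
# p=2: S=1.94, E=0.97   p=8: S=6.62, E=0.83
```

Efficiency typically drops as processors are added because communication (here, MPI message passing) grows relative to useful computation.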
Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage (1 Crore Projects)
Privacy Protection Domain-User Integra Tag Deduplication in Cloud Data Server (IJECEIAES)
Cloud storage with strong storage management has recently developed in the big data world; it can confirm data integrity while keeping just a single copy of each piece of data. Many cloud auditing storage techniques have been developed to address the data deduplication (DD) problem, but they are vulnerable and cannot resist brute force attacks (BFA). Some privacy leakage problems also occur in present methods. In this article, an original strategy called domain-user integra tag (DUIT) is presented, which comprises inter- and intra-deduplication with a file tag and a symmetric encryption key. The DUIT has two phases: the first is random tag generation for intra-deduplication, and the other is random ciphertext (CT) generation for encryption. The benefit of the DUIT is that the contents of individual users' files are not revealed to the general public; hence we prove that the DUIT is protected against BFA. Finally, experiments were conducted on a Linux machine using a C implementation. The results demonstrate that our method reduces the computation cost (CC) by 27% and 35% and the searching complexity (SC) by 10% and 26% compared with the previous methods. We conclude that the DUIT achieves low CC and SC.
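The general mechanism underlying tag-based deduplication can be sketched briefly. This is an illustration of the baseline idea only: the server keeps one copy per content tag and skips storage when the tag is already known. The DUIT scheme's actual randomized tag and ciphertext generation (which is what resists BFA) is more involved; the plain SHA-256 tag below is an assumption for the sketch, not the paper's construction.

```python
# Hedged sketch: content-tag deduplication on the storage server.

import hashlib

class DedupStore:
    def __init__(self):
        self._blobs = {}  # tag -> ciphertext (a single stored copy per tag)

    def upload(self, ciphertext: bytes) -> str:
        tag = hashlib.sha256(ciphertext).hexdigest()  # deterministic tag (toy)
        if tag in self._blobs:
            return "dedup"        # duplicate: nothing new is stored
        self._blobs[tag] = ciphertext
        return "stored"

store = DedupStore()
print(store.upload(b"encrypted file A"))  # -> stored
print(store.upload(b"encrypted file A"))  # -> dedup
print(store.upload(b"encrypted file B"))  # -> stored
```

Note that a deterministic tag like this leaks equality of files, which is exactly the kind of privacy leakage the paper's randomized tags are designed to avoid.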
A New Direction for Computer Architecture Research (dbpublications)
In this paper we suggest a different computing environment as a worthy new direction for computer architecture research: personal mobile computing, where portable devices are used for visual computing and personal communications tasks. Such a device supports in an integrated fashion all the functions provided today by a portable computer, a cellular phone, a digital camera and a video game console. The requirements placed on the processor in this environment are energy efficiency, high performance for multimedia and DSP functions, and area-efficient, scalable designs. We examine the architectures that were recently proposed for billion-transistor microprocessors. While they are very promising for stationary desktop and server workloads, we discover that most of them are unable to meet the challenges of the new environment and provide the necessary enhancements for multimedia applications running on portable devices.
In our recap of the final day of International Supercomputing 2016, we explore how OpenPOWER members are working in the HPC industry and continue the conversation around cognitive computing, deep learning, and machine learning.
This slide deck is used as an introduction to the MapReduce programming model, trying hard to be Hadoop-agnostic, as part of the Distributed Systems and Cloud Computing course I hold at Eurecom.
Course website:
http://michiard.github.io/DISC-CLOUD-COURSE/
Sources available here:
https://github.com/michiard/DISC-CLOUD-COURSE
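The programming model introduced in the deck above can be summarized by the canonical word-count example: a map phase emits (key, value) pairs, a shuffle groups values by key, and a reduce phase folds each group. This is a framework-agnostic, in-process sketch (no Hadoop involved), matching the deck's Hadoop-agnostic spirit:

```python
# Hedged sketch: the MapReduce model as three pure functions.

from collections import defaultdict

def map_phase(doc):
    for word in doc.split():
        yield (word, 1)                  # emit (word, 1) per occurrence

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)        # group all values by key
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["to be or not to be", "to do"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(shuffle(pairs)))
# -> {'to': 3, 'be': 2, 'or': 1, 'not': 1, 'do': 1}
```

In a real framework the map and reduce calls run on different machines and the shuffle moves data over the network; the functional shape, however, is exactly this.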
Big Data in a neurophysiology research lab… what? (J On The Beach)
Big Data in a neurophysiology research lab… what? by Max Novelli
At RNEL, we have been working hard to lay the foundation to better manage our data and be able to integrate big data and AI technologies into our data management and analysis pipelines. These needs have arisen from the very size of the experimental data that push the limits: they are simply becoming unmanageable even on powerful workstations. We also determined that better query methodologies, validation and visualization tools are needed.
Our long term goal is finding the answer to the following question: Will we ever be able to go from experimental raw data to query curated data with a simple SQL-like language without spending humongous resources and manpower, while using a process that is organic, intuitive and flexible? Can we also leverage modern big data technologies and data science to achieve our goal?
This presentation is the story of an inter-disciplinary journey that started approximately 5 years ago. The journey enabled us to build a deeper knowledge of our data, a better system of management methodologies, as well as tools that allow us to query and aggregate across various datasets and easily improve such functionalities.
In this presentation, we will provide a general background of the work that we do in our lab. First, we will provide some examples of experiments that we conduct as a context in which to explain the data that are acquired and the challenge that comes with them. Next, we will outline some of the questions that researchers asked (and keep asking) when they attempt to work with large data structures to answer their own scientific questions without having to be bogged down by the technologies used and the original format of the data. Finally, we will raise some questions related to data management, which will help to improve validation and reduce the manpower necessary to curate the data. From the big picture, we will walk through the decisions and requirements that came out of our brainstorm sessions and show how far along we currently are in our journey and the path we took to get here. We will conclude by highlighting some of the amazing results that we were able to achieve, such as activation maps and central nervous system stimulation counts.
Many of today’s most challenging computer science problems—such as those involving very large data structures, unstructured data, random access or real-time performance requirements—require highly parallel solutions. The current implementation of parallelism can be cumbersome and complex, challenging the capabilities of traditional CPU and memory system architectures and often requiring significant effort on the part of programmers and system designers.
For the past seven years, Micron Technology has been developing a hardware co-processor technology that can directly implement large-scale Non-deterministic Finite Automata (NFA) for efficient parallel execution. This new non-Von Neumann processor, currently in fabrication, borrows from the architecture of memory systems to achieve massive data parallelism, addressing complex problems in an efficient, manageable method.
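The core operation such an automata processor accelerates can be sketched in software: an NFA is tracked as a *set* of simultaneously active states, all of which advance in parallel on each input symbol. The example automaton below (matching strings over {a, b} that contain "ab") is an invented illustration, not Micron's architecture.

```python
# Hedged sketch: set-based NFA simulation -- every active state
# steps at once, which is the data parallelism a memory-style
# automata processor implements directly in hardware.

# transitions[state][symbol] -> set of possible next states
transitions = {
    0: {"a": {0, 1}, "b": {0}},   # loop in 0; nondeterministically guess "ab" starts
    1: {"b": {2}},                # after 'a', a 'b' completes the match
    2: {"a": {2}, "b": {2}},      # accepting sink state
}
ACCEPT = {2}

def nfa_accepts(text: str) -> bool:
    active = {0}
    for symbol in text:
        # all active states advance simultaneously
        active = set().union(*(transitions[s].get(symbol, set()) for s in active))
    return bool(active & ACCEPT)

print(nfa_accepts("aab"))  # -> True  (contains "ab")
print(nfa_accepts("ba"))   # -> False
```

A CPU must iterate over the active set; the hardware evaluates every state-transition row in the same cycle, which is why large NFAs run at line rate.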
On September 17, the Wall Street Technology Association (WSTA) hosted a seminar for financial IT professionals entitled Delivering Big Data. As part of that event, Micron's Dan Skinner delivered an introduction to this revolutionary new technology and its growing ecosystem, as well as potential applications in the areas of computational finance and data analytics.
Many HPC applications are massively parallel and can benefit from the spatial parallelism offered by reconfigurable logic. While modern memory technologies can offer high bandwidth, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. Addressing these challenges requires combining compiler optimizations, high-level synthesis, and hardware design.
In this talk, I will present challenges, solutions, and trends for generating massively parallel accelerators on FPGA for high-performance computing. These architectures can provide performance comparable to software implementations on high-end processors, and much higher energy efficiency thanks to logic customization.
Abstract:
Following the state of the art is paramount for sound and impactful scientific practice and for informed strategic R&D decisions. This seminar makes two main contributions: (i) a 10,000-foot view of 10 selected hot topics in networking, and (ii) an overview of recent practices in scientific events (e.g. digitalization, revisited submission deadlines, open science, artifact review badging).
The 10 selected hot topics are as follows:
Intent-Based Networking (IBN)
Zero-Touch Management (ZTM)
Digital Twins (Networking for Digital Twins & Network Digital Twins)
Metaverse
Blockchain Networking
AI/ML (Network protocols meet AI/ML, Machine Learning for Networking)
High precision networking
Quantum Communications & Computing
6G (Beyond 5G)
OpenRAN
Data Engineer's Lunch #85: Designing a Modern Data StackAnant Corporation
What are the design considerations that go into architecting a modern data warehouse? This presentation will cover some of the requirements analysis, design decisions, and execution challenges of building a modern data lake/data warehouse.
Similar to Microkernels in the Era of Data-Centric Computing (20)
RISC-V is the most recent attempt (originally from UC Berkeley) to design a brand new instruction set architecture based on the reduced instruction set computing (RISC) principles. One of its goals is to be completely open and free (both as in free beer and as in free speech) for designers, users and manufacturers. HelenOS is an open source operating system designed and implemented from scratch based on the microkernel multiserver design principles. One of its goals is to provide excellent target platform portability and it currently supports 8 different hardware platforms.
Both projects are still in the process of maturing: While the unprivileged (user space) instruction set architecture of RISC-V has been declared stable in 2014, the privileged instruction set architecture is still in a stage of draft and is allowed to change in the future. Likewise, many major design features and building blocks of HelenOS are already in place, but no official commitment to ABI or API stability has been made yet.
This talk introduces both projects, presents the initial lessons learned from porting HelenOS to RISC-V and evaluates the portability of HelenOS on yet another porting effort.
What Could Microkernels Learn from Monolithic Kernels (and Vice Versa)Martin Děcký
Some developers of both microkernel and monolithic operating systems view the design of their system as absolutely superior to the other design. This black-white thinking and "holy war" attitude, while understandable to a certain degree, makes it hard to to acknowledge that one size does not necessarily fit all. Rather than striving for an unreachable goal of creating the best operating system design for all possible use cases it is vital to understand and reflect the trade-offs of the use cases at hand. This talk focuses on a few features and properties of the current monolithic operating systems that could be an inspiration for the current microkernel operating systems and vice versa. The talk should also initiate a discussion about some "non-goals" of microkernel operating systems that are nevertheless sometimes presented as goals of microkernel operating systems, to the detriment of its own cause.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
1. Microkernels in the Era of Data-Centric Computing
Martin Děcký
martin.decky@huawei.com
February 2018
2. Martin Děcký, FOSDEM 2018, February 3rd
2018 Microkernels in the Era of Data-Centric Computing 2
Who Am I
Passionate programmer and operating systems enthusiast
With a specific inclination towards multiserver microkernels
HelenOS developer since 2004
Research Scientist since 2006
Charles University (Prague), Distributed Systems Research Group
Senior Research Engineer since 2017
Huawei Technologies (Munich), Central Software Institute
3. Motivation
4. Memory Barrier
5. LaMarca, Ladner (1996)
Quick Sort: O(n×log n) operations
Radix Sort: O(n) operations
[Chart: instructions per item vs. thousands of items sorted (4 to 4096), Quick Sort vs. Radix Sort]
6. LaMarca, Ladner (1996)
Quick Sort: O(n×log n) operations
Radix Sort: O(n) operations
[Chart: clock cycles per item vs. thousands of items sorted (4 to 4096), Quick Sort vs. Radix Sort]
7. LaMarca, Ladner (1996)
Quick Sort: O(n×log n) operations
Radix Sort: O(n) operations
[Chart: cache misses per item vs. thousands of items sorted (4 to 4096), Quick Sort vs. Radix Sort]
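The point LaMarca and Ladner's charts make is that instruction count no longer predicts running time: radix sort executes asymptotically fewer operations, yet its cache behavior erodes much of that advantage. For reference, here is a minimal LSD radix sort in Python (an illustrative sketch of the algorithm being benchmarked, not the authors' code) showing the linear pass structure:

```python
# Minimal LSD radix sort over non-negative integers: a constant number
# of counting passes, each touching every key exactly once. Contrast
# quicksort's O(n log n) comparisons but more cache-friendly, mostly
# sequential access pattern.

def radix_sort(keys, radix=256):
    keys = list(keys)
    if not keys:
        return keys
    shift = 0
    while max(keys) >> shift:               # one pass per 8-bit digit
        buckets = [[] for _ in range(radix)]
        for k in keys:                      # linear scan of all keys
            buckets[(k >> shift) % radix].append(k)
        keys = [k for b in buckets for k in b]
        shift += 8
    return keys

data = [170, 45, 75, 90, 802, 24, 2, 66]
result = radix_sort(data)
```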
8. The Myth of RAM
Accessing a random memory location requires O(1) operations
Accessing a random memory location takes O(1) time units
10. The Myth of RAM
Accessing a random memory location requires O(1) operations
Accessing a random memory location takes O(√n) time units
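A sketch of where the O(√n) figure comes from (the argument popularized by the "Myth of RAM" essay series; the slide states only the result):

```latex
% Bounded information density: storing $N$ bits requires an area
% $A \propto N$ in any (roughly planar) layout -- a chip, a rack, a
% datacenter floor. The typical distance from the CPU to a random bit
% then grows as
%   d \sim \sqrt{A} \propto \sqrt{N},
% and with a finite signal speed $c$ the access latency is bounded by
%   T(N) \gtrsim d / c = \Theta(\sqrt{N}).
% Cache hierarchies are the practical face of this bound: each larger
% level is physically farther away and therefore slower.
```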
11. Breaking the Barrier
12. Von Neumann Forever?
[Diagram: the von Neumann architecture — a controller and ALU connected via data, control and status paths to RAM and to input/output peripherals]
13. Von Neumann Forever?
New emerging memory technologies
Bridging the gap between volatile and non-volatile memory
No longer necessary to keep the distinction between RAM and storage (peripherals)
Single-level memory (universal memory)
See also the talk by Liam Proven (The circuit less traveled), Janson, Sat 13:00
Many technologies in development
Magnetoresistive random-access memory (MRAM)
Racetrack memory
Ferroelectric random-access memory (FRAM)
Phase-change memory (PCM)
Nano-RAM (Nanotube RAM)
14. Less Radical Solution
Near-Data Processing
Moving the computation closer to the place where the data is
Not a completely new idea at all
Spatial locality in general
GPUs processing graphics data locally
Breaking the monopoly of CPU on data processing even more
CPUs are fast, but also power-hungry
CPUs can only process the data already fetched from the memory/storage
The more data we avoid moving from the memory/storage to the CPU, the more efficiently the CPU runs
15. Near-Data Processing
Benefits
Decreased latency, increased throughput
Not necessarily on an unloaded system, but improvement under load
[1] Gu B., Yoon A. S., Bae D.-H., Jo I., Lee J., Yoon J., Kang J.-U., Kwon M., Yoon C., Cho S., Jeong J., Chang D.: Biscuit: A Framework for Near-Data Processing of Big Data Workloads, in Proceedings of the 43rd Annual International Symposium on Computer Architecture, ACM/IEEE, 2016
16. Near-Data Processing
Benefits (2)
Decreased energy consumption
[2] Kim S., Oh H., Park C., Cho S., Lee S.-W.: Fast, Energy Efficient Scan inside Flash Memory SSDs, in Proceedings of the 37th International Conference on Very Large Data Bases (VLDB), VLDB Endowment, 2011
17. In-Memory Near-Data Processing
Adding computational capability to DRAM cells
Simple logical comparators/operators that could be computed in parallel on individual words
Filtering based on bitwise pattern
Bitwise operations
Making use of the inherent parallelism
Avoiding moving unnecessary data out of DRAM
Avoiding linear processing of independent words of the data in CPU
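What such word-parallel comparators compute can be sketched in software — this is a functional model with illustrative names, not a description of any real DRAM interface:

```python
# Illustrative model of an in-DRAM filter: compare every 64-bit word
# against a pattern under a mask. In hardware the comparators would sit
# next to the sense amplifiers and evaluate all words of a row at once;
# here each word is checked independently to mimic that parallelism.

MASK64 = (1 << 64) - 1

def ndp_filter(words, pattern, mask):
    """Return only the words whose masked bits match the pattern."""
    pattern &= MASK64
    mask &= MASK64
    return [w for w in words if (w & mask) == (pattern & mask)]

# Example: keep only the words whose low byte is 0x2A -- the other
# words never leave the "chip".
words = [0x112A, 0x22FF, 0xABCD002A, 0x2B]
hits = ndp_filter(words, pattern=0x2A, mask=0xFF)
```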
18. Dynamic RAM
[Diagram: a conventional DRAM chip — address lines feed a row decoder and control logic (RAS, CAS, WE); the memory matrix is read out through sense amplifiers and Y-gating onto the data lines]
19. Dynamic RAM with NDP
[Diagram: the same DRAM chip with a filtering/computing stage in place of plain Y-gating, driven by an opcode input, so words are filtered or combined before they leave the chip]
20. In-Storage Near-Data Processing
Adding computational capability to SSD controllers
Again, making use of inherent parallelism of flash memory
But SSD controllers also contain powerful embedded cores
Flash Translation Layer, garbage collection, wear leveling
Thus computation is not limited to simple bitwise filtering and operations
Complex trade-offs between computing on the primary CPU and offloading to the SSD controller
21. Our Prototype
Based on OpenSSD
http://openssd.io/
Open source (GPL) NVMe SSD controller
Hanyang University (Seoul), Embedded and Network Computing Lab
FPGA design for the Xilinx Zynq-7000
ONFI NAND flash controller with ECC engine
PCI-e NVMe host interface with scatter-gather DMA engine
Controller firmware
ARMv7
Flash Translation Layer, page caching, greedy garbage collection
22. Our Prototype (2)
NVMe NDP extensions
NDP module deployment
Static native code so far, moving to safe byte-code (eBPF)
NDP datasets
Safety boundaries for the NDP modules (for multitenancy, etc.)
NDP Read / Write
Extensions of the standard NVMe Read / Write commands
NDP module executed on each block, transforms/filters data
NDP Transform
Arbitrary data transformations (in-place copying, etc.)
Flow-based computational model
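To make the NDP Read flow concrete, here is a toy software model (all names and the block size are hypothetical; the slides do not specify the command encoding): a module deployed to the "controller" runs on each block, and only its output crosses to the host.

```python
# Hypothetical model of an NDP Read: a per-block transform/filter module
# is deployed to the SSD controller, and a read applies it to every
# block of the dataset, so only the (filtered) results cross the
# PCIe/NVMe link to the host.

BLOCK_SIZE = 16  # bytes per "flash block" in this toy model

def deploy_module(modules, module_id, fn):
    """Register a per-block transform/filter on the 'controller'."""
    modules[module_id] = fn

def ndp_read(storage, modules, module_id):
    """Read all blocks, applying the deployed module to each one."""
    fn = modules[module_id]
    out = bytearray()
    for i in range(0, len(storage), BLOCK_SIZE):
        out += fn(storage[i:i + BLOCK_SIZE])
    return bytes(out)

# Example: grep-like filtering -- return only blocks containing b"key".
storage = b"xxxxxxxxxxxxxxxx" + b"....key........." + b"yyyyyyyyyyyyyyyy"
modules = {}
deploy_module(modules, 1, lambda blk: blk if b"key" in blk else b"")
result = ndp_read(storage, modules, 1)
```

Only one block's worth of data reaches the host instead of three, which is the whole point of pushing the filter down.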
23. Our Prototype (3)
Fast prototyping
QEMU model of the OpenSSD hardware (for running the OpenSSD firmware on ARMv7)
Connected to a second host QEMU/KVM (as a regular PCI-e NVMe storage device)
Planned evaluation
Real-world application proof-of-concept
Custom storage engine for MySQL with operator push-down
Ceph operator push-down
Key-value store
Generic file system acceleration
Still very much a work in progress
Preliminary results quite promising
24. How Do Microkernels Fit into This?
25. Future Vision
Not just near-data processing, but data-centric computing
As opposed to CPU-centric computing
Running the computation dynamically where it is the most efficient
Not necessarily moving the data to the central processing unit
The CPU is the orchestrator
Massively distributed systems
Within the box of your machine (desktop, laptop, smartphone)
Outside your machine (edge cloud, fog, cloud, data center)
Massively heterogeneous systems
Different ISAs
Fully programmable, partially programmable, fixed-function
26. Future Vision & Microkernels
[Diagram: a multiserver microkernel system on a single CPU — the microkernel (memory mgmt, scheduler, IPC) runs in privileged mode; naming and location servers, device driver servers, file system driver servers, the device and file system multiplexers, the network stack, the security server and the applications run in unprivileged mode]
27. Future Vision & Multi-Microkernels
[Diagram: the same system extended into a multi-microkernel — besides the primary CPU, the SSD CPU, NIC CPU and GPU each run their own microkernel (memory mgmt, scheduler, IPC; on the GPU also a compositor), with distributed applications spanning the cores alongside the usual servers]
28. Microkernels for Data-Centric Computing
Multiserver microkernels
Naturally support distribution and heterogeneity
IPC is a well-defined mechanism to communicate across isolation boundaries
Communicating between multiple (heterogeneous) computing cores is just a different transport
Message passing as the least common denominator
– But memory coherency emulation is also possible
Multiserver designers are used to thinking in terms of fine-grained components
Composing an application from multiple components
Missing piece: a framework for automatic workload migration
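The "just a different transport" point can be sketched in a few lines: a server answers requests over a message queue, and nothing in the protocol cares whether that queue is backed by shared memory on one core or a link to another. (A deliberately simplified model with illustrative names; real microkernel IPC also carries capabilities and handles failures.)

```python
# Toy model of transport-agnostic IPC: a "server" task services
# requests from a queue and posts replies to another. Swapping
# queue.Queue for a PCIe mailbox or a network socket would not change
# the protocol -- which is why message passing is the least common
# denominator across heterogeneous cores.
import queue
import threading

def server(requests, replies):
    """A toy 'server' on some core: answer each request in turn."""
    while True:
        msg = requests.get()
        if msg is None:          # shutdown message
            break
        replies.put(msg.upper()) # stand-in for real request handling

requests, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=server, args=(requests, replies))
t.start()

requests.put("read /etc/motd")   # synchronous IPC: send a request...
answer = replies.get()           # ...and block waiting for the reply
requests.put(None)               # tell the server to shut down
t.join()
```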
29. Microkernels for Data-Centric Computing (2)
Why not multikernels?
Yes, why not!
Many multikernels (e.g. Barrelfish) are essentially microkernels taken to the extreme
Each core running its independent microkernel
No memory coherency assumed
Why not unikernels?
OK, if you insist
A microkernel running a single fixed payload (e.g. a virtual machine) could be considered a unikernel
30. Road Forward
As usual, incremental steps
Start with just the basic workload off-loading to the SSD, NIC, etc.
Gradually transform the run-time for the workload off-loading into a proto-microkernel
Consider the dynamic workload migration framework
Gradually move to a full-fledged multi-microkernel
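Whether a given workload is worth off-loading at all can be estimated with a back-of-the-envelope cost model (all bandwidth, rate and selectivity figures below are hypothetical, chosen only to illustrate the shape of the trade-off): offloading wins when the filter is selective enough that shipping only the matches beats shipping the whole dataset, despite the slower embedded controller core.

```python
# Toy cost model for the host-vs-SSD offload trade-off: compare the time
# to move all data to the host and filter there against the time to
# filter on the (slower) controller core and move only the matches.

def host_time(data_bytes, link_bw, host_rate):
    """Move everything over the link, then filter on the fast host CPU."""
    return data_bytes / link_bw + data_bytes / host_rate

def offload_time(data_bytes, selectivity, link_bw, ssd_rate):
    """Filter on the slow controller core, move only the matches."""
    return data_bytes / ssd_rate + (data_bytes * selectivity) / link_bw

GB = 1e9
data = 100 * GB
# Hypothetical numbers: 3 GB/s link, host filters at 20 GB/s,
# controller filters at 8 GB/s, and 1% of the data matches.
t_host = host_time(data, link_bw=3 * GB, host_rate=20 * GB)
t_ndp = offload_time(data, selectivity=0.01, link_bw=3 * GB, ssd_rate=8 * GB)
```

With these numbers the link transfer dominates the host path, so the offload wins; with a non-selective filter (selectivity near 1) the inequality flips, which is exactly the "complex trade-off" the prototype has to navigate.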
32. Summary
There are potential ways of abandoning the von Neumann architecture
Revolutionary: Single-level (universal) memory
Evolutionary: Near-Data Processing
Near-Data Processing is beneficial in terms of performance and energy consumption
Data-centric computing leads to massively distributed and heterogeneous systems
Multi-microkernels are a natural match for such architectures
33. Acknowledgements
Huawei Technologies in-storage NDP team
Antonio Barbalace
Martin Děcký
Anthony Iliopoulos
Javier Picorel
Dmitry Voytik
Hangjun Zhao
References & Inspiration
Mark Silberstein (Technion)
Biscuit framework (Samsung Electronics)
34. Martin Děcký, FOSDEM 2018, February 3rd
2018 Microkernels in the Era of Data-Centric Computing 34
Q&A