Application Report: Big Data - Big Cluster Interconnects - IT Brand Pulse
As a leading analytics platform that runs on industry-standard hardware and integrates industry-standard database tools and applications, one of ParAccel's biggest challenges is to architect and test hardware (servers, storage, interconnects) that makes their software perform at its peak. In this case, ParAccel eliminated a cluster bottleneck by implementing 10GbE NICs, providing the bandwidth needed today and well into the future.
Non-symbolic Base64: An Effective Representation of IPv6 Address - IAEME Publication
The document discusses the transition from IPv4 to IPv6 due to the depletion of IPv4 addresses. It proposes a new scheme, Effective and Flexible Representation of IPv6 with Base64, which represents IPv6 addresses in a more compact 28-byte notation instead of the standard 39 bytes. This is done by using the period rather than the colon as the delimiter and applying Base64 in a non-symbolic way. The scheme addresses problems caused by long IPv6 addresses, such as memory usage, bandwidth, and latency; cloud computing in particular stands to benefit from the more compact and user-friendly representation.
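The paper's exact non-symbolic encoding is not reproduced here, but a rough sketch shows why a Base64-style encoding shortens the textual form of a 128-bit address. This illustration uses Python's standard Base64 (which still contains the `+` and `/` symbols the paper avoids), so the 22-character result differs from the paper's 28-byte scheme:

```python
import base64
import ipaddress

def compact_ipv6(addr: str) -> str:
    """Illustrative compaction: Base64-encode the 128-bit address."""
    raw = ipaddress.IPv6Address(addr).packed          # 16 raw bytes
    return base64.b64encode(raw).decode().rstrip("=") # 22 characters

def expand_ipv6(token: str) -> str:
    raw = base64.b64decode(token + "==")              # restore padding
    return str(ipaddress.IPv6Address(raw))

full = "2001:0db8:85a3:0000:0000:8a2e:0370:7334"      # 39-character form
short = compact_ipv6(full)
print(short, len(short))                              # 22-character token
print(expand_ipv6(short))                             # round-trips cleanly
```

Even with standard Base64, the text form drops from 39 to 22 characters; the paper's variant trades a few of those bytes back for symbol-free output and period delimiters.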
In this session I will explain what the Hortonworks and IBM Power solutions are and how they can deliver significant business value through the prompt adoption of open innovation for future cognitive workloads. I will also introduce the unique added value that the IBM and Hortonworks partnership provides across storage, analytics, data science, and streaming analysis.
To share our project experience with IoT architecture and help make such projects successful, we will explain key points drawn from practical experience for avoiding common pitfalls in IoT-related projects.
RackCorp is a cloud services provider that experienced rapid traffic growth of 100x over 12 months fueled by its popular CDN service CacheCentric. This highlighted the need to upgrade RackCorp's 1Gbps network to 10Gbps for greater scalability. RackCorp selected Brocade switches for their high performance, low latency, and support for RackCorp's automation strategies. The new Brocade fabric provided a scalable backbone that supported continued growth and attracted new customers.
InfiniBand In-Network Computing Technology and Roadmap - inside-BigData.com
In this video from the UK HPC Conference, Richard Graham from Mellanox presents: InfiniBand In-Network Computing Technology and Roadmap.
"In-Network Computing transforms the data center interconnect to become a "distributed CPU", and "distributed memory", enables to overcome performance barriers and to enable faster and more scalable data analysis. HDR 200G InfiniBand In-Network Computing technology includes several elements - Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), smart Tag Matching and rendezvoused protocol, and more. These technologies are in use at some of the recent large scale supercomputers around the world, including the top TOP500 platforms. The session will discuss the InfiniBand In-Network Computing technology and performance results, as well as view to future roadmap."
Watch the video:
Learn more: http://mellanox.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
IEEE Transition of IPv4 to IPv6 Network Applications - guest0215f3
This document discusses transitioning IPv4 network applications to IPv6. It begins with an introduction to the need for IPv6 due to IPv4 address depletion. It then discusses IPv6 architecture and some key benefits of IPv6 like increased address space and built-in security. The document outlines three primary considerations for transitioning applications: using IPv6 multicast instead of IPv4 broadcast, enabling multicast reception, and ensuring dual stack compatibility. It categorizes transition complexity and provides examples of changes needed, such as replacing IPv4 data structures and function calls with IPv6 equivalents. Related work on transitioning applications is also discussed.
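The kinds of code changes the document describes, replacing IPv4-only data structures and calls with protocol-agnostic equivalents, can be sketched in a few lines. This is a generic illustration, not the document's own examples: `getaddrinfo` replaces the IPv4-only `gethostbyname`, and an `AF_INET6` socket with `IPV6_V6ONLY` disabled (where the OS supports it) gives a single dual-stack listener:

```python
import socket

def open_listener(port: int) -> socket.socket:
    """Dual-stack listener: an AF_INET6 socket with V6ONLY=0 also accepts
    IPv4 clients as IPv4-mapped addresses (::ffff:a.b.c.d), where supported."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))
    s.listen(5)
    return s

def resolve(host: str, port: int):
    """getaddrinfo replaces the IPv4-only gethostbyname and returns
    candidate endpoints for whichever families the host supports."""
    return [(fam, addr) for fam, *_rest, addr in
            socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)]

print(resolve("localhost", 80))   # may list both AF_INET and AF_INET6 entries
```

A client written around `getaddrinfo` simply tries each returned endpoint in order, which is the core of the "dual stack compatibility" consideration above.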
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... - inside-BigData.com
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document discusses how DDN A3I storage solutions and Nvidia's SuperPOD platform can enable HPC at scale. It provides details on DDN's A3I appliances that are optimized for AI and deep learning workloads and validated for Nvidia's DGX-2 SuperPOD reference architecture. The solutions are said to deliver the fastest performance, effortless scaling, reliability and flexibility for data-intensive workloads.
In this deck from the DDN User Group at ISC 2018, James Coomer from DDN presents: A3I - Accelerated Any-Scale Solutions from DDN.
"Engineered from the ground up for the AI-enabled data center, DDN’s A3I solutions are fully optimized to handle the spectrum of AI and DL activities concurrently: data ingest and preparation, training, validation, and inference. The DDN A3I platform is easy to deploy and manage, highly scalable in both performance and capacity, and represents a highly efficient and resilient solution for all of your current and future AI requirements."
Watch the video: https://youtu.be/puWL5lcKgA4
Learn more: https://www.ddn.com/products/a3i-accelerated-any-scale-ai/
and
https://www.ddn.com/company/events/isc-user-group/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Leveraging IoT as part of your digital transformation - John Archer
A review of approaches to edge computing architecture, with emphasis on improved security for container workloads collecting telemetry from industrial IoT environments.
Emulex Presents Why I/O is Strategic Global Survey Results - Emulex Corporation
This webcast is the first in a monthly series on why I/O is strategic for the data center. Emulex will present findings from a global survey of more than 1,500 IT professionals that demonstrate the strategic importance of I/O in the data center across four key technology trends: virtualization, cloud, big data and convergence.
Hadoop's Role in the Big Data Architecture, OW2con'12, Paris - OW2
This document discusses big data and Hadoop. It provides an overview of what constitutes big data, how Hadoop works, and how organizations can use Hadoop and its ecosystem to gain insights from large and diverse data sources. Specific use cases discussed include using Hadoop for operational data refining, exploration and visualization of data, and enriching online applications. The document also outlines Hortonworks' strategy of focusing on Apache Hadoop to make it the enterprise big data platform and providing support services around their Hadoop distribution.
PLX Technology provides PCI Express switches and bridges that connect various components within servers, storage systems, networking equipment, and other devices. They have over 70% market share in PCI Express switches and are focused on growing their business connecting components within data centers and cloud computing environments. Their ExpressFabric technology uses PCI Express as a converged fabric to connect servers, storage, and networking within a data center rack in order to reduce costs and power consumption compared to using multiple traditional networking protocols.
Delivering on the Hadoop/HBase Integrated Architecture - DataWorks Summit
This document discusses using databases within Hadoop, referred to as "In-Hadoop databases". It begins by describing Google's transition from batch to real-time processing using systems like MapReduce, BigTable, and how this led to operational and analytical uses of data. It then discusses how traditional architectures separate these uses onto different systems, and the benefits of using In-Hadoop databases which provide a single system for both real-time and batch processing. Examples are given of companies using In-Hadoop databases for various real-time and analytical use cases. Architectures and technologies for In-Hadoop databases are also covered.
Attaching cloud storage to a campus grid using Parrot, Chirp, and Hadoop - João Gabriel Lima
This document discusses attaching cloud storage to a campus grid using Parrot, Chirp, and Hadoop. The authors present a solution that bridges the Chirp distributed filesystem to Hadoop, providing simple access to large datasets on Hadoop for jobs running on the campus grid. Chirp layers additional grid computing features on top of Hadoop, such as simple deployment without special privileges, easy access via Parrot, and strong, flexible security through access control lists. The authors evaluate connecting Parrot directly to Hadoop for better scalability versus connecting Parrot to Chirp and then to Hadoop for greater stability.
Design and evaluation of a proxy cache for P2P traffic - ingenioustech
The document describes the design and evaluation of pCache, a proxy cache for peer-to-peer (P2P) traffic. Key contributions include:
1) A new storage system optimized for P2P caching that efficiently handles requests for object segments of arbitrary lengths.
2) An algorithm to infer information required for caching P2P traffic when this information is not directly available.
3) Achieving full transparency in the proxy cache and efficiently handling non-P2P connections to reduce processing overhead.
Extensive experiments evaluate pCache using real P2P traffic and show that it benefits both clients and ISPs without hurting P2P network performance.
Mellanox Interconnect presentation at the OpenPOWER Brazil workshop - Ganesan Narayanasamy
This document discusses Mellanox's role in accelerating high-performance computing (HPC) and artificial intelligence (AI) systems through high-speed interconnect technologies. It highlights how Mellanox's InfiniBand, Ethernet, and BlueField solutions provide higher data speeds, faster data processing, and better data security to address the exponentially growing data needs of HPC, AI, cloud, and other data-intensive workloads. Specific technologies and products discussed include SHARP for scalable in-network computing, GPUDirect for GPU acceleration, HDR InfiniBand adapters and switches, BlueField smart NICs, and Spectrum Ethernet switches. Case studies show how Mellanox solutions accelerate the world's top supercomputers.
Dell First Out the Blocks with 25GbE Servers - IT Brand Pulse
IT Brand Pulse Industry Brief describing the market dynamics leading to a new generation of 25, 50 and 100GbE, the Dell products, applications, and guidelines for when to use 25G.
IBM provides infrastructure to accelerate medical research tasks like genomics, molecular simulation, diagnostics, and quality inspection. This infrastructure delivers faster insights through high-performance data and AI deployed at massive scale on IBM Power Systems and Storage. Case studies show the infrastructure reduces time to results for tasks like processing millions of cryogenic electron microscope images from days to hours.
This document provides a comparison of IPv4 and IPv6 by analyzing their features and addressing schemes. Some key points:
- IPv6 was designed to replace IPv4 due to IPv4's limited 32-bit address space being exhausted, while IPv6 uses a 128-bit address space.
- IPv6 addresses are written in hexadecimal colon notation and can be abbreviated, while IPv4 uses dotted decimal notation.
- IPv6 introduces anycast addressing for routing packets to the closest node, and absorbs IPv4's broadcast addressing into multicast.
- IPv6 supports auto-configuration to simplify address assignment without DHCP, and its larger addressing scheme allows clearer routing.
In summary, the document compares the addressing schemes and feature sets of the two protocols.
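The notational differences listed above can be demonstrated with Python's standard `ipaddress` module; the addresses below are arbitrary documentation examples:

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # dotted decimal notation
v6 = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")

print(v4.version, v4.max_prefixlen)       # protocol 4, 32-bit address space
print(v6.version, v6.max_prefixlen)       # protocol 6, 128-bit address space
print(v6)                                 # abbreviated form: 2001:db8::1
print(v6.exploded)                        # full hexadecimal colon notation
```

The module canonicalizes the abbreviated form automatically, which is exactly the zero-compression rule (`::`) the comparison refers to.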
Data Science at Scale on MPP databases - Use Cases & Open Source Tools - Esther Vasiete
Pivotal workshop slide deck for Structure Data 2016 held in San Francisco.
Abstract:
Learn how data scientists at Pivotal build machine learning models at massive scale on open source MPP databases like Greenplum and HAWQ (under Apache incubation) using in-database machine learning libraries like MADlib (under Apache incubation) and procedural languages like PL/Python and PL/R to take full advantage of the rich set of libraries in the open source community. This workshop will walk you through use cases in text analytics and image processing on MPP.
IBM AI Solutions on Power Systems is a presentation about IBM's AI solutions. It introduces IBM Visual Insights for tasks like image classification, object detection, and segmentation. A use case demo shows breast cancer classification in under one second with high accuracy. Another demo detects diabetic retinopathy in eye images. The presentation discusses open issues in medical imaging AI and IBM's response to COVID-19, including an X-ray demo to detect COVID-19 in lung images. It calls for collaboration to share medical data and models.
Nebula - The Future Internet Architecture - Ranjan Dhar
The document describes Nebula, a proposed future internet architecture that aims to securely support cloud computing. It is divided into three main components: NCORE for high performance core routers, NDP for establishing secure and reliable multiple paths between data centers, and NVENT for policy-based control plane technologies. Nebula aims to provide assured delivery, controlled access, high availability, and autonomous resource control to enable applications with strong security and reliability requirements.
This document provides a summary of HPCC Systems, including:
1. A brief history and overview of the architecture with a use case example of calculating insurance policy data within a specified radius.
2. Descriptions of the main components of HPCC Systems - Thor for batch processing, Roxie for real-time queries, and ECL as the data-oriented programming language.
3. Information on how HPCC Systems can be integrated with other systems and technologies through connectors, drivers, and the ability to embed other languages.
In this deck from the 2019 Stanford HPC Conference, Nick Nystrom from the Pittsburgh Supercomputing Center presents: Pioneering and Democratizing Scalable HPC+AI.
"PSC's Bridges was the first system to successfully converge HPC, AI, and Big Data. Designed for the U.S. national research community and supported by NSF, Bridges now serves approximately 1800 projects and 7500 users at 380 institutions, and it is the foundation around which new HPC+AI projects have launched. Bridges emphasizes "nontraditional" uses that span the life, physical, and social sciences, computer science, engineering, business, and humanities. Scalable HPC+AI is driving many of those applications, which span diverse topics such as learning root causes of cancer, strategic reasoning, designing new materials, predicting severe storms, recognizing speech including contextual information, and detecting objects in 4k streaming video. To address the demand for scalable AI, PSC recently introduced Bridges-AI, which adds transformative new AI capability. In this presentation, we share our vision in designing HPC+AI systems at PSC and highlight some of the exciting research breakthroughs they are enabling."
Nick Nystrom is Interim Director and Sr. Director of Research at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/ucRs4A_afus
Learn more: https://www.psc.edu/bridges
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
IPv6 is rapidly becoming an important network technology for service providers, government agencies, and enterprises. Deployment of IPv6 requires new management strategies, practices, and tools to enable deployment and effective operation. Because most deployments of IPv6 will be in dual-stack networks that run IPv4 and IPv6 in parallel, the IPv4 management infrastructure will be extended to IPv6 for integrated IPv4-IPv6 operation. It will be crucial for IPv6 deployments to be carefully planned and managed to ensure successful implementation and avoid significant increases in management overhead. This article provides background information on IPv6 deployment and management strategies.
The document discusses Greenplum Database, an open source massively parallel processing (MPP) relational database system for big data. It provides an overview of Greenplum's architecture, including its master-segment structure and distributed transaction management. It also covers topics like defining data storage, distributions, partitioning, and analytics capabilities. Examples of Greenplum deployments are listed across various industries. Recent accomplishments and roadmap items are also summarized.
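The master-segment structure and distribution keys the summary mentions can be illustrated with a toy hash-distribution sketch. The hash function and segment count here are arbitrary stand-ins, not Greenplum's actual algorithm; the point is that every row with the same distribution key lands on the same segment, so joins and lookups on that key stay local:

```python
import hashlib

SEGMENTS = 4   # toy stand-in for a cluster's segment instances

def segment_for(distribution_key: str) -> int:
    """Assign a row to a segment by hashing its distribution key."""
    h = int(hashlib.md5(distribution_key.encode()).hexdigest(), 16)
    return h % SEGMENTS

# Distribute a batch of rows and see which segment holds each key.
rows = ["order-%d" % i for i in range(8)]
placement = {}
for key in rows:
    placement.setdefault(segment_for(key), []).append(key)
print(placement)   # each segment holds its own hash bucket of rows
```

Partitioning (also mentioned above) is the orthogonal axis: within a segment, rows are further split by a predicate such as a date range, which the planner uses to skip irrelevant partitions.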
The document provides instructions on registering personal and employment data, completing training, certification, and the online learning system. It explains that registration includes contact information and work experience, and that the training aims to develop competencies and learning outcomes. It also describes how to consult the learning record, and notes that course enrollment includes information about the program, its duration, and where it takes place.
The document describes the key parameters of a three-zone once-through steam generator, including the cooling water flow rate, tube specifications, effective surface areas, heat transfer rates, and temperatures of each zone. It then shows calculated performance results across a range of circulation water inlet temperatures from 35 to 75 degrees Fahrenheit, indicating changes in generated power, pressures, heat loads, and cleanliness factors compared to predicted design values.
The document discusses how DDN A3I storage solutions and Nvidia's SuperPOD platform can enable HPC at scale. It provides details on DDN's A3I appliances that are optimized for AI and deep learning workloads and validated for Nvidia's DGX-2 SuperPOD reference architecture. The solutions are said to deliver the fastest performance, effortless scaling, reliability and flexibility for data-intensive workloads.
In this deck from the DDN User Group at ISC 2018, James Coomer from DDN presents: A3I - Accelerated Any-Scale Solutions from DDN.
"Engineered from the ground up for the AI-enabled data center, DDN’s A3I solutions are fully optimized to handle the spectrum of AI and DL activities concurrently: data ingest and preparation, training, validation, and inference. The DDN A3I platform is easy to deploy and manage, highly scalable in both performance and capacity, and represents a highly efficient and resilient solution for all of your current and future AI requirements."
Watch the video: https://youtu.be/puWL5lcKgA4
Learn more: https://www.ddn.com/products/a3i-accelerated-any-scale-ai/
and
https://www.ddn.com/company/events/isc-user-group/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Leveraging IoT as part of your digital transformationJohn Archer
Review of approaches for Edge computing architecture with emphasis on improved security for container workloads collecting telemetry from Industrial IoT environments
Emulex Presents Why I/O is Strategic Global Survey ResultsEmulex Corporation
This webcast is the first in a monthly series on why I/O is strategic for the data center. Emulex will present findings from a global survey of more than 1,500 IT professionals that demonstrate the strategic importance of I/O in the data center across four key technology trends: virtualization, cloud, big data and convergence.
Hadoop's Role in the Big Data Architecture, OW2con'12, ParisOW2
This document discusses big data and Hadoop. It provides an overview of what constitutes big data, how Hadoop works, and how organizations can use Hadoop and its ecosystem to gain insights from large and diverse data sources. Specific use cases discussed include using Hadoop for operational data refining, exploration and visualization of data, and enriching online applications. The document also outlines Hortonworks' strategy of focusing on Apache Hadoop to make it the enterprise big data platform and providing support services around their Hadoop distribution.
PLX Technology provides PCI Express switches and bridges that connect various components within servers, storage systems, networking equipment, and other devices. They have over 70% market share in PCI Express switches and are focused on growing their business connecting components within data centers and cloud computing environments. Their ExpressFabric technology uses PCI Express as a converged fabric to connect servers, storage, and networking within a data center rack in order to reduce costs and power consumption compared to using multiple traditional networking protocols.
Delivering on the Hadoop/HBase Integrated ArchitectureDataWorks Summit
This document discusses using databases within Hadoop, referred to as "In-Hadoop databases". It begins by describing Google's transition from batch to real-time processing using systems like MapReduce, BigTable, and how this led to operational and analytical uses of data. It then discusses how traditional architectures separate these uses onto different systems, and the benefits of using In-Hadoop databases which provide a single system for both real-time and batch processing. Examples are given of companies using In-Hadoop databases for various real-time and analytical use cases. Architectures and technologies for In-Hadoop databases are also covered.
Attaching cloud storage to a campus grid using parrot, chirp, and hadoopJoão Gabriel Lima
This document discusses attaching cloud storage to a campus grid using Parrot, Chirp, and Hadoop. The authors present a solution that bridges the Chirp distributed filesystem to Hadoop to provide simple access to large datasets on Hadoop for jobs running on the campus grid. Chirp layers additional grid computing features on top of Hadoop like simple deployment without special privileges, easy access via Parrot, and strong flexible security access control lists. The authors evaluate the performance of connecting Parrot directly to Hadoop for better scalability versus connecting Parrot to Chirp and then to Hadoop for greater stability.
Design and evaluation of a proxy cache foringenioustech
The document describes the design and evaluation of pCache, a proxy cache for peer-to-peer (P2P) traffic. Key contributions include:
1) A new storage system optimized for P2P caching that efficiently handles requests for object segments of arbitrary lengths.
2) An algorithm to infer information required for caching P2P traffic when this information is not directly available.
3) Achieving full transparency in the proxy cache and efficiently handling non-P2P connections to reduce processing overhead.
Extensive experiments evaluate pCache using real P2P traffic and show that it benefits both clients and ISPs without hurting P2P network performance.
Mellnox Interconnect presentation in OpenPOWER Brazil workshopGanesan Narayanasamy
This document discusses Mellanox's role in accelerating high-performance computing (HPC) and artificial intelligence (AI) systems through high-speed interconnect technologies. It highlights how Mellanox's InfiniBand, Ethernet, and BlueField solutions provide higher data speeds, faster data processing, and better data security to address the exponentially growing data needs of HPC, AI, cloud, and other data-intensive workloads. Specific technologies and products discussed include SHARP for scalable in-network computing, GPUDirect for GPU acceleration, HDR InfiniBand adapters and switches, BlueField smart NICs, and Spectrum Ethernet switches. Case studies show how Mellanox solutions accelerate the world's top
Dell First Out the Blocks with 25GbE ServersIT Brand Pulse
IT Brand Pulse Industry Brief describing the market dynamics leading to a new generation of 25, 50 and 100GbE, the Dell products, applications, and guidelines for when to use 25G.
IBM provides infrastructure to accelerate medical research tasks like genomics, molecular simulation, diagnostics, and quality inspection. This infrastructure delivers faster insights through high-performance data and AI deployed at massive scale on IBM Power Systems and Storage. Case studies show the infrastructure reduces time to results for tasks like processing millions of cryogenic electron microscope images from days to hours.
This document provides a comparison of IPv4 and IPv6 by analyzing their features and addressing schemes. Some key points:
- IPv6 was designed to replace IPv4 due to IPv4's limited 32-bit address space being exhausted, while IPv6 uses a 128-bit address space.
- IPv6 addresses are written in hexadecimal colon notation and can be abbreviated, while IPv4 uses dotted decimal notation.
- IPv6 introduces anycast addressing for routing packets to the closest node, and absorbs IPv4's broadcast addressing into multicast.
- IPv6 supports auto-configuration to simplify address assignment without DHCP, and its larger addressing scheme allows clearer routing.
So in summary, the document contrasts IPv4 and IPv6, highlighting IPv6's larger address space, its notation, and its configuration features as the basis for the transition.
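The notational points above can be demonstrated with Python's standard ipaddress module (the addresses are arbitrary documentation examples):

```python
# Illustrating the notational differences the comparison describes, using
# Python's standard ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("192.168.0.1")  # dotted decimal notation
v6 = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")

print(v4.version, v4.max_prefixlen)  # IPv4's 32-bit address space
print(v6.version, v6.max_prefixlen)  # IPv6's 128-bit address space
print(v6.compressed)                 # abbreviated colon notation: 2001:db8::1
print(v6.exploded)                   # full hexadecimal colon notation
```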
Data Science at Scale on MPP databases - Use Cases & Open Source Tools – Esther Vasiete
Pivotal workshop slide deck for Structure Data 2016 held in San Francisco.
Abstract:
Learn how data scientists at Pivotal build machine learning models at massive scale on open source MPP databases like Greenplum and HAWQ (under Apache incubation) using in-database machine learning libraries like MADlib (under Apache incubation) and procedural languages like PL/Python and PL/R to take full advantage of the rich set of libraries in the open source community. This workshop will walk you through use cases in text analytics and image processing on MPP.
IBM AI Solutions on Power Systems is a presentation about IBM's AI solutions. It introduces IBM Visual Insights for tasks like image classification, object detection, and segmentation. A use case demo shows breast cancer classification in under one second with high accuracy. Another demo detects diabetic retinopathy in eye images. The presentation discusses open issues in medical imaging AI and IBM's response to COVID-19, including an X-ray demo to detect COVID-19 in lung images. It calls for collaboration to share medical data and models.
Nebula - The Future Internet Architecture – Ranjan Dhar
The document describes Nebula, a proposed future internet architecture that aims to securely support cloud computing. It is divided into three main components: NCORE for high performance core routers, NDP for establishing secure and reliable multiple paths between data centers, and NVENT for policy-based control plane technologies. Nebula aims to provide assured delivery, controlled access, high availability, and autonomous resource control to enable applications with strong security and reliability requirements.
This document provides a summary of HPCC Systems, including:
1. A brief history and overview of the architecture with a use case example of calculating insurance policy data within a specified radius.
2. Descriptions of the main components of HPCC Systems - Thor for batch processing, Roxie for real-time queries, and ECL as the data-oriented programming language.
3. Information on how HPCC Systems can be integrated with other systems and technologies through connectors, drivers, and the ability to embed other languages.
In this deck from the 2019 Stanford HPC Conference, Nick Nystrom from the Pittsburgh Supercomputing Center presents: Pioneering and Democratizing Scalable HPC+AI.
"PSC's Bridges was the first system to successfully converge HPC, AI, and Big Data. Designed for the U.S. national research community and supported by NSF, Bridges now serves approximately 1800 projects and 7500 users at 380 institutions, and it is the foundation around which new HPC+AI projects have launched. Bridges emphasizes "nontraditional" uses that span the life, physical, and social sciences, computer science, engineering, business, and humanities. Scalable HPC+AI is driving many of those applications, which span diverse topics such as learning root causes of cancer, strategic reasoning, designing new materials, predicting severe storms, recognizing speech including contextual information, and detecting objects in 4k streaming video. To address the demand for scalable AI, PSC recently introduced Bridges-AI, which adds transformative new AI capability. In this presentation, we share our vision in designing HPC+AI systems at PSC and highlight some of the exciting research breakthroughs they are enabling."
Nick Nystrom is Interim Director and Sr. Director of Research at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/ucRs4A_afus
Learn more: https://www.psc.edu/bridges
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
IPv6 is rapidly becoming an important network technology for service providers, government agencies and enterprises. Deployment of IPv6 requires new management strategies, practices and tools to enable deployment and effective operation. Because most deployments of IPv6 will be in dual-stack networks that use IPv4 and IPv6 in parallel, the IPv4 management infrastructure will be extended to IPv6 for integrated IPv4-IPv6 operation. It will be crucial for IPv6 deployments to be carefully planned and managed to ensure successful implementation and avoid significant increases in management overhead. This article provides some background information on IPv6 deployment and management strategies.
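As a small illustration of what dual-stack operation means for management tooling, the sketch below classifies literal addresses by family so an inventory can track IPv4 and IPv6 side by side (the interface name and addresses are invented documentation examples):

```python
# Dual-stack management implies every inventory record may carry both an
# IPv4 and an IPv6 address; classifying literals by family is the first step.
import socket

def family_of(addr):
    """Classify a literal address string as IPv4 or IPv6."""
    try:
        socket.inet_pton(socket.AF_INET, addr)
        return "IPv4"
    except OSError:
        socket.inet_pton(socket.AF_INET6, addr)  # raises if neither family
        return "IPv6"

# A dual-stack inventory keeps both families per interface.
inventory = {"eth0": ["192.0.2.10", "2001:db8::10"]}
print({ifc: [family_of(a) for a in addrs] for ifc, addrs in inventory.items()})
```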
The document discusses Greenplum Database, an open source massively parallel processing (MPP) relational database system for big data. It provides an overview of Greenplum's architecture, including its master-segment structure and distributed transaction management. It also covers topics like defining data storage, distributions, partitioning, and analytics capabilities. Examples of Greenplum deployments are listed across various industries. Recent accomplishments and roadmap items are also summarized.
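The master-segment distribution idea mentioned above can be sketched as follows; the hash function and segment count are stand-ins for illustration, not Greenplum's actual algorithm:

```python
# Sketch of the idea behind a DISTRIBUTED BY clause: rows are hashed on a
# distribution key so each segment owns a disjoint slice of the table.
import zlib

NUM_SEGMENTS = 4  # assumed cluster size

def segment_for(key):
    """Map a distribution-key value to a segment id (stand-in hash)."""
    return zlib.crc32(str(key).encode()) % NUM_SEGMENTS

rows = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
placement = {}
for key, value in rows:
    placement.setdefault(segment_for(key), []).append((key, value))

# Every row lands on exactly one segment.
print(placement)
```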
The document provides instructions on registering personal and employment data, completing training, certification, and the online learning system. It explains that registration includes contact information and work experience, and that the training aims to develop competencies and learning outcomes. It also describes how to consult the certificate of learning, and notes that course enrollment includes information about the program, its duration and where it takes place.
The document describes the key parameters of a three-zone once-through steam generator, including the cooling water flow rate, tube specifications, effective surface areas, heat transfer rates, and temperatures of each zone. It then shows calculated performance results across a range of circulation water inlet temperatures from 35 to 75 degrees Fahrenheit, indicating changes in generated power, pressures, heat loads, and cleanliness factors compared to predicted design values.
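The per-zone heat loads such a document tabulates follow the standard relation Q = U · A · LMTD, where LMTD is the log-mean temperature difference across a zone. A worked example with illustrative numbers (not taken from the document):

```python
# Worked example of the basic heat-exchange relation Q = U * A * LMTD.
import math

def lmtd(dt_in, dt_out):
    """Log-mean temperature difference between inlet and outlet terminal differences."""
    if dt_in == dt_out:
        return dt_in
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

U = 600.0   # overall heat-transfer coefficient, Btu/(hr*ft^2*F) -- assumed
A = 5000.0  # effective surface area, ft^2 -- assumed
Q = U * A * lmtd(60.0, 20.0)  # terminal temperature differences in F
print(f"zone heat load: {Q:.3e} Btu/hr")
```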
Lauren Davis stands out as an exemplary student with strong leadership skills, character, and community involvement. In her AP English class she demonstrated critical thinking and insightful literary analysis. As captain of the volleyball team for four years, she helped lead them to a CIF championship. She possesses strong character traits and has been extensively involved in community service through her church, charity work, and anti-bullying campaigns. The teacher recommends her highly for admission based on her attributes and potential.
1) The document outlines 4 steps for making PR work for a company or non-profit: be strategic, identify stories and audiences, select tactics, and monitor and promote coverage.
2) It provides tips for dealing with traditional and new media, including finding a matching outlet, being persistent, and putting together a media list.
3) Additional recommendations include becoming a publisher with a blog, writing trade articles, and planning for potential crises.
The document provides guidance on creating a marketing plan by building an "instruction manual" that outlines key steps. It recommends starting with defining company identity by establishing a mission, values, and measures of success. The next steps include understanding why the company exists by solving problems or needs, studying the market, and being sensitive to trends. The plan should then set specific and measurable goals and identify messaging and target audiences. Developing relationships and networks of supporters is also important. Finally, the plan should put all the elements together in an organized outline and be verified with research. The overall message is that having a thorough marketing plan can lead to successful outcomes.
- Television in India started as an experimental broadcast in 1959 and regular daily transmission began in 1965 as part of All India Radio. By the mid-1970s, only seven cities had television services.
- In the early 1980s, there was only one national channel, Doordarshan, which was government owned. Private channels began in the late 1980s and cable television grew.
- The television industry is large and growing, with revenue expected to reach INR 975 billion by 2019, driven by subscription and advertising growth. Digitalization of cable has expanded options and revenues.
Day care centers are gaining importance in India as more parents work. The day care market is highly fragmented and untapped, with only 1% of preschoolers currently enrolled. Running a successful day care center requires factors like appropriate staffing ratios, suitable neighborhood location, educational activities, teacher training, brand awareness, and standardized infrastructure across franchises. XYZ and ABC are two business descriptions of existing day care center models in India, with details on their programs, fees, and service offerings. A strategic gap analysis identifies opportunities to expand service areas, age groups covered, and offer additional services like transportation, tutoring, and live streaming. Gurgaon is highlighted as a potential market for new day care centers given its growing population, economic
InfiniBand in the Enterprise Data Center.pdf – bui thequan
InfiniBand offers high-speed connectivity in data centers that enables consolidation, virtualization, and a service-centric shared resource model. It allows different data center roles like front-end, application, back-end, and storage layers to connect over a single fabric. InfiniBand's high bandwidth and low latency help meet performance needs for applications and between tiers. Its channels-based I/O allows networking, storage, and inter-process communication to consolidate over one wire. InfiniBand also supports virtualization through features like pass-through that improve utilization and cost.
Enablers, Platforms, & Early adopters for internet of things. How hadoop helps in enabling the technology to process data from sensors? What are the limitations in using Hadoop for internet of things?
Industry Brief: Tectonic Shift - HPC Networks Converge – IT Brand Pulse
The document discusses the convergence of Ethernet and InfiniBand networks for high-performance computing (HPC). Enhancements to Ethernet, including speeds of 40GbE and 100GbE, have closed the performance gap with InfiniBand. While InfiniBand currently dominates HPC networks, the enhancements to Ethernet and organizations' desire to leverage existing Ethernet infrastructure will result in Ethernet becoming the standard for most HPC applications over the next few years, with InfiniBand remaining only for niche uses. Annual revenue from Ethernet is expected to surpass InfiniBand by 2016.
A Comparative Survey Based on Processing Network Traffic Data Using Hadoop Pi... – IJCSES Journal
Big data analysis has now become an integral part of many computational and statistical departments. Analysis of petabyte-scale data has taken on heightened importance in the present-day scenario. Big data manipulation is now considered a key area of research in data analytics, and novel techniques are evolving day by day. Thousands of transaction requests are processed every minute by websites related to e-commerce, shopping carts and online banking. This creates the need for network traffic and weblog analysis, for which Hadoop is a suggested solution: it can efficiently process Netflow data collected at fixed intervals from routers, switches or even website access logs.
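The flow-aggregation job described above boils down to a map step that emits (source, bytes) pairs and a reduce step that sums per key; a pure-Python sketch with an assumed record layout:

```python
# Pure-Python sketch of the Netflow aggregation a Hadoop job would run:
# map each flow record to (source_ip, bytes), then reduce by summing per key.
from collections import defaultdict

flows = [  # assumed CSV layout: src,dst,port,bytes
    "10.0.0.1,10.0.0.9,443,1500",
    "10.0.0.2,10.0.0.9,80,700",
    "10.0.0.1,10.0.0.8,80,300",
]

def mapper(record):
    src, _dst, _port, nbytes = record.split(",")
    return src, int(nbytes)

def reduce_by_key(pairs):
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

traffic = reduce_by_key(map(mapper, flows))
print(traffic)
```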
NETWORK TRAFFIC ANALYSIS: HADOOP PIG VS TYPICAL MAPREDUCE – cscpconf
Big data analysis has become very popular in the present-day scenario, and the manipulation of big data has gained the keen attention of researchers in the field of data analytics. Analysis of big data is currently considered an integral part of many computational and statistical departments. As a result, novel approaches to data analysis are evolving on a daily basis. Thousands of transaction requests are handled and processed every day by websites associated with e-commerce, e-banking, e-shopping carts, etc. Network traffic and weblog analysis play a crucial role in such situations, where Hadoop can be suggested as an efficient solution for processing Netflow data collected from switches, as well as website access logs, during fixed intervals.
The document discusses accelerating Apache Hadoop through high-performance networking and I/O technologies. It describes how technologies like InfiniBand, RoCE, SSDs, and NVMe can benefit big data applications by alleviating bottlenecks. It outlines projects from the High-Performance Big Data project that implement RDMA for Hadoop, Spark, HBase and Memcached to improve performance. Evaluation results demonstrate significant acceleration of HDFS, MapReduce, and other workloads through the high-performance designs.
This document discusses how Mellanox networks enable high performance Ceph storage clusters. It notes that Ceph performance and scalability are dictated by the backend cluster network performance. It provides examples of customers deploying Ceph with Mellanox 40GbE and 10GbE interconnects, and highlights how these networks allow building scalable, high performing storage solutions. Specifically, it shows how 40GbE cluster networks and 40GbE client networks provide much higher throughput and IOPS compared to 10GbE. The document concludes by mentioning how RDMA offloads can free CPU for application processing, and how the Accelio library enables high performance RDMA for Ceph.
Monetizing Big Data at Telecom Service Providers – DataWorks Summit
Hadoop enables telecom service providers to gain valuable insights from large volumes of network and customer data. It provides a cost-effective way to store and analyze this data at scale. Specific use cases discussed include using Hadoop to optimize network infrastructure investments based on usage patterns, identify network nodes responsible for most customer issues to prioritize maintenance, and help diagnose network performance problems while handling large volumes of monitoring data.
With Hadoop, telecom providers are able to gain valuable insights from large volumes of customer data that would otherwise be costly and difficult to analyze. This enables improved network optimization, more efficient customer service, and data-driven sales and marketing strategies. Specifically, the document discusses how Hadoop helps with network capacity planning, targeted maintenance, and root cause analysis for issues. It also allows for better understanding of customer care needs, field service efficiency, and security threats. Hadoop provides a cost-effective way to store and analyze diverse data sources for enhanced business outcomes.
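The targeted-maintenance use case above reduces to counting issues per network node and ranking; a toy sketch with invented data and field names:

```python
# Toy version of "targeted maintenance": count trouble tickets per network
# node and surface the nodes responsible for most customer issues.
from collections import Counter

tickets = [  # invented records for illustration
    {"node": "cell-17", "issue": "drop"},
    {"node": "cell-03", "issue": "slow"},
    {"node": "cell-17", "issue": "drop"},
    {"node": "cell-17", "issue": "slow"},
    {"node": "cell-42", "issue": "drop"},
]

by_node = Counter(t["node"] for t in tickets)
worst = by_node.most_common(2)  # prioritize maintenance here first
print(worst)
```

At Hadoop scale the same Counter logic becomes the reduce step over billions of records, but the shape of the computation is identical.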
Monetizing Big Data at Telecom Service Providers – DataWorks Summit
Hadoop enables telecom companies to gain valuable insights from large amounts of customer data. It provides a cost-effective way to store and analyze call detail records, network traffic data, customer account information, and other big data sources. This allows telecoms to improve network maintenance, enhance the customer experience, optimize marketing campaigns, and reduce customer churn. The document discusses several use cases where telecom companies have used Hadoop to save millions of dollars annually or increase revenue through better data-driven decisions.
Accelerate Big Data Processing with High-Performance Computing Technologies – Intel® Software
Learn about opportunities and challenges for accelerating big data middleware on modern high-performance computing (HPC) clusters by exploiting HPC technologies.
The document discusses the growing demand for higher computing performance and the role of intelligent interconnects in enabling exascale performance. It summarizes Mellanox's vision of co-designing hardware, software and applications to leverage in-network computing. Mellanox introduced its Switch-IB 2 smart switch and SHArP technology that can execute MPI operations in the network to accelerate applications by 10x. Mellanox also discussed its roadmap to deliver higher performance, scalability and efficiency through intelligent programmable adapters, routers and multi-host architectures.
Ceph Day New York 2014: Ceph over High Performance Networks – Ceph Community
Mellanox provides high performance networking solutions for Ceph storage clusters. They discussed how Ceph relies on high performance networks for scalability and availability. Mellanox offers end-to-end 40/56GbE and InfiniBand solutions with full CPU offloading. They presented examples of how customers deploy Ceph with Mellanox's 40GbE interconnects across cluster, client, and public networks. Mellanox also discussed ongoing work to integrate RDMA support into Ceph to further improve performance.
Ceph Day London 2014 - Ceph Over High-Performance Networks – Ceph Community
Mellanox provides high performance networking solutions for Ceph storage clusters. They discussed how Ceph relies on high performance networks for scalability and availability. Mellanox offers end-to-end 40/56GbE and InfiniBand solutions with full CPU offloading. They presented examples of how customers deploy Ceph with Mellanox's 40GbE interconnects across cluster, client, and public networks. Mellanox also discussed ongoing work to integrate RDMA support into Ceph to further improve performance.
1) InfiniBand is an industry standard channel-based architecture that provides high-speed, low-latency interconnects for distributed computing infrastructures.
2) It combines networks into a unified fabric that collectively routes data between host nodes and network peripherals, reducing required adapters and cables and lowering total cost of ownership.
3) InfiniBand is supported by multiple vendors and is well-suited for applications that require high bandwidth, low latency, and low processor overhead such as high performance computing, databases, and heavily interconnected servers.
IRJET- A Study of Comparatively Analysis for HDFS and Google File System ... – IRJET Journal
This document compares and contrasts the Hadoop Distributed File System (HDFS) and the Google File System (GFS), which are both frameworks for handling large-scale, distributed data storage and processing. HDFS is an open-source system implemented by Apache and used by companies like Yahoo, Facebook, and IBM. GFS was originally developed by Google as a proprietary system. Both systems use a master-slave architecture with a centralized metadata manager and distributed data nodes, but HDFS uses a NameNode and DataNodes while GFS uses a MasterNode and ChunkServers. The document outlines several key similarities and differences between the two systems in their objectives, implementations, hardware usage, file management, operations, and other technical aspects.
Performance Evaluation of Soft RoCE over 1 Gigabit Ethernet – IOSR Journals
Abstract: Ethernet is the most influential and widely used networking technology in the world. With the growing demand for low latency and high throughput, technologies like InfiniBand and RoCE have evolved with unique features such as RDMA (Remote Direct Memory Access). RDMA is an effective technology for reducing system load and improving performance. InfiniBand is a well-known technology that provides high bandwidth and low latency and makes optimal use of built-in features like RDMA. With the rapid evolution of InfiniBand, and Ethernet lacking RDMA and a zero-copy protocol, the Ethernet community has come out with new enhancements that bridge the gap between InfiniBand and Ethernet. Adding RDMA and a zero-copy protocol to Ethernet produced a new networking technology called RDMA over Converged Ethernet (RoCE). RoCE is a standard released by the IBTA standardization body to define an RDMA protocol over Ethernet. With the emergence of lossless Ethernet, RoCE uses InfiniBand's efficient transport to provide a platform for deploying RDMA technology in mainstream data centres over 10GigE, 40GigE and beyond. RoCE provides all of InfiniBand's transport benefits and its well-established RDMA ecosystem combined with converged Ethernet. In this paper, we evaluate a heterogeneous Linux cluster with multiple nodes and fast interconnects, i.e. gigabit Ethernet and Soft RoCE. The paper presents the cluster configuration and evaluates its performance using Intel's MPI Benchmarks. Our results show that Soft RoCE outperforms plain Ethernet in performance metrics such as bandwidth, latency and throughput.
Keywords: Ethernet, InfiniBand, MPI, RoCE, RDMA, Soft RoCE
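The latency figures in such benchmarks come from the classic ping-pong pattern; the sketch below times the same pattern over a local socket pair, to show the measurement method rather than real gigabit Ethernet or Soft RoCE numbers:

```python
# Minimal ping-pong round-trip timer in the spirit of the Intel MPI
# Benchmarks, run over a local socket pair rather than a real interconnect.
import socket
import time

def pingpong(iterations=1000, size=64):
    """Estimate one-way latency (seconds) from timed ping-pong round trips."""
    a, b = socket.socketpair()
    payload = b"x" * size
    start = time.perf_counter()
    for _ in range(iterations):
        a.sendall(payload)  # "ping"
        b.recv(size)
        b.sendall(payload)  # "pong"
        a.recv(size)
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed / iterations / 2  # half a round trip per direction

print(f"estimated one-way latency: {pingpong() * 1e6:.2f} us")
```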
This document discusses how Mellanox technologies can accelerate big data solutions using RDMA. It summarizes that Mellanox provides end-to-end interconnect solutions including adapters, switches, and cables. It also discusses three key areas for acceleration: data analytics, storage, and distributed storage. The document presents the Unstructured Data Accelerator plugin which can double MapReduce performance using RDMA for efficient data shuffling. It also discusses using RDMA and SSDs to unlock higher throughput in HDFS and overcome bandwidth limitations of 1GbE and 10GbE networks.
Similar to International Journal of Engineering Research and Development (20)
A Novel Method for Prevention of Bandwidth Distributed Denial of Service Attacks – IJERD Editor
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet, and the traditional architecture of the Internet is vulnerable to them. An attacker first acquires an army of zombies; that army is then instructed when to start an attack and whom to attack. In this paper, the different techniques used to perform DDoS attacks, the tools used to perform them, and countermeasures for detecting attackers and eliminating Bandwidth Distributed Denial of Service (B-DDoS) attacks are reviewed. DDoS attacks are carried out using various flooding techniques.
The main purpose of this paper is to design an architecture that can reduce Bandwidth Distributed Denial of Service attacks and keep the victim site or server available for normal users by eliminating the zombie machines. Our primary focus is to discuss how normal machines turn into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To present this, we implemented a simulated environment with Cisco switches, routers, a firewall, some virtual machines and some attack tools to demonstrate a real DDoS attack. By using time scheduling, resource limiting, system logs, access control lists and a modular policy framework, we stopped the attack and identified the attacker (bot) machines.
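The "resource limiting" countermeasure mentioned above is commonly implemented as a per-source token bucket that drops traffic from hosts exceeding their allowed request rate; a minimal sketch with arbitrary rates:

```python
# Token-bucket rate limiter: a flooding source quickly exhausts its tokens
# and its excess traffic is dropped, blunting bandwidth-flooding zombies.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, burst=2)  # 2 requests/second sustained
results = [bucket.allow(t) for t in [0.0, 0.1, 0.2, 1.5]]
print(results)  # the third, too-fast request is rejected
```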
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electro-mechanical-systems (MEMS) audio transducers and ultra-low-power, power-scalable analog-to-digital converters (ADCs), which enable a very low form factor, energy-efficient implementation for next-generation DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
Influence of tensile behaviour of slab on the structural Behaviour of shear c... – IJERD Editor
A composite beam is composed of a steel beam and a slab connected by means of shear connectors, such as studs installed on the top flange of the steel beam, to form a structure that behaves monolithically. This study analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection, such as slip stiffness and maximum shear force, in composite beams subjected to hogging moment. The results show that shear studs located in the crack-concentration zones caused by large hogging moments sustain significantly smaller shear force and slip stiffness than those in other zones. Moreover, the reduction of slip stiffness in the shear connection also appears to be closely related to the change in the tensile strain of the rebar as the load increases. Further experimental and analytical studies shall be conducted, considering variables such as the reinforcement ratio and the arrangement of shear connectors, to achieve efficient design of the shear connection in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing ‘A case study of Sudan’ – IJERD Editor
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian crystalline rocks on the flanks of the Red Sea. The crystalline rocks are mostly Neoproterozoic in age. The ANS includes the nations of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia. It consists of juvenile continental crust that formed between 900 and 550 Ma, when intra-oceanic arcs welded together along ophiolite-decorated sutures. Primary Au mineralization probably developed in association with the growth of the intra-oceanic arcs and the evolution of back-arcs. Multiple episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they originated as sea-floor massive sulphide deposits.
The Red Sea Hills region is a vast span of rugged, harsh and inhospitable terrain; nevertheless, since ancient times it has been famed as an abode of gold and was a major source of wealth for the Pharaohs of ancient Egypt. The Pharaohs' old workings have been periodically rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the discovery of a score of occurrences with gold and massive sulphide mineralization. In the nineties of the previous century, the Geological Research Authority of Sudan (GRAS), in cooperation with BRGM, used Landsat TM satellite data and the spectral ratio technique to map possible mineralized zones in the Red Sea Hills of Sudan. The outcome of the study mapped a gossan-type gold mineralization. The band ratio technique was applied to the Arbaat area and the signature of an alteration zone was detected; such alteration zones are commonly associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in the alteration zone. Another type of gold mineralization discovered using remote sensing is the gold associated with metachert in the Atmur Desert.
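The band-ratio technique applied to the Landsat TM scenes can be illustrated in a few lines: a high ratio of two spectral bands flags pixels whose spectra resemble alteration zones. The band values and the threshold below are invented for illustration.

```python
# Toy band-ratio computation: divide one band raster by another per pixel
# and threshold the ratio to flag candidate alteration zones.
def band_ratio(band_a, band_b):
    """Per-pixel ratio of two equally sized band rasters (nested lists)."""
    return [[a / b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]

tm3 = [[90, 30], [120, 40]]  # red band digital numbers (arbitrary)
tm1 = [[30, 30], [40, 40]]   # blue band digital numbers (arbitrary)

ratio = band_ratio(tm3, tm1)
anomaly = [[r > 2.0 for r in row] for row in ratio]  # candidate alteration pixels
print(ratio, anomaly)
```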
Reducing Corrosion Rate by Welding Design – IJERD Editor
This document summarizes a study on reducing corrosion rates in steel through welding design. The researchers tested different welding groove designs (X, V, 1/2X, 1/2V) and preheating temperatures (400°C, 500°C, 600°C) on ferritic malleable iron samples. Testing found that X and V groove designs with 500°C and 600°C preheating had corrosion rates of 0.5-0.69% weight loss after 14 days, compared to 0.57-0.76% for 400°C preheating. Higher preheating reduced residual stresses, which decreased corrosion. Residual stresses were 1.7 MPa for the optimal X groove and 600°C preheating.
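The corrosion figures reported are percentage weight loss over the 14-day exposure; the calculation itself is simple (the sample masses below are illustrative, not from the study):

```python
# Percent weight loss, the corrosion metric used in the study's results.
def percent_weight_loss(initial_g, final_g):
    return (initial_g - final_g) / initial_g * 100.0

initial, final = 250.00, 248.50  # grams before/after exposure (assumed)
loss = percent_weight_loss(initial, final)
print(f"{loss:.2f}% weight loss")
```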
Router 1X3 – RTL Design and Verification – IJERD Editor
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and how the various sub-modules of the router, i.e. the register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected to its top module.
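At the behavioral level, the 1x3 router's job is to demultiplex each packet to one of three output channels by destination address. The sketch below assumes a 1-byte address field at the start of each packet; the paper itself implements this in RTL with FIFOs and an FSM.

```python
# Behavioral, software-level sketch of a 1x3 router: inspect a packet's
# address byte and forward the payload to one of three output channels.
def route(packet):
    """Return (output_port, payload); port is None for invalid addresses."""
    address, payload = packet[0], packet[1:]
    if address not in (0, 1, 2):
        return None, payload  # invalid address: drop the packet
    return address, payload

outputs = {0: [], 1: [], 2: []}
for pkt in [bytes([0, 65]), bytes([2, 66]), bytes([1, 67]), bytes([3, 68])]:
    port, payload = route(pkt)
    if port is not None:
        outputs[port].append(payload)
print(outputs)
```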
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha... – IJERD Editor
This paper presents a component within the flexible AC transmission system (FACTS) family called the distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC) with the common dc link eliminated. The DPFC has the same control capabilities as the UPFC, which comprise adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange between the shunt and series converters, which passes through the common dc link in the UPFC, is now through the transmission lines at the third-harmonic frequency. The DPFC uses multiple small-size single-phase converters, which reduces equipment cost, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, together with the corresponding simulation results carried out on a scaled prototype.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVRIJERD Editor
Power quality has become an increasingly pivotal issue for industrial electricity
consumers in recent times. Modern industries employ sensitive power-electronic equipment,
control devices and non-linear loads as part of automated processes to increase energy efficiency and
productivity. Voltage disturbances are the most common power quality problem, and the large number of
sophisticated and sensitive electronic devices now used in industrial systems makes loads ever more
vulnerable to them. This paper discusses the design and simulation of a dynamic voltage restorer (DVR)
for improving power quality and reducing the harmonic distortion seen by sensitive loads. Power quality
problems arise from non-standard voltage, current and frequency; in a power system, voltage sag,
swell, flicker and harmonics are among the disturbances affecting sensitive loads. The compensation capability
of a DVR depends primarily on its maximum voltage-injection ability and the amount of stored
energy available within the restorer. The device is connected in series with the distribution feeder at
medium voltage. A fuzzy logic controller produces the gate pulses for the DVR control circuit, and the
circuit is simulated using MATLAB/SIMULINK.
Study on the Fused Deposition Modelling In Additive ManufacturingIJERD Editor
The additive manufacturing process, also popularly known as 3-D printing, is a process in which a product
is created as a succession of layers. It is based on a novel material-incremental manufacturing philosophy.
Unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the
final shape of a product, 3-D printing builds the product from scratch, obviating the need to cut away
material and thereby preventing wastage of raw materials. Commonly used raw materials for the process are ABS
plastic, PLA and nylon; recently, gold, bronze and wood have also been used. The geometric complexity of the
process is essentially unconstrained, in that an object of any shape and size can be manufactured.
Spyware triggering system by particular string valueIJERD Editor
This computer program can be used for good or bad purposes, in hacking or for general
use. It can be seen as the next step beyond hacking techniques such as keyloggers and spyware. In this system,
once the user or attacker stores a particular string as input, the software continually compares the user's typing
activity with that stored string and, if it matches, launches the spyware program.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...IJERD Editor
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic
schemes, i.e., Jsteg, F5, Outguess and DWT-based schemes. The proposed method exploits the correlations between
block-DCT coefficients, drawn from both intra-block and inter-block relations, and selects the statistical moments
of the characteristic functions of the test image as features. The features are extracted from the BDCT JPEG 2-D array.
A Support Vector Machine with cross-validation is used for classification. The proposed scheme gives
improved results in attacking these schemes.
Secure Image Transmission for Cloud Storage System Using Hybrid SchemeIJERD Editor
Data over the cloud is transferred between servers and users. The privacy of that
data is very important, since it includes personal information; if the data is hacked, it can be
used to defame a person. Delays can also occur during data transmission, e.g., in mobile
communication, where bandwidth is low. Hence, compression algorithms are proposed for fast and efficient
transmission, encryption is used for security, and blurring provides an additional
layer of security. These algorithms are hybridized to achieve robust, efficient security and
transmission over a cloud storage system.
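The compress-then-encrypt ordering such a hybrid scheme implies can be sketched as follows (a minimal illustration, not the paper's actual algorithms; the hash-counter keystream is a toy stand-in and is not a secure cipher — a real system would use a vetted cipher such as AES-GCM):

```python
import hashlib
import zlib

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key || counter.
    Illustrative only; not cryptographically vetted."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def protect(image_bytes: bytes, key: bytes) -> bytes:
    """Compress first (smaller payload to transmit), then encrypt."""
    compressed = zlib.compress(image_bytes)
    ks = _keystream(key, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, ks))

def recover(cipher: bytes, key: bytes) -> bytes:
    """Decrypt, then decompress: the inverse pipeline."""
    ks = _keystream(key, len(cipher))
    return zlib.decompress(bytes(a ^ b for a, b in zip(cipher, ks)))

payload = b"pixel data " * 100
print(recover(protect(payload, b"k3y"), b"k3y") == payload)  # True
```

Compression must precede encryption because well-encrypted data looks random and is essentially incompressible, which is why hybrid schemes stack the layers in this order.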
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...IJERD Editor
A thorough review of existing literature indicates that the Buckley-Leverett equation is usually applied
to waterflood practices directly, without any adjustment for real reservoir scenarios, which introduces
a number of errors into these analyses. Also, for most waterflood scenarios, a radial treatment is more
appropriate than a simplified linear system. This study investigates the adaptation of the Buckley-Leverett
equation to estimate the radius of invasion of the displacing fluid during waterflooding. The model is also adapted
for a microbial flood, and a comparative analysis is conducted for waterflooding and microbial flooding.
The results not only record a success in determining the radial distance of the leading
edge of water during the flooding process, but also give a clearer understanding of the applicability of
microbes to enhancing oil production through in-situ generation of bio-products such as biosurfactants, biogenic
gases and bio-acids.
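For context, Buckley-Leverett analysis rests on the water fractional-flow function; a minimal sketch (with illustrative quadratic Corey-type relative permeability curves and hypothetical viscosity and saturation endpoints, not the study's actual reservoir data) is:

```python
def fractional_flow(sw, mu_w=1.0, mu_o=5.0, swc=0.2, sor=0.2):
    """Water fractional flow f_w = 1 / (1 + (k_ro * mu_w) / (k_rw * mu_o)),
    neglecting gravity and capillary pressure, with simple quadratic
    (Corey-type) relative permeability curves."""
    # Normalized water saturation between connate water and residual oil.
    swn = (sw - swc) / (1.0 - swc - sor)
    swn = min(max(swn, 0.0), 1.0)
    krw = swn ** 2          # relative permeability to water
    kro = (1.0 - swn) ** 2  # relative permeability to oil
    if krw == 0.0:
        return 0.0          # no mobile water below connate saturation
    return 1.0 / (1.0 + (kro * mu_w) / (krw * mu_o))

# Fractional flow rises monotonically with water saturation.
print(round(fractional_flow(0.5), 3))  # 0.833
```

In a radial adaptation such as the one the study proposes, the frontal-advance step would relate the invaded pore volume to an annular area rather than a linear distance, but the fractional-flow curve itself is unchanged.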
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraIJERD Editor
Gesture gaming is a method by which users with a laptop/PC/Xbox play games using natural or
bodily gestures. This paper presents a way of playing free flash games on the internet using an ordinary webcam
with the help of open-source technologies. In human-activity recognition, emphasis is placed on pose
estimation and on the consistency of the player's pose, which are estimated with an ordinary web
camera at resolutions ranging from VGA to 20 megapixels. Our work involved showing the user a 10-second
instructional video on how to play a particular game using gestures and on the various kinds of gestures that can
be performed in front of the system. The initial RGB values for the gesture component are obtained by
instructing the user to place the component in a red box within about 10 seconds after the short video
finishes. The system then opens the game on popular flash game sites such as Miniclip, Games Arcade or
GameStop, loads it by clicking at the appropriate places, and brings it to the state where the user only has to
perform gestures to start playing. At any point the user can call off the game by hitting the Esc key, whereupon
the program releases all controls and returns to the desktop. The results obtained using an ordinary webcam
matched those of the Kinect, and users could relive the gaming experience of free flash games on the net.
Effective in-game advertising could therefore also be achieved, offering disruptive growth to advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...IJERD Editor
The LLC resonant frequency converter is essentially a combination of series and parallel resonant circuits.
The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the
lower resonant frequency lies in the ZCS region [5]; for this application, the converter cannot be designed to
operate at that resonant frequency. The LLC resonant converter has existed for a very long time, but because
its characteristics were not well understood it was used as a series resonant converter with an essentially passive
(resistive) load. Here, it is designed to operate at a switching frequency higher than the resonant frequency of the
series resonant tank of Lr and Cr, where the converter behaves very similarly to a series resonant converter. The
benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control
circuit plays a very important role: the 555 timer used here provides a clean square wave, since the control
circuit introduces no slew, which keeps the square-wave edges sharp. The dead-band circuit provides an
exclusive dead band of a few microseconds to avoid simultaneous firing of the two pairs of IGBTs when one
pair switches off and the other switches on within the smallest interval of time. An isolator circuit is associated
with every circuit used, since it acts as a driver, and isolation for each IGBT is provided by a dedicated
transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards,
and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a smooth dc
waveform. The basic goal of this analysis is to observe the waveforms and characteristics of
converters with differently positioned passive elements forming the tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...IJERD Editor
The LLC resonant frequency converter is essentially a combination of series and parallel resonant circuits.
The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the
lower resonant frequency lies in the ZCS region [5]; for this application, the converter cannot be designed to
operate at that resonant frequency. The LLC resonant converter has existed for a very long time, but because
its characteristics were not well understood it was used as a series resonant converter with an essentially passive
(resistive) load. Here, it is designed to operate at a switching frequency higher than the resonant frequency of the
series resonant tank of Lr and Cr, where the converter behaves very similarly to a series resonant converter. The
benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control
circuit plays a very important role: the 555 timer used here provides a clean square wave, since the control
circuit introduces no slew, which keeps the square-wave edges sharp. The dead-band circuit provides an
exclusive dead band of a few microseconds to avoid simultaneous firing of the two pairs of IGBTs when one
pair switches off and the other switches on within the smallest interval of time. An isolator circuit is associated
with every circuit used, since it acts as a driver, and isolation for each IGBT is provided by a dedicated
transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards,
and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a smooth dc
waveform. The basic goal of this analysis is to observe the waveforms and characteristics of
converters with differently positioned passive elements forming the tank circuits. The supporting simulation
is carried out with the PSIM 6.0 software tool.
An amateur radio operator, also known as a HAM, communicates with other HAMs through radio
waves. Wireless communication in which the Moon is used as a natural satellite is called Moon-bounce or EME
(Earth-Moon-Earth). Long-distance communication (DXing) using Very High Frequency (VHF)
amateur HAM radio used to be difficult. Yet even with a modest setup comprising a good transceiver, a power
amplifier and a high-gain antenna with high directivity, VHF DXing is possible. Generally, a 2x11 Yagi antenna
together with a rotor to set the horizontal and vertical angles is used. Moon-tracking software gives the exact
location and visibility of the Moon at both stations, along with other vital data needed to acquire the Moon's
real-time position.
"MS-Extractor: An Innovative Approach to Extract Microsatellites on 'Y' Chrom...IJERD Editor
Simple Sequence Repeats (SSRs), also known as microsatellites, have been extensively used as
molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of
polymorphic forms of the same gene should be 99.9% identical, so extracting microsatellites from the gene is
crucial: when microsatellite repeat counts are compared, a large difference can indicate a disorder. The Y
chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only males
have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and
development. Several microsatellite extractors exist, but they fail to extract microsatellites from large data sets
gigabytes or terabytes in size. The proposed tool, "MS-Extractor: An Innovative Approach to Extract
Microsatellites on 'Y' Chromosome", can extract both perfect and imperfect microsatellites from large
data sets of the human 'Y' chromosome. The proposed system uses string matching with a sliding-window
approach to locate microsatellites and extract them.
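The sliding-window matching described above can be sketched as follows (a minimal Python illustration, not the authors' tool; it handles perfect microsatellites only, and the motif-length and minimum-repeat parameters are illustrative assumptions):

```python
def find_perfect_microsatellites(seq, motif_len=2, min_repeats=3):
    """Slide a window over the sequence and report runs where a motif
    of length motif_len repeats at least min_repeats times in a row."""
    results = []
    i = 0
    while i + motif_len * min_repeats <= len(seq):
        motif = seq[i:i + motif_len]
        # Count consecutive copies of the motif starting at position i.
        j = i + motif_len
        repeats = 1
        while seq[j:j + motif_len] == motif:
            repeats += 1
            j += motif_len
        if repeats >= min_repeats:
            results.append((i, motif, repeats))
            i = j  # skip past the located microsatellite
        else:
            i += 1
    return results

# Example: the dinucleotide motif "CA" repeated 4 times at position 3.
print(find_perfect_microsatellites("GGTCACACACATTG"))  # [(3, 'CA', 4)]
```

Extending this to imperfect microsatellites would mean tolerating a bounded number of mismatches or indels within the run, which is where the real tool's extraction logic would diverge from this sketch.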
Importance of Measurements in Smart GridIJERD Editor
Driven by the need for reliable supply, independence from fossil fuels, and the capability to provide clean
energy at a fixed and lower cost, the existing power grid is transforming into the Smart Grid. The
development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have
new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new era
of the smart future grid will lead to major changes in existing technologies at the generation, transmission and
distribution levels. The incorporation of renewable energy resources and distributed generators into the existing
grid will increase the complexity, optimization problems and instability of the system. This will lead to a
paradigm shift in the instrumentation and control requirements of Smart Grids for a high-quality, stable and
reliable supply of electric power. Monitoring the state and stability of the grid relies on the
availability of reliable measurement data. This paper discusses the measurement areas that highlight new
measurement challenges, the development of Smart Meters, and the critical parameters of electric energy to be
monitored for improving the reliability of power systems.
Study of Macro level Properties of SCC using GGBS and Lime stone powderIJERD Editor
The document summarizes a study on the use of ground granulated blast furnace slag (GGBS) and limestone powder to replace cement in self-compacting concrete (SCC). Tests were conducted on SCC mixes with 0-50% replacement of cement with GGBS and 0-20% replacement with limestone powder. The results showed that replacing 30% of cement with GGBS and 15% with limestone powder produced SCC with the highest compressive strength of 46 MPa, while meeting fresh-property requirements. The study concluded that this ternary blend of cement, GGBS and limestone powder can improve SCC properties while reducing costs.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
HCL Notes and Domino License Cost Reduction in the World of DLAU (German-language webinar)panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expense, e.g. using a person document instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics are covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices you can implement right away
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 10, Issue 5 (May 2014), PP.26-35
Can Modern Interconnects Improve the Performance of
Hadoop Cluster? Performance evaluation of
Hadoop on SSD and HDD with IPoIB
Piyush Saxena
M.Tech (Computer Science and Engineering), Amity School of Engineering and Technology,
Amity University, Noida, India +91-9451427546
Abstract:- In today's world, where the Internet is indispensable and petabytes of data are produced per hour,
there is a pressing need to improve the performance and throughput of cloud systems. Traditional cloud
systems have not been able to deliver the performance that storage devices like SSDs and HDDs are meant to
deliver.
In our previous paper we showed that Hadoop on SSD and HDD did not show much difference in performance,
because the drives were attached to the processing system through a conventional interconnect that acted as a
bottleneck. Another contributing factor, visible in the data-access pattern, was the limited Random Access
Memory and the low caching resources available. To address these issues, another set of experiments was
conducted using a connecting method considerably faster than conventional 10 GigE, and by implementing
distributed shared memory to make the access patterns much faster. The improved methods considered for the
tests were IPoIB and RDMA-IB.
In this paper we also show that modern interconnects used with Hadoop (MapReduce) on SSD can
outperform traditional interconnect techniques such as 10 GigE networks. In addition, we demonstrate
that sockets and conventional TCP/IP applications can still be used with the new technology, with
improved throughput and lower latency, when IPoIB is used.
Keywords:- Hadoop, HDFS, SSD, HDD, HiBench, Benchmarking, 10 GigE, IPoIB, RDMA-IB.
I. INTRODUCTION TO HADOOP AND HDFS
In today's digital age, a vast amount of data is processed on the Internet. Delivering optimal data processing with favourable response times determines how well consumer requests are served. Many users frequently attempt to access the same data over the web, and it is a challenging task for the server to deliver optimal results. The sheer volume of data the Internet has to deal with every day has made conventional solutions extremely uneconomical. Large documents must be split into many independent sub-tasks, distributed across the available nodes, and processed in parallel. MapReduce and Hadoop came into existence to meet this need.
Hadoop is a free, Java-based programming framework that supports the processing of large workloads on systems with thousands of nodes holding multiple petabytes of data. The Hadoop Distributed File System (HDFS) enables fast data transfer between the nodes and keeps the cluster functioning without interruption in case of node failure. This lowers the risk of complete system failure even when a significant number of nodes are inoperative. [2]
Hadoop was motivated by MapReduce (Fig. 1), a software framework introduced by Google in which an application is broken down into numerous small parts. Any of these parts (also called fragments or blocks) can be run on any node in the cluster. [3]
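The split-and-merge idea behind MapReduce can be sketched in a few lines of plain Python (a toy model for illustration only, not Hadoop's actual Java API):

```python
from collections import defaultdict
from itertools import chain

# Toy MapReduce model: map each input split independently,
# shuffle the intermediate pairs by key, then reduce each
# key group. The job here is the classic word count.

def map_phase(split):
    # Emit (word, 1) for every word in one input fragment.
    return [(word, 1) for word in split.split()]

def shuffle(pairs):
    # Group intermediate values by key, as the framework does
    # between the Map and Reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Combine each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

splits = ["big data big cluster", "big data"]  # fragments on different nodes
mapped = chain.from_iterable(map_phase(s) for s in splits)
counts = reduce_phase(shuffle(mapped))
print(counts["big"])  # 3
```

Hadoop performs the same three steps, but distributes the map and reduce calls across the nodes of the cluster and moves the intermediate pairs over the interconnect during the shuffle, which is why the choice of network matters for the experiments that follow.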
MapReduce-based studies have been actively carried out for the efficient processing of big data on Hadoop. Hadoop runs on clusters of computers that can handle large amounts of data and support distributed applications. [4] In the last few years, much research has been carried out to improve the performance of Hadoop. One hindrance is the performance of the storage device, which is connected to the system by a slower interface such as a bus; the differences between the storage devices themselves also matter. [5]
The performance of a Hadoop system also depends on the type of workload considered. This is why we use HiBench as the standard model for testing the Hadoop Distributed File System (HDFS). In this paper, we study and evaluate the performance of HDFS on a Hadoop cluster containing a flash-memory-based SSD (Solid State Drive) and a Hard Disk Drive, optimizing each parameter in HiBench.
Technology has advanced quickly, and datasets have grown even faster as it has become easier to generate and capture data. Big Data is a warehouse of information, and the primary challenge in investigating it is to overcome the I/O bottleneck present on modern systems. [6] Sluggish I/O subsystems defeat the purpose of having high-end processors: they cannot supply data fast enough to utilize all of the available processing power. The outcome is wasted power and an increased cost of operating large clusters. One approach is to use modern interconnects such as IPoIB and RDMA-IB in place of traditional interconnects like 10 GigE.
Fig 1.) Hadoop MapReduce Architecture
II. TRADITIONAL INTERCONNECT 10 GIGE NETWORK
10 Gigabit Ethernet is a networking technology that provides data transfer speeds of up to 10 billion bits per second. It is also known as 10GE, 10GbE or 10 GigE.
It supports full-duplex links connected by network switches, rather than shared-medium operation with CSMA/CD. [7] It works properly with existing protocols. Because 10 GigE operates exclusively in full-duplex mode, the Carrier Sense Multiple Access/Collision Detection protocol is unnecessary, which improves the efficiency and speed of 10 Gb Ethernet; it can be deployed straightforwardly in an existing network, giving a cost-efficient technology that supports high-speed, low-latency requirements. [8]
10 Gigabit Ethernet supports distances between physical locations of up to 40 kilometers over single-mode fiber, with shorter reaches over multi-mode fiber.
Technically, 10 Gb Ethernet is a Layer 1 and Layer 2 protocol that retains the key Ethernet attributes: the Media Access Control (MAC) protocol, the Ethernet frame format, and the minimum and maximum frame sizes. The technology supports both LAN and WAN standards. (Fig 2.)
The issues faced in deploying 10 Gb Ethernet are due to the cost of fiber links, but the benefits received are very large.
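A quick back-of-envelope calculation shows what a 10 GigE link implies for the dataset size used later in this paper (ignoring all protocol overhead, so this is a theoretical floor, not a measured figure):

```python
# Minimum time to move the ~6.1 GB benchmark dataset over a
# 10 GigE link at full line rate, with zero protocol overhead.

LINK_BPS = 10 * 10**9          # 10 gigabits per second
DATASET_BYTES = 6550021992     # input size used for the benchmarks below

seconds = DATASET_BYTES * 8 / LINK_BPS
print(round(seconds, 2))       # ~5.24 s theoretical floor for one transfer
```

Real MapReduce jobs move intermediate data many times during the shuffle, so the interconnect is exercised far more heavily than a single pass over the input would suggest.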
III. MODERN INTERCONNECT IPOIB NETWORK
InfiniBand (IB) [9] is a unified system-area network used in HPC and data-centre environments. InfiniBand technology offers high-speed data transfer at very low latency. To allow legacy IP-based applications to run over InfiniBand in data centres, the Internet Protocol over InfiniBand (IPoIB) protocol provides an interface on top of the InfiniBand 'Verbs' layer: applications running on sockets continue to use the host-based TCP/IP protocol stack, which is converted into native InfiniBand verbs transparently to the application. Sockets Direct Protocol (SDP), a development of the sockets-based interface, allows a process to bypass the TCP/IP protocol stack and translate socket-based packets into verbs-layer RDMA operations while still maintaining TCP streaming-socket semantics. [10]
SDP has the benefit of bypassing the software layers required by IPoIB. As a result, SDP has better latency and performance than IPoIB.
InfiniBand is used in mainstream and high-performance computing. The benefits of IPoIB are reduced communication latency and higher available bandwidth for clients in the local data-centre network (DCN).
The administration of network load is a concern in new networking technologies. Quality of Service (QoS) provisioning can be used to control intra-network traffic so that priority is given to the input data stream (IDS). Traffic load, and the flexibility to fine-tune the performance of the network, can also be a bottleneck for system-wide performance. Such technology would boost the performance of traditional data centres that still run on Ethernet best-effort service, with little or no need to modify conventional socket applications.
It is important to evaluate the behaviour of hardware-level QoS provisioning for an InfiniBand network with applications running on optimized socket-based protocols. This is a step toward harnessing high-speed interconnects for existing Internet applications.
In this paper, we analyze the performance improvements offered by modern interconnects such as IPoIB in comparison with traditional bus interconnects and 10 GigE hardware.
InfiniBand is a prominent cluster interconnect technology with very low latency and very high performance. Native InfiniBand verbs form the lowest software layer of the InfiniBand network, allowing direct user-level access to IB Host Channel Adapter (HCA) resources while bypassing the operating system. At the IB verbs level, a queue-pair model is used for messaging under both Send/Receive and RDMA semantics. InfiniBand requires the user to register a buffer before using it for communication.
InfiniBand HCAs have two ports that can operate as 4X InfiniBand or 10 GigE. The architecture of the HCA includes a stateless offload engine for network interface card (NIC) based protocol processing.
Sockets Direct Protocol was originally designed for InfiniBand and has since been redefined as a transport-agnostic protocol for RDMA-based network fabrics. It was introduced to improve the performance of sockets by using the RDMA protocol of the InfiniBand network. SDP is a byte-stream protocol built on TCP stream-socket semantics. SDP uses a protocol switch inside the operating-system kernel that transparently alternates between the kernel TCP/IP stack over IB (IPoIB) and SDP over IB (which bypasses the kernel TCP/IP stack) [11].
SDP supports two forms of data exchange. In the buffered-copy mode, the socket data is copied into a preregistered buffer before the network transfer. In the zero-copy mode, the user buffer is transparently registered so that the transfer bypasses data copying. (Fig 2.)
IV. MODERN INTERCONNECT RDMA-IB
InfiniBand Host Channel Adapters (HCAs) and other network equipment are accessed by upper-layer software through an interface called Verbs. The verbs interface is a low-level communication interface that follows the Queue Pair (communication end-point) model.
Queue pairs are required to establish a channel between two communicating entities. Each queue pair holds a certain number of work-queue elements. Upper-level software places a work request on the queue pair, which is then processed by the HCA. When a work element completes, an entry is placed in the completion queue; upper-level software detects completion by polling the completion queue. Verbs used to transfer data completely bypass the OS. (Fig 2.)
Fig 2.) Various Interconnect Technologies and architecture
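The queue-pair flow described above can be modelled in a few lines of Python. This is an illustrative sketch only; a real application would use the verbs API of a library such as libibverbs, not this toy class:

```python
from collections import deque

# Minimal model of the verbs queue-pair flow: software posts work
# requests to a send queue; the HCA (modeled here by process_hca)
# consumes them and places completion entries on a completion queue,
# which software discovers by polling.

class QueuePair:
    def __init__(self, depth):
        self.send_queue = deque(maxlen=depth)   # work-queue elements
        self.completion_queue = deque()

    def post_send(self, work_request):
        # Software side: enqueue a work request for the HCA.
        self.send_queue.append(work_request)

    def process_hca(self):
        # Stand-in for the HCA: complete every outstanding work request.
        while self.send_queue:
            wr = self.send_queue.popleft()
            self.completion_queue.append(("ok", wr))

    def poll_cq(self):
        # Software polls the completion queue for finished work.
        return self.completion_queue.popleft() if self.completion_queue else None

qp = QueuePair(depth=16)
qp.post_send("RDMA_WRITE buf0")
qp.post_send("SEND buf1")
qp.process_hca()
print(qp.poll_cq())  # ('ok', 'RDMA_WRITE buf0')
```

The key property the model captures is that the data path (posting and polling) involves no operating-system call at all, which is where the latency advantage over socket-based stacks comes from.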
V. TEST BED SYSTEM USED FOR THE ANALYSIS
The test bed is a cluster of four 1U servers (Quanta Stack). Each node has two Intel Xeon X5670 CPUs with 6 cores each, i.e. 12 physical cores per node, 96 GB of memory, and Ethernet/InfiniBand networking. The storage devices are a 100 GB SSD and a 2 TB HDD.
VI. CLASSIFICATION OF MICRO BENCHMARK WORKLOADS
1.) Sort: Sort represents a large subset of real-world MapReduce jobs: transforming data from one representation to another. Sort is I/O-bound, and its data-access pattern moves equal quantities of data through each stage. The input data is generated using the RandomTextWriter program contained in the Hadoop distribution. The Reduce stage takes about twice as long as the Map stage. (Fig.3.1) [12]
Fig 3.1) MapReduce for SORT workload
2.) Word Count: Word Count also represents a large subset of real-world MapReduce jobs: extracting a small amount of interesting data from a large data set. Word Count is CPU-bound, and its data-access pattern shrinks the quantity of data as the job progresses. The input data is generated using the RandomTextWriter program contained in the Hadoop distribution. The Reduce stage takes nearly the same time as the Map stage. (Fig.3.2)
Fig 3.2) MapReduce for WORD COUNT Workload
3.) TeraSort: TeraSort sorts 10 billion 100-byte records generated by the TeraGen program contained in the Hadoop distribution. TeraSort is CPU-bound during the Map stage and I/O-bound during the Reduce stage, with a data-access pattern that first shrinks and then grows. The Reduce stage takes about 1.5 times as long as the Map stage. (Fig.3.3)
Fig 3.3) MapReduce for TERA SORT workload
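As a sanity check on the sizes quoted above and in the evaluation that follows (the 6550021992-byte figure comes from Section VII):

```python
# The full-scale TeraSort benchmark sorts 10 billion 100-byte records,
# i.e. one terabyte; the experiments in this paper use a much smaller
# ~6.1 GiB input for every workload.

full_terasort = 10_000_000_000 * 100        # bytes in the full benchmark
print(full_terasort == 10**12)              # True: exactly 1 TB

experiment_bytes = 6550021992               # input size used in Section VII
print(round(experiment_bytes / 2**30, 2))   # 6.1 (GiB), matching the paper's 6.1001 GB
```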
VII. PERFORMANCE EVALUATION OF SSD AND HDD ON 10GIGE AND IPOIB
For the performance evaluation and analysis of SSD and HDD, the workloads considered are Sort, Word Count and TeraSort, each run on two different interconnects, viz. 10 GigE and IPoIB. The input size for all workloads is 6550021992 bytes, i.e. 6.1001 GB of data. [13][14][15] (X-axis -> percentage complete; Y-axis -> time in sec)
1) Sort Workload: Since Sort is I/O-bound, it is easily observed that the SSD (Fig.4.1) buffers the data earlier and at a faster rate than the HDD (Fig.5.1), which tends to buffer at a constant speed. For this reason the SSD starts its Reduce phase earlier than the HDD. It can also be inferred from the graduated behaviour of the graph that the HDD works in a more stable manner than the SSD. Overall, the SSD finishes its job 39 seconds earlier than the HDD, which shows that the SSD is faster than the HDD for the Sort workload.
Fig 4.1.) SORT workload on Solid State Drive 10GigE (Blue -> Map Phase; Red -> Reduce Phase)
Fig 4.2.) SORT workload on Solid State Drive IPoIB (Blue -> Map Phase; Red -> Reduce Phase)
The performance change from the traditional interconnect (10 GigE) to the modern interconnect (IPoIB) can be analysed from the benchmarking results of the SSD and HDD on both types of interconnect. The analysis is as follows:
For SSD: the average improvement is 45%, with 44% in the Map phase and 46% in the Reduce phase. The Map phase completed at 113 sec with IPoIB compared to 212 sec with 10GigE. The Reduce phase completed at 455 sec with IPoIB compared to 852 sec with 10GigE. The Reduce phase started at 30% of the Map phase.
For HDD: the average improvement is 27%, with 26% in the Map phase and 27% in the Reduce phase. The Map phase completed at 196 sec with IPoIB compared to 265 sec with 10GigE. The Reduce phase completed at 699 sec with IPoIB compared to 953 sec with 10GigE. The Reduce phase started at 28% of the Map phase.
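The reported percentages appear to follow improvement = (t_10GigE - t_IPoIB) / t_10GigE. This formula is an assumption about the authors' arithmetic, but it reproduces the HDD Sort figures exactly when rounded:

```python
# Reproduce the reported HDD improvement figures for the Sort workload,
# assuming improvement = (t_10gige - t_ipoib) / t_10gige.

def improvement(t_10gige, t_ipoib):
    return (t_10gige - t_ipoib) / t_10gige

print(round(improvement(265, 196) * 100))   # 26 (% Map-phase gain, as reported)
print(round(improvement(953, 699) * 100))   # 27 (% Reduce-phase gain, as reported)
```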
Fig 5.1) SORT workload on Hard Disk Drive 10GigE (Blue -> Map Phase; Red -> Reduce Phase)
Fig 5.2) SORT workload on Hard Disk Drive IPoIB (Blue -> Map Phase; Red -> Reduce Phase)
2) Word Count Workload: Since Word Count is CPU-bound, it is easily observed that the SSD (Fig.6.1) and HDD (Fig.7.1) both buffer at approximately the same rate, with a small variation: the SSD buffers about 3 seconds faster than the HDD. For this reason the SSD starts its Reduce phase at 47 seconds, compared to 49 seconds for the HDD. It can also be inferred from the abrupt behaviour of the graph that the HDD takes longer in the Reduce phase than the SSD. Overall, the SSD finishes its job 6 seconds earlier than the HDD, which is not a major difference, but it still shows that the SSD is faster than the HDD for the Word Count workload.
Fig 6.1) WORD COUNT Workload on Solid State Device 10GigE
Fig 6.2) WORD COUNT Workload on Solid State Device IPoIB
The performance change from the traditional interconnect (10 GigE) to the modern interconnect (IPoIB) can be analysed from the benchmarking of the SSD and HDD. The analysis is as follows:
For SSD: the average improvement is 46%, with 45% in the Map phase and 47% in the Reduce phase. The Map phase completed at 43 sec with IPoIB compared to 79 sec with 10GigE. The Reduce phase completed at 50 sec with IPoIB compared to 94 sec with 10GigE. The Reduce phase started at 44% of the Map phase.
For HDD: the average improvement is 29%, with 26% in the Map phase and 33% in the Reduce phase. The Map phase completed at 64 sec with IPoIB compared to 86 sec with 10GigE. The Reduce phase completed at 67 sec with IPoIB compared to 100 sec with 10GigE. The Reduce phase started at 65% of the Map phase.
Fig 7.1) WORD COUNT Workload on Hard Disk Drive 10GigE
3) TeraSort Workload: Since TeraSort is CPU-bound during the Map stage and I/O-bound during the Reduce stage, it is easily observed that the SSD (Fig.8.1) starts buffering data earlier (19 sec) and at a faster rate than the HDD (Fig.9.1) (21 sec), which tends to buffer at an irregular speed. For this reason the SSD starts its Reduce phase at 23 sec, compared to 24 sec for the HDD. It can also be observed that the Reduce phase takes an equal amount of time for SSD and HDD, i.e. this phase is independent of the storage device and depends on the processor. The SSD finishes its job 1 second earlier than the HDD, a marginal difference, but it still shows that the SSD has lower latency than the HDD for the TeraSort workload.
Fig 7.2) WORD COUNT Workload on Hard Disk Drive IPoIB
Fig 8.1) TERASORT workload on Solid State Drive 10GigE
Fig 8.2) TERASORT workload on Solid State Drive IPoIB
The performance change from the traditional interconnect (10 GigE) to the modern interconnect (IPoIB) can be analysed from the benchmarking results of the SSD and HDD on both types of interconnect. The analysis is as follows:
For SSD: the average improvement is 44%, with 41% in the Map phase and 46% in the Reduce phase. The Map phase completed at 14 sec with IPoIB compared to 28 sec with 10GigE. The Reduce phase completed at 25 sec with IPoIB compared to 46 sec with 10GigE. The Reduce phase started at 98% of the Map phase.
For HDD: the average improvement is 25%, with 24% in the Map phase and 26% in the Reduce phase. The Map phase completed at 16 sec with IPoIB compared to 21 sec with 10GigE. The Reduce phase completed at 34 sec with IPoIB compared to 46 sec with 10GigE. The Reduce phase started at 100% of the Map phase.
Fig 9.1) TERASORT workload on Hard Disk Drive 10GigE
Fig 9.2) TERASORT workload on Hard Disk Drive IPoIB
VIII. CONCLUSION AND FUTURE SCOPE
From the above results and analysis, the performance of SSD and HDD is nearly the same for a given interconnect, but clear gains appear for the SSD over the HDD when IPoIB is used (Fig 10); the difference in performance is visible and drastic. One observation is that the Map phase of every workload performs well only as long as the random-access memory is not exhausted, or the interconnect technology used has very high throughput and low latency. This leads to the conclusion that either a Distributed Shared Memory (DSM) must be involved, or the network interconnect must be improved from traditional 10GigE to IPoIB, to improve the performance of the SSD and HDD and obtain significantly better results [16]. If a connection technique such as InfiniBand using RDMA is employed, even better performance in terms of latency, access speed and fault tolerance can be achieved [10].
Fig 10) Comparison of performances of the Interconnect Technologies
In the future, a model incorporating DSM should be implemented with InfiniBand RDMA, which supports the use of verbs, together with optical-fibre technology, to achieve faster performance and Remote Direct Memory Access. [17][18][19] Modern interconnects such as IPoIB and RDMA-IB have great potential, and their capabilities should be researched and harnessed in the future. [20]
REFERENCES
[1] Hadoop Home: http://hadoop.apache.org/
[2] Jacky Wu, "Hadoop HDFS & MapReduce", Help Guidelines LSA Lab, NTHU, Taiwan 2013.8.7.
[3] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, "The Google File System", SOSP '03, October 19-22, 2003, Bolton Landing, New York, USA. Copyright 2003 ACM.
[4] Piyush Saxena, Satyajit Padhy, Praveen Kumar, "Optimizing Parallel Data Processing With Dynamic Resource Allocation", International Conference on Reliability, Infocom Technologies and Optimization, pp. 735-739, Jan. 29-31, 2013.
[5] Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters", OSDI '04, 2004, Bolton Landing, New York, USA. Copyright 2004 ACM.
[6] "Can High-Performance Interconnects Benefit Hadoop Distributed File System?", Lab Resources of Network-Based Computing Laboratory, Department of Computer Science and Engineering, The Ohio State University, USA.
[7] N. S. Islam, M. W. Rahman, J. Jose, R. Rajachandrasekar, H. Wang, H. Subramoni, C. Murthy, and D.
K. Panda, “High Performance RDMA-based Design of HDFS over InfiniBand”, Research Resources of
Department of Computer Science and Engineering and IBM T.J Watson Research Center, The Ohio
State University Yorktown Heights, NY. November 10-16, 2012, Salt Lake City, Utah, USA.
[8] Open Fabrics Enterprise Distribution, http://www.openfabrics.org/.
[9] X. Ding, S. Jiang, F. Chen, K. Davis, and X. Zhang. DiskSeen: Exploiting Disk Layout and Access
History to Enhance I/O Prefetch. In Proceedings of USENIX07, 2007.
[10] InfiniBand Trade Association Home: http://www.infinibandta.org/
[11] C. Gniady, Y. C. Hu, and Y.-H. Lu. Program Counter Based Techniques for Dynamic Power Management. In Proceedings of the 10th International Symposium on High Performance Computer Architecture, HPCA '04, Washington, DC, USA, 2004. IEEE Computer Society.
[12] Shengsheng Huang, Jie Huang, Jinquan Dai, Tao Xie, and Bo Huang, "The HiBench Benchmark Suite:
Characterization of the MapReduce-Based Data Analysis", ICDE Workshops'10, Oct. 2010, 2010
IEEE.
[13] Lan Yi, “Experience with HiBench: From Micro-Benchmarks toward End-to-End Pipelines”, WBDB
2013 Workshop Presentation, Intel China Software Center, 2013.07.16.
[14] Dominique Heger, “Hadoop Performance Tuning - A Pragmatic & Iterative Approach”, Research
details by DHTechnologies ‐ www.dhtusa.com, 2013.
[15] Jason Dai, “Toward Efficient Provisioning and Performance Tuning for Hadoop”, Apache Asia
Roadshow 2010, Intel China Software Center, June 2010.
[16] Remote Direct Memory Access : http://en.wikipedia.org/wiki/Remote_direct_memory_access
[17] Liang Ming , Dan Feng, Fang Wang, Qi Chen, Yang Li, Yong Wan, Jun Zhou, "A Performance
Enhanced User-space Remote Procedure Call on InfiniBand*", Photonics and Optolectronics Meetings
(POEM)., 2011.
[18] Fan Liang, Chen Feng, Xiaoyi Lu, Zhiwei Xu, "Performance Benefits of DataMPI: A Case Study with BigDataBench", ACM SOFT BPOE '14, Mar 1, 2014, Salt Lake City, Utah, USA.
[19] Xiaoyi Lu, Nusrat S. Islam, Md. Wasi-ur-Rahman, Jithin Jose, Hari Subramoni, Hao Wang, and
Dhabaleswar K. (DK) Panda, "High-Performance Design of Hadoop RPC with RDMA over
InfiniBand", National Science Foundation grants #OCI-0926691, #OCI-1148371 and #CCF-1213084,
2013 IEEE.
[20] K. Gupta, R. Jain, H. Pucha, P. Sarkar, and D. Subhraveti, “Scaling Highly-Parallel Data-Intensive
Supercomputing Applications on a Parallel Clustered Filesystem,” in The SC10 Storage Challenge.
Piyush Saxena is pursuing a Master of Technology in Computer Science and Engineering at Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India. Areas of interest: Cloud Computing, Data Mining and Warehousing, and Soft Computing.