Parallel Computing in India

1. Parallel Computing in India
2. What is parallel computing?
Parallel computing is a form of computation in which many instructions are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel).
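The divide-and-conquer idea on this slide can be sketched in a few lines of Python. This is not part of the original deck: it splits one large summation into sub-problems, solves them concurrently, and combines the partial results. Threads are used only for portability; CPU-bound work would normally use processes or real parallel hardware.

```python
# Sketch: divide a large problem into smaller ones solved concurrently.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    """Solve one sub-problem: sum of squares over [lo, hi)."""
    return sum(i * i for i in range(lo, hi))

N = 100_000
# Split the big problem into four smaller, independent chunks.
chunks = [(i, min(i + 25_000, N)) for i in range(0, N, 25_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(lambda c: partial_sum(*c), chunks))

total = sum(partials)  # combine the concurrently computed sub-results
```

The final combine step is sequential, which is typical: parallel programs alternate between independent work and short coordination phases.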
3. Why Parallel Computing?
The primary reasons for using parallel computing:
Save time (wall-clock time)
Solve larger problems
Provide concurrency (do multiple things at the same time)
4. Other reasons might include:
Cost savings: using multiple "cheap" computing resources instead of paying for time on a supercomputer.
Overcoming memory constraints: single computers have very finite memory resources. For large problems, using the memories of multiple computers may overcome this obstacle.
5. Architecture Concepts
von Neumann Architecture comprises four main components:
• Memory
• Control Unit
• Arithmetic Logic Unit
• Input / Output
6. Memory is used to store both program instructions and data.
• Program instructions are coded data which tell the computer to do something.
• Data is simply information to be used by the program.
The control unit fetches instructions and data from memory, decodes the instructions, and then sequentially coordinates operations to accomplish the programmed task.
The arithmetic logic unit performs basic arithmetic operations.
Input / Output is the interface to the human operator.
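The fetch-decode-execute cycle described above can be illustrated with a toy interpreter (a hypothetical machine invented here, not from the slides). Program and data share one memory, and the control loop fetches an instruction, decodes it, and executes it sequentially.

```python
# Toy von Neumann machine: instructions and data live in the same memory.
memory = [
    ("LOAD", 7),   # ACC <- memory[7]
    ("ADD", 8),    # ACC <- ACC + memory[8]
    ("STORE", 9),  # memory[9] <- ACC
    ("HALT", 0),
    0, 0, 0,       # unused cells
    5, 37, 0,      # data cells 7, 8 and result cell 9
]

acc, pc = 0, 0     # accumulator and program counter
while True:
    op, arg = memory[pc]      # fetch
    pc += 1
    if op == "LOAD":          # decode + execute
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break
```

After the loop, `memory[9]` holds 5 + 37 = 42. The strictly sequential fetch-execute loop is exactly the bottleneck that the parallel architectures on the following slides try to escape.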
8. Single Instruction, Single Data (SISD):
A serial (non-parallel) computer.
Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle.
Single data: only one data stream is being used as input during any one clock cycle.
Deterministic execution.
9. Single Instruction, Multiple Data (SIMD):
A type of parallel computer.
Single instruction: all processing units execute the same instruction at any given clock cycle.
Multiple data: each processing unit can operate on a different data element.
Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing.
Two varieties: Processor Arrays and Vector Pipelines.
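A small sketch (not from the slides) of the SIMD programming style, using the image-processing example above: one conceptual operation is applied to every element of a data vector at once, versus one element per instruction in the SISD style. Real SIMD hardware performs the whole vector in a single instruction; plain Python can only model the style.

```python
# Brighten an image row: the same instruction ("multiply by 2") over many data.
pixels = [10, 20, 30, 40]

# SISD style: one data element processed per step.
sisd = []
for p in pixels:
    sisd.append(p * 2)

# SIMD style: one operation applied across all data lanes at once
# (modeled with map; hardware would use vector registers).
simd = list(map(lambda p: p * 2, pixels))
```

Both produce `[20, 40, 60, 80]`; the difference is that SIMD hardware needs only one instruction issue for the whole vector, which is why regular, elementwise workloads suit it so well.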
10. Multiple Instruction, Single Data (MISD):
A single data stream is fed into multiple processing units.
Each processing unit operates on the data independently via independent instruction streams.
Some conceivable uses might be:
multiple frequency filters operating on a single signal stream
multiple cryptography algorithms attempting to crack a single coded message.
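The signal-filter example above can be sketched as follows (a toy model, not from the slides): one data stream is handed to several independent "processing units", here represented by different filter functions applied to the same signal.

```python
# MISD sketch: a single data stream fed through independent instruction streams.
signal = [0.0, 1.0, 0.5, -0.5, -1.0]

def low_pass(xs):
    # Unit 1: simple moving average over the current and previous sample.
    return [sum(xs[max(0, i - 1):i + 1]) / len(xs[max(0, i - 1):i + 1])
            for i in range(len(xs))]

def threshold(xs):
    # Unit 2: flag samples above 0.4.
    return [x > 0.4 for x in xs]

# Each unit processes the same single stream, independently of the others.
results = {f.__name__: f(signal) for f in (low_pass, threshold)}
```

On real MISD-style hardware the units would run concurrently on the stream as it arrives; here they run in turn, which preserves the data-flow structure but not the timing.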
11. Multiple Instruction, Multiple Data (MIMD):
Multiple instruction: every processor may be executing a different instruction stream.
Multiple data: every processor may be working with a different data stream.
Execution can be synchronous or asynchronous, deterministic or non-deterministic.
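A minimal MIMD sketch (invented for illustration): each worker runs a different instruction stream on a different data stream, asynchronously, which is how ordinary multicore programs behave.

```python
# MIMD sketch: different code, different data, running concurrently.
import threading

results = {}

def summer(data):       # instruction stream 1, data stream 1
    results["sum"] = sum(data)

def maxer(data):        # instruction stream 2, data stream 2
    results["max"] = max(data)

t1 = threading.Thread(target=summer, args=([1, 2, 3],))
t2 = threading.Thread(target=maxer, args=([9, 4, 7],))
t1.start(); t2.start()
t1.join(); t2.join()
```

The interleaving of the two threads is non-deterministic, exactly as the slide notes, even though each final result here is deterministic.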
13. Shared Memory:
Shared memory parallel computers have in common the ability for all processors to access all memory as a global address space.
Multiple processors can operate independently.
Changes in a memory location effected by one processor are visible to all other processors.
Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.
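The shared-memory model above can be sketched with threads (a toy example, not from the slides): several workers update one location in a global address space, and a lock makes each update atomic so the value every worker sees stays consistent, the software analogue of the cache coherence discussed on the next slides.

```python
# Shared-memory sketch: all workers see the same global address space.
import threading

counter = 0                    # the shared memory location
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:             # without the lock, increments can be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all four workers finish, `counter` is exactly 40,000; drop the lock and the read-modify-write races the slide's "visible to all processors" guarantee does not protect against would lose updates.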
14. Shared Memory: UMA
Uniform Memory Access (UMA):
Identical processors; Symmetric Multiprocessor (SMP).
Equal access and access times to memory.
Sometimes called CC-UMA (Cache Coherent UMA). Cache coherent means that if one processor updates a location in shared memory, all the other processors know about the update.
15. Shared Memory: NUMA
Non-Uniform Memory Access (NUMA):
Often made by physically linking two or more SMPs.
One SMP can directly access memory of another SMP.
Not all processors have equal access time to all memories.
Memory access across the link is slower.
If cache coherency is maintained, it may also be called CC-NUMA (Cache Coherent NUMA).
16. Distributed Memory:
Distributed memory systems require a communication network to connect inter-processor memory.
Processors have their own local memory. Because each processor has its own local memory, it operates independently; hence, the concept of cache coherency does not apply.
When a processor needs access to data in another processor, it is usually the task of the programmer to explicitly define how and when data is communicated. Synchronization between tasks is likewise the programmer's responsibility.
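The explicit send/receive discipline described above can be modeled in miniature (an invented sketch: queues stand in for the interconnect, and threads stand in for nodes; real distributed-memory codes would use a message-passing library such as MPI).

```python
# Distributed-memory sketch: each node has only local data, so the
# programmer explicitly sends and receives messages.
import threading, queue

link = queue.Queue()               # the "network" between two nodes

def node_a():
    local = list(range(10))        # node A's private memory
    link.put(sum(local))           # explicit send of A's partial result

def node_b(out):
    local = list(range(10, 20))    # node B's private memory
    remote = link.get()            # explicit receive from node A
    out.append(remote + sum(local))

result = []
ta = threading.Thread(target=node_a)
tb = threading.Thread(target=node_b, args=(result,))
ta.start(); tb.start()
ta.join(); tb.join()
```

Neither node ever touches the other's `local` list; all sharing goes through the explicit `put`/`get` pair, which is precisely the extra burden (and scalability benefit) of the distributed-memory model.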
17. Hybrid Distributed-Shared Memory:
The shared memory component is usually a cache coherent SMP machine. Processors on a given SMP can address that machine's memory as global.
The distributed memory component is the networking of multiple SMPs. SMPs know only about their own memory, not the memory on another SMP. Therefore, network communications are required to move data from one SMP to another.
18. The main fields that need advanced computing are:
Computational Fluid Dynamics
Design of large structures
Computational Physics and Chemistry
Climate Modeling
Vehicle Simulation
Image Processing
Signal Processing
Oil Reservoir Management
Seismic Data Processing
19. India's contribution towards parallel computing
India has made significant strides in developing high-performance parallel computers.
21. PARAM (Centre for Development of Advanced Computing)
India's first supercomputer, PARAM 8000, was indigenously built in 1990.
A 1 GFlops (gigaflops) parallel machine.
A 64-node prototype, i.e. it had 64 CPUs.
PARAM is Sanskrit and means "Supreme".
Its programming environment was called PARAS.
22. Based on Transputers 800/805.
Theoretical peak performance was 1 gigaflops; in practice it provided 100 to 200 Mflops.
A hardware upgrade was applied to PARAM 8000 to produce the new PARAM 8600: the integration of the Intel i860 coprocessor with PARAM 8000.
It was a 256-CPU computer.
23. PARAM 9000
The mission was to deliver a teraflops-range parallel system.
This architecture emphasizes flexibility.
PARAM 9000SS is based on SuperSPARC processors, with an operating speed of 75 MHz.
PARAM 10000 has a peak speed of 6.4 GFlops.
24. PARAM Padma
Introduced in April 2003.
Top speed of 1024 Gflops (1 Tflops).
Used IBM POWER4 CPUs; the operating system was IBM AIX 5.
First Indian computer to break the 1 Tflops barrier.
25. PARAM Yuva
Unveiled in November 2008.
The maximum sustainable speed is 38.1 Tflops; the peak speed is 54 Tflops.
Uses Intel 73XX processors at 2.9 GHz each.
Storage capacity of 25 TB, expandable up to 200 TB.
PARAM Yuva II, released in February 2013, has a peak performance of 524 Tflops and uses less power than its predecessor.
26. ANUPAM
Developed by Bhabha Atomic Research Centre (BARC), Bombay, which needed 200 Mflops of sustained computing power.
Based on standard MultiBus II i860 hardware.
27. ANUPAM 860
First developed in December 1991.
It made use of the i860 microprocessor at 40 MHz, with an overall sustained speed of 30 Mflops.
An upgraded version released in August 1992 had a computational speed of 52 Mflops.
A further upgrade, released in November 1992, provided a sustained speed of 110 Mflops.
28. ANUPAM Alpha
Developed in July 1997, with a sustained speed of 1000 Mflops.
Made use of the Alpha 21164 microprocessor at 400 MHz.
This system used complete servers/workstations as compute nodes instead of processor boards.
An updated version released in March 1998 had a sustained speed of 1.5 Gflops.
29. ANUPAM Pentium
Started in January 1998. The main focus of its development was the minimization of cost.
The first version, ANUPAM Pentium II/4, gave a sustained speed of 248 Mflops.
ANUPAM Pentium II was expanded in March 1999, with a sustained speed of 1.3 Gflops.
30. In April 2000 the system was upgraded to Pentium III/16, which gave a sustained speed of 3.5 Gflops.
The ANUPAM PIV 64-node system has a sustained speed of 43 Gflops.
31. Applications
All three versions of ANUPAM were introduced to solve computational problems at BARC.
The main fields in which ANUPAM is being used are:
◦ Molecular Dynamics Simulation
◦ Neutron Transport Calculation
◦ Gamma Ray Simulation by Monte Carlo method
◦ Crystal Structure Analysis
32. PACE
Developed by ANURAG (Advanced Numerical Research and Analysis Group) under DRDO, as a result of R&D in parallel computing.
Uses VLSI.
Started in 1998.
Motorola 68020 processor at 16.67 MHz.
33. Processor for Aerodynamic Computation and Evaluation (PACE)
Used for the computational fluid dynamics needed in aircraft design.
A developed version, PACE Plus 32, is used in missile development.
A more advanced version is PACE++.
34. ANAMICA - Software
ANURAG's Medical Imaging and Characterization Aid (ANAMICA).
Medical visualization software for data obtained from MRI, CT and ultrasound.
Has both two-dimensional and three-dimensional visualization.
Used for medical diagnosis, etc.
35. DHRUVA 3
Set up by DRDO for solving mission-critical Defence Research and Development applications.
Used in the design of aircraft, e.g. the Advanced Medium Combat Aircraft (AMCA).
36. FLOSOLVER
Started in 1986 by National Aerospace Laboratories (NAL).
Used in numerical weather prediction: the Varsha GCM, which runs on FLOSOLVER, could predict the weather accurately two weeks in advance.
Based on 16-bit Intel 8086 and 8087 processors.
Updated versions were released to increase performance.
37. CHIPPS
Developed to provide indigenous digital switching technology.
Established in rural exchanges and secondary switching areas.
A speed of 200 Mflops was achieved.