Computer architecture is the design of hardware and the hardware/software interface to meet performance, energy, cost, and other goals. There are fundamental tradeoffs between what the instruction set architecture (ISA) exposes and how the microarchitecture implements it. The von Neumann model of sequential instruction processing driven by a program counter remains the dominant paradigm, though alternatives such as dataflow exist. Architecture design considers the problem space and balances factors like cost, power, and time to market.
1. Computer architecture involves designing hardware components and the hardware/software interface to meet goals like performance, power consumption, and cost.
2. The von Neumann model, still the basis of most instruction set architectures today, stores both instructions and data in the same memory and processes instructions sequentially under the control of a program counter.
3. The instruction set architecture (ISA) defines the interface between software and hardware, while the microarchitecture involves specific implementation details not visible to software.
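The von Neumann cycle described above (fetch an instruction at the program counter, decode it, execute it, advance) can be made concrete with a toy simulator. The four-instruction ISA below (LOAD/ADD/STORE/HALT) is a hypothetical sketch for illustration, not any real machine; the key point it demonstrates is that instructions and data share one memory.

```python
# Minimal sketch of the von Neumann cycle. The tiny accumulator ISA
# (LOAD/ADD/STORE/HALT) is invented here purely for illustration.

def run(memory):
    pc, acc = 0, 0  # program counter and a single accumulator register
    while True:
        op, arg = memory[pc]       # fetch the instruction at PC
        pc += 1                    # sequential advance (no branches here)
        if op == "LOAD":           # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Instructions (addresses 0-3) and data (addresses 4-6) live in the
# SAME memory -- the defining property of the stored-program model.
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
       4: 2, 5: 3, 6: 0}
run(mem)
print(mem[6])  # 5
```

Because code is just data in this model, a program could in principle overwrite its own instructions, which is exactly the property that distinguishes stored-program machines from fixed-function hardware.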
Computer science is an interdisciplinary field that studies computation through hardware, software, and theory. It involves designing algorithms, developing programs, analyzing computational systems, and studying the capabilities and limitations of computation. While some debate whether it is a science, computer science follows a rigorous scientific process of hypothesis testing and analysis. It is considered an "artificial science" that studies human-made computational systems rather than natural phenomena. The field encompasses many subfields that approach computation from different perspectives like algorithms, architecture, artificial intelligence, and more.
3. What is Computer Architecture?
The science and art of designing, selecting, and
interconnecting hardware components and designing the
hardware/software interface to create a computing system
that meets functional, performance, energy consumption,
cost, and other specific goals.
3
4. An Enabler: Moore’s Law
Moore, "Cramming more components onto integrated circuits," Electronics Magazine, 1965.
Component counts double every other year.
Image source: Intel
4
5. [Figure: transistor counts on integrated circuits over time]
Number of transistors on an integrated circuit doubles ~ every two years
Image source: Wikipedia
5
6. What Do We Use These Transistors for?
Your readings for this week should give you an idea…
Recommended
Patt, “Requirements, Bottlenecks, and Good Fortune: Agents for
Microprocessor Evolution,” Proceedings of the IEEE 2001.
Required for Review as part of HW 1
Moscibroda and Mutlu, “Memory Performance Attacks: Denial of
Memory Service in Multi-Core Systems,” USENIX Security 2007.
Liu+, “RAIDR: Retention-Aware Intelligent DRAM Refresh,” ISCA
2012.
Kim+, “Flipping Bits in Memory Without Accessing Them: An
Experimental Study of DRAM Disturbance Errors,” ISCA 2014.
6
7. Recommended Reading
Moore, “Cramming more components onto integrated
circuits,” Electronics Magazine, 1965.
Only 3 pages
A quote:
“With unit cost falling as the number of components per
circuit rises, by 1975 economics may dictate squeezing as
many as 65,000 components on a single silicon chip.”
Another quote:
“Will it be possible to remove the heat generated by tens of
thousands of components in a single silicon chip?”
7
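Moore's 1975 prediction follows from simple doubling arithmetic. A quick check, assuming a baseline of roughly 64 components per chip in 1965 (an assumed figure for illustration) and one doubling per year as in the original paper:

```python
# Moore's 1965 extrapolation: component counts double roughly every year.
# The 1965 baseline of ~64 components is an assumption for illustration.
components_1965 = 64
doublings = 1975 - 1965  # one doubling per year in the original paper
components_1975 = components_1965 * 2 ** doublings
print(components_1975)  # 65536, close to Moore's "65,000 components"
```

Ten doublings over a decade turn tens of components into tens of thousands, which is why the quote above sounded audacious in 1965.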
8. Why Study Computer Architecture?
Enable better systems: make computers faster, cheaper,
smaller, more reliable, …
By exploiting advances and changes in underlying technology/circuits
Enable new applications
Life-like 3D visualization 20 years ago? Virtual reality?
Self-driving cars?
Personalized genomics? Personalized medicine?
Enable better solutions to problems
Software innovation is built on trends and changes in computer architecture
> 50% performance improvement per year has enabled this innovation
Understand why computers work the way they do
8
9. Computer Architecture Today (I)
Today is a very exciting time to study computer architecture
Industry is in a large paradigm shift (to multi-core and
beyond) – many different potential system designs possible
Many difficult problems motivating and caused by the shift
Huge hunger for data and new data-intensive applications
Power/energy/thermal constraints
Complexity of design
Difficulties in technology scaling
Memory wall/gap
Reliability problems
Programmability problems
Security and privacy issues
No clear, definitive answers to these problems
9
10. Computer Architecture Today (II)
These problems affect all parts of the computing stack – if
we do not change the way we design systems
No clear, definitive answers to these problems
10
[Figure: the computing stack — User; Problem; Algorithm; Program/Language; Runtime System (VM, OS, MM); ISA; Microarchitecture; Logic; Circuits; Electrons — annotated with many new demands from the top (Look Up), many new issues at the bottom (Look Down), and fast-changing demands and personalities of users (Look Up)]
11. Computer Architecture Today (III)
Computing landscape is very different from 10-20 years ago
Both UP (software and humanity trends) and DOWN
(technologies and their issues), FORWARD and BACKWARD,
and the resulting requirements and constraints
11
General Purpose GPUs
Heterogeneous
Processors and
Accelerators
Hybrid Main Memory
Persistent Memory/Storage
Every component and its
interfaces, as well as
entire system designs
are being re-examined
12. Computer Architecture Today (IV)
You can revolutionize the way computers are built, if you
understand both the hardware and the software (and
change each accordingly)
You can invent new paradigms for computation,
communication, and storage
Recommended book: Thomas Kuhn, “The Structure of
Scientific Revolutions” (1962)
Pre-paradigm science: no clear consensus in the field
Normal science: dominant theory used to explain/improve
things (business as usual); exceptions considered anomalies
Revolutionary science: underlying assumptions re-examined
12
14. Takeaways
It is an exciting time to be understanding and designing
computing architectures
Many challenging and exciting problems in platform design
That no one has tackled (or thought about) before
That can have huge impact on the world’s future
Driven by huge hunger for data (Big Data), new applications,
ever-greater realism, …
We can easily collect more data than we can analyze/understand
Driven by significant difficulties in keeping up with that
hunger at the technology layer
Many walls: energy, reliability, complexity, security, scalability
14
15. … but, first …
Let’s understand the fundamentals…
You can change the world only if you understand it well
enough…
Especially the past and present dominant paradigms
And, their advantages and shortcomings – tradeoffs
And, what remains fundamental across generations
And, what techniques you can use and develop to solve
problems
15
17. What is A Computer?
Three key components
Computation
Communication
Storage (memory)
17
18. What is A Computer?
We will cover all three components (with a focus on
memory and I/O)
18
[Figure: Memory (program and data), I/O, and Processing, with processing split into control (sequencing) and datapath]
19. The Von Neumann Model/Architecture
Also called stored program computer (instructions in
memory). Two key properties:
Stored program
Instructions stored in a linear memory array
Memory is unified between instructions and data
The interpretation of a stored value depends on the control
signals
Sequential instruction processing
One instruction processed (fetched, executed, and completed) at a
time
Program counter (instruction pointer) identifies the current instr.
Program counter is advanced sequentially except for control transfer
instructions
19
When is a value interpreted as an instruction?
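The two properties above can be sketched as a toy stored-program machine: a single Python list serves as unified memory for both instructions and data, and a program counter advances sequentially. The instruction encoding here is invented for illustration.

```python
# A minimal sketch of the von Neumann model: one unified memory holds both
# instructions and data, and a program counter drives sequential fetch.
# The 3-tuple instruction format is invented for illustration.
memory = [
    ("LOAD", 0, 5),      # addr 0: reg[0] <- mem[5]
    ("LOAD", 1, 6),      # addr 1: reg[1] <- mem[6]
    ("ADD",  2, None),   # addr 2: reg[2] <- reg[0] + reg[1]
    ("HALT", None, None),
    None,                # addr 4: unused
    40,                  # addr 5: data
    2,                   # addr 6: data
]
regs = [0] * 3
pc = 0
while True:
    op, a, b = memory[pc]  # the same array stores code and data;
    pc += 1                # a value is an instruction only when pc points at it
    if op == "LOAD":
        regs[a] = memory[b]
    elif op == "ADD":
        regs[a] = regs[0] + regs[1]
    elif op == "HALT":
        break
print(regs[2])  # 42
```

Note that whether `memory[5]` is "data" is purely a matter of control: if the program counter ever reached address 5, the machine would try to interpret the value 40 as an instruction.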
20. The Von Neumann Model/Architecture
Recommended readings
Burks, Goldstein, von Neumann, “Preliminary discussion of
the logical design of an electronic computing instrument,”
1946.
Patt and Patel book, Chapter 4, “The von Neumann Model”
Stored program
Sequential instruction processing
20
21. The Von Neumann Model (of a Computer)
21
[Figure: block diagram — MEMORY (Mem Addr Reg, Mem Data Reg) connected to a PROCESSING UNIT (ALU, TEMP) and a CONTROL UNIT (IP, Inst Register), with INPUT and OUTPUT devices]
22. The Von Neumann Model (of a Computer)
Q: Is this the only way that a computer can operate?
A: No.
Qualified Answer: No, but it has been the dominant way
i.e., the dominant paradigm for computing
for N decades
22
23. The Dataflow Model (of a Computer)
Von Neumann model: An instruction is fetched and
executed in control flow order
As specified by the instruction pointer
Sequential unless explicit control flow instruction
Dataflow model: An instruction is fetched and executed in
data flow order
i.e., when its operands are ready
i.e., there is no instruction pointer
Instruction ordering specified by data flow dependence
Each instruction specifies “who” should receive the result
An instruction can “fire” whenever all operands are received
Potentially many instructions can execute at the same time
Inherently more parallel
23
24. Von Neumann vs Dataflow
Consider a Von Neumann program
What is the significance of the program order?
What is the significance of the storage locations?
Which model is more natural to you as a programmer?
24
v <= a + b;
w <= b * 2;
x <= v - w;
y <= v + w;
z <= x * y;
[Figure: the same program drawn as a dataflow graph — nodes +, *2, -, +, and * connected by data dependences from inputs a and b down to output z; the two representations are labeled Sequential and Dataflow]
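The contrast above can be sketched as a tiny data-driven interpreter: each node fires as soon as all of its operand tokens have arrived, and there is no program counter. The node table, the worklist scan, and the input values (a = 3, b = 4) are all invented for illustration.

```python
# Data-driven execution of the five-instruction program above:
# a node fires when all its inputs hold tokens; no instruction pointer.
nodes = {
    "v": (lambda a, b: a + b, ["a", "b"]),
    "w": (lambda b: b * 2,    ["b"]),
    "x": (lambda v, w: v - w, ["v", "w"]),
    "y": (lambda v, w: v + w, ["v", "w"]),
    "z": (lambda x, y: x * y, ["x", "y"]),
}
tokens = {"a": 3, "b": 4}  # initial input tokens (assumed values)
pending = dict(nodes)
while pending:
    # every node whose operands are all available may fire;
    # v and w are ready together, as are x and y -> inherent parallelism
    ready = [n for n, (f, ins) in pending.items()
             if all(i in tokens for i in ins)]
    for n in ready:
        f, ins = pending.pop(n)
        tokens[n] = f(*(tokens[i] for i in ins))
print(tokens["z"])  # -15 for a=3, b=4
```

With a = 3 and b = 4: v = 7 and w = 8 can fire simultaneously, then x = -1 and y = 15, then z = -15 — the ordering is dictated only by data dependences.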
25. More on Data Flow
In a data flow machine, a program consists of data flow
nodes
A data flow node fires (is fetched and executed) when all its
inputs are ready
i.e., when all inputs have tokens
Data flow node and its ISA representation
25
28. ISA-level Tradeoff: Instruction Pointer
Do we need an instruction pointer in the ISA?
Yes: Control-driven, sequential execution
An instruction is executed when the IP points to it
IP automatically changes sequentially (except for control flow
instructions)
No: Data-driven, parallel execution
An instruction is executed when all its operand values are
available (data flow)
Tradeoffs: MANY high-level ones
Ease of programming (for average programmers)?
Ease of compilation?
Performance: Extraction of parallelism?
Hardware complexity?
28
29. ISA vs. Microarchitecture Level Tradeoff
A similar tradeoff (control vs. data-driven execution) can be
made at the microarchitecture level
ISA: Specifies how the programmer sees instructions to be
executed
Programmer sees a sequential, control-flow execution order vs.
Programmer sees a data-flow execution order
Microarchitecture: How the underlying implementation
actually executes instructions
Microarchitecture can execute instructions in any order as long
as it obeys the semantics specified by the ISA when making the
instruction results visible to software
Programmer should see the order specified by the ISA
29
30. Let’s Get Back to the Von Neumann Model
But, if you want to learn more about dataflow…
Dennis and Misunas, “A preliminary architecture for a basic
data-flow processor,” ISCA 1974.
Gurd et al., “The Manchester prototype dataflow
computer,” CACM 1985.
A later lecture or course
If you are really impatient:
http://www.youtube.com/watch?v=D2uue7izU2c
http://www.ece.cmu.edu/~ece740/f13/lib/exe/fetch.php?media=onur-740-fall13-module5.2.1-dataflow-part1.ppt
30
31. The Von Neumann Model
All major instruction set architectures today use this model
x86, ARM, MIPS, SPARC, Alpha, POWER
Underneath (at the microarchitecture level), the execution
model of almost all implementations (or, microarchitectures)
is very different
Pipelined instruction execution: Intel 80486 uarch
Multiple instructions at a time: Intel Pentium uarch
Out-of-order execution: Intel Pentium Pro uarch
Separate instruction and data caches
But, anything that happens underneath that is inconsistent with
the von Neumann model is not exposed to software
Difference between ISA and microarchitecture
31
32. What is Computer Architecture?
ISA+implementation definition: The science and art of
designing, selecting, and interconnecting hardware
components and designing the hardware/software interface
to create a computing system that meets functional,
performance, energy consumption, cost, and other specific
goals.
Traditional (ISA-only) definition: “The term
architecture is used here to describe the attributes of a
system as seen by the programmer, i.e., the conceptual
structure and functional behavior as distinct from the
organization of the dataflow and controls, the logic design,
and the physical implementation.” Gene Amdahl, IBM
Journal of R&D, April 1964
32
33. ISA vs. Microarchitecture
ISA
Agreed upon interface between software
and hardware
SW/compiler assumes, HW promises
What the software writer needs to know
to write and debug system/user programs
Microarchitecture
Specific implementation of an ISA
Not visible to the software
Microprocessor
ISA, uarch, circuits
“Architecture” = ISA + microarchitecture
33
[Figure: the transformation hierarchy — Problem; Algorithm; Program; ISA; Microarchitecture; Circuits; Electrons]
34. ISA vs. Microarchitecture
What is part of ISA vs. Uarch?
Gas pedal: interface for “acceleration”
Internals of the engine: implement “acceleration”
The implementation (uarch) can vary as long as it
satisfies the specification (ISA)
Add instruction vs. Adder implementation
Bit serial, ripple carry, carry lookahead adders are all part of
microarchitecture
x86 ISA has many implementations: 286, 386, 486, Pentium,
Pentium Pro, Pentium 4, Core, …
Microarchitecture usually changes faster than ISA
Few ISAs (x86, ARM, SPARC, MIPS, Alpha) but many uarchs
Why?
34
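The add-instruction-versus-adder-implementation point can be illustrated with a sketch: two "adders" built in very different ways, both honoring the same 8-bit ADD contract. The function names and the 8-bit width are assumptions for illustration.

```python
# The ISA specifies *what* ADD means; the microarchitecture chooses *how*.
# Two implementations below honor the same 8-bit wraparound ADD contract.
def ripple_carry_add(a, b, width=8):
    """Bit-serial ripple-carry addition, one bit position at a time."""
    result, carry = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                  # full-adder sum bit
        carry = (x & y) | (carry & (x ^ y))  # full-adder carry-out
        result |= s << i
    return result  # final carry-out dropped, like 8-bit wraparound

def wide_add(a, b, width=8):
    """A 'fast' adder modeled with the host's native addition."""
    return (a + b) % (1 << width)

# Software only sees the ISA-level contract: both agree on every input.
assert all(ripple_carry_add(a, b) == wide_add(a, b)
           for a in range(256) for b in range(256))
```

Software written against the ADD instruction cannot tell which adder is underneath, which is exactly why the microarchitecture is free to change from one implementation generation to the next.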
35. ISA
Instructions
Opcodes, Addressing Modes, Data Types
Instruction Types and Formats
Registers, Condition Codes
Memory
Address space, Addressability, Alignment
Virtual memory management
Call, Interrupt/Exception Handling
Access Control, Priority/Privilege
I/O: memory-mapped vs. special I/O instructions
Task/thread Management
Power and Thermal Management
Multi-threading support, Multiprocessor support
35
36. Microarchitecture
Implementation of the ISA under specific design constraints
and goals
Anything done in hardware without exposure to software
Pipelining
In-order versus out-of-order instruction execution
Memory access scheduling policy
Speculative execution
Superscalar processing (multiple instruction issue?)
Clock gating
Caching? Levels, size, associativity, replacement policy
Prefetching?
Voltage/frequency scaling?
Error correction?
36
37. Property of ISA vs. Uarch?
ADD instruction’s opcode
Number of general purpose registers
Number of ports to the register file
Number of cycles to execute the MUL instruction
Whether or not the machine employs pipelined instruction
execution
Remember
Microarchitecture: Implementation of the ISA under specific
design constraints and goals
37
38. Design Point
A set of design considerations and their importance
leads to tradeoffs in both ISA and uarch
Considerations
Cost
Performance
Maximum power consumption
Energy consumption (battery life)
Availability
Reliability and Correctness
Time to Market
Design point determined by the “Problem” space
(application space), the intended users/market
38
[Figure: the transformation hierarchy — Problem; Algorithm; Program; ISA; Microarchitecture; Circuits; Electrons]
40. Tradeoffs: Soul of Computer Architecture
ISA-level tradeoffs
Microarchitecture-level tradeoffs
System and Task-level tradeoffs
How to divide the labor between hardware and software
Computer architecture is the science and art of making the
appropriate trade-offs to meet a design point
Why art?
40
41. Why Is It (Somewhat) Art?
41
We do not (fully) know the future (applications, users, market)
[Figure: the computing stack — User; Problem; Algorithm; Program/Language; Runtime System (VM, OS, MM); ISA; Microarchitecture; Logic; Circuits; Electrons — annotated with new demands from the top (Look Up), new issues and capabilities at the bottom (Look Down), and new demands and personalities of users (Look Up)]
42. Why Is It (Somewhat) Art?
42
And, the future is not constant (it changes)!
[Figure: the computing stack — User; Problem; Algorithm; Program/Language; Runtime System (VM, OS, MM); ISA; Microarchitecture; Logic; Circuits; Electrons — annotated with changing demands at the top (Look Up and Forward), changing issues and capabilities at the bottom (Look Down and Forward), and changing demands and personalities of users (Look Up and Forward)]
43. How Can We Adapt to the Future?
This is part of the task of a good computer architect
Many options (bag of tricks)
Keen insight and good design
Good use of fundamentals and principles
Efficient design
Heterogeneity
Reconfigurability
…
Good use of the underlying technology
…
43
44. We Covered a Lot of This in
Digital Circuits & Computer Architecture
45. One Slide Overview of Digital Circuits SS18
Logic Design, Verilog, FPGAs
ISA (MIPS)
Single-cycle Microarchitectures
Multi-cycle and Microprogrammed Microarchitectures
Pipelining
Issues in Pipelining: Control & Data Dependence Handling,
State Maintenance and Recovery, …
Out-of-Order Execution
Processing Paradigms (SIMD, VLIW, Systolic, Dataflow, …)
Memory, Virtual Memory, and Caches (brief)
45
47. Digital Circuits Materials for Review (I)
All Digital Circuits Lecture Videos Are Online:
https://www.youtube.com/playlist?list=PL5Q2soXY2Zi_QedyPWtRmFUJ2F8DdYP7l
All Slides and Assignments Are Online:
https://safari.ethz.ch/digitaltechnik/spring2018/doku.php?id=schedule
47
48. Digital Circuits Materials for Review (II)
Particularly useful and relevant lectures for this course
Pipelining and Dependence Handling (Lecture 14-15)
https://www.youtube.com/watch?v=f522l7Q-t7g
https://www.youtube.com/watch?v=7XXgZIbBLls
Out-of-order execution (Lecture 16-17)
https://www.youtube.com/watch?v=0E4QTDZ2OBA
https://www.youtube.com/watch?v=vwLyEbIzyfI
48
49. Review Cache Lectures (from Spring 2018)
Memory Organization and Technology (Lecture 23b)
https://www.youtube.com/watch?v=rvBdJ1ZLo2M
Memory Hierarchy and Caches (Lecture 24)
https://www.youtube.com/watch?v=sweCA3836C0
More Caches (Lecture 25a)
https://www.youtube.com/watch?v=kMUZKjaPNWo
Virtual Memory (Lecture 25b)
https://www.youtube.com/watch?v=na-JL1nVTSU
49
50. This Course
We will have more emphasis on
The memory system
Multiprocessing & multithreading
Reconfigurable computing
Customized accelerators
…
We will also likely dig deeper on some Digital Circuits
concepts (as time permits)
ISA
Branch handling
GPUs
…
50
51. Tentative Agenda (Upcoming Lectures)
The memory hierarchy
Caches, caches, more caches (high locality, high bandwidth)
Main memory: DRAM
Main memory control, scheduling, interference, management
Memory latency tolerance and prefetching techniques
Non-volatile memory & emerging technologies
Multiprocessors
Coherence and consistency
Interconnection networks
Multi-core issues
Multithreading
Acceleration, customization, heterogeneity
51