This document provides an overview of computer architecture through a brief history separated into generations. The first generation (1945-1958) used vacuum tubes and featured machine code/assembly languages. The second generation (1958-1964) used transistors and magnetic core memory. Key developments included high-level languages, operating systems, and timesharing. The third generation (1964-1974) introduced integrated circuits and semiconductor memory. Notable systems included IBM's System/360 family. The fourth generation (1974-present) featured VLSI/ULSI and the emergence of personal computers, object-oriented programming, and artificial intelligence.
Esteban Hernandez is a PhD candidate researching heterogeneous parallel programming for weather forecasting. He has 12 years of experience in software architecture, including Linux clusters, distributed file systems, and high performance computing (HPC). HPC involves using the most efficient algorithms on high-performance computers to solve demanding problems. It is used for applications like weather prediction, fluid dynamics simulations, protein folding, and bioinformatics. Performance is often measured in floating point operations per second. Parallel computing using techniques like OpenMP, MPI, and GPUs is key to HPC. HPC systems are used across industries for applications like supply chain optimization, seismic data processing, and drug development.
LibreOffice is a free and open-source office suite, similar to Microsoft Office, consisting of Writer (word processing), Calc (spreadsheet), and Impress (presentations). It can read and write files in common formats like DOC and supports a range of operating systems. While other programs like Draw and Math are available, Chicago Public Libraries currently only offer Writer, Calc, and Impress on their computers.
The document discusses the Linux kernel. It begins with background on Linux and defines a kernel. It describes the Linux kernel's monolithic architecture and discusses kernel modules. It provides details on module management, driver registration, and conflict resolution. It also provides an overview of key Linux kernel functions like process management, memory management, file systems, I/O management, and networking. It concludes with details on the Linux kernel development cycle.
This document provides an overview of SCSI drives and file systems. It describes SCSI interfaces and cables, how SCSI devices are connected in a daisy chain configuration, and SCSI standards including SCSI-1, SCSI-2, and SCSI-3. It also summarizes the FAT and NTFS file systems used in Windows, how they allocate disk space and store file information differently, and the advantages of NTFS. The document concludes with a brief explanation of how disk compression works to save space.
Libre Office is a free and open-source office suite that provides word processing, spreadsheet, presentation, drawing, formula editing, and database functionality. It can open and save files in common formats like Microsoft Office formats. Some key advantages of Libre Office include that it is free to use, open source, cross-platform, has extensive language support, and avoids vendor lock-in through its use of open standards.
The document provides an overview of the Common Language Runtime (CLR) and its role in the .NET framework. The CLR converts managed code written in .NET languages like C# and VB.NET into native code and acts as an intermediary between the operating system and managed applications. It provides key services like just-in-time compilation from MSIL to native code, garbage collection, security, threading and exception handling to enable cross-platform .NET applications.
Hadoop 3.0 has been years in the making, and now it's finally arriving. Andrew Wang and Daniel Templeton offer an overview of new features, including HDFS erasure coding, YARN Timeline Service v2, YARN federation, and much more, and discuss current release management status and community testing efforts dedicated to making Hadoop 3.0 the best Hadoop major release yet.
Programming languages can be categorized by their level of abstraction from machine language. Low-level languages have minimal abstraction and are closer to machine-understandable binary, while high-level languages are more abstract and user-friendly. Low-level languages map directly to hardware instructions but are difficult for humans, whereas high-level languages require translation to machine code but are easier for programmers. Programs transition between these levels through compilation and interpretation.
The document discusses the Network File System (NFS) protocol. NFS allows users to access and share files located on remote computers as if they were local. It operates using three main layers - the RPC layer for communication, the XDR layer for machine-independent data representation, and the top layer consisting of the mount and NFS protocols. NFS version 4 added features like strong security, compound operations, and internationalization support.
This document provides instructions for installing Ubuntu Linux. It begins by having the user download Ubuntu, check if their computer can boot from USB, make any necessary BIOS changes to allow booting from USB, and create a bootable Ubuntu USB installer. It then guides the user through installing Ubuntu which involves selecting options to erase the disk and install Ubuntu. The user then sets their time zone, keyboard, and creates a username, password, and computer name to use after installation completes and the computer restarts.
Remote login allows users to access and control a remote computer over a network connection. It involves installing desktop sharing software on both the host and client computers. The client software connects the user's keyboard and display to the host computer, allowing them to interact with it remotely. Desktop sharing works by encrypting and transmitting packets of information about the host's screen to the client. Remote login is commonly used for remote technical support and real-time collaboration between coworkers in different locations. While convenient, it also presents security risks that require the use of secure protocols like SSH.
Red Hat Linux 9 was released in 2003 and featured the Bluecurve interface for easy navigation, support for Native POSIX Threads for improved performance, and included OpenOffice.org, Mozilla, and email/calendar clients. Red Hat Enterprise Linux 5, released in 2007, was based on the Linux 2.6.18 kernel and included over 1200 components along with virtualization support, optimizations for multi-core CPUs, and enhanced security features such as SELinux.
Linux was created in 1991 by Linus Torvalds as a free and open-source kernel. It has since grown significantly and is now widely used both for personal computers and in other devices like servers, embedded systems, and smartphones through Android. Some key points in Linux's history include the first Linux distribution Red Hat in 1994, the creation of desktop environments like KDE in 1996, and Android's adoption of the Linux kernel which has given it the largest installed base of any OS. There are now over 600 Linux distributions available for different use cases like Ubuntu, Debian, and Fedora for personal computers and embedded distributions for devices.
Computer languages allow humans to communicate with computers through programming. There are different types of computer languages at different levels of abstraction from machine language up to high-level languages. High-level languages are closer to human language while low-level languages are closer to machine-readable code. Programs written in high-level languages require compilers or interpreters to convert them to machine-readable code that can be executed by computers.
The document provides an overview of Hive architecture and workflow. It discusses how Hive converts HiveQL queries to MapReduce jobs through its compiler. The compiler includes components like the parser, semantic analyzer, logical and physical plan generators, and logical and physical optimizers. It analyzes sample HiveQL queries and shows the transformations done at each compiler stage to generate logical and physical execution plans consisting of operators and tasks.
CNIT 121: 13 Investigating Mac OS X Systems (Sam Bowne)
Slides for a college course based on "Incident Response & Computer Forensics, Third Edition" by Jason Luttgens, Matthew Pepe, and Kevin Mandia.
Teacher: Sam Bowne
Twitter: @sambowne
Website: https://samsclass.info/121/121_F16.shtml
The primary reasons for using parallel computing:
- Save time - wall clock time
- Solve larger problems
- Provide concurrency (do multiple things at the same time)
The document discusses operating systems and their functions. It defines an operating system as software that manages a computer's hardware and software resources and provides an interface for users. The main functions of an operating system include managing system resources like the CPU and memory, coordinating hardware components through device drivers, providing an environment for software to run, maintaining a structure for data management, and monitoring system health. Popular operating systems mentioned are Windows, macOS, Linux, and mobile operating systems like iOS and Android.
Apache Tez - A New Chapter in Hadoop Data Processing (DataWorks Summit)
Apache Tez is a framework for accelerating Hadoop query processing. It is based on expressing a computation as a dataflow graph and executing it in a highly customizable way. Tez is built on top of YARN and provides benefits like better performance, predictability, and utilization of cluster resources compared to traditional MapReduce. It allows applications to focus on business logic rather than Hadoop internals.
2 parallel processing presentation ph d 1st semester (Rafi Ullah)
This document discusses parallel processing. It begins by defining parallel processing as a form of processing where many instructions are carried out simultaneously by multiple processors. It then discusses why parallel processing is needed due to increasing computational demands and the limitations of increasing processor speeds alone. It classifies different types of parallel processor architectures including single instruction single data (SISD), single instruction multiple data (SIMD), multiple instruction single data (MISD), and multiple instruction multiple data (MIMD). The document concludes by outlining some advantages of parallel processing such as saving time and cost and solving larger problems, and provides examples of applications that benefit from parallel processing.
This document discusses different programming paradigms and languages. It describes batch programs which run without user interaction and event-driven programs which respond to user events. It lists many popular programming languages from Machine Language to Java and C#, and describes low-level languages that are close to machine code and high-level languages that are more human-readable. It also discusses the different types of language translators like compilers, interpreters, and assemblers and how they convert code between languages. Finally, it covers testing, debugging, and different types of errors in programming.
Parallel platforms can be organized in various ways, from an ideal parallel random access machine (PRAM) to more conventional architectures. PRAMs allow concurrent access to shared memory and can be divided into subclasses based on how simultaneous memory accesses are handled. Physical parallel computers use interconnection networks to provide communication between processing elements and memory. These networks include bus-based, crossbar, multistage, and various topologies like meshes and hypercubes. Maintaining cache coherence across multiple processors is important and can be achieved using invalidate protocols, directories, and snooping.
Apache Hive is a data warehouse software built on top of Hadoop that allows users to query data stored in various databases and file systems using an SQL-like interface. It provides a way to summarize, query, and analyze large datasets stored in Hadoop distributed file system (HDFS). Hive gives SQL capabilities to analyze data without needing MapReduce programming. Users can build a data warehouse by creating Hive tables, loading data files into HDFS, and then querying and analyzing the data using HiveQL, which Hive then converts into MapReduce jobs.
- FAT (File Allocation Table) was the original file system developed by Microsoft for early versions of Windows to organize files on disks. It stored metadata in a file allocation table and used a linked list data structure.
- NTFS (New Technology File System) was developed later to replace FAT as disk sizes increased. NTFS uses more advanced data structures like B-trees and provides features like security, compression, encryption, and journaling.
- In NTFS, files are stored in clusters across the disk. The master file table stores metadata about every file and directory, including attributes like security and extended properties. System files also store information to enable features like recoverability.
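To make the allocation contrast concrete, here is a minimal Python sketch of following a FAT-style cluster chain; the cluster numbers and end-of-chain marker below are invented for illustration, not a real on-disk layout.

```python
# FAT-style allocation sketch: the file allocation table maps each
# cluster number to the next cluster of the same file (a linked list
# stored as a table). All values here are illustrative only.
EOC = -1  # hypothetical end-of-chain marker

fat = {2: 5, 5: 7, 7: 3, 3: EOC}  # one file occupying clusters 2, 5, 7, 3

def cluster_chain(fat, start):
    """Follow the linked list of clusters that make up one file."""
    chain = []
    cluster = start
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(cluster_chain(fat, 2))  # [2, 5, 7, 3]
```

Reaching any point in the file means walking the chain from its start, which is one reason FAT degrades on large, fragmented files; NTFS's B-tree structures avoid that linear walk.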
02 computer evolution and performance.ppt [compatibility mode] (bogi007)
This document summarizes the evolution of computer architecture from early computers like ENIAC through modern multi-core processors. It discusses key developments like stored programs, transistors replacing vacuum tubes, integrated circuits, memory hierarchies, and parallelism through pipelining and multiple cores. Moore's Law of increasing transistor counts is also summarized.
A Look Back | A Look Ahead Seattle Foundation Services (Seattle Foundation)
The document outlines various philanthropic services provided by Seattle Foundation, including lifetime philanthropy through donor advised funds and family foundations, legacy philanthropy through bequests and planned giving, organizational philanthropy through corporate foundations and agency endowments, and specialized services like global giving and impact investing. It provides details on each type of service and how donors can engage to accomplish their personal, financial, and philanthropic goals through principled oversight of charitable assets.
When computers were invented by Charles Babbage, they were viewed only as computing machines. However, it is only recently that the computer has evolved more rapidly. Through its complex systems and processing capabilities, computers can be used to manipulate databases.
For more such innovative content on management studies, join WeSchool PGDM-DLP Program: http://bit.ly/ZEcPAc
This document discusses various models of parallel computer architectures. It begins with an overview of Flynn's taxonomy, which classifies computer systems based on the number of instruction and data streams. The main categories are SISD, SIMD, MIMD, and MISD. It then covers parallel computer models in more detail, including shared-memory multiprocessors, distributed-memory multicomputers, classifications based on interconnection networks and parallelism. It provides examples of different parallel architectures and references papers on advanced computer architecture and parallel processing.
Vector processing involves executing the same operation on multiple data elements simultaneously using a single instruction. Early implementations like the CDC Cyber 100 had limitations. The Cray-1 was the first successful vector processing supercomputer, using vector registers to perform calculations faster than requiring memory access. Seymour Cray led the development of vector processing machines that dominated the field for many years. While vector processing is no longer a focus, its principles are still used today in multimedia SIMD instructions.
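To illustrate the principle (an analogy, not the Cray-1's actual instruction set), the sketch below contrasts a scalar element-by-element loop with a single whole-array operation, using NumPy as a stand-in for SIMD hardware.

```python
import numpy as np

a = np.arange(8, dtype=np.float64)
b = np.ones(8)

# Scalar style: one add per element, issued one at a time.
scalar_sum = [a[i] + b[i] for i in range(len(a))]

# Vector style: one expression applies the add to all elements at once,
# which NumPy can map onto SIMD instructions under the hood.
vector_sum = a + b

assert list(vector_sum) == scalar_sum
```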
Interstage buffer B1 feeds the Decode stage with a newly-fetched instruction.
Interstage buffer B2 feeds the Compute stage with the two operands.
Interstage buffer B3 holds the result of the ALU operation.
Interstage buffer B4 feeds the Write stage with a value to be written into the register file.
This document discusses instruction-level parallelism (ILP), which refers to executing multiple instructions simultaneously in a program. It describes different types of parallel instructions that do not depend on each other, such as at the bit, instruction, loop, and thread levels. The document provides an example to illustrate ILP and explains that compilers and processors aim to maximize ILP. It outlines several ILP techniques used in microarchitecture, including instruction pipelining, superscalar, out-of-order execution, register renaming, speculative execution, and branch prediction. Pipelining and superscalar processing are explained in more detail.
This document discusses instruction level parallelism (ILP) and how it can be used to improve performance by overlapping the execution of instructions through pipelining. ILP refers to the potential overlap among instructions within a basic block. Factors like dynamic branch prediction and compiler dependence analysis can impact the ideal pipeline CPI and number of data hazard stalls. Loop level parallelism refers to the parallelism available across iterations of a loop. Data dependencies between instructions, if not properly handled, can limit parallelism and require instructions to execute in order. The three types of data dependencies are data, name, and control dependencies.
This document discusses instruction pipelining as a technique to improve computer performance. It explains that pipelining allows multiple instructions to be processed simultaneously by splitting instruction execution into stages like fetch, decode, execute, and write. While pipelining does not reduce the time to complete individual instructions, it improves throughput by allowing new instructions to begin processing before previous instructions have finished. The document outlines some challenges to achieving peak performance from pipelining, such as pipeline stalls from hazards like data dependencies between instructions. It provides examples of how data hazards can occur if the results of one instruction are needed by a subsequent instruction before they are available.
Pipelining is a speed-up technique in which multiple instructions are overlapped in execution on a processor. It is an important topic in computer architecture.
This slide deck relates the problem to real-life scenarios for easier understanding of the concept and shows the major inner mechanism.
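A back-of-the-envelope model makes the throughput-versus-latency point concrete; the stage and instruction counts below are made-up values for illustration.

```python
# Toy pipeline model: k one-cycle stages, n instructions, no hazards.
# Unpipelined, each instruction takes k cycles start to finish.
# Pipelined, a new instruction enters every cycle once the pipe fills,
# so the total is k + (n - 1) cycles -- the latency of one instruction
# is unchanged (still k cycles), but throughput approaches 1 per cycle.
def unpipelined_cycles(n, k):
    return n * k

def pipelined_cycles(n, k):
    return k + (n - 1)

n, k = 1000, 5  # illustrative values
print(unpipelined_cycles(n, k))   # 5000
print(pipelined_cycles(n, k))     # 1004
print(unpipelined_cycles(n, k) / pipelined_cycles(n, k))  # ~5x speedup
```

Hazards and stalls eat into that ideal factor, which is why these materials spend so much time on data dependencies.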
The document discusses the iCub robot project, an open-source humanoid robot that costs €200,000 to build. It provides an overview of the iCub specifications and licensing, comparing it to other open hardware projects like Arduino, RepRap, and the Global Village Construction Set. While the iCub design is publicly available under open-source licenses, it is noted that the licensing and documentation of hardware presents unique challenges compared to software.
This document outlines key challenges organizations face when adapting agile methodologies and provides recommendations. It discusses challenges such as lack of planning, unrealistic expectations of training, need for committed agile coaches, resistance to change, threats to open culture, lack of teamwork, and communication gaps. Recommendations include taking a phased approach, hiring experienced coaches, focusing on business needs before methodology, and emphasizing culture change and stakeholder involvement. Success requires organizational and team commitment to addressing challenges as they arise.
This is a slide show from an interactive training designed for tobacco control advocates and enthusiasts working with youth and young adults. In the training, we reviewed content and navigation of the ATTACK Toolkit. With the help from Jeff Jordan, President and Founder of Rescue Social Change Group, we highlighted how Social Branding strategy promotes tobacco-free lifestyles.
Guy Kawasaki provides 10 ways to use LinkedIn including increasing your visibility by adding connections, improving your connectability by showing all affiliations, and enhancing your search engine results by including links to your blog or website in your profile. Other tips include performing reference checks on potential employers or bosses, increasing the relevancy of your job search using advanced search features, making interviews go smoother by learning about interviewers, gauging the health of companies and industries by searching for employees, and tracking startups by searching terms like "stealth" or "new startup".
Weber Bros Circus is welcoming visitors to their show. The circus features BoBo the clown, who enjoys entertaining crowds by juggling. In just a few sentences, the essential information about the circus and one of its acts is conveyed.
Many people complain about BDD. By taking a closer look in those complaints we realize that they start to reject BDD not because of its ideas, but because they try to solve all of their problems by installing a tool - and not really applying all of the concepts behind it. So, let's put everyone on the same page when the subject is BDD.
This document is a Haiku Deck presentation that features photos from various photographers. It contains 12 photos credited to different photographers and encourages the viewer to get inspired and create their own Haiku Deck presentation on SlideShare.
Attack Toolkit Webinar on Tobacco Industry Marketing (Alex T.)
This interactive webinar was designed for tobacco control advocates and enthusiasts working with youth and young adults. Slideshow represents the content covered in the webinar. Meghan Bridgid Moran, PhD, presented research on the Tobacco Industry marketing strategies.
This document provides an overview of computer architecture and organization. It discusses the different levels of representation from high-level programming to machine language. It also covers the main components of a computer system, including the processor, memory, and input/output devices. Memory has a hierarchy, with cache being fast but expensive and main memory being slower but able to store more data. Input/output devices have a variety of speeds and requirements that make their organization complex.
This is an introduction to a microprocessor and assembly language course. In this chapter you are introduced to the basic idea of a microprocessor, the language hierarchy, and the virtual machine concept.
When a human programmer develops a set of instructions to directly tell a microprocessor how to do something, they are programming in the CPU's own "language". This language, which consists of the very same binary codes that the Control Unit inside the CPU chip decodes to perform tasks, is often referred to as machine language.
It is often written in hexadecimal form, because hexadecimal is easier for human beings to work with. For example, I'll present just a few of the common instruction codes for the Intel 8080 microprocessor chip.
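A few well-known one-byte Intel 8080 opcodes can be tabulated and looked up the way the control unit decodes them; the tiny table below is only a sketch covering a handful of the instruction set.

```python
# A handful of real Intel 8080 one-byte opcodes (hexadecimal).
OPCODES_8080 = {
    0x00: "NOP",       # no operation
    0x3E: "MVI A,d8",  # load an immediate byte into the accumulator
    0x47: "MOV B,A",   # copy the accumulator into register B
    0x80: "ADD B",     # add register B to the accumulator
    0xC3: "JMP a16",   # unconditional jump to a 16-bit address
    0x76: "HLT",       # halt
}

for code in (0x3E, 0x80, 0x76):
    print(f"{code:02X} -> {OPCODES_8080[code]}")
```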
This presentation is a short introduction to issues in hardware-software codesign. It discusses the definition of codesign, its significance, design issues in hardware-software codesign, abstraction levels, and the duality of hardware and software.
This document discusses RISC vs CISC architectures and the Harvard and von Neumann computer architectures. It provides examples of multiplying two numbers in memory using CISC and RISC approaches. CISC uses complex instructions that perform multiple operations, while RISC breaks operations into simpler instructions. Harvard architecture separates program and data memory while von Neumann uses shared memory.
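The contrast can be sketched with simplified pseudo-operations (not any particular real ISA): a CISC-style multiply does the loads, the multiply, and the store in one instruction, while the RISC-style version spells out each step.

```python
# Multiply two values held in memory locations A and B, result in A.

def cisc_mult(mem):
    # CISC style: one complex instruction performs load, multiply, store.
    mem["A"] = mem["A"] * mem["B"]   # MULT A, B

def risc_mult(mem):
    # RISC style: the same work as simple load/operate/store steps.
    r1 = mem["A"]                    # LOAD  r1, A
    r2 = mem["B"]                    # LOAD  r2, B
    r1 = r1 * r2                     # MUL   r1, r2
    mem["A"] = r1                    # STORE A, r1

m1, m2 = {"A": 6, "B": 7}, {"A": 6, "B": 7}
cisc_mult(m1)
risc_mult(m2)
assert m1 == m2 == {"A": 42, "B": 7}
```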
This document discusses networked embedded systems and system-on-chip architectures. It covers applications such as automotive, multimedia, and biotech. It also discusses models and methods for design space exploration, verification, and resource-aware computing. The document outlines hardware architectures, software architectures, and system-level modeling, analysis and optimization. It provides examples of using ARTS (Abstraction, Refinement and Type-checking based System design) for modeling, simulation, and applications in multiprocessor systems-on-chip, wireless sensor networks, and automotive systems.
The document discusses human-machine interface design. It defines key terms like HMI, MMI, CHI, HCI and describes the multi-disciplinary nature of interface design. It also outlines the user interface design process including task analysis, interface design activities, prototyping and evaluation. Usability principles are presented focusing on tasks, feedback, consistency and more. Encoding techniques and examples of good and bad interfaces are provided.
The document discusses device drivers and their modeling for real-time schedulability analysis. It provides an overview of device drivers, their design and how they interact with hardware and operating systems. It then discusses challenges device drivers pose for real-time systems, where all tasks must complete within specified time constraints. It presents an analysis of the Linux e1000 network interface driver as a case study and references additional resources on the topic.
Cockatrice: A Hardware Design Environment with Elixir (Hideki Takase)
Cockatrice is a hardware design environment that allows designing hardware circuits from Elixir code. It synthesizes Elixir code following the "Zen style" of using enumerations and pipelines to describe dataflow into a hardware description language representation of a dataflow circuit. The synthesis flow analyzes the Elixir code, generates hardware modules from functions, connects them as a dataflow circuit, and outputs the final circuit description along with an interface driver for communication between the generated hardware and an Elixir software application. This allows accelerating parts of Elixir code by offloading processing to customized hardware circuits designed from the Elixir code.
This document summarizes an eCognition image analysis system. It discusses the eCognition processing chain which takes in input data and performs segmentation, classification, and context analysis to output results as raster, vector, point cloud, or statistics. It describes the eCognition software suite including Developer, Architect, and Server products. It outlines a scalable processing system with centralized data storage, processing units, and thin clients to connect field offices via terminal servers. Finally, it discusses eCognition technology layers from custom to off-the-shelf applications and the underlying eCognition software suite.
Vayavya Labs is a company that develops system level design tools and provides embedded design services. It has created DDGEN, the world's first automated device driver generator, which can significantly reduce the cost and efforts required for device driver development. DDGEN takes hardware specification files as input and generates fully functional device drivers and test code. It supports a range of device complexities and operating systems. Pilot results found DDGEN provided close to 2-3x reductions in time and effort for driver development.
The document discusses instruction set architecture (ISA), describing it as the interface between software and hardware that defines the programming model and machine language instructions. It provides details on RISC ISAs like MIPS and how they aim to have simpler instructions, more registers, load/store architectures, and pipelining to improve performance compared to CISC ISAs. The document also discusses different types of ISA designs including stack-based, accumulator-based, and register-to-register architectures.
This document provides an overview of the basic functional units and operations of a computer system. It discusses how instructions and data are stored and processed using components like the CPU, memory, arithmetic logic unit, and control unit. The document also covers concepts like pipelining and parallel processing that can improve performance, as well as differences between RISC and CISC instruction sets. It aims to explain at a high level how a computer works from an architectural perspective.
Chapter 01 Java Programming Basic Java IDE Java IntelliJ (IMPERIALXGAMING)
This document provides an introduction to computers, programs, Java, and the basic components of a computer system. It discusses the objectives of understanding computer basics like the CPU, memory, storage devices, input/output, and how programs are run. It then provides details on programming languages like machine language, assembly language, and high-level languages like Java. It explains how source code is translated to machine code through interpreting or compiling. Finally, it discusses why Java is well-suited for developing applications on the Internet.
FPGA introduction for absolute beginners
- What is inside FPGA (Altera example)
- What are the major differences between firmware development for MCU and FPGA
- Some very basics of Verilog HDL language (by similarities with C/C++)
- Testbench approach and Icarus simulator demonstration
- Altera Quartus IDE demonstration -- creating project, compilation, and download
- Signal-Tap internal logic analyzer demonstration
(Verilog source code examples attached inside presentation)
The document discusses the basic structure of computers including functional units like the CPU, memory, and I/O. It describes how instructions and data are stored in memory and executed by the CPU. The CPU contains arithmetic logic units and registers to process instructions step-by-step under the control of a control unit. System software like operating systems and compilers help manage computer resources and translate programs for execution. Performance depends on hardware design, instruction sets, and software optimization.
The document discusses general-purpose processors and their basic architecture. It explains that general-purpose processors have a control unit and datapath that are designed to perform a variety of computation tasks through software programs. The control unit sequences through instruction cycles that involve fetching instructions from memory, decoding them, fetching operands, executing operations in the datapath, and storing results. Pipelining and other techniques can improve processor throughput and performance. The document also covers programming models and assembly-level instruction sets.
This document discusses IT infrastructure, including hardware, software, networks, and data management technology. It covers the types and sizes of computers from personal computers to supercomputers. It also discusses operating systems, application software, groupware, and contemporary trends like edge computing, virtual machines, and cloud computing. The document examines different types of networks including client-server, web servers, and storage area networks. It provides an overview of strategic decision making around managing infrastructure technology.
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder. Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun... (EduSkills OECD)
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Gender and Mental Health - Counselling and Family Therapy Applications and In... (PsychoTech Services)
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) Curriculum (MJDuyan)
(TLE 100) (Lesson 1) - Prelims
Discuss the EPP Curriculum in the Philippines:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
Explain the Nature and Scope of an Entrepreneur:
- Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
1. Computer Architecture
What is Computer Architecture
Forces on Evolution of Computer Architecture
Measurement and Evaluation of Computer Performance
Number Representation
Prepared by: Engr. Alzien S. Malonzo
2. What is Computer Architecture?
Coordination of many levels of abstraction
Under a rapidly changing set of forces
Design, Measurement, and Evaluation
[Diagram: the stack of abstraction levels - Application, Operating System, Compiler, Firmware, Instruction Set Architecture (Instruction Set Processor, I/O system), Datapath & Control, Digital Design, Circuit Design, Physical Design; the upper levels are software, the lower levels hardware, viewed bottom-up]
3. What You Will Learn In This Course
A Typical Computing Scenario. You will learn:
• How to design a processor to run programs
• The memory hierarchy to supply instructions and data to the processor as quickly as possible
• The input and output of a computer system
• In-depth understanding of trade-offs at the hardware-software boundary
• Experience with the design process of a complex (hardware) design
[Diagram: a processor with a cache, connected by a computer bus to a memory array, a hard drive with its controller, and display, keyboard, printer, and network controllers, plus a power supply]
4. Layers of Representation
High Level Language Program (top-down view):
  temp = v[k];
  v[k] = v[k+1];
  v[k+1] = temp;
Compiler
Assembly Language Program:
  lw $15, 0($2)
  lw $16, 4($2)
  sw $16, 0($2)
  sw $15, 4($2)
Assembler (object machine code), Linker (executable machine code), Loader
Machine Language Program in Memory:
  0000 1001 1100 0110 1010 1111 0101 1000
  1010 1111 0101 1000 0000 1001 1100 0110
  1100 0110 1010 1111 0101 1000 0000 1001
  0101 1000 0000 1001 1100 0110 1010 1111
Machine Interpretation (Instruction Set Architecture)
Control Signal Specification:
  ALUOP[0:3] <= InstReg[9:11] & MASK
Courtesy D. Patterson
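To see how the compiled lw/sw sequence realizes the swap, here is a small Python trace of those four instructions against a toy register file and memory; the addresses and contents are invented for illustration.

```python
# Assume register $2 holds the address of v[k] (k folded into the base).
mem = {100: "old v[k]", 104: "old v[k+1]"}   # illustrative addresses
regs = {2: 100, 15: None, 16: None}

regs[15] = mem[regs[2] + 0]   # lw $15, 0($2)  -> temp = v[k]
regs[16] = mem[regs[2] + 4]   # lw $16, 4($2)
mem[regs[2] + 0] = regs[16]   # sw $16, 0($2)  -> v[k] = v[k+1]
mem[regs[2] + 4] = regs[15]   # sw $15, 4($2)  -> v[k+1] = temp

print(mem)  # {100: 'old v[k+1]', 104: 'old v[k]'}
```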
5. Computer Architecture (Our Perspective)
Computer Architecture = Instruction Set Architecture + Machine Organization
Instruction Set Architecture: the attributes of a [computing] system as seen by the programmer, i.e. the conceptual structure and functional behavior
Instruction Set
Instruction Formats
Data Types & Data Structures: Encodings & Representations
Modes of Addressing and Accessing Data Items and Instructions
Organization of Programmable Storage
Exceptional Conditions
6. Computer Architecture
Machine Organization: organization of the data flows and controls, the logic design, and the physical implementation.
Capabilities & Performance Characteristics of Principal Functional Unit (e.g., ALU)
Ways in which these components are interconnected
Information flows between components
Logic and means by which such information flow is controlled
Choreography of Functional Units to realize the ISA
Register Transfer Level (RTL) Description
7. Computer Architecture
Forces on Computer Architecture
[Diagram: Technology, Programming Languages, Applications, Operating Systems, and History all act as forces on Computer Architecture]
8. Processor Technology
Logic capacity: about 30% per year
Clock rate: about 20% per year
[Charts: transistor counts and clock rates of microprocessors from 1965 to 2005 - i4004, i8086, i80286, i80386, i80486, Pentium, M68K, MIPS, R3010, R4400, R10000, Alpha; transistor counts climb from about 1,000 to over 10,000,000, and clock rates from under 1 MHz to about 1000 MHz]
9. Memory Technology
DRAM capacity: about 60% per year (2x every 18 months)
DRAM speed: about 10% per year
DRAM Cost/bit: about 25% per year
Disk capacity: about 60% per year
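As a quick check on the quoted rates, 60% compound growth per year works out to roughly a doubling every 18 months, and the gap between capacity growth and speed growth compounds quickly; the arithmetic below uses only the figures on the slide.

```python
import math

# Capacity growing 60% per year: factor after t years is 1.6 ** t.
doubling_years = math.log(2) / math.log(1.60)
print(f"doubling time: {doubling_years:.2f} years")  # ~1.47, about 18 months

# After 10 years, capacity (60%/yr) vs. speed (10%/yr):
print(f"capacity x{1.60 ** 10:.0f}, speed x{1.10 ** 10:.1f}")  # x110 vs x2.6
```

That widening capacity/speed gap is one reason the memory hierarchy matters so much.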
10. How Technology Impacts Computer Architecture
Higher level of integration enables more complex architectures. Examples: on-chip memory, superscalar processors
Higher level of integration enables more application-specific architectures (e.g., a variety of microcontrollers)
Larger logic capacity and higher performance allow more freedom in architecture trade-offs. Computer architects can focus more on what should be done rather than worrying about physical constraints
Lower cost generates a wider market. Profitability and competition stimulate architecture innovations
11. Measurement and Evaluation
Architecture is an iterative process -- searching the space of possible designs -- at all levels of computer systems
[Diagram: a design loop in which Creativity feeds Design, and Cost / Performance Analysis sorts the results into Good Ideas, Mediocre Ideas, and Bad Ideas]
12. Performance Analysis
Basic Performance Equation:
CPU time (execution time, in Seconds/Program) = (Instructions/Program) x (Cycles/Instruction) x (Seconds/Cycle)
Which factors affect each term:
- Program: Instruction Count
- Compiler: Instruction Count, (Cycles Per Instruction*)
- Instruction Set: Instruction Count, Cycles Per Instruction*
- Organization: Cycles Per Instruction*, Clock Rate
- Technology: Clock Rate
*Note: Different instructions may take different numbers of clock cycles. Cycles Per Instruction (CPI) is only an average and can be affected by the application.
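A worked instance of the basic performance equation, with all numbers invented for illustration:

```python
# CPU time = (instructions/program) * (cycles/instruction) * (seconds/cycle)
instruction_count = 2_000_000    # illustrative program size
cpi = 1.5                        # illustrative average cycles per instruction
clock_rate_hz = 500_000_000      # illustrative 500 MHz clock

cpu_time = instruction_count * cpi / clock_rate_hz
print(f"CPU time: {cpu_time * 1000:.1f} ms")  # 6.0 ms
```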
13. BRIEF HISTORY OF COMPUTER ARCHITECTURE
First Generation (1945-1958) Features
- Vacuum tubes
- Machine code, Assembly language
- Computers contained a central processor that was unique to that machine
- Different types of supported instructions; few machines could be considered "general purpose"
- Use of drum memory or magnetic core memory; programs and data are loaded using paper tape or punch cards
- 2 Kb memory, 10 KIPS
14. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Two types of models for a computing machine:
1. Harvard architecture - physically separate storage and signal pathways for instructions and data. (The term originated from the Harvard Mark I, a relay-based computer, which stored instructions on punched tape and data in relay latches.)
2. Von Neumann architecture - a single storage structure to hold both the set of instructions and the data. Such machines are also known as stored-program computers.
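The stored-program idea can be sketched in a few lines: instructions and data live in one memory, and the machine fetches whatever the program counter points at. The tiny instruction set below is invented for illustration.

```python
# Von Neumann sketch: a single memory holds both instructions and data.
memory = [
    ("LOAD", 6),    # 0: acc = mem[6]
    ("ADD", 7),     # 1: acc += mem[7]
    ("STORE", 6),   # 2: mem[6] = acc
    ("HALT", 0),    # 3
    0, 0,           # 4-5: unused
    10, 32,         # 6-7: data, in the same storage as the code
]

pc, acc = 0, 0
while True:
    op, arg = memory[pc]   # fetch from the same memory that holds data
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break

print(memory[6])  # 42
```

A Harvard machine would keep the instructions and the data in two separate memories with separate pathways, so an instruction fetch never competes with a data access.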
15. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Von Neumann bottleneck - the bandwidth, or the data transfer rate, between the CPU and memory is very small in comparison with the amount of memory.
NB: Modern high-performance CPU chip designs incorporate aspects of both architectures. On-chip cache memory is divided into an instruction cache and a data cache. Harvard architecture is used as the CPU accesses the cache, and von Neumann architecture is used for off-chip memory access.
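A rough bandwidth calculation shows why the bottleneck bites; every figure below is invented for illustration.

```python
# Suppose each instruction needs ~8 bytes of instruction + data traffic.
clock_hz = 1_000_000_000          # illustrative 1 GHz, 1 instruction/cycle
bytes_per_instruction = 8         # illustrative traffic per instruction
demand = clock_hz * bytes_per_instruction   # 8 GB/s needed

memory_bandwidth = 2_000_000_000  # illustrative 2 GB/s memory system

# Memory can keep the processor busy only this fraction of the time:
print(f"utilization limit: {memory_bandwidth / demand:.0%}")  # 25%
```

Caches attack exactly this ratio by serving most of the traffic on-chip.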
16. BRIEF HISTORY OF COMPUTER ARCHITECTURE
[Photos: 1943-46, ENIAC; 1949, Whirlwind computer by Jay Forrester (MIT)]
17. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Second Generation (1958-1964) Features
- Transistors - small, low-power, low-cost, more reliable than vacuum tubes
- Magnetic core memory
- Two's complement, floating point arithmetic
- Reduced the computational time from milliseconds to microseconds
- High level languages
- First operating systems: handled one program at a time
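Since two's complement is listed above, here is a small example of how it encodes signed numbers in a fixed number of bits (8 bits chosen for illustration):

```python
BITS = 8

def to_twos_complement(x, bits=BITS):
    """Encode a signed integer as an unsigned bit pattern."""
    return x & ((1 << bits) - 1)

def from_twos_complement(u, bits=BITS):
    """Decode the bit pattern back to a signed integer."""
    return u - (1 << bits) if u >= (1 << (bits - 1)) else u

print(f"{to_twos_complement(-5):08b}")   # 11111011
print(from_twos_complement(0b11111011))  # -5
# Subtraction becomes addition of the complement, modulo 2**BITS:
print(from_twos_complement((7 + to_twos_complement(-5)) % (1 << BITS)))  # 2
```

The appeal for second-generation hardware was that a single adder circuit handles both addition and subtraction.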
18. BRIEF HISTORY OF COMPUTER ARCHITECTURE
1959 - IBM's 7000 series mainframes were the company's first transistorized computers.
The IBM 7090 is the most powerful data processing system at that time. The fully transistorized system has computing speeds six times faster than those of its vacuum-tube predecessor, the IBM 709. Although the IBM 7090 is a general purpose data processing system, it is designed with special attention to the needs of the design of missiles, jet engines, nuclear reactors and supersonic aircraft.
19. BRIEF HISTORY OF COMPUTER ARCHITECTURE
IBM 7090 Basic Cycle Time: 2.18 μSecs
20. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Third Generation (1964-1974) Features
- Introduction of integrated circuits combining thousands of transistors on a single chip
- Semiconductor memory
- Timesharing, graphics, structured programming
- 2 Mb memory, 5 MIPS
- Use of cache memory
- IBM's System/360 - the first family of computers making a clear distinction between architecture and implementation
21. BRIEF HISTORY OF COMPUTER ARCHITECTURE
The IBM System/360 Model 91 was introduced in 1966 as the fastest, most powerful computer then in use. It was specifically designed to handle high-speed data processing for scientific applications such as space exploration, theoretical astronomy, subatomic physics and global weather forecasting.
22. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Fourth Generation (1974-present) Features
- Introduction of Very Large-Scale Integration (VLSI)/Ultra Large-Scale Integration (ULSI) - combines millions of transistors
- Single-chip processor and the single-board computer emerged
- Smallest in size because of the high component density
- Creation of the Personal Computer (PC)
- Widespread use of data communications
- Object-Oriented programming: Objects & operations on objects
- Artificial intelligence: Functions & logic predicates
23. BRIEF HISTORY OF COMPUTER ARCHITECTURE
1971 - The 4004 was the world's first universal microprocessor, invented by Federico Faggin, Ted Hoff, and Stan Mazor.
With just over 2,300 MOS transistors in an area of only 3 by 4 millimeters, it had as much power as the ENIAC.
24. BRIEF HISTORY OF COMPUTER ARCHITECTURE
- 4-bit CPU
- 1K data memory and 4K program memory
- Clock rate: 740 kHz
Just a few years later, the word size of the 4004 was doubled to form the 8008.
25. BRIEF HISTORY OF COMPUTER ARCHITECTURE
1974-1977 - the first personal computers, introduced on the market as kits (major assembly required).
- Scelbi (SCientific ELectronic and BIological), designed by the Scelbi Computer Consulting Company and based on Intel's 8008 microprocessor, with 1K of programmable memory. The Scelbi sold for $565, with an additional 15K of memory available for $2760.
- Mark-8 (also Intel 8008 based), designed by Jonathan Titus.
26. BRIEF HISTORY OF COMPUTER ARCHITECTURE
- Altair (based on the new Intel 8080 microprocessor), built by MITS (Micro Instrumentation Telemetry Systems). The computer kit contained an 8080 CPU, a 256-byte RAM card, and a new Altair Bus design for the price of $400.
27. BRIEF HISTORY OF COMPUTER ARCHITECTURE
1976 - Steve Wozniak and Steve Jobs released the Apple I computer and started Apple Computers. The Apple I was the first single circuit board computer. It came with a video interface, 8 KB of RAM and a keyboard. The system incorporated some economical components, including the 6502 processor (only $25 - designed by Rockwell and produced by MOS Technologies) and dynamic RAM.
1977 - The Apple II computer model was released, also based on the 6502 processor, but it had color graphics (a first for a personal computer), and used an audio cassette drive for storage. Its original configuration came with 4 KB of RAM, but a year later this was increased to 48 KB of RAM and the cassette drive was replaced by a floppy disk drive.
28. BRIEF HISTORY OF COMPUTER ARCHITECTURE
1977 - The Commodore PET (Personal Electronic Transactor), designed by Chuck Peddle, also ran on the 6502 chip, but at half the price of the Apple II. It included 4 KB of RAM, monochrome graphics and an audio cassette drive for data storage.
1981 - IBM released their new computer, the IBM PC, which ran on a 4.77 MHz Intel 8088 microprocessor and was equipped with 16 kilobytes of memory, expandable to 256k. The PC came with one or two 160k floppy disk drives and an optional color monitor.
29. BRIEF HISTORY OF COMPUTER ARCHITECTURE
The IBM PC was the first one built from off-the-shelf parts (called open architecture) and marketed by outside distributors.
30. BRIEF HISTORY OF COMPUTER ARCHITECTURE
First Generation (1945-1958)
31. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Second Generation (1958-1964)
32. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Third Generation (1964-1974)
33. BRIEF HISTORY OF COMPUTER ARCHITECTURE
1974-present
Intel 8080
- 8-bit Data
- 16-bit Address
- 6 μm NMOS
- 6K Transistors
- 2 MHz
- 1974
34. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Motorola 68000
- 32-bit architecture internally, but 16-bit data bus
- 16 32-bit registers, 8 data and 8 address registers
- 2-stage pipeline
- no virtual memory support
- 68020 was fully 32-bit externally
- 1979
35. BRIEF HISTORY OF COMPUTER ARCHITECTURE
Intel386 CPU
- 32-bit Data
- improved addressing
- security modes (kernel, system services, application services, applications)
- 1985