IT5304 Computer Systems II (Optional)

INTRODUCTION
This is one of the optional courses designed for Semester 5 of the Bachelor of Information Technology Degree program. This course builds on the knowledge gained through Computer Systems I.

CREDITS: 04

LEARNING OUTCOMES
After successful completion of this course, the student will be able to:
• Appreciate the new paradigms used in emerging processor technologies.
• Explain the core concepts of operating systems.
• Apply basic skills in systems programming.

MINOR MODIFICATIONS
When minor modifications are made to this syllabus, they will be reflected in the Virtual Learning Environment (VLE), and the latest version can be downloaded from the relevant course page of the VLE. Please send your suggestions and comments through the VLE.
http://vle.bit.lk

ONLINE LEARNING MATERIALS AND ACTIVITIES
If you are a registered student of the BIT degree program, you can access all learning materials and this syllabus in the VLE: http://vle.bit.lk. Participating in the learning activities given in the VLE is very important for learning this subject.

FINAL EXAMINATION
The final exam of the course will be held at the end of the semester. Learning activities and tutorial exercises are important in this course, as they help students prepare for the final exam. The final exam is a two-hour written paper with four compulsory questions.
OUTLINE OF SYLLABUS
Topic                                  Hours
1- Review of Digital Logic              06
2- Systems Architecture                 06
3- Instruction Set Architecture         08
4- Instruction Pipelining               08
5- Memory Hierarchy                     06
6- Parallelism                          06
7- Operating System Concepts            20
Total for the subject                   60

REQUIRED MATERIALS
Main Reading
Ref 1: Operating System Concepts by Silberschatz, Galvin, Gagne, International Student Version, 8th Edition, Wiley India (P) Limited, 2010
Ref 2: Computer Architecture – A Quantitative Approach by John L. Hennessy and David A. Patterson, 4th Edition, Elsevier
Ref 3: Computer Organization & Architecture by William Stallings, 7th Edition, Prentice Hall India

DETAILED SYLLABUS:

Section 1: Review of Digital Logic (06 hrs)
Instructional Objectives
• Design complex combinational logic circuits using AND/OR, NAND and NOR gates.
• Design logic circuits using decoders and multiplexers.
• Discuss the use of flip-flops and latches in synchronous sequential logic circuits.
• Design sequential circuits such as modulo counters and random number generators.
• Discuss practical aspects of digital logic design.

Material /Sub Topics
1.1. Boolean expressions & their simplification [Ref. 3: pg 701-703]
  1.1.1. Algebraic and Karnaugh map approaches
1.2. Elementary logic gates: XOR, AND, OR, NOT etc. [Ref. 3: pg 703-705]
  1.2.1. Sum-of-products and product-of-sums expressions
1.3. Combinational logic devices [Ref. 3: pg 705-726]
  1.3.1. Half & full adders, multiplexers, encoders, decoders
  1.3.2. Cross synthesis, e.g. adders from multiplexers
  1.3.3. Applications: BCD to 7-segment decoder, bucket-brigade (ripple-carry) vs. carry look-ahead adders
1.4. Sequential logic devices: flip-flops, registers, counters [Ref. 3: pg 726-735]
  1.4.1. Latches
  1.4.2. D flip-flop as a 1-bit memory cell
  1.4.3. Applications: memory arrays, pseudo-random number generators

Section 2: Systems Architecture (06 hrs)
Instructional Objectives
• Define the stored program concept and explain the flexibility the concept gives to the design of digital computers.
• Describe the structure of a computer system and explain the purpose of each of its main blocks.
• Show the flow and associated behaviour of the fetch and execute cycles for any machine instruction.
• Explain the significance of Flynn's taxonomy.
• Describe the effectiveness of CPI and FLOPS as performance metrics.

Material /Sub Topics
2.1. Stored program control concept
  2.1.1. The Von Neumann architecture [Ref. 3: pg 17-29]
  2.1.2. Generic organization of a system: CPU, cache, main memory, I/O [Ref. 3: pg 10-14]
  2.1.3. Fetch-Execute cycle [Ref. 3: pg 424-426]
2.2. Flynn's classification of architectures [Ref. 2: pg 197]
  2.2.1. SISD
  2.2.2. SIMD
  2.2.3. MIMD
2.3. CPU performance metrics [Ref: Teacher's note]
  2.3.1. CPI, FLOPS; and inter-conversion from/to IPS and CPI
  2.3.2. Word length and its effect on numerical range

Section 3: Instruction Set Architecture (08 hrs)
Instructional Objectives
• Discuss the term "Instruction Set Architecture" and show how it defines a given processor architecture.
• Describe stack architectures.
• Identify the underlying reasons for the emergence of the RISC architecture.
• List the advantages and disadvantages of a large CPU register file.
• Identify valid addressing modes for RISC.
• Emulate CISC interactions using RISC.

Material /Sub Topics
3.1. Main CPU architecture paradigms [Ref. 2: Appendix B-2–B-7]
  3.1.1. Accumulator
  3.1.2. Register (RISC)
  3.1.3. Stack
  3.1.4. Memory/register (CISC)
3.2. Generic RISC CPU characteristics [Ref. 2: Appendix B-23; Ref. 4: pg 462-473]
  3.2.1. Relative performance in system and user environments
  3.2.2. Large register file vs. on-board cache
3.3. A generic RISC instruction set [Ref. 2: Appendix B-2–B-21]
  3.3.1. Arithmetic and logic (Add, And, Subtract, Or)
  3.3.2. Data transfer (Load and Store)
  3.3.3. Flow control (Branch, Jump, Procedure Call and Return, Traps)
3.4. Emulating CISC and stack instructions using RISC instructions [Ref: Teacher's note]
3.5. Encoding simple high-level language constructs [Ref: Teacher's note]
  3.5.1. Using CISC
  3.5.2. Using RISC
  3.5.3. Calculation of CPI for a given scenario on a given architecture

Section 4: Instruction Pipelining (08 hrs)
Instructional Objectives
• Discuss how the need to reduce CPI motivated pipelining.
• Discuss the effects of memory latency and process blocking on pipeline performance.
• Analyse pieces of machine code for pipeline performance.
• Explain pipelining for CISC instructions.

Material /Sub Topics
4.1 Expression for speedup over a non-pipelined system [Ref. 2: Appendix A-2–A-3]
4.2 Generic RISC five-stage pipeline [Ref. 2: Appendix A-5–A-6]
  4.2.1 Microinstruction sequencing under the IF, ID, EX, MEM & WB stages [Ref. 2: Appendix A-6–A-10]
  4.2.2 Pipeline hazards: structural, data, control, and their effects
4.3 Elimination of structural, data & control hazards: individual caches, arithmetic bypass, branch prediction methods [Ref: Teacher's note]
  4.3.1 RISC code execution analysis with hazard elimination
4.4 Pipelining for CISC: emulation by RISC; complex pipelines [Ref: Teacher's note]
4.5 Memory and context-switching effects on pipeline performance [Ref. 2: Appendix A-37]

Section 5: Memory Hierarchy (06 hrs)
Instructional Objectives
• Identify the need to map a CPU-generated virtual address to a physical memory address.
• Identify the bottleneck between CPU and memory latency.
• Describe how a memory hierarchy can provide a balance between cost and latency.
• Explain why the locality-of-reference principle should work in practice.
• Evaluate the relative costs and performance of cache architectures.
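The last objective above rests on the two-level average-access-time expression developed in sub-topic 5.2.3. A minimal C sketch of that expression follows; the function name and the miss-penalty convention are illustrative assumptions, not part of the syllabus:

```c
/* Average read access time of a two-level (cache + main memory) hierarchy:
 *   t_avg = h * t_cache + (1 - h) * (t_cache + t_main)
 * This assumes a miss probes the cache first and then falls through to
 * main memory; some texts instead use h*t_cache + (1-h)*t_main. */
double avg_access_time(double hit_ratio, double t_cache_ns, double t_main_ns) {
    return hit_ratio * t_cache_ns
         + (1.0 - hit_ratio) * (t_cache_ns + t_main_ns);
}
```

With a 95% hit ratio, a 1 ns cache and a 100 ns main memory, the average read time is about 6 ns, which shows how even a small miss ratio dominates the overall latency.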
Material /Sub Topics
5.1. Read/write memory [Ref. 3: pg 170-179]
  5.1.1. Virtual vs. physical memory organization
  5.1.2. Byte-wise and word-wise memory organization
  5.1.3. Memory parameters: access time, cycle time, cost per bit
5.2. Memory hierarchy [Ref. 3: pg 104-108]
  5.2.1. Locality-of-reference principle: spatial vs. temporal locality
  5.2.2. Memory hierarchy in practice: cache, main memory and secondary memory
  5.2.3. Expression for the average read-only access time of a 2-level memory as a function of hit ratio and memory access times
5.3. Cache memory; cache parameters: block size, placement & replacement policies [Ref. 3: pg 108-127]
5.4. Fully associative, set-associative & direct-mapped cache organizations [Ref. 3: pg 127-131]
  5.4.1. Address encodings for tag, index and block offset

Section 6: Parallelism (06 hrs)
Instructional Objectives
• Explain the situations in which parallelism will be effective.
• Describe the significance of Amdahl's law.
• Identify popular processors in each category of the new processor taxonomy.
• Describe where "cluster computing" fits in MIMD.

Material /Sub Topics
6.1. Processor-level parallelism
  6.1.1. Instruction-level parallelism [Ref. 2: pg 66-114, pg 154-185]
    6.1.1.1. Superscalar execution; compiler assistance (e.g. loop unrolling)
  6.1.2. Thread-level parallelism: from multithreading to hyper-threading
    6.1.2.1. A new processor classification: STSP, STMP, MTSP, MTMP [Ref. 2: pg 196-264]
  6.1.3. Multicore processors [Ref: Teacher's note]
    6.1.3.1. Fundamentals
    6.1.3.2. Graphical processors (e.g. NVIDIA GPU)
6.2. Multiprocessor parallelism [Ref. 2: pg 230-237]
  6.2.1. Exploiting inherent parallelism in applications (e.g. matrix multiplication vs. embarrassingly parallel applications)
  6.2.2. Amdahl's law: derivation from first principles
  6.2.3. Shared memory architecture
  6.2.4. Distributed memory architectures

Section 7: Operating System (OS) Concepts (20 hrs)
Instructional Objectives
• Describe the structure of an OS.
• Describe the concept of a process, its context, creation, and termination.
• Apply and analyse system calls embedded in C code fragments.
• Write pthread-based C code to achieve given tasks.
• Describe various CPU scheduling algorithms.
• Solve the critical section problem using Peterson's solution.
• Discuss mutual exclusion solutions using semaphores.
• Describe swapping and demand paging in virtual memory.
• Discuss page replacement algorithms.
• Describe the concept of "files" and file system mounting.
• Describe the operation of the I/O subsystem.

Material /Sub Topics
7.1 Introduction
  7.1.1 Operating system services [Ref. 1: Pg 49-55]
  7.1.2 Overview of system calls [Ref. 1: Pg 55-66]
  7.1.3 OS structure [Ref. 1: Pg 70-76]
  7.1.4 System boot procedure [Ref. 1: Pg 89-90]
7.2 Process Management
  7.2.1 Processes and process context switching [Ref. 1: Pg 101-105]
  7.2.2 System calls for process management [Ref. 1: Pg 110-116]
  7.2.3 Multithreaded programming [Ref. 1: Pg 153-161]
  7.2.4 Process scheduling [Ref. 1: Pg 105-110, Pg 183-206]
  7.2.5 The critical section problem [Ref. 1: Pg 225-231]
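The objectives on pthread-based C programming and the critical section problem (7.2.3, 7.2.5) come together in exercises like the following sketch, which protects a shared counter with a mutex; the function name and the thread/iteration counts are illustrative assumptions, not prescribed by the syllabus:

```c
#include <pthread.h>

#define NUM_THREADS 4
#define INCREMENTS  100000

static long counter;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter; the mutex makes the
 * read-modify-write a critical section, so no increments are lost. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&counter_lock);   /* enter critical section */
        counter++;
        pthread_mutex_unlock(&counter_lock); /* leave critical section */
    }
    return NULL;
}

/* Spawns the workers, joins them, and returns the final count:
 * NUM_THREADS * INCREMENTS when mutual exclusion holds. */
long run_counter_demo(void) {
    pthread_t tid[NUM_THREADS];
    counter = 0;
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
    return counter;
}
```

Compile with `-pthread`. Removing the lock/unlock calls typically makes the final count fall short of 400000, which is the classic demonstration of the critical section problem in a lab setting.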