This document contains the questions and answers from a Computer Architecture and Organization exam from 2010 at the Gandhi Institute for Education & Technology. It includes 10 short answer questions covering topics like memory address register function, cache memory units, floating point representation, and virtual memory mechanisms. It also includes longer questions on cache mapping techniques, types of ROM, Booth multiplication algorithm, and addressing modes. The exam is out of 70 total marks and contains both short answer and longer explanatory questions.
This document contains the exam questions and answers for the Computer Architecture and Organization course at Gandhi Institute for Education & Technology. It includes 9 multiple choice questions covering topics such as logical vs physical addresses, memory types like ROM and RAM, cache mapping schemes, and the differences between various memory technologies. It also includes the register organization of the 8085 microprocessor and explanations of microprogrammed control and the Wilkes model.
There are two categories of data compression methods: lossless and lossy. Lossless methods preserve the integrity of the data by using compression and decompression algorithms that are exact inverses, while lossy methods allow for data loss. Common lossless methods include run-length encoding and Huffman coding, while lossy methods like JPEG, MPEG, and MP3 are used to compress images, video, and audio by removing imperceptible or redundant data.
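To make the "exact inverse" property concrete, here is a minimal run-length encoding sketch in Python (my own illustration, not code from any of the summarized slides):

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse each run of repeated symbols into a (symbol, count) pair."""
    runs: list[tuple[str, int]] = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1] = (symbol, runs[-1][1] + 1)
        else:
            runs.append((symbol, 1))
    return runs


def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Exact inverse of rle_encode: expand every run back out."""
    return "".join(symbol * count for symbol, count in runs)


assert rle_decode(rle_encode("AAAABBBCCD")) == "AAAABBBCCD"  # lossless round trip
```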
The document summarizes a study evaluating three data compression algorithms created by Dr. Samuel Sterns. The study was led by Myuran Kanga and evaluated the algorithms on various waveforms to determine compression accuracy and efficiency. Algorithm 2 used quantization, algorithm 3 added prediction of quantized data, and algorithm 4 used adaptive arithmetic coding for further compression. Waveforms like sine, square and sawtooth waves as well as noise were compressed and decompressed, and the results were analyzed for differences between original and decompressed signals.
This document discusses various methods of data compression. It begins by defining compression as reducing the size of data while retaining its meaning. There are two main types of compression: lossless and lossy. Lossless compression allows for perfect reconstruction of the original data by removing redundant data. Common lossless methods include run-length encoding and Huffman coding. Lossy compression is used for images and video, and results in some loss of information. Popular lossy schemes are JPEG, MPEG, and MP3. The document then proceeds to describe several specific compression algorithms and their encoding and decoding processes.
3 Mathematical Preliminaries of Data Compression (Shubham Jain)
The document discusses different methods of data compression by modeling redundancy in data. It provides three examples: (1) exploiting a linear pattern in data points to compress to 2 bits per sample instead of 5 bits; (2) assigning shorter codes to more frequent symbols in a sequence to compress to 2.58 bits per symbol from 3 bits; and (3) using entropy coding which assigns codes based on symbol probabilities to maximize compression. The goal is to remove redundancy while preserving information content.
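The third example is essentially Shannon entropy, which sets the lower bound that entropy coding tries to approach. A small sketch, assuming a memoryless source with probabilities estimated from symbol frequencies:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(sequence: str) -> float:
    """Shannon entropy H = -sum(p * log2(p)): the lossless lower bound
    on average bits per symbol for a memoryless source."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# 8 distinct symbols need 3 fixed bits each, but a skewed distribution
# drops the entropy (here 2.75 bits), so a variable-length code can win.
print(entropy_bits_per_symbol("aaaabbbbccddefgh"))
```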
Types of data compression: lossy compression, lossless compression, and more, including how data is compressed. Slightly more extensive than the CIE O Level syllabus.
This document provides an introduction to data compression. It defines data compression as converting an input data stream into a smaller output stream. Data compression is popular because it allows for more data storage and faster data transfers. The document then discusses key concepts in data compression including lossy vs. lossless compression, adaptive vs. non-adaptive methods, compression performance metrics, and probability models. It also introduces several standard corpora used to test compression algorithms.
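Of the performance metrics mentioned, compression ratio and space saving are the most common; a tiny illustrative helper (the numbers below are made up):

```python
def compression_metrics(original_bytes: int, compressed_bytes: int) -> tuple[float, float]:
    """Two standard ways to report compressor performance."""
    ratio = original_bytes / compressed_bytes              # e.g. 4.0 means 4:1
    space_saving = 1 - compressed_bytes / original_bytes   # fraction of space saved
    return ratio, space_saving

ratio, saving = compression_metrics(1_000_000, 250_000)
print(f"ratio {ratio:.1f}:1, saving {saving:.0%}")  # ratio 4.0:1, saving 75%
```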
The document discusses structures for data compression. It begins by introducing general compression concepts like lossless versus lossy compression. It then distinguishes between vector and raster data, describing how each is structured and stored. For vectors, it discusses storing points and lines more efficiently. For rasters, it explains how resolution affects file size and covers storage methods like tiles. Overall, the document provides an overview of data compression techniques for different data types.
This document contains the questions and answers from a computer architecture and organization exam. It includes questions about the differences between computer architecture and organization, instruction formats, bus definitions, cache memory advantages, and virtual memory. The responses provide detailed explanations of concepts like locality of reference, thrashing, address mapping, cache hits and misses, and hierarchical memory systems. Justification is given for using a hierarchical approach to improve performance across different memory types. The differences between paging and segmentation in virtual memory are also distinguished.
This document provides an overview of data compression techniques. It discusses lossless compression algorithms like Huffman encoding and LZW encoding which allow for exact reconstruction of the original data. It also discusses lossy compression techniques like JPEG and MPEG which allow for approximate reconstruction for images and video in order to achieve higher compression rates. JPEG divides images into 8x8 blocks and applies discrete cosine transform, quantization, and run length encoding. MPEG spatially compresses each video frame using JPEG and temporally compresses frames by removing redundant frames.
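A minimal sketch of the per-block JPEG stages named above, assuming NumPy is available; the quantization table here is a uniform illustrative one, not the standard JPEG luminance table:

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II of a square block, built from the 1-D cosine basis."""
    n = block.shape[0]
    k = np.arange(n)
    # basis[u, x] = alpha(u) * cos((2x + 1) * u * pi / (2n))
    basis = np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / (2 * n))
    basis[0, :] *= 1 / np.sqrt(2)
    basis *= np.sqrt(2 / n)
    return basis @ block @ basis.T

block = np.arange(64, dtype=float).reshape(8, 8) - 128  # level-shifted sample block
q_table = np.full((8, 8), 16.0)          # illustrative uniform quantizer
coeffs = dct2(block)
quantized = np.round(coeffs / q_table)   # the rounding here is where JPEG loses information
```

The run-length stage then walks `quantized` in zigzag order, where the trailing high-frequency entries are mostly zero.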
This document provides an overview of data compression techniques. It discusses how data compression reduces the number of bits needed to represent data, saving storage space and transmission bandwidth. It describes lossy compression methods like JPEG and MPEG that eliminate redundant information, resulting in smaller file sizes but some loss of data quality. Lossless compression methods like ZIP and GIF are also covered, which compress data without any loss for file types like text where quality is important. Specific lossless compression techniques like run length encoding, Huffman coding, Lempel-Ziv coding are explained. The document concludes with a brief mention of image, video, audio and dictionary based compression methods.
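For the Lempel-Ziv family, the LZW variant is compact enough to sketch in full; this is the textbook encoder, not any particular document's code:

```python
def lzw_encode(text: str) -> list[int]:
    """Dictionary-based LZW: emit the code of the longest phrase already
    in the dictionary, then add that phrase plus one symbol to it."""
    dictionary = {chr(i): i for i in range(256)}  # seed with single bytes
    phrase, codes = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                     # keep extending the match
        else:
            codes.append(dictionary[phrase])
            dictionary[phrase + ch] = len(dictionary)  # learn a new phrase
            phrase = ch
    if phrase:
        codes.append(dictionary[phrase])
    return codes

print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))  # repeats compress to few codes
```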
Data compression, lossy and lossless data compression, classification of lossy and lossless data compression, the Huffman coding method, the LZW method of lossless compression, and compression ratio.
This document discusses information theory and data compression. It covers three main topics:
1) Types of data compression including lossless compression, which retains all original data, and lossy compression, which permanently eliminates some information.
2) Compression methods like removing spaces, using single characters to represent repeated characters, and substituting smaller bit sequences for recurring characters.
3) Continuous amplitude signals, which are varying quantities over a continuum like time, and examples of finite vs infinite duration signals.
These slides cover the fundamentals of data communication and networking, including data compression, which compresses data for transmission over a communication channel. They are useful for engineering students and for candidates who want to master data communication and computer networking.
This document discusses data compression algorithms including lossless and lossy methods. It defines lossless compression as allowing perfect reconstruction of the original data and lossy compression as permitting only approximate reconstruction. Specific lossless methods covered are run-length encoding, Huffman coding, and Lempel-Ziv encoding. Lossy methods discussed are JPEG compression for images, discrete cosine transform, and MPEG video compression. The document concludes that the presented approach of using the Hartley transform for image compression with separate magnitude and phase processing achieved good performance.
A comparison of various data compression techniques that clearly differentiates between them, staying precise and focused on the techniques rather than on the topic in general.
This document discusses different compression techniques including lossless and lossy compression. Lossless compression recovers the exact original data after compression and is used for databases and documents. Lossy compression results in some loss of accuracy but allows for greater compression and is used for images and audio. Common lossless compression algorithms discussed include run-length encoding, Huffman coding, and arithmetic coding. Lossy compression is used in applications like digital cameras to increase storage capacity with minimal quality degradation.
Comparison between Lossy and Lossless Compression (rafikrokon)
This presentation compares lossy and lossless compression. It lists the group members and the topics to be covered, including definitions of compression, lossless compression, and lossy compression. It explains that lossless compression allows exact recovery of the original data, while lossy compression involves some data loss. Lossy compression removes non-essential data and degrades the data, but is cheaper and requires less space and time; lossless compression works well with repeated data and allows exact recovery, but requires more space and time. The presentation also discusses the uses, advantages, and disadvantages of each compression type.
This document summarizes image compression techniques. It discusses why images need to be compressed to reduce file sizes and speed up transmission. It describes how digital images are composed of pixels and color components. The document then covers lossless compression algorithms like Run Length Encoding (RLE) and LZW, which use statistical redundancy or dictionaries to compress images without loss of information. It also mentions lossy compression techniques like quantization that can achieve higher compression ratios but result in some loss of visual quality.
This document discusses various data compression techniques. It begins by explaining why data compression is useful for optimizing storage space and transmission times. It then covers the concepts of entropy and lossless versus lossy compression methods. Specific lossless methods discussed include run-length encoding, Huffman coding, and Lempel-Ziv encoding. Lossy methods covered are JPEG for images, MPEG for video, and MP3 for audio. Key steps of each technique are outlined at a high level.
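Since Huffman coding recurs in several of these overviews, a short reference sketch may help; it builds the prefix codes with a heap and assumes at least two distinct symbols:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Greedy Huffman construction: repeatedly merge the two least
    frequent subtrees; frequent symbols end up with short codes."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique int so the code dicts are never compared
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)  # 'a' (5 occurrences) gets the shortest code
```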
Hardware/software co-design for a parallel three-dimensional Bresenham's algo... (IJECEIAES)
Line plotting is one of the basic operations in scan conversion, and Bresenham's line-drawing algorithm is an efficient and highly popular algorithm for this purpose. It steps from one end-point of the line to the other, calculating one point at each step, so the total calculation time depends on the length of the line, i.e. on the number of points produced. In this paper, the authors speed up the Bresenham algorithm by partitioning each line into a number of segments, finding the points belonging to those segments, and drawing them simultaneously to form the main line; the more segments generated, the faster the points are calculated. By employing 32 cores in a Field Programmable Gate Array, a line of 992 points is formulated in only 0.31 μs. The complete system is implemented on a Zybo board containing the Xilinx Zynq-7000 chip (Z-7010).
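For readers unfamiliar with the algorithm being parallelized, this is the classic sequential 2-D Bresenham walk (a Python reference sketch; the paper's contribution is partitioning such a walk into segments computed concurrently on FPGA cores):

```python
def bresenham(x0: int, y0: int, x1: int, y1: int) -> list[tuple[int, int]]:
    """Integer-only Bresenham line walk, valid for all octants."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # running error term decides when to step in y (or x)
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 6, 4))  # one point per step, no floating point
```

Because each point depends only on the accumulated error term, a segment's starting error can be computed directly, which is what makes a segment-parallel decomposition like the paper's possible.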
The document discusses various methods for data compression, including lossless and lossy techniques. Lossless methods like run-length encoding and Huffman coding remove redundant data during compression and add it back during decompression so the original data is preserved exactly. Huffman coding assigns shorter codes to more frequent symbols and longer codes to less frequent symbols. Run-length encoding replaces repeated symbols with a single instance of the symbol and count. Lossy methods like JPEG and MP3 are used for media as small errors are imperceptible to humans, allowing greater compression.
Hardback solution to accelerate multimedia computation through mgp in cmp (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. It brings together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
OCR for Gujarati Numeral using Neural Network (ijsrd.com)
This paper presents an optical character recognition (OCR) system for handwritten Gujarati numerals. A good deal of work exists for Indian languages such as Hindi, Tamil, Bengali, Malayalam, and Gurumukhi, but Gujarati is a language for which hardly any work is traceable, especially for handwritten characters. In this work a neural network is applied to handwritten Gujarati numeral recognition: a multi-layered feed-forward neural network is proposed for classification, features of the numerals are extracted from four different profiles of each digit, and thinning and skew correction are performed as preprocessing before classification. The work achieved approximately 81% recognition accuracy for Gujarati handwritten numerals.
A new algorithm for data compression technique using VLSI (Tejeswar Tej)
This project report presents a new data compression algorithm called K-RLE for use in wireless sensor networks. K-RLE is based on run-length encoding (RLE) but introduces a parameter K that allows runs of similar data values to be compressed together. This increases compression ratio compared to standard RLE. The project implements K-RLE compression in an FPGA with ADC, FIFOs and a compression controller block. Simulation results show K-RLE can achieve higher compression than RLE with lower hardware requirements and power consumption, making it suitable for wireless sensor networks where energy efficiency is important.
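A rough software sketch of the K-RLE idea as described in this summary (the paper's FPGA datapath differs, and note that for K > 0 the scheme is lossy, since decoding reproduces each run's representative value):

```python
def k_rle_encode(samples: list[int], k: int) -> list[tuple[int, int]]:
    """K-RLE: extend the current run while new samples stay within +/-k
    of the run's representative value; k = 0 degenerates to plain RLE."""
    runs: list[tuple[int, int]] = []
    for s in samples:
        if runs and abs(s - runs[-1][0]) <= k:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs

noisy = [20, 21, 20, 22, 35, 36, 35, 20]
print(k_rle_encode(noisy, k=0))  # 8 runs: plain RLE gains nothing here
print(k_rle_encode(noisy, k=2))  # [(20, 4), (35, 3), (20, 1)]
```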
This document discusses data compression techniques. It begins by defining data compression as encoding information in a file to take up less space. It then covers the need for compression to save storage and transmission time. The main types of compression discussed are lossless, which allows exact reconstruction of data, and lossy, which allows approximate reconstruction for better compression. Specific lossless techniques covered include Huffman coding, which assigns variable length codes based on frequency. Lossy techniques like JPEG are also discussed. The document concludes by listing applications of compression techniques in files, multimedia, and communication.
This document discusses a proposed method for lossless image compression that uses a hybrid of wavelet transforms, embedded zero tree coding, and Huffman coding. Specifically, it uses discrete wavelet transforms to decompose images into sub-bands, applies embedded zero tree coding to the wavelet coefficients to encode insignificant coefficients as zero trees, and then uses Huffman coding for further compression. Tables of results are presented showing compression ratios, bits per pixel, and PSNR values for images compressed with different threshold values in the wavelet decomposition. The goal is to determine an optimal threshold level for each decomposition to maximize compression while perfectly reconstructing the original image content.
This document provides an overview of a research project on image compression. It discusses image compression techniques including lossy and lossless compression. It describes using discrete wavelet transform, lifting wavelet transform, and stationary wavelet transform for image transformation. Experiments were conducted to compare the compression ratio and processing time of different combinations of wavelet transforms, vector quantization, and Huffman/Arithmetic coding. The results were analyzed to evaluate the compression performance and efficiency of the different methods.
This document contains an exam for a computer architecture and organization course, including 10 multiple choice questions covering various topics in the subject. The questions address issues like addressing modes, instruction cycles, interrupts, cache memory, virtual memory, and different types of computer architecture like Von Neumann architecture. Detailed answers are provided for each question exploring the key concepts and distinguishing between related terms. The exam was prepared by an assistant professor and provides an overview of the content covered in the course.
The document discusses cache memory, virtual memory, and memory management in hardware. It describes how cache memory stores frequently used data from main memory for faster CPU access. Virtual memory allows programs to access more memory than physically available by mapping virtual addresses to physical addresses. The performance of cache memory is measured by hit and miss rates, with hits accessing the cache faster and misses requiring additional time to retrieve data from main memory.
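The hit/miss trade-off described here is usually summarized as the average memory access time; a one-line model with illustrative numbers:

```python
def average_access_time(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """AMAT: every access pays the cache hit time, and the missing
    fraction additionally pays the main-memory penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# e.g. a 1 ns cache with a 5% miss rate and a 60 ns penalty averages 4 ns
print(average_access_time(1.0, 0.05, 60.0))
```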
AVOIDING DUPLICATED COMPUTATION TO IMPROVE THE PERFORMANCE OF PFSP ON CUDA GPUS (csandit)
The document discusses improving the performance of the Tabu Search algorithm for solving the Permutation Flowshop Scheduling Problem (PFSP) on CUDA GPUs by avoiding duplicated computation among threads. It first provides background on GPU architecture, the PFSP problem, and related parallelization methods. It then observes that if two permutations share the same prefix, their completion time tables will contain identical column data equal to the length of the prefix, leading to duplicated computation. The paper proposes an approach where each thread is assigned a permutation and allocated shared memory to store and compute the completion time table in parallel, avoiding this duplicated work by leveraging the shared prefix property. Experimental results show the new approach runs up to 1.5 times faster than an existing
AVOIDING DUPLICATED COMPUTATION TO IMPROVE THE PERFORMANCE OF PFSP ON CUDA GPUS (cscpconf)
Graphics Processing Units (GPUs) have emerged as powerful parallel compute platforms for various application domains. A GPU consists of hundreds or even thousands of processor cores and adopts the Single Instruction Multiple Threading (SIMT) architecture. Previously, we proposed an approach that optimizes the Tabu Search algorithm for solving the Permutation Flowshop Scheduling Problem (PFSP) on a GPU by using a math function to generate all the different permutations, avoiding the need to place all the permutations in global memory. Building on that result, this paper proposes another approach that further improves performance by avoiding duplicated computation among threads, which is incurred when any two permutations have the same prefix. Experimental results show that the GPU implementation of our proposed Tabu Search for PFSP runs up to 1.5 times faster than another GPU implementation proposed by Czapiński and Barnes.
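To make the shared-prefix observation concrete, here is the standard flowshop completion-time recurrence (my own Python illustration, not the paper's CUDA kernel). Each row depends only on earlier rows, so two permutations sharing a prefix share those rows verbatim, and that is exactly the recomputation the paper avoids:

```python
def completion_table(perm: list[int], proc_time: list[list[int]]) -> list[list[int]]:
    """C[j][m]: completion time of the j-th job of `perm` on machine m,
    via C[j][m] = max(C[j-1][m], C[j][m-1]) + p[job][m]."""
    n_machines = len(proc_time[0])
    table: list[list[int]] = []
    for j, job in enumerate(perm):
        row: list[int] = []
        for m in range(n_machines):
            prev_job = table[j - 1][m] if j > 0 else 0
            prev_machine = row[m - 1] if m > 0 else 0
            row.append(max(prev_job, prev_machine) + proc_time[job][m])
        table.append(row)
    return table  # table[-1][-1] is the makespan that Tabu Search evaluates

proc = [[3, 2, 4], [2, 4, 1], [4, 1, 3]]          # proc[job][machine]
print(completion_table([0, 1, 2], proc)[-1][-1])  # makespan 13
```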
The document discusses memory segmentation and paging techniques used in operating systems. Segmentation divides memory into variable-length segments, while paging divides memory into fixed-size pages. Paging maps logical pages to physical frame addresses using a page table for efficient memory access. It allows programs to access more memory than is physically available by swapping pages between memory and disk. The combination of segmentation and paging provides memory protection and reduces internal and external fragmentation.
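A minimal sketch of the page-table lookup described above, with an invented 4 KiB page size and a Python dict standing in for the hardware structure:

```python
PAGE_SIZE = 4096  # bytes; a power of two so the split is a bit operation in hardware

def translate(page_table: dict[int, int], logical_addr: int) -> int:
    """Split the logical address into (page number, offset), map the page
    to its physical frame, and splice the offset back on."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} not resident")  # OS would swap it in
    return page_table[page] * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}           # logical page -> physical frame
print(translate(page_table, 4100))  # page 1, offset 4 -> frame 2 -> 8196
```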
This document discusses several memory management techniques:
1. Contiguous allocation allocates processes to contiguous regions of memory but can lead to fragmentation.
2. Paging divides memory into fixed-size pages and gives each process a page table that maps virtual to physical addresses, reducing fragmentation. It uses translation lookaside buffers (TLBs) to speed address translation.
3. Segmentation divides processes into logical segments and uses segment tables to map segments to physical addresses. It provides a modular view of memory but external fragmentation remains an issue.
This document discusses different memory management techniques including:
1. Contiguous allocation allocates processes to contiguous regions of memory but can lead to fragmentation. Paging and segmentation address this by allowing non-contiguous allocation.
2. Paging maps logical addresses to physical frames through a page table. It supports non-contiguous allocation but has translation overhead that is reduced using translation lookaside buffers.
3. Segmentation divides memory into logical segments and uses a segment table to map logical to physical addresses. It matches the user's view of memory but external fragmentation remained an issue until combined with paging.
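A matching sketch for the segmentation side (illustrative base/limit values); the limit check is what provides the memory protection these summaries mention:

```python
def segment_translate(segment_table: dict[int, tuple[int, int]], seg: int, offset: int) -> int:
    """A segment table maps a segment number to (base, limit); any offset
    past the limit is rejected instead of silently reading another segment."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError(f"segmentation fault: offset {offset} >= limit {limit}")
    return base + offset

segments = {0: (0, 1000), 1: (1400, 400)}  # segment -> (base, limit)
print(segment_translate(segments, 1, 52))  # 1400 + 52 = 1452
# segment_translate(segments, 1, 500) would raise: offset beyond segment 1's limit
```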
The document discusses different memory management techniques used in operating systems:
1. Programs go through several steps before execution - compilation, loading, and execution where address binding can occur.
2. Memory management schemes separate logical and physical addresses using techniques like paging and segmentation to map virtual to physical addresses.
3. Swapping allows processes to be temporarily moved out of memory to disk to improve memory utilization at the cost of performance.
This document discusses and compares the different local replication methods available with EMC Symmetrix arrays: TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap. TimeFinder/Mirror uses RAID-1 mirroring techniques to create replicas, allowing up to two concurrent replicas but limiting other features. TimeFinder/Clone creates independent replica devices that can be accessed without dependency on the source, allowing more concurrent replicas but lacking mirroring performance benefits. TimeFinder/Snap creates virtual replicas using pointers to optimize storage usage but requires copies if over 30% of data changes. Application needs around features, performance, and number of replicas determine the best choice.
Computing architectures can be categorized as general purpose, domain-specific, or application-specific according to their flexibility. General purpose computers are based on the von Neumann architecture and can execute any computation but with reduced performance for some algorithms. Domain-specific processors are tailored for a class of applications to improve performance of common operations. Application-specific processors are tailored for a single application and have no instruction fetch or decode cycles, directly implementing the application in hardware.
This document proposes an approach called M3 that uses memory mapping to scale machine learning algorithms to large datasets that exceed RAM size. The authors tested M3 on datasets up to 190GB for logistic regression and k-means algorithms. They found that M3's runtime scales linearly with dataset size and its speed on a single machine is comparable to an 8-instance Spark cluster and significantly faster than a 4-instance Spark cluster. The authors contribute M3 as an easy-to-apply approach that utilizes virtual memory to enable existing ML algorithms to work with out-of-core datasets.
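The core trick M3 relies on, letting the operating system page data in on demand, can be tried with NumPy's memmap. A sketch with an invented file name and shape, not the authors' code (the file must already exist with this size):

```python
import numpy as np

# Map a large on-disk array without loading it into RAM; the OS pages
# "features.bin" in and out as slices are touched.
data = np.memmap("features.bin", dtype=np.float32, mode="r", shape=(50_000_000, 8))

# One streaming pass, chunked so only a window is resident at a time,
# e.g. the accumulation step of a k-means or logistic-regression epoch.
chunk = 1_000_000
total = np.zeros(8, dtype=np.float64)
for start in range(0, data.shape[0], chunk):
    total += data[start:start + chunk].sum(axis=0)
print(total / data.shape[0])  # per-feature mean of the out-of-core dataset
```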
Paging and Segmentation in Operating System (Raj Mohan)
The document discusses different types of memory used in computers including physical memory, logical memory, and virtual memory. It describes how virtual memory uses paging and segmentation techniques to allow programs to access more memory than is physically available. Paging divides memory into fixed-size pages that can be swapped between RAM and secondary storage, while segmentation divides memory into variable-length, protected segments. The combination of paging and segmentation provides memory protection and efficient use of available RAM.
The document discusses different memory management techniques used in operating systems. It begins with an overview of processes entering memory from an input queue. It then covers binding of instructions and data to memory at compile time, load time, or execution time. Key concepts discussed include logical vs physical addresses, the memory management unit (MMU), dynamic loading and linking, overlays, swapping, contiguous allocation, paging using page tables and frames, and fragmentation. Hierarchical paging, hashed page tables, and inverted page tables are also summarized.
Robust Fault Tolerance in Content Addressable Memory Interface (IOSRJVSP)
With the rapid improvement in data exchange, large memory devices have emerged in the recent past. Operational control of such large memories has become a tedious task owing to the faster, distributed nature of memory units. During memory access it is observed that reads and writes often encounter faulty locations, so faulty data are written to or fetched from the addressed locations. In real-time applications this error cannot be tolerated, as it leads to variation in operating conditions that depend on the memory data. Hence, optimal fault-tolerance control in content addressable memory is required. In this paper, we present a fault-tolerance approach that controls the fault-addressing overhead by introducing a new addressing scheme using redundant control modeling of the fault address unit. The presented approach achieves fault control over multiple fault locations in different dimensions with redundant coding.
IRJET - Chatbot Using Gated End-to-End Memory Networks (IRJET Journal)
The document describes a proposed chatbot system that uses a gated end-to-end memory network model for hospital appointment booking. The model is trained on dialog data consisting of user utterances and bot responses related to booking appointments. It uses an attention mechanism over the dialog memory to select relevant parts of the conversation. The model is trained end-to-end to dynamically regulate interactions with the memory. Experiments show it can handle new combinations of fields when booking appointments in a simulated hospital reservation scenario.
Machine learning methods can be applied in several areas of computer architecture including reducing simulation time, exploring large design spaces, and optimizing resource management and hardware predictors. Supervised and unsupervised learning algorithms are discussed as well as applications like basic block instruction scheduling where machine learning has shown improvements over traditional heuristic approaches.
Main memory management techniques include paging and segmentation. Paging maps logical addresses to physical frames through a page table. It allows non-contiguous allocation but causes internal fragmentation. Segmentation maps a logical address to physical memory using a segment table containing base addresses and limits. It matches the user's logical view of memory and allows sharing through segments. Both techniques use memory protection rings and translation to virtualize the physical address space.
Lesson plan proforma database management system (SANTOSH RATH)
This document contains a lesson plan and progress sheet for a Database Management System course being taught over 3 semesters in 2014-15. It lists 48 topics divided across 3 modules that will be covered over the course, along with the planned and actual dates for delivering each topic and any remarks. The topics include introductions to databases, data models, SQL, normalization, transactions, concurrency, locking, recovery techniques, and more advanced DBMS concepts.
This document contains a lesson plan and progress sheet for a Programming in C course being taught over 3 modules during the first semester of the 2014-2015 academic year at the Gandhi Institute for Education and Technology in Bhubaneswar, India. The plan outlines 46 topics to be covered, including introductions to algorithms, flowcharts, data types, operators, control structures, arrays, strings, pointers, functions, structures, unions, files and I/O. Dates are provided for when each topic is planned to be delivered, along with actual delivery dates and remarks.
This document contains a list of expected questions for the Theory of Computation subject in the 5th semester of the Computer Science branch. It includes short questions, long questions, and questions covering various topics like DFAs, NFAs, regular expressions, context-free grammars, pushdown automata, Turing machines, complexity classes, decidability, and more. The list was prepared by Assistant Professor Santosh Kumar Rath.
This document lists 30 short questions and 19 long questions related to the subject of Theory of Computation for 5th semester CSE students. The questions cover topics such as the differences between DFAs and NFAs, regular expressions, grammars, languages, automata, Turing machines, complexity classes, and decidability. Example problems include designing automata to recognize specific languages and proving properties of formal languages and grammars.
This document contains questions from various years (2008-2012) related to object oriented programming concepts in C++. The questions cover topics such as classes and objects, inheritance, polymorphism, exception handling, functions, pointers, arrays, operators, input/output etc. Short questions ranging from 2-5 marks and long questions ranging from 5-10 marks are included related to concepts, syntax, programs and differences.
This document contains questions for a database engineering examination. It includes questions about data definition language commands, differences between file processing systems and database systems, weak entity sets, cardinality, foreign keys, hashing techniques, lock types, differences between object-oriented and object-relational databases, rollup, entity-relationship modeling, converting models, B+ tree operations, relational algebra queries, relational calculus queries, SQL queries, and more. Students are asked to answer one compulsory question with multiple parts, and five questions from the remaining list.
This document lists expected questions from the Theory of Computation subject for 5th semester CSE students. It includes short questions and long questions on topics such as the differences between DFAs and NFAs, regular expressions, grammars, Turing machines, complexity classes, decidability, and more. A total of 30 short questions and 19 long questions are provided to help students prepare for their exam.
This document lists 30 short questions and 19 long questions related to the subject of Theory of Computation for 5th semester CSE students. The questions cover topics such as the differences between DFAs and NFAs, regular expressions, grammars, languages, automata, Turing machines, complexity classes, and decidability. Example problems include designing automata to recognize specific languages, proving languages are context-free, and determining the relationships between computational models such as finite automata and Turing machines.
This document contains a set of short and long questions related to database management systems. Some key topics covered include the entity-relationship model, relational data model, normalization, transaction processing, concurrency control, and database recovery. The questions range from definitions and short explanations to examples and multi-step problems involving conceptual and practical database concepts.
The document contains questions related to database management systems (DBMS). It covers topics like data modeling, relational algebra, SQL, transaction processing, concurrency control, and database design. Some key questions ask about the differences between primary and candidate keys, entity relationship modeling, normalization, and query optimization techniques.
This document contains model questions for an exam on object-oriented programming concepts in C++. It includes 30 short answer questions and 15 long answer questions covering topics such as object slicing, copy constructors, templates, exception handling, the this pointer, namespaces, inheritance, operator overloading, polymorphism, abstraction, iterators, containers, encapsulation, and friend functions. Students are asked to define key terms and write programs demonstrating various OOP concepts in C++.
System programming involves creating system software that provides services to computer hardware rather than users. It requires an awareness of hardware. The document discusses types of system software like operating systems, utility programs, and database management systems. It also discusses types of application software like spreadsheets and presentation programs. The evolution of system programming components is explained, including assemblers that translate assembly language into machine code, and loaders that load programs into memory for execution.
This document provides an overview of operating system concepts including system components, batch systems, spooling, multiprogramming, time-sharing systems, distributed systems, parallel systems, real-time embedded systems, system structures, system calls, system programs, and process management. It describes the basic functions of an operating system in managing hardware resources, running application programs, and allowing multiple processes to run concurrently through techniques like multiprocessing and time-sharing.
This document provides an overview of operating system concepts including system components, operating system services, system programs, system calls, process management, and process states. It describes the four main components of a computer system as hardware, operating system, application programs, and users. It defines key operating system concepts such as multiprogramming, time-sharing, distributed systems, and real-time systems. It also explains process management topics like process states, process control blocks, and context switching.
This document contains an assignment on operating system concepts from Gandhi Institute of Education & Technology. It includes short notes on topics like conditions of deadlock, page vs segment, Belady's anomaly, critical section, fragmentation, monitors vs semaphores, race condition, binary vs counting semaphores, thrashing, lazy swapper, garbage collection and priority inheritance protocol. It also contains 10 long questions covering concepts like Banker's algorithm, page replacement algorithms, Peterson's solution for critical section problem, semaphores, thrashing, segment replacement, busy waiting, conditions of deadlock, producer-consumer problem, page fault and demand paging.
This document contains a set of short questions and long questions related to operating system concepts for an assignment. The short questions cover topics like the differences between processes and programs, different scheduling algorithms, process states, and functions of operating systems. The long questions involve drawing Gantt charts for scheduling algorithms, calculating turnaround and waiting times, describing functions and models of operating systems, and explaining concepts like context switching, process control blocks, and inter-process communication.
This document contains 10 questions related to operating system concepts such as file accessing methods, storage area networks, cache hit ratio, disk scheduling algorithms, memory management, I/O subsystem services like buffering and caching, file allocation methods, and Unix concepts like filters and inodes. The questions are from previous years' assignments given by Gandhi Institute for Education & Technology and range from short answer to longer descriptive questions.
Co question 2010
GANDHI INSTITUTE FOR EDUCATION & TECHNOLOGY
FIFTH SEMESTER EXAMINATION-2010
COMPUTER ARCHITECTURE & ORGANIZATION
FULL MARKS: 70    TIME: 3 HOURS
Prepared by: Asst. Prof. Santosh Kumar Rath (CSE Department)
Answer Question No. 1, which is compulsory, and any five from the rest.
The figures in the right-hand margin indicate marks.
1. Answer the following questions: 2 × 10 = 20
(a) What is the function of MAR?
Ans: MAR stands for Memory Address Register. The MAR stores the address of the memory location to be accessed by the processor.
(b) What are the advantages of a multi-bus over a single-bus architecture?
Ans: A multi-bus architecture provides more paths for data transfer than a single-bus architecture. With more transfer paths available, the number of clock cycles needed to execute an instruction is significantly reduced.
(c) Differentiate between direct access and sequential access?
Ans: In direct access we can fetch an element from any position directly, but in sequential access we have to traverse the data in the order in which it was stored. The difference is that direct access allows us to go directly to a specific piece of data using an index, whereas sequential access is used when data is stored sequentially, for example on a magnetic tape; we must pass over all the preceding data before reaching the item we are searching for.
(d) What is the unit of data transfer between processor and cache memory?
Ans: Word or byte is the unit of data transfer between processor and cache memory.
(e) Mention one difference between SDRAM and DDR SDRAM?
Ans: SDRAM stands for synchronous dynamic random access memory, whereas DDR SDRAM stands for double data rate synchronous dynamic random access memory. The two are differentiated by speed: DDR SDRAM transfers data at roughly twice the speed of SDRAM, since it transfers data on both the rising and falling edges of the clock.
(f) Differentiate between DRAM and SRAM in terms of speed, size, and cost?
Ans: Speed: DRAM (dynamic RAM) has a greater access time, i.e. it is slower; SRAM (static RAM) has a smaller access time, hence faster memories.
Size: a DRAM cell is smaller than an SRAM cell, so DRAM packs more capacity into the same chip area.
Cost: DRAM costs less per bit; SRAM costs more.
(g) Define seek time?
Ans: Seek time is a measure of the amount of time required for the read/write head to move between tracks over the surface of the platter.
(h) What do you understand by exponent overflow?
Ans: Exponent overflow relates to the IEEE standard representation of floating-point numbers. It means the exponent of a result has a value that is too large to represent, i.e. it exceeds the size of the exponent field of the register.
(i) Is ROM a RAM? Why?
Ans: ROM is not a RAM, because ROM is non-volatile in nature whereas RAM is volatile in nature; so a ROM cannot be a RAM.
(j) Differentiate between horizontal and vertical microprogramming?
Ans: Horizontal microprogramming requires a larger microprogram memory, whereas vertical microprogramming requires more encoding and decoding of signals; the extra decoding time leads to slower operation.
2) A) List the different cache mapping techniques and explain any one of them.
Ans: There are three types of mapping procedures:
(1) Associative Mapping - The fastest and most flexible cache organization uses associative mapping. The associative memory stores both the address and the content of the memory word. This permits any location in cache to store any word from main memory.
(2) Direct Mapping - Associative memories are expensive compared with RAMs because of the added logic associated with each cell, so direct mapping instead uses part of the memory address itself to select the cache location.
(3) Set-Associative Mapping - A more general method that includes pure associative and direct mapping as special cases. It is an improvement over the direct-mapping organization in that each word of cache can store two or more words of memory under the same index address. Each data word is stored together with its tag, and the number of tag-data items in one word of cache is said to form a set.
Direct Mapping:
• Each location in RAM has one specific place in cache where the data will be held.
• Consider the cache to be like an array. Part of the address is used as an index into the cache to identify where the data will be held.
• Since a data block from RAM can only be in one specific line in the cache, it must
always replace the one block that was already there. There is no need for a
replacement algorithm.
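As a rough illustration of how direct mapping decodes an address, the sketch below splits a word address into tag, line index, and word offset. The cache geometry (256 lines of 4 words) is assumed only for this example and is not taken from the question.

```python
# Minimal sketch of direct-mapped address decoding (geometry assumed
# for illustration): a 256-line cache with 4-word blocks.
CACHE_LINES = 256   # number of cache lines (assumed)
BLOCK_WORDS = 4     # words per block (assumed)

def split_address(word_address):
    """Split a main-memory word address into tag, line index and word offset."""
    offset = word_address % BLOCK_WORDS
    block = word_address // BLOCK_WORDS
    index = block % CACHE_LINES          # which cache line the block maps to
    tag = block // CACHE_LINES           # identifies which block occupies the line
    return tag, index, offset

# Two addresses that collide on the same cache line:
print(split_address(0))                           # (0, 0, 0)
print(split_address(CACHE_LINES * BLOCK_WORDS))   # (1, 0, 0) -> same index, new tag
```

Because two blocks with the same index but different tags compete for one line, the newcomer simply overwrites the resident block, which is why no replacement algorithm is needed.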
B) A two-way set-associative cache memory uses blocks of four words. The cache can accommodate a total of 2048 words from main memory. The main memory size is 128K × 32. What is the size of the cache memory?
Ans:
Given - two-way set-associative mapping
Number of blocks in a set = 2
Main memory size = 128K × 32
Cache memory block size = 4 words
Total number of blocks = number of words / block size = 2048 / 4 = 512
Number of sets = number of blocks / blocks per set = 512 / 2 = 256
So the cache memory holds 2048 words, organized as 512 blocks grouped into 256 sets.
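The short script below re-derives these figures and, as an extra check beyond what the question asks, works out the tag/set/word split of the 17-bit word address implied by the 128K-word main memory. Treat it as a sketch of the arithmetic, not part of the required answer.

```python
# Worked check of the figures above (2-way set associative, 4-word blocks,
# 2048-word cache, 128K x 32 main memory).
import math

cache_words = 2048
block_words = 4
ways = 2
main_memory_words = 128 * 1024   # 128K words = 2**17

blocks = cache_words // block_words        # 512 blocks
sets = blocks // ways                      # 256 sets

# Address breakdown of a 17-bit word address:
addr_bits = int(math.log2(main_memory_words))   # 17
word_bits = int(math.log2(block_words))         # 2
set_bits = int(math.log2(sets))                 # 8
tag_bits = addr_bits - set_bits - word_bits     # 7

print(blocks, sets, addr_bits, tag_bits)   # 512 256 17 7
```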
3) A) Explain the different types of ROM with their advantages and disadvantages.
Ans: ROM stands for Read-Only Memory, which is non-volatile in nature.
Read-only memory is physical memory that is non-volatile, meaning that if there is no electricity the contents are not lost. ROMs are used to hold the programs that start up hardware components; a good example of this is the BIOS on your computer. These types of software are called firmware.
ROMs serve their purpose but have their limitations: once you burn the information into a plain ROM chip, it cannot be changed. That is why new types of ROM were developed. There are three:
PROM: Programmable Read-Only Memory - this type of memory can be written to just once; you cannot erase the data once you store it in a PROM.
EPROM: Erasable Programmable Read-Only Memory - this type of memory can be erased as a whole (typically by exposure to ultraviolet light) and rewritten with new data. You have to erase everything and write the new data; you cannot keep any of the old data once you override it with the new one. BIOS and CMOS chips have used EPROM.
EEPROM: Electrically Erasable Programmable Read-Only Memory - this type of ROM lets you edit or modify the data while still keeping the rest of your data. Externally all these chips look alike; the difference is in the internal circuitry. Some examples of these chips on computers are the CMOS and BIOS chips. The chips also differ in price: EEPROMs are more expensive than plain ROM chips.
B) Explain the mechanism of virtual memory.
Ans: If your computer lacks the random access memory (RAM) needed to run a program or operation, Windows uses virtual memory to compensate.
Virtual memory combines your computer's RAM with temporary space on your hard disk. When RAM runs low, virtual memory moves data from RAM to a space called a paging file. Moving data to and from the paging file frees up RAM to complete its work.
The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize. For example, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory therefore would not be able to fit in main memory all at once. Nevertheless, the computer can execute such a program by copying into main memory those portions of the program needed at any given point during execution.
Virtual Memory Advantages
---------------------------------
You can run more applications at once.
You can run larger applications with less real RAM.
Applications may launch faster because of File Mapping.
You don't have to buy more memory (RAM).
Virtual Memory Disadvantages
---------------------------------
Applications run slower.
It takes more time to switch between applications.
Less hard drive space for your use.
Reduced system stability
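To make the paging-file idea above concrete, here is a toy sketch in which a "RAM" holding only two pages swaps pages to and from a "disk" dictionary on demand. All names, page contents, and the eviction rule are invented for illustration.

```python
# Toy illustration of demand paging: a RAM of two pages backed by a disk.
ram = {}                                  # page number -> contents
disk = {0: "code", 1: "data", 2: "stack"} # pages currently parked on disk
RAM_CAPACITY = 2

def access(page):
    if page in ram:                       # hit: page already resident
        return ram[page]
    if len(ram) >= RAM_CAPACITY:          # RAM full: evict a page to disk
        victim, contents = ram.popitem()
        disk[victim] = contents
    ram[page] = disk.pop(page)            # bring the requested page in
    return ram[page]

for p in (0, 1, 2, 0):
    access(p)                             # referencing page 2 forces an eviction
print(sorted(ram))                        # two resident pages at the end
```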
4) A) List the four alternative methods of rounding the result of a floating-point operation.
Ans: Rounding modes: the floating-point format supports four different modes.
1) Round to Zero: the result closest to zero is returned; nothing is added to the least significant bit. This is equivalent to truncation.
2) Round Up: the more positive result closest to the infinitely precise result is returned. If the result is positive and any of the discarded (guard, round, or sticky) bits is 1, 1 is added to the least significant bit; if the result is negative, it is not rounded, because the unrounded result is already the most positive value closest to the precise result.
3) Round Down: the more negative result is returned. If the result is negative and any of the discarded bits is 1, 1 is added to the least significant bit; if the result is positive, nothing is added to the least significant bit.
4) Round to Nearest: the result closest to the infinitely precise result is returned. If the discarded bits below the LSB have a significance of more than half the LSB, 1 is added to the LSB; if they are worth exactly half, 1 is added only when the LSB is 1, so ties round to even.
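The four modes can be imitated in ordinary Python by rounding to whole numbers, since the same four choices (truncate, toward positive infinity, toward negative infinity, nearest with ties to even) apply. This is only an analogy, not IEEE bit-level rounding.

```python
# The four rounding choices applied to values with a fractional part.
import math

def round_to_zero(x): return math.trunc(x)   # truncation
def round_up(x):      return math.ceil(x)    # toward +infinity
def round_down(x):    return math.floor(x)   # toward -infinity
def round_nearest(x): return round(x)        # nearest, ties to even

for x in (2.5, -2.5, 2.3):
    print(x, round_to_zero(x), round_up(x), round_down(x), round_nearest(x))
# 2.5  ->  2,  3,  2,  2   (tie rounds to the even value)
# -2.5 -> -2, -2, -3, -2
# 2.3  ->  2,  3,  2,  2
```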
B) What are the steps for floating-point division?
Ans: A floating-point division operation broadly follows three steps.
Step 1: Subtract the exponents and add the bias. The bias value is 127 for single-precision numbers and 1023 for double-precision numbers.
Step 2: Divide the mantissas and determine the sign of the result.
Step 3: Normalize the result.
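A minimal sketch of the three steps, assuming a simplified model in which a number is a (sign, biased exponent, mantissa) triple with the mantissa kept in [1, 2). The representation and helper names are invented for this sketch.

```python
# Sketch of the three division steps on (sign, biased exponent, mantissa) triples.
BIAS = 127   # single precision

def fp_divide(a, b):
    sa, ea, ma = a                      # stored exponents already include the bias
    sb, eb, mb = b
    sign = sa ^ sb                      # Step 2 (sign): XOR of the signs
    exponent = ea - eb + BIAS           # Step 1: subtract exponents, restore bias
    mantissa = ma / mb                  # Step 2: divide the mantissas
    while mantissa < 1.0:               # Step 3: normalize back into [1, 2)
        mantissa *= 2.0
        exponent -= 1
    return sign, exponent, mantissa

# 12.0 / 3.0: 12 = 1.5 * 2^3 and 3 = 1.5 * 2^1, so the result is 1.0 * 2^2 = 4.0
print(fp_divide((0, 3 + BIAS, 1.5), (0, 1 + BIAS, 1.5)))   # (0, 129, 1.0)
```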
5) A) Write an algorithm for signed-operand multiplication using Booth's algorithm.
Ans:
Registers used in Booth's algorithm:
A -> Accumulator (initially zero)
M -> Multiplicand
Q -> Multiplier
Q-1 -> One-bit register to the right of Q (initially zero)
SC -> Sequence Counter (number of bits in the Q register)
At each step the bit pair (Q0, Q-1) is examined: if it is 10, M is subtracted from A; if it is 01, M is added to A; if it is 00 or 11, no arithmetic is performed. Then A, Q, and Q-1 are shifted arithmetically one bit to the right and SC is decremented. This process continues until the sequence counter becomes zero. The result is stored in registers A and Q.
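A compact Python rendering of the recoding-and-shift loop described above. The function name and the use of Python integers to model the A, Q, and Q-1 registers are implementation choices for this sketch.

```python
# Minimal sketch of Booth's algorithm for two n-bit signed operands.
def booth_multiply(multiplicand, multiplier, n):
    mask = (1 << n) - 1
    M = multiplicand & mask                    # registers held as n-bit values
    A, Q, Q_1 = 0, multiplier & mask, 0
    for _ in range(n):                         # SC counts down from n
        pair = (Q & 1, Q_1)
        if pair == (1, 0):                     # 10: A = A - M
            A = (A - M) & mask
        elif pair == (0, 1):                   # 01: A = A + M
            A = (A + M) & mask
        # arithmetic right shift of the combined A, Q, Q-1
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        msb = A >> (n - 1)                     # sign bit of A is replicated
        A = ((A >> 1) | (msb << (n - 1))) & mask
    result = (A << n) | Q                      # the product lives in A,Q
    if result >= 1 << (2 * n - 1):             # reinterpret as signed 2n-bit value
        result -= 1 << (2 * n)
    return result

print(booth_multiply(-3, 7, 4))    # -21
print(booth_multiply(5, -6, 4))    # -30
```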
B) Represent 0.5 in IEEE 754 single-precision format.
Ans: Step 1: Convert the decimal number to binary: (0.5)10 = (0.1)2.
Step 2: Normalize the number: (0.1)2 = 1.0 × 2^-1.
For the given number, S = 0, E = -1, and M = 0 (after the implicit leading 1, all mantissa bits are zero).
The bias for the single-precision format is 127,
so E1 = E + 127 = -1 + 127 = (126)10 = (01111110)2.
The number in single-precision format is therefore:
0 | 01111110 | 00000000000000000000000
(sign bit | exponent | mantissa)
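The result can be cross-checked with Python's struct module, which packs a value using the machine's IEEE 754 single-precision format:

```python
# Pack 0.5 as an IEEE 754 single-precision value and print its 32 bits.
import struct

bits = struct.unpack(">I", struct.pack(">f", 0.5))[0]
s = format(bits, "032b")
print(s[0], s[1:9], s[9:])   # 0 01111110 00000000000000000000000
```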
6) A) Explain the different types of Addressing Modes?
Ans: Addressing modes are an aspect of the instruction set architecture in most
central processing unit (CPU) designs. The various addressing modes that are defined
in a given instruction set architecture define how machine language instructions in
that architecture identify the operand (or operands) of each instruction. An addressing
mode specifies how to calculate the effective memory address of an operand by using
information held in registers and/or constants contained within a machine instruction
or elsewhere.
Types of Addressing Modes
Each instruction of a computer specifies an operation on certain data. There are various ways of specifying the address of the data to be operated on; these different ways of specifying data are called the addressing modes. The most common addressing modes are:
Immediate addressing mode
Direct addressing mode
Indirect addressing mode
Register addressing mode
Register indirect addressing mode
Displacement addressing mode
Stack addressing mode
To specify the addressing mode of an instruction, several methods are used. Most often used are:
a) Different operands will use different addressing modes.
b) One or more bits in the instruction format can be used as a mode field. The value of the mode field determines which addressing mode is to be used.
The effective address will be either a main memory address or a register.
Immediate Addressing: This is the simplest form of addressing. Here, the operand is given in the instruction itself. This mode is used to define constants or set initial values of variables. The advantage of this mode is that no memory reference other than the instruction fetch is required to obtain the operand. The disadvantage is that the size of the number is limited to the size of the address field, which in most instruction sets is small compared with the word length.
Format: | opcode | operand |
Direct Addressing: In direct addressing mode, effective address of the operand is
given in the address field of the instruction. It requires one memory reference to read
the operand from the given location and provides only a limited address space. Length
of the address field is usually less than the word length.
Ex: MOVE P, R0 and ADD Q, R0, where P and Q are the addresses of the operands.
Indirect Addressing: In indirect addressing mode, the address field of the instruction refers to the address of a word in memory, which in turn contains the full-length address of the operand. The advantage of this mode is that for a word length of N, an address space of 2^N can be addressed. The disadvantage is that instruction execution requires two memory references to fetch the operand. Multilevel or cascaded indirect addressing can also be used.
Register Addressing: Register addressing mode is similar to direct addressing. The only difference is that the address field of the instruction refers to a register rather than a memory location; typically 3 or 4 bits are used as the address field to reference 8 to 16 general-purpose registers. The advantage of register addressing is that only a small address field is needed in the instruction.
Register Indirect Addressing: This mode is similar to indirect addressing. The
address field of the instruction refers to a register. The register contains the effective
address of the operand. This mode uses one memory reference to obtain the operand.
The address space is limited to the width of the registers available to store the effective
address.
Displacement Addressing:
Displacement addressing covers three types of addressing mode:
1) Relative addressing
2) Base-register addressing
3) Indexed addressing.
It is a combination of direct addressing and register indirect addressing. The value contained in one address field, A, is used directly, and the other address field refers to a register whose contents are added to A to produce the effective address.
Stack Addressing: A stack is a linear array of locations operated as a last-in, first-out list. The stack is a reserved block of locations to which items are appended, or from which they are deleted, only at the top of the stack. The stack pointer is a register that stores the address of the top-of-stack location. This mode of addressing is also known as implicit addressing.
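The following toy machine pulls several of the modes above together. The memory contents, register values, and the mode names passed to operand() are invented purely for illustration.

```python
# Toy machine showing how each mode computes the operand.
memory = {100: 25, 200: 100, 300: 42}
registers = {"R0": 300, "R1": 10}

def operand(mode, field):
    if mode == "immediate":          # the operand is the field itself
        return field
    if mode == "direct":             # field is the memory address
        return memory[field]
    if mode == "indirect":           # field points to a word holding the address
        return memory[memory[field]]
    if mode == "register":           # field names a register holding the operand
        return registers[field]
    if mode == "register_indirect":  # the register holds the effective address
        return memory[registers[field]]
    if mode == "displacement":       # field = (base register, offset)
        reg, offset = field
        return memory[registers[reg] + offset]

print(operand("immediate", 7))                # 7
print(operand("direct", 100))                 # 25
print(operand("indirect", 200))               # memory[100] -> 25
print(operand("register_indirect", "R0"))     # memory[300] -> 42
print(operand("displacement", ("R0", -100)))  # memory[200] -> 100
```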
B) Explain the addressing mechanism in Big-Endian and Little-Endian format with an example?
Ans: There are two schemes used to assign memory locations when storing multi-byte data: I) the Big-Endian scheme and II) the Little-Endian scheme.
Big-Endian is the order in which the 'big end' (MSB) is stored first (at the lowest storage address).
Little-Endian is the order in which the 'little end' (LSB) is stored first.
Big-Endian:
Word Address    Byte Addresses
0               0      1      2      3
4               4      5      6      7
...
2^k - 4         2^k-4  2^k-3  2^k-2  2^k-1
Little-Endian:
Word Address    Byte Addresses
0               3      2      1      0
4               7      6      5      4
...
2^k - 4         2^k-1  2^k-2  2^k-3  2^k-4
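The two orders are easy to observe with Python's struct module, which can pack the same 32-bit word either way:

```python
# The same 32-bit word stored big-endian and little-endian.
import struct

word = 0x01020304
print(struct.pack(">I", word).hex())   # '01020304' -> MSB at the lowest address
print(struct.pack("<I", word).hex())   # '04030201' -> LSB at the lowest address
```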
7) A) Explain the instruction execution cycle step by step?
Ans: The instruction cycle is the time period during which one instruction is fetched from memory and executed when a computer is given an instruction in machine language. There are typically four stages of an instruction cycle that the CPU carries out:
1. Fetch the instruction from memory. This step brings the instruction into the
instruction register, a circuit that holds the instruction so that it can be
decoded and executed.
2. Decode the instruction.
3. Read the effective address from memory if the instruction has an indirect
address.
4. Execute the instruction.
Steps 1 and 2 are called the fetch cycle and are the same for each instruction. Steps 3
and 4 are called the execute cycle and will change with each instruction.
The term refers to both the series of four steps and also the amount of time that it
takes to carry out the four steps. An instruction cycle also is called machine cycle.
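A skeleton of the four steps for a made-up one-address machine. The opcodes, the tuple encoding of instructions, and the indirect flag are all invented for this sketch.

```python
# Fetch-decode-execute loop for an invented one-address machine.
memory = {0: ("LOAD", 10, False), 1: ("ADD", 11, True), 2: ("HALT", 0, False),
          10: 5, 11: 12, 12: 7}
acc, pc = 0, 0

while True:
    opcode, addr, indirect = memory[pc]   # 1. fetch into the "instruction register"
    pc += 1                               # 2. decode (trivial here: tuple fields)
    if indirect:                          # 3. read effective address if indirect
        addr = memory[addr]
    if opcode == "LOAD":                  # 4. execute
        acc = memory[addr]
    elif opcode == "ADD":
        acc += memory[addr]
    elif opcode == "HALT":
        break

print(acc)   # 5 + 7 = 12
```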
B) Convert the arithmetic expression
A * B + A * (B * D + C * E) into reverse Polish notation.
Ans:
Symbol Scanned   Stack           Expression
A                (               A
*                ( *             A
B                ( *             A B
+                ( +             A B *
A                ( +             A B * A
*                ( + *           A B * A
(                ( + * (         A B * A
B                ( + * (         A B * A B
*                ( + * ( *       A B * A B
D                ( + * ( *       A B * A B D
+                ( + * ( +       A B * A B D *
C                ( + * ( +       A B * A B D * C
*                ( + * ( + *     A B * A B D * C
E                ( + * ( + *     A B * A B D * C E
)                ( + *           A B * A B D * C E * +
(end)            (               A B * A B D * C E * + * +
The reverse Polish notation is therefore A B * A B D * C E * + * +.
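The stack procedure traced above is essentially the shunting-yard algorithm. A minimal version for single-letter operands and the two operators used here (with an invented function name) reproduces the same result:

```python
# Infix to reverse Polish notation for single-letter operands, + and * only.
def to_rpn(expr):
    prec = {"+": 1, "*": 2}
    out, stack = [], []
    for ch in expr.replace(" ", ""):
        if ch.isalpha():
            out.append(ch)                  # operands go straight to the output
        elif ch == "(":
            stack.append(ch)
        elif ch == ")":
            while stack[-1] != "(":
                out.append(stack.pop())     # unwind back to the matching "("
            stack.pop()                     # discard the "("
        else:                               # operator: pop higher/equal precedence
            while stack and stack[-1] != "(" and prec[stack[-1]] >= prec[ch]:
                out.append(stack.pop())
            stack.append(ch)
    while stack:
        out.append(stack.pop())
    return " ".join(out)

print(to_rpn("A * B + A * (B * D + C * E)"))   # A B * A B D * C E * + * +
```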
8) A) Write short notes on any two:
a) Virtual memory
b) Microprogrammed Control
c) Segmentation
Ans: a) Virtual memory: Virtual memory is a feature of an operating system that enables a process to use a memory (RAM) address space that is independent of other processes running in the same system, and to use a space that is larger than the actual amount of RAM present, temporarily relegating some contents of RAM to a disk with little or no overhead.
Virtual memory is a technique that allows processes that may not be entirely in memory to execute, by means of automatic storage allocation upon request. The term refers to the abstraction of separating LOGICAL memory (memory as seen by the process) from PHYSICAL memory (memory as seen by the processor). Because of this separation, the programmer needs to be aware of only the logical memory space, while the operating system maintains two or more levels of physical memory space.
The virtual memory abstraction is implemented by using secondary storage to augment the processor's main memory. Data is transferred from secondary to main storage as and when necessary, and the data replaced is written back to secondary storage according to a predetermined replacement algorithm. If the data swapped is of a fixed size, the swapping is called paging; if variable sizes are permitted and the data is split along logical lines such as subroutines or matrices, it is called segmentation.
b) Microprogrammed Control: In a microprogrammed control unit, the control signals required to execute each machine instruction are not produced by fixed hard-wired logic; instead they are stored as microinstructions in a special control memory. Each machine instruction is associated with a microroutine, a sequence of microinstructions, and each microinstruction specifies the set of control signals to be asserted during one clock cycle. A microprogram counter steps through the microroutine selected by the opcode of the current instruction. Microprogrammed control makes the control unit flexible and easy to modify, since changing the instruction set only requires rewriting the control memory, but it is generally slower than hardwired control.
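As a rough sketch of this idea, the fragment below models a control memory in which each opcode selects a microroutine and each microinstruction is a set of control signals asserted for one cycle. The signal and opcode names are invented for illustration.

```python
# Toy control memory: opcode -> microroutine (list of signal sets per cycle).
control_memory = {
    "ADD": [{"PCout", "MARin"}, {"Read", "MDRout", "IRin"},
            {"R1out", "Yin"}, {"R2out", "Add", "Zin"}, {"Zout", "R1in"}],
    "NOP": [set()],
}

def execute(opcode):
    # Step through the microroutine, asserting each cycle's control signals.
    for step, signals in enumerate(control_memory[opcode]):
        print(f"cycle {step}: assert {sorted(signals)}")

execute("ADD")
```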
c) Segmentation: A segment can be defined as a logical grouping of instructions.
- Segmentation is a technique used to handle problems of different sizes and different logical structures.
- In this case, a program is divided into a number of logical parts; each individual logical part is known as a segment.