CAM Notes
CAM – Computer Application Management
POST GRADUATE DIPLOMA IN INTERNATIONAL BUSINESS (PGDIB, I Semester)
Faculty: Y. Tomar

COMPUTER SYSTEMS AND BASIC COMPUTER HARDWARE ORGANIZATION

[Figure: a computer system – Input Unit, Processing Unit and Output Unit, with Permanent Storage attached to the Processing Unit]

All computer systems, no matter how small or large, have the same fundamental capabilities: processing, storage, input and output. The input unit includes devices like the keyboard and mouse, which the user employs to give data to the computer. The processing unit is where these data are processed and turned into meaningful information; it also includes temporary storage (RAM), in which the data currently being processed are held. To show the results of processing to the user, output devices like monitors and printers are used. Output on a monitor is usually called softcopy, and output on a printer is usually called hardcopy. Sometimes we may want to store data and information permanently so that we can refer to them again later. For this purpose, interchangeable media such as floppy disks and CD-ROMs, or permanently installed devices such as hard disks, are used as permanent storage.

An internal look at a PC: the following hardware components exist in almost all PCs.

Motherboard: A microcomputer circuit board that contains slots for connecting peripherals such as RAM modules, the CPU and adapter cards. Motherboards also have electronic circuitry for handling tasks such as I/O signals from those peripheral devices. A motherboard is the backbone of a computer system: the power of a PC depends heavily on the peripherals its motherboard supports.

CPU: The brain of a computer system. It is the component that controls what is going on in the system at any moment; other components act according to its orders. All current inputs and any previously stored data are processed by the CPU to obtain meaningful information.
Faculty – Yogesh Tomar Computer Application Management (CAM)- PGDIB Page…1/16 BLS Institute of Management
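The input, processing, output and storage flow described above can be sketched as a trivial program. Every name in this sketch is a hypothetical illustration, not part of any real API:

```python
# The input -> processing -> output -> storage flow described above,
# as a toy program. All function names are hypothetical illustrations.
def read_input():                     # input unit (e.g. keyboard)
    return "21"

def process(data):                    # processing unit: data -> information
    return int(data) * 2

def show(info):                       # output unit (e.g. monitor: softcopy)
    print(info)

def store(info, path="result.txt"):   # permanent storage (e.g. hard disk)
    with open(path, "w") as f:
        f.write(str(info))

info = process(read_input())
show(info)                            # prints 42
store(info)                           # keep the result for later
```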
RAM: The primary memory of a PC. Anything in a secondary storage device (permanent storage) that has to be processed by the CPU must first be loaded into RAM, because there are no machine instructions to directly access and use data stored in a secondary storage medium. RAM is volatile memory, so if a power cut or a reset occurs, all the data in RAM are lost.

Hard Disk: One of the most popular secondary storage devices. It is a magnetic medium that stores its contents permanently, even in the absence of electrical power. You store your documents, pictures, photos, songs, etc. on hard disks.

Floppy Disk Drive: A device into which you insert interchangeable floppy disks, which are also magnetic storage media. FDDs work much more slowly than hard disks, and floppy disks have much smaller storage capacities. Floppy disks are usually used to copy files from one PC to another.

Graphics Card: This circuit board is responsible for the visual output displayed on the monitor. Nowadays, graphics cards have their own memory modules and processor chips, which lessen the load on the CPU and RAM, enabling very detailed graphics and high-quality animation and video.

PCs are general-purpose devices that can be used in many areas of interest, and of course many other hardware components can be added to them to increase their functionality.
These include CD-ROM drives, sound cards, radio cards, TV cards, modem cards, etc.

COMPUTER GENERATIONS

Generation           Time Frame   Descriptive Term     Inventor                   Type of Computer
First Generation     1946-1956    Vacuum Tubes         Lee De Forest              Mainframes
Second Generation    1956-1964    Transistor           William Shockley           Mainframes
Third Generation     1964-1970    Integrated Circuit   Jack Kilby, Robert Noyce   Mainframes, Minicomputers
Fourth Generation    1970-today   Microprocessor       Ted Hoff                   Mainframes, Minicomputers, Microcomputers
The Fifth Generation Computer Systems project (FGCS) was an initiative by Japan's Ministry of International Trade and Industry, begun in 1982, to create a "fifth generation computer" that was to perform much of its computation using massive parallelism. It was to be the end result of a massive government/industry research project in Japan during the 1980s, aiming to create an "epoch-making computer" with supercomputer-like performance and usable artificial intelligence capabilities. The term fifth generation was intended to convey the system as a leap beyond existing machines: computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance.

INPUT/OUTPUT DEVICES

In computing, input/output, or I/O, refers to the communication between an information processing system (such as a computer) and the outside world – possibly a human, or another information processing system. Inputs are the signals or data received by the system, and outputs are the signals or data sent from it. The term can also be used as part of an action; to "perform I/O" is to perform an input or output operation. I/O devices are used by a person (or another system) to communicate with a computer. For instance, keyboards and mice are considered input devices of a computer, while monitors and printers are considered output devices. Devices for communication between computers, such as modems and network cards, typically serve for both input and output. Note that the designation of a device as either input or output depends on the perspective.
Mice and keyboards take as input the physical movement that the human user outputs and convert it into signals that a computer can understand; the output from these devices is the computer's input. Similarly, printers and monitors take as input the signals that a computer outputs and convert them into representations that human users can see or read. (For a human user, the process of reading or seeing these representations is receiving input.)

In computer architecture, the combination of the CPU and main memory (i.e. memory that the CPU can read and write directly, with individual instructions) is considered the brain of the computer, and from that point of view any transfer of information to or from that combination, for example to or from a disk drive, is considered I/O. The CPU and its supporting circuitry provide memory-mapped I/O, which is used in low-level computer programming in the implementation of device drivers. Higher-level operating system and programming facilities employ separate, more abstract I/O concepts and primitives. For example, most operating systems provide application programs with the concept of files. The C and C++ programming languages, and operating systems in the Unix family, traditionally abstract files and devices as streams, which can be read or written, or sometimes both; the C standard library provides functions for manipulating streams for input and output. An alternative to special primitive functions is the I/O monad, which permits programs simply to describe I/O, with the actions carried out outside the program. This is notable because I/O functions would otherwise introduce side effects into the language, but with this approach purely functional programming remains practical.
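The stream abstraction described above is not unique to C: Python's built-in file objects follow the same open/write/read/close model, so a minimal sketch of stream I/O looks like this:

```python
# A file opened as a stream: written, closed, then re-opened and read
# back. This mirrors the C stdio model (fopen/fputs/fgets/fclose)
# described above, using Python's file objects.
with open("demo.txt", "w") as out:    # open a stream for output
    out.write("hello, streams\n")     # write to the stream

with open("demo.txt") as inp:         # open the same file for input
    line = inp.readline()             # read a line back from the stream

print(line.strip())                   # hello, streams
```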
I/O Interface: An I/O interface is required whenever an I/O device is driven by the processor. The interface must have the necessary logic to interpret the device address generated by the processor. Handshaking should be implemented by the interface using appropriate commands (BUSY, READY, WAIT) so that the processor can communicate with the I/O device through the interface. If different data formats are being exchanged, the interface must be able to convert serial data to parallel form and vice versa. There must also be provision for generating interrupts, and the corresponding type numbers, for further processing by the processor if required.

Input device: A hardware device that sends information into the CPU. Without input devices a computer would simply be a display device, much like a TV, and would not allow users to interact with it. Different types of computer input devices include:
  • Digital camera
  • Joystick
  • Keyboard
  • Microphone
  • Mouse
  • Scanner
  • Web cam

Output device: Any peripheral that receives and/or displays output from a computer. Examples of output devices commonly found on a computer include:
  • Monitor
  • Printer
  • Projector
  • Sound card
  • Speakers
  • Video card

PRIMARY AND SECONDARY MEMORY

Modern electronic computers generally possess several distinct types of memory, each of which "holds" or stores information for subsequent use. The vast majority of computer memory can be placed into one of two categories: primary memory and secondary memory. Primary memory, often called main memory, is the device or group of devices that holds instructions and data for rapid and direct access by the computer's central processing unit (CPU). Primary memory is synonymous with random-access memory (RAM). As a computer performs its calculations, it is continuously reading and writing (i.e., storing and retrieving) information to and from RAM.
For instance, instructions and data are retrieved from RAM for processing by the CPU, and the results are returned to RAM. Modern RAM is made of semiconductor circuitry, which replaced the magnetic core memory widely used in computers in the 1960s. RAM is a volatile form of information
storage, meaning that when electrical power is terminated any data that it contains is lost. There are other semiconductor memory devices accessed by the CPU that are generally considered distinct from primary memory (i.e., different from RAM); these include cache memory, read-only memory (ROM), and programmable read-only memory (PROM).

Secondary memory, also called auxiliary memory or mass storage, consists of devices not directly accessible by the CPU. Hard drives, floppy disks, tapes, and optical disks are widely used for secondary storage. The input and output of these devices is much slower than for the semiconductor devices that provide the computer's primary memory. Although access times (i.e., the time to read or write information) are slow compared to primary memory, secondary memory devices have important features that are unmatched by primary memory. First, most secondary storage devices can contain much more information than is feasible for primary memory (hence the use of the term "mass storage" as a synonym for secondary memory). A second, and essential, feature of secondary memory is that it is non-volatile: data remains stored with or without electrical power being supplied to the device, as opposed to RAM, which retains its data only as long as electrical power is present. Like primary memory, many secondary memory devices are capable of storing information as well as retrieving it. Magnetic technology devices (such as hard drives, floppy disks, and tape) have this read-write capability, as do magneto-optical drives. However, some mass storage devices can only read data, as in the case of CD-ROM (Compact Disc – Read Only Memory) drives. CD-ROMs utilize optical technology; newer optical technologies, such as CD-RW (compact disc rewritable), can both read and write information like magnetic storage devices.
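To get a feel for the speed gap described above, compare round illustrative figures (assumed for this sketch, not taken from the text or any datasheet): tens of nanoseconds for a semiconductor primary-memory access versus milliseconds for a mechanical disk access.

```python
# Illustrative access times (assumed round figures): DRAM ~60 ns per
# access vs. ~10 ms for a hard-disk seek. The ratio shows why the CPU
# never works on disk-resident data directly.
dram_ns = 60                    # primary-memory access, in nanoseconds
disk_ns = 10 * 1_000_000        # 10 ms disk access, in nanoseconds

ratio = disk_ns // dram_ns
print(ratio)                    # 166666 -> roughly 10**5 times slower
```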
MEMORY – RAM, ROM, CACHE

Memory: The system memory is the place where the computer holds the programs and data currently in use. There are various levels of computer memory, including ROM, RAM, cache, page and graphics memory, each with specific objectives for system operation. This section focuses on the role of computer memory and the technology behind it.

Although memory is used in many different forms around modern PC systems, it can be divided into two essential types: RAM and ROM. ROM, or Read Only Memory, is relatively small but essential to how a computer works. ROM is always found on motherboards, and is increasingly found on graphics cards and some other expansion cards and peripherals. Generally speaking, ROM does not change: it forms the basic instruction set for operating the hardware in the system, and the data within remains intact even when the computer is shut down. It is possible to update ROM, but this is done rarely, and only when needed. If ROM is damaged, the computer system simply cannot function.

RAM, or Random Access Memory, is "volatile": it holds data only while power is present. RAM changes constantly as the system operates, providing the storage for all data required by the operating system and software. Because of the demands made by increasingly powerful operating systems and software, system RAM requirements have accelerated dramatically over time. For instance, at the turn of the millennium a typical computer may have had only 128 MB of RAM in total, but in 2007 computers commonly ship with
2 GB of RAM installed, and may include graphics cards with their own additional 512 MB of RAM or more.

Clearly, modern computers have significantly more memory than the first PCs of the early 1980s, and this has had an effect on the development of the PC's architecture. The trouble is, storing and retrieving data from a large block of memory is more time-consuming than from a small block. With a large amount of memory, the difference in time between a register access and a memory access is very great, and this has resulted in extra layers of cache in the storage hierarchy. When accessing memory, a fast processor demands a great deal from RAM; at worst, the CPU may have to waste clock cycles while it waits for data to be retrieved. Faster memory designs and motherboard buses can help, but since the 1990s "cache memory" has been employed as standard between the main memory and the processor, and CPU architecture has also evolved to include ever larger internal caches. The organisation of data this way is immensely complex, and the system uses ingenious electronic controls to ensure that the data the processor needs next is already in cache, physically closer to the processor and ready for fast retrieval and manipulation. Read on for a closer look at the technology behind computer memory, and how developments in RAM and ROM have enabled systems to function with seemingly exponentially increasing power.

RAM: Pronounced "ram", an acronym for random access memory, a type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is the most common type of memory found in computers and other devices, such as printers. There are two different types of RAM: DRAM (Dynamic Random Access Memory) and SRAM (Static Random Access Memory). The two types differ in the technology they use to hold data, with DRAM being the more common type. In terms of speed, SRAM is faster.
DRAM needs to be refreshed thousands of times per second, while SRAM does not, which is what makes SRAM faster. DRAM supports access times of about 60 nanoseconds; SRAM can give access times as low as 10 nanoseconds. Despite being faster, SRAM is not as commonly used as DRAM because it is so much more expensive. Both types of RAM are volatile, meaning that they lose their contents when the power is turned off.

In common usage, the term RAM is synonymous with main memory, the memory available to programs. For example, a computer with 8 MB of RAM has approximately 8 million bytes of memory that programs can use. In contrast, ROM (read-only memory) refers to special memory used to store programs that boot the computer and perform diagnostics. Most personal computers have a small amount of ROM (a few thousand bytes). In fact, both types of memory (ROM and RAM) allow random access; to be precise, therefore, RAM should be referred to as read/write RAM and ROM as read-only RAM.

MEMORY: Internal storage areas in the computer. The term memory identifies data storage that comes in the form of chips, and the word storage is used for memory that exists on tapes or disks. Moreover, the term memory is usually used as shorthand for physical memory, which refers
to the actual chips capable of holding data. Some computers also use virtual memory, which expands physical memory onto a hard disk. There are several different types of memory:
ROM (Read Only Memory): Computers almost always contain a small amount of read-only memory that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to. It is non-volatile, which means that once you turn off the computer the information is still there.

PROM (programmable read-only memory): A PROM is a memory chip on which data can be written only once. Once a program has been written onto a PROM, it remains there forever. Unlike RAM, PROMs retain their contents when the computer is turned off. The difference between a PROM and a ROM is that a PROM is manufactured as blank memory, whereas a ROM is programmed during the manufacturing process. To write data onto a PROM chip, you need a special device called a PROM programmer or PROM burner; the process of programming a PROM is sometimes called burning the PROM.

EPROM (erasable programmable read-only memory): A special type of PROM that can be erased by exposing it to ultraviolet light. Once it is erased, it can be reprogrammed.

EEPROM (electrically erasable programmable read-only memory): Pronounced double-ee-prom or e-e-prom, an EEPROM is a special type of PROM that can be erased by exposing it to an electrical charge; it requires only electricity to be erased. Like other types of PROM, EEPROM retains its contents even when the power is turned off. Also like other types of ROM, EEPROM is not as fast as RAM. EEPROM is similar to flash memory (sometimes called flash EEPROM). The principal difference is that EEPROM requires data to be written or erased one byte at a time, whereas flash memory allows data to be written or erased in blocks; this makes flash memory faster.

RAM (Random Access Memory): A temporary (volatile) storage area utilized by the CPU. Before a program can be run, it is loaded into memory, which allows the CPU direct access to the program.
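The byte-versus-block distinction between EEPROM and flash noted above can be sketched with a toy model (the block size and memory layout here are invented for illustration):

```python
# Sketch of the EEPROM vs. flash difference: EEPROM erases one byte
# per operation, flash erases a whole block at once. BLOCK and the
# memory array are toy values, for illustration only.
BLOCK = 4                          # toy block size, in bytes

def eeprom_erase(mem, addr):
    mem[addr] = 0xFF               # one byte per operation

def flash_erase(mem, addr):
    start = (addr // BLOCK) * BLOCK
    for a in range(start, start + BLOCK):
        mem[a] = 0xFF              # whole block in one operation

mem = [0] * 8
eeprom_erase(mem, 2)               # only byte 2 is erased
flash_erase(mem, 5)                # bytes 4..7 (the whole block) are erased
print(mem)                         # [0, 0, 255, 0, 255, 255, 255, 255]
```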
Types of RAM

SRAM: Short for static random access memory, pronounced ess-ram. SRAM is a type of memory that is faster and more reliable than the more common DRAM (dynamic RAM). The term static derives from the fact that it doesn't need to be refreshed, unlike dynamic RAM. SRAM is most often used as cache memory, usually found in the CPU (the L1, L2 and L3 caches).

DRAM: Stands for dynamic random access memory, the type of memory used in most personal computers.
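The cache idea introduced above, a small fast store in front of a large slow one, can be sketched in a few lines; here a dict stands in for the cache and a larger dict for main memory:

```python
# Sketch of a cache: look in the small fast store first, and fall
# back to the large slow store on a miss, keeping a copy for next time.
backing = {i: i * i for i in range(1000)}   # stands in for main memory
cache = {}                                  # stands in for cache memory

def read(addr):
    if addr in cache:              # cache hit: fast path
        return cache[addr]
    value = backing[addr]          # cache miss: go to the slower store
    cache[addr] = value            # keep it close for next time
    return value

read(7)                            # first access: a miss, fills the cache
print(read(7), 7 in cache)         # second access: a hit -> 49 True
```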
Types of DRAM Packages and DRAM Memory

Laptop memory:

SO-DIMM (72, 144 or 200 pins): Short for Small Outline DIMM, a small version of a DIMM commonly used in notebook computers. 72-pin SO-DIMMs support 32-bit transfers; 144- and 200-pin SO-DIMMs support a full 64-bit transfer.

Micro-DIMM (144 or 172 pins): Short for Micro Dual Inline Memory Module, a competing memory module used in laptops, mostly with 144 or 172 pins.

SIMM: Acronym for single in-line memory module, a small circuit board that can hold a group of memory chips. Typically, SIMMs hold 8 (on Macintoshes) or 9 (on PCs) RAM chips; on PCs, the ninth chip is often used for parity error checking. Unlike memory chips, SIMMs are measured in bytes rather than bits. SIMMs are easier to install than individual memory chips. A SIMM has either 30 or 72 pins.

30-pin SIMM (usually FPM or EDO RAM) – FPM RAM: Short for Fast Page Mode RAM, a type of dynamic RAM (DRAM) that allows faster access to data in the same row or page. Page-mode memory works by eliminating the need for a row address if the data is located in the row previously accessed. It is sometimes called page-mode memory.

72-pin SIMM (EDO RAM) – EDO DRAM: Short for Extended Data Output Dynamic Random Access Memory, a type of DRAM that is faster than conventional DRAM. Unlike conventional DRAM, which can only access one block of data at a time, EDO RAM can start fetching the next block of memory at the same time that it sends the previous block to the CPU.
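The parity error checking that the ninth SIMM chip supports can be sketched as follows (even parity is assumed here; the bit patterns are made up for the example):

```python
# Even-parity checking: an extra bit makes the total number of 1-bits
# even, so any single flipped bit is detectable on read-back.
def parity_bit(byte):
    """Even-parity bit for an 8-bit value: 0 if the count of 1s is even."""
    return bin(byte).count("1") % 2

data = 0b1011_0010                     # four 1-bits -> parity bit 0
stored = (data, parity_bit(data))      # what the nine chips would hold

# Read-back check: recompute parity and compare with the stored bit.
ok = parity_bit(stored[0]) == stored[1]
print(ok)                              # True

# A single-bit error flips the parity and is caught:
corrupted = data ^ 0b0000_0001
print(parity_bit(corrupted) == stored[1])   # False
```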
DIMM: Short for dual in-line memory module, a small circuit board that holds memory chips. A single in-line memory module (SIMM) has a 32-bit path to the memory chips, whereas a DIMM has a 64-bit path. Because the Pentium processor requires a 64-bit path to memory, SIMMs must be installed two at a time, whereas DIMMs can be installed one at a time. A DIMM contains 168 pins.

168-pin DIMM (SDRAM) – SDRAM: Short for Synchronous DRAM, a type of DRAM that can run at much higher clock speeds than conventional memory. SDRAM synchronizes itself with the CPU's bus and is capable of running at 133 MHz, about three times faster than conventional FPM RAM and about twice as fast as EDO DRAM. SDRAM replaced EDO DRAM in many newer computers; it delivers data in high-speed bursts.

184-pin DIMM (DDR SDRAM) – DDR SDRAM: Short for Double Data Rate Synchronous DRAM, a type of SDRAM that supports data transfers on both edges of each clock cycle, effectively doubling the memory chip's data throughput. DDR SDRAM is also called SDRAM II.

240-pin DIMM (DDR2 SDRAM) – DDR2 SDRAM: Short for Double Data Rate Synchronous DRAM 2, a type of DDR that supports higher speeds than its predecessor, DDR SDRAM.

240-pin DIMM (DDR3 SDRAM) – DDR3 SDRAM: Short for Double Data Rate Synchronous DRAM 3, the newest type of DDR, supporting the fastest speeds of all the SDRAM memory types.
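The throughput arithmetic behind these names can be sketched with round figures: the 64-bit module width from above, and an assumed 133 MHz clock for this example.

```python
# Peak transfer rate of a 64-bit memory module, in MB/s (illustrative
# round figures). SDRAM moves one 8-byte word per clock cycle; DDR uses
# both clock edges, doubling throughput at the same clock frequency.
bus_width_bytes = 64 // 8        # 64-bit DIMM data path = 8 bytes
clock_mhz = 133                  # assumed PC133-style clock

sdram_mb_s = clock_mhz * bus_width_bytes   # one transfer per cycle
ddr_mb_s = sdram_mb_s * 2                  # transfers on both clock edges

print(sdram_mb_s, ddr_mb_s)      # 1064 2128
```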
184-pin RIMM (RDRAM) – RIMM: Rambus Inline Memory Module, the memory module used with RDRAM chips. It is similar to a DIMM package but uses different pin settings. Rambus trademarked the term RIMM as an entire word; it is the term used for a module using Rambus technology, and is sometimes incorrectly expanded as an acronym for Rambus Inline Memory Module. A RIMM contains 184 or 232 pins. Note: all sockets must be used in a RIMM installation, or a C-RIMM must be used to terminate the banks.

232-pin RIMM (RDRAM) – RDRAM: Short for Rambus DRAM, a type of memory (DRAM) developed by Rambus, Inc. In 1997, Intel announced that it would license the Rambus technology for use on its future motherboards, thus making it the likely de facto standard for memory architectures.

SIMM and DIMM Sockets – RAM Desktop Installation

Note: RAM memory sticks come in the following sizes: 8 MB, 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB and 8 GB.

SIMM – Single Inline Memory Module Installation (30 or 72 pin)
1. Place the SIMM at a 45-degree angle, then push it upright to lock with the corresponding notch on the sides.
2. SIMMs must be installed in matched pairs.
3. The first two slots of the SIMM sockets must be populated in order for the memory to work.

DIMM – Dual Inline Memory Module Installation (168, 184 or 240 pin)
1. First open the plastic retaining clips on each side of the slots you are going to use.
2. Align the cut-out on the module's pin connector with the engaging pin on the slot.
3. Holding the module upright, press down on both ends.
4. When the module is correctly seated, the retaining clips should lock automatically.
5. DIMMs can be installed singly (unless the board specifies Dual Channel, in which case they must be installed in pairs).

RIMM – Rambus Inline Memory Module Installation (184 or 232 pin)
1. First open the plastic retaining clips on each side of the slots you are going to use.
2. Align the cut-out on the module's pin connector with the engaging pin on the slot.
3. Holding the module upright, press down on both ends.
4. When the module is correctly seated, the retaining clips should lock automatically.
5. All available RIMM slots must be populated; any unpopulated slots must be filled with C-RIMMs (Continuity Rambus Inline Memory Modules).

Memory Troubleshooting
  • When installing memory, use a ground strap because of ESD (a risk in both low and high humidity).
  • Mixed memory usually equals fried memory.
  • Parity errors or ECC errors (memory correction errors).
  • SIMMs must be installed in pairs.
  • RIMMs must be installed in all slots, or C-RIMMs used in the vacant RIMM slots.
  • General protection fault (memory overwrite).
  • Not enough memory (computer is slow).
  • NMI (Non-Maskable Interrupt) will cause a BSOD (Blue Screen of Death).
  • Multiple beeps on boot-up: check that memory is properly installed and working.
  • No video: reseat the memory.
  • Memory speeds are set in BIOS/CMOS Setup.
  • Virtual memory (page fault).
  • Chip creep: thermal expansion and contraction.

Special thanks to Rambus, Corsair, PNY, Viking, American Megatrends, Centon, Samsung, Crucial and Micron.
TYPES OF SOFTWARE

Software is a program or group of programs designed for end users. It can be divided into two general classes: systems software and applications software. Systems software consists of low-level programs that interact with the computer at a very basic level; this includes operating systems, compilers, and utilities for managing computer resources. In contrast, applications software (also called end-user programs) includes database programs, word processors, and spreadsheets. Figuratively speaking, applications software sits on top of systems software because it is unable to run without the operating system and system utilities.

System Software: System software is any computer software that manages and controls computer hardware so that application software can perform a task. Operating systems, such as Microsoft Windows, Mac OS X or Linux, are prominent examples of system software. System software contrasts with application software, which comprises programs that enable the end user to perform specific, productive tasks, such as word processing or image manipulation. System software performs tasks like transferring data from memory to disk, or rendering text onto a display device. Specific kinds of system software include loaders, operating systems, device drivers, programming tools, compilers, assemblers, linkers, and utility software. Software libraries that perform generic functions also tend to be regarded as system software, although the dividing line is fuzzy; while a C runtime library is generally agreed to be part of the system, an OpenGL or database library is less obviously so.
If system software is stored on non-volatile memory such as integrated circuits, it is usually termed firmware. System software can be classified into operating systems and language processors. The operating system creates an interface between the user and the system hardware. Language processors are those which help to convert computer languages (assembly and high-level languages) into machine-level language; examples of language processors are assemblers, compilers and interpreters.

Application Software: Application software is a subclass of computer software that employs the capabilities of a computer directly and thoroughly for a task that the user wishes to perform. This should be contrasted with system software, which is involved in integrating a computer's various capabilities but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its implementation. A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, which is not itself of any real use until harnessed to an application like the electric light that performs a service benefiting the user. Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite; Microsoft Office, which bundles together a word processor, a spreadsheet, and several other discrete applications, is a typical example. The separate applications in a suite usually have a user interface with some commonality, making it easier for the user to learn and use each application, and they often have some capability to interact with each other in ways beneficial to the user.
For example, a spreadsheet might be embedded in a word processor document even though it was created in the separate spreadsheet application. User-written software tailors systems to meet the user's specific needs; it includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of the software used to control a VCR, DVD player or microwave oven.

COMPUTER LANGUAGES

Computer languages can be divided into two groups: high-level and low-level languages. High-level languages are designed to be easier to use, more abstract, and more portable than low-level languages. Syntactically correct programs in some languages are then compiled to
low-level language and executed by the computer. Most modern software is written in a high-level language, compiled into object code, and then translated into machine instructions. Computer languages can also be grouped by other criteria; another distinction can be made between human-readable and non-human-readable languages. Human-readable languages are designed to be used directly by humans to communicate with the computer. Non-human-readable languages, though they can often be partially understood, are designed to be more compact and easily processed, sacrificing readability to meet these ends.

Machine Language: Machine code or machine language is a system of instructions and data executed directly by a computer's central processing unit. Machine code may be regarded as a primitive (and cumbersome) programming language, or as the lowest-level representation of a compiled and/or assembled computer program. Programs in interpreted languages (often BASIC, Matlab, Smalltalk, Python, Ruby, etc.) are not themselves represented by machine code, although their interpreter (which may be seen as a processor executing the higher-level program) often is. Machine code is also referred to as native code, a term that, in the context of an interpreted language, may refer to the platform-dependent implementation of language features and libraries.

Assembly Language: Unlike the other programming languages catalogued here, assembly language is not a single language but rather a group of languages: each processor family (and sometimes individual processors within a family) has its own assembly language. In contrast to high-level languages, data structures and program structures in assembly language are created by directly implementing them on the underlying hardware.
So, instead of cataloguing the data structures and program structures that can be built (in assembly language you can build any structure you desire, including new structures nobody else has ever created), we will compare and contrast the hardware capabilities of various processor families. These notes do not attempt to teach how to program in assembly language. Because of the close relationship between assembly languages and the underlying hardware, the discussion covers hardware implementation as well as software.

High-level language

A high-level language is an advanced programming language that is not tied to one type of computer or one specific job, and is more easily understood by humans. Today there are dozens of high-level languages; some commonly used ones are BASIC, C, FORTRAN and Pascal.

ASSEMBLER, COMPILER AND INTERPRETER

ASSEMBLER:
An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language. In the earliest computers, programmers actually wrote programs in machine code, but assembler languages (or instruction sets) were soon developed to speed up programming. Today, assembler programming is used only where very efficient control over processor operations is needed; it requires knowledge of a particular computer's instruction set. Historically, most programs have been written in "higher-level" languages such as COBOL, FORTRAN, PL/I, and C. These languages are easier to learn and faster to write programs with than assembler language. The program that processes source code written in these languages is called a compiler. Like the assembler, a compiler takes higher-level language statements and reduces them to machine code.

COMPILER:

A compiler is a special program that processes statements written in a particular programming language and turns them into machine language, or "code", that a computer's processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements. When executing (running), the compiler first parses (analyzes) all of the language statements syntactically, one after the other, and then, in one or more successive stages or "passes", builds the output code, making sure that statements that refer to other statements are referred to correctly in the final code. Traditionally, the output of the compilation has been called object code or sometimes an object module.
(Note that the term "object" here is not related to object-oriented programming.) The object code is machine code that the processor can process, or "execute", one instruction at a time.

INTERPRETER:

In computer science, an interpreter normally means a computer program that executes, i.e. performs, instructions written in a programming language. While interpretation and compilation are the two principal means by which programming languages are implemented, they are not fully distinct categories, one reason being that most interpreting systems also perform some translation work, just as compilers do. An interpreter may be a program that either:

• executes the source code directly;
• translates source code into some efficient intermediate representation (code) and immediately executes this; or
• explicitly executes stored precompiled code, made by a compiler which is part of the interpreter system.

The terms interpreted language and compiled language merely mean that the canonical implementation of that language is an interpreter or a compiler; a high-level language is basically an abstraction which is (ideally) independent of any particular implementation.
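The first interpreter strategy listed above, direct execution of source code, can be made concrete with a minimal sketch. The tiny prefix-notation expression language below, along with its tokenizer, parser, and operator set, is invented for illustration; a real interpreter would be far more elaborate, but the shape is the same: read the source, build a structure, and execute it directly, producing no machine code.

```python
# A minimal direct interpreter for a tiny prefix-notation expression
# language, e.g. "(+ 1 (* 2 3))". The syntax and operator set are
# invented for illustration only.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def tokenize(src):
    """Split source text into a flat list of tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Recursively build a nested list (the parse tree) from the tokens."""
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return node
    return int(tok) if tok.lstrip("-").isdigit() else tok

def evaluate(node):
    """Execute the parse tree directly: no machine code is ever produced."""
    if isinstance(node, int):
        return node
    op, *args = node
    return OPS[op](*[evaluate(a) for a in args])

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # -> 7
```

The contrast with the compiler described above is that `evaluate` walks the program structure every time it runs, rather than translating it once into object code that the processor executes on its own.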