Shai 2
basics of computers
Document Transcript

COMPUTER APPLICATIONS IN MANAGEMENT
(For those who joined in July 2005 and after)
Time: Three hours. Maximum: 50 marks.

SECTION A – (10 x 1 = 10 marks)
Answer ALL questions. Explain the following:
1. What is software?
2. Expand ADCCP.
3. What do you mean by electronic mail?
4. List out any three input devices.
5. What are analog signals?
6. Define constant.
7. What is the shortcut key to underline text?
8. What is sorting?
9. What connectivity provides a direct connection between a client computer and an Internet host?
10. Expand WWW.

SECTION B – (4 x 5 = 20 marks)
Answer any FOUR questions. Each question carries 5 marks.
11. Explain temporary storage devices.
12. Give an overview of optical fiber communication.
13. Explain the query language in DBMS packages.
14. State and explain the functions of an OS.
15. How are computers classified?
16. Define and explain e-commerce.

SECTION C – (2 x 10 = 20 marks)
Answer any TWO questions. Each question carries 10 marks.
17. Write an essay on the merits and limitations of the computer.
18. Explain Internet applications in various fields.
19. Give a brief account of network fundamentals.

8524/NIE
5651/NIE
NOVEMBER 2007
COMPUTER APPLICATIONS IN MANAGEMENT
(For those who joined in July 2005 and after)
Time: Three hours. Maximum: 50 marks.

SECTION A – (10 x 1 = 10 marks)
Answer ALL questions. Explain the following:
1. System.
2. Hardware.
3. End user.
4. National Language.
5. Node.
6. Shortcut key for bolding text.
7. Schema.
8. Modem.
9. DBMS.
10. Data Flow Diagram.

SECTION B – (4 x 5 = 20 marks)
Answer any FOUR questions.
11. Explain the applications of spreadsheets.
12. Elaborate the merits and demerits of any three types of network systems.
13. Discuss the objectives of DBMS.
14. State and explain the different measures for security in the Internet process.
15. Explain the latest developments in long-distance computer network models.
16. Elaborate the applications of computers in management.

SECTION C – (2 x 10 = 20 marks)
Answer any TWO questions.
17. Define programming. Explain the steps involved in the development of programs.
18. Write an essay on the application of e-commerce in management.
19. How should an internetworked business enterprise store, access, and distribute data and information about its internal operations and external environment?

5651/NIE

1990/NIE
NOVEMBER 2005
Paper V – COMPUTER APPLICATIONS IN MANAGEMENT
Time: Three hours. Maximum: 50 marks.

SECTION A – (10 x 1 = 10 marks)
Answer ALL questions.
Explain the following:
20. What is hardware?
21. What is a spreadsheet?
22. List down any three output devices.
23. What is data?
24. What do you mean by attributes?
25. CD-ROM is both a read and write device. Say true or false.
26. Which device converts speech into electrical form?
27. Which software reads a basic instruction, tests for syntax, converts it to machine code and executes it?
28. Expand MS-DOS.
29. Which symbol distinguishes a function from a normal entry in networks?

SECTION B – (4 x 5 = 20 marks)
Answer any FOUR questions. Each question carries 5 marks.
30. What do you mean by structured programs? Give a brief note on any one of the structured programs.
31. Write an essay comparing assembly and high-level languages.
32. Differentiate schema and subschema.
33. Write a short note on the functions of the CPU.
34. What are the various types of database that have evolved?
35. Describe the satellite communication system.
36. What are the various generations of computers? Explain briefly.
37. Give a brief account of computer networks.
38. How can e-mail and its features be used? Explain.

1990/NIE

COMPUTER APPLICATIONS IN MANAGEMENT 2005

SECTION A

1. ADCCP: Advanced Data Communication Control Procedures (ADCCP) is a bit-oriented data-link-layer protocol used to provide point-to-point and point-to-multipoint transmission of data frames that contain error-control information. Note: ADCCP closely resembles High-level Data Link Control (HDLC) and Synchronous Data Link Control (SDLC).

2. Software: the non-touchable part of a computer.
• Used to describe the instructions given to a computer.
• A program or group of programs.
• Computer instructions or data: anything that can be stored electronically is software.

3. Analog signal:
• Analog signals are continuous in nature; they carry information in the form of waves, e.g. the way sound travels in a medium such as telephone lines.
• Analog communication uses general-purpose communication channels.
• These signals are characterized by two parameters: amplitude and frequency.

4. The shortcut key to underline text is Ctrl+U.

5. Email: Email is now an essential communication tool in business. It is also excellent for keeping in touch with family and friends. The advantage of email is that it is free (no charge per use) when compared with telephone, fax and postal services.

6. Three input devices:
1. Keyboard
2. Mouse
3. Light pen

7. Expand WWW: World Wide Web. An intricate web of information linked by names and associations, the World Wide Web integrates text, video, photographs, graphics, and sound. Every site (or homepage) on the Web has an "http" address and can be accessed using a Web browser (e.g. Netscape). You can download information on many homepages to your own computer. You can also view text-based information on the World Wide Web using a command-line browser such as Lynx.
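The "http" address mentioned in the WWW answer above has a fixed structure that can be taken apart programmatically. A minimal Python sketch using the standard urllib.parse module; the URL itself is a made-up example, not one from the text:

```python
from urllib.parse import urlparse

# Split a Web address into its named parts.
url = "http://www.example.com/index.html"
parts = urlparse(url)

print(parts.scheme)  # "http": the protocol the browser speaks
print(parts.netloc)  # "www.example.com": the host (site) name
print(parts.path)    # "/index.html": the page on that host
```

The scheme is what the answer calls the "http" prefix; the netloc is the site name a browser such as Netscape or Lynx connects to.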
8. Sorting: The most common type of sorting, and one that is applicable to our situation, is alphabetical ordering. This kind of ordering places the cells that start with the early letters of the alphabet (a, b, c...) at the top and the later letters (t, u, v...) at the bottom of the list.
1. First, select all the data so it can be sorted. Because each name has a corresponding score, select both columns to preserve the students' correct scores.
2. Left-click and hold on cell A1, then drag down-right to cell B10 to highlight all the data for sorting.
3. Left-click the "sort ascending" button near the top of the shortcut bar (it has a blue A on top and a red Z on bottom with a downward-pointing arrow).

9. A SLIP/PPP (dial-up) connection provides a direct connection between the client computer and the Internet host.

10. Constant: a named value that does not change while a program runs; once defined, the user does not need to change its definition.

SECTION B
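The alphabetical ordering described in answer 8 above is not specific to spreadsheets. A minimal Python sketch, assuming a hypothetical list of (name, score) pairs standing in for columns A and B:

```python
# Each row pairs a student's name with a score, as in columns A and B.
rows = [("Meera", 72), ("Arun", 88), ("Zoya", 65), ("Dev", 91)]

# Sorting whole (name, score) tuples keeps each score attached to its
# name, just as selecting both spreadsheet columns does.
rows.sort(key=lambda row: row[0])  # "sort ascending" on the name column

for name, score in rows:
    print(name, score)
# Arun 88
# Dev 91
# Meera 72
# Zoya 65
```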
11. Temporary storage devices

Random Access Memory (RAM):
• Basic to all computers.
• Takes the form of integrated circuits that allow the stored data to be accessed in any order, i.e. at random, without physical movement of the storage medium or a physical reading head.
• Made up of several small parts known as cells; each cell can store a fixed number of bits.
• Each cell has a unique number assigned to it, known as the address of the cell.
• Also known as read/write memory.
• Volatile in nature.
• Usually referred to simply as the memory of the computer.

Types of RAM
There are two basic types of RAM:
1. Dynamic RAM (DRAM): the term dynamic indicates that the memory must be constantly refreshed or it will lose its contents.
2. Static RAM (SRAM):
• Faster and more reliable than DRAM.
• The term static derives from the fact that it does not need to be refreshed like dynamic RAM.
• Can give access times as low as 10 nanoseconds.
• Much more expensive than DRAM; due to its high cost, SRAM is often used only as a memory cache.
• Cache memory is a special high-speed storage mechanism; it can be either a reserved section of main memory or an independent high-speed storage device.

Other types of RAM
Beyond the basic types above, there are a few newer variants:
1. FPM DRAM: Fast Page Mode DRAM; maximum data transfer rate is 176 MB/s.
2. EDO DRAM: Extended Data-Out DRAM; maximum data transfer rate is 264 MB/s.
3. SDRAM: Synchronous Dynamic RAM; maximum data transfer rate is 528 MB/s.
4. DDR SDRAM: Double Data Rate SDRAM; maximum data transfer rate is 1064 MB/s.
5. DDR2 SDRAM: a newer version of DDR RAM with higher clock frequencies.
6. RDRAM: Rambus DRAM, used for a special high-speed data bus called the Rambus channel; maximum data transfer rate is 1600 MB/s.
7. Credit Card Memory: a proprietary self-contained DRAM memory module that plugs into a special slot for use in notebooks.
8. PCMCIA Memory Card: a self-contained DRAM memory module that plugs into a special slot for use in notebooks. It is not proprietary and works with any notebook.

12. Optical fiber communication

Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, optical fiber has largely replaced copper wire communications in core networks in the developed world.

Communicating over fiber optics involves the following basic steps: creating the optical signal using a transmitter; relaying the signal along the fiber while ensuring it does not become too distorted or weak; receiving the optical signal; and converting it into an electrical signal.

Application: An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permit transmission over longer distances and at higher data rates (a.k.a. "bandwidth") than other forms of communication. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can carry images, allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers.

Light is kept in the "core" of the optical fiber by total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers (MMF); fibers that support only a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 550 meters.

Joining lengths of optical fiber is more complex than joining electrical wire or cable. The ends of the fibers must be carefully cleaved and then spliced together, either mechanically or by fusing them with an electric arc. Special connectors are used to make removable connections.
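Total internal reflection, mentioned above as what keeps light in the core, happens when light strikes the core/cladding boundary beyond the critical angle, theta_c = arcsin(n2 / n1). The formula and the refractive indices below are standard optics facts and illustrative values for silica fiber, not figures taken from the text:

```python
import math

# Critical angle for total internal reflection: theta_c = asin(n2 / n1).
# The indices are illustrative values for a silica core and cladding,
# assumed for this sketch rather than quoted from the answer above.
n_core = 1.48      # refractive index of the core (assumed)
n_cladding = 1.46  # refractive index of the cladding (assumed)

theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle = {theta_c:.1f} degrees")
```

For these values the critical angle comes out to roughly 80.6 degrees; light hitting the boundary at a shallower grazing angle than this stays trapped in the core.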
13. Query language in DBMS packages

Database language: Just as a language is needed to understand anything, a language is needed to create or manipulate a database. Database language is divided into two main parts:
1) DDL (Data Definition Language)
2) DML (Data Manipulation Language)

Data Definition Language (DDL)
Used to specify a database scheme as a set of definitions expressed in a DDL.
1. DDL statements are compiled, resulting in a set of tables stored in a special file called a data dictionary or data directory.
2. The data directory contains metadata (data about data).
3. The storage structure and access methods used by the database system are specified by a set of definitions in a special type of DDL called a data storage and definition language.
4. Basic idea: hide implementation details of the database schemes from the users.

Data Manipulation Language (DML)
1. Data manipulation covers:
• retrieval of information from the database
• insertion of new information into the database
• deletion of information in the database
• modification of information in the database
2. A DML is a language that enables users to access and manipulate data. The goal is to provide efficient human interaction with the system.
3. There are two types of DML:
• Procedural: the user specifies what data is needed and how to get it.
• Nonprocedural: the user only specifies what data is needed. Easier for the user.
Nonprocedural DML may not generate code as efficient as that produced by procedural languages.
4. A query language is a portion of a DML involving information retrieval only. The terms DML and query language are often used synonymously.

14. OS

An operating system is special system software that acts as an intermediary between a user of a computer and the computer hardware. It provides an environment in which the user can execute programs/applications in a convenient and efficient manner.

Functions of an OS:
1. Process management
2. Memory management
3. Deadlock handling
4. File management
5. I/O management
6. Protection and security
7. Job scheduling and CPU scheduling
8. Interpretation of commands and instructions
9. Co-ordination of compilers, assemblers, programs and other software of the computer system
10. Production of error messages
11. Maintenance of the internal time clock, and logging of system usage for all users
12. Easy communication between the computer system and users
13. Resource allocation
14. Execution of application software

15. Classification of computers

There are four categories of computers:
1. Supercomputer
2. Mainframe
3. Minicomputer
4. Microcomputer

Supercomputer
• The most highly sophisticated and powerful computer made to date.
• Used for very special, highly calculation-intensive tasks such as scientific research, weather forecasting, quantum-mechanical physics, climate research (global warming), molecular modeling, physical simulations (nuclear weapons) and pollution control. Major universities, military agencies and scientific research laboratories are heavy users.
• Very expensive, priced from $2 million to $20 million.
• Consumes huge amounts of electricity, enough to light about 100 houses.
• Can have hundreds of processors.
• Speed is measured in nanoseconds.

Mainframe
A mainframe has:
• 1 to 16 CPUs (modern machines more)
• memory ranging from 128 MB to over 8 GB of online RAM
• processing power ranging from 80 to over 550 MIPS
It often has separate cabinets for storage, I/O and RAM, and separate processes (programs) for task management, program management, job management, serialisation, catalogs, and inter-address-space communication.

Minicomputer
• A midsized computer; in size and power it sits below the mainframe.
• A multiprocessing system capable of supporting from 4 to 200 users simultaneously.
• Can handle a great amount of data and can support a number of terminals.
• Slower than mainframes, but supports as many terminals as a mainframe can.
• Less storage capacity.
• Used by R&D organisations and universities.
• Price range is from $18,000 to $50,000.

Microcomputer
• Small in size; a single-user computer.
• Much slower than the larger computers.
• Used in small businesses, homes, and school/college classrooms.
• Inexpensive and easy to use.
• Also called PCs, short for personal computers.
• Supports multitasking.

Types of microcomputers:
a) Desktop: small enough to fit on a desk but too big to carry around.
b) Laptop/Notebook: portable, lightweight computers that can be carried around. They can store the same amount of data and have a memory of the same size as a personal computer.
c) PDA: the Personal Digital Assistant is the smallest portable computer, no bigger than a cheque book, also known as a palmtop. PDAs are used for keeping records of phone numbers, dates, etc., and come with a touch screen or electronic pen.

16. Definitions of e-commerce from different perspectives
1. Communications perspective: EC is the delivery of information, products/services, or payments over telephone lines, computer networks or any other electronic means.
2. Business process perspective: EC is the application of technology toward the automation of business transactions and workflow.
3. Service perspective: EC is a tool that addresses the desire of firms, consumers, and management to cut service costs while improving the quality of goods and increasing the speed of service delivery.
4. Online perspective: EC provides the capability of buying and selling products and information on the Internet and other online services.

Benefits of e-commerce:
• Access new markets and extend service offerings to customers
• Broaden current geographical parameters to operate globally
• Reduce the cost of marketing and promotion
• Improve customer service
• Strengthen relationships with customers and suppliers
• Streamline business processes and administrative functions

Scope of e-commerce:
• Marketing, sales and sales promotion
• Pre-sales, subcontracts, supply
• Financing and insurance
• Commercial transactions: ordering, delivery, payment
• Product service and maintenance
• Co-operative product development
• Distributed co-operative working
• Use of public and private services
• Business-to-administration (e.g. customs)
• Transport and logistics
• Public procurement
• Automatic trading of digital goods
• Accounting
• Dispute resolution

SECTION C

17. Merits and limitations of the computer
"A computer is an electronic device, operating under the control of instructions stored in its own memory unit, that can accept data (input); process data arithmetically and logically; produce information (output) from the processing done; and store the results for future use."

Merits:
1. Fast: able to process data and give output in fractions of a second. A powerful computer is capable of executing about 3 million calculations per second.
2. Accurate: in spite of its high speed, errors hardly occur, as its accuracy is consistently high.
3. Reliable: the output generated by the computer is very reliable, but for reliable output the input must also be reliable.
4. Large storage capacity: it can store huge amounts of data in small storage devices.
5. Versatile: it can work on numbers, graphics, audio, video, etc., making it truly versatile.
6. Works automatically: once the instructions are fed in as a program, it works automatically without any human help until the task is complete.
7. Diligent: it never feels tired or distracted; its performance is constant.
8. No emotions: computers do not have the emotional, ego and psychological problems that can be destructive in nature.
9. No IQ: the computer works as a very good assistant precisely because it has no IQ of its own; it obeys the user completely.

18. Internet applications in various fields
Internet

The Internet is a network of computers linking many different types of computers all over the world. It is a network of networks sharing a common mechanism for addressing computers and a common set of communication protocols for communication between two computers on the network. The Internet, or a network of networks, is a group of two or more networks that are:
1. interconnected physically;
2. capable of communicating and sharing data with each other;
3. able to act together as a single network.

Utility and role of the Internet

Nowadays, whatever application area you consider, the Internet plays a role. The Internet allows people to access a vast information resource, talk to each other by email, and join electronic news, discussion and special mailing groups. Educationally speaking, it can open up a whole new vista for the user, provide access to information resources at one's fingertips, and offer a creative outlet for those who wish to create web pages. There are many advantages to using the Internet, such as:

• Email. Email is now an essential communication tool in business. It is also excellent for keeping in touch with family and friends. The advantage of email is that it is free (no charge per use) when compared with telephone, fax and postal services.
• Information retrieval. There is a huge amount of information available on the Internet for just about every subject known to man, ranging from government law and services, trade fairs and conferences, and market information to new ideas and technical support. Go to a search engine and search for any subject; you will find a wealth of material.
• Services. Many services are now provided on the Internet, such as online banking, job seeking and applications, and hotel reservations. Often these services are not available off-line or cost more.
• Buying and selling products.
The Internet is a very effective way to buy and sell products all over the world. You may have heard of www.baazi.com, a very popular site for selling and buying products.
• Communities. Communities of all types have sprung up on the Internet. It is a great way to meet people with similar interests and discuss common issues, for example in chat rooms and on electronic bulletin boards.

Services/applications of the Internet

The Internet is an ocean of information and services. A wide variety of services are available on the Internet, and anyone who wishes can avail of them by visiting the relevant website. A few popular Internet services are described below:
1. Internet telephony is a service that enables its users to communicate with other persons. Using this service, an Internet user can ring another person's normal telephone. If the other person picks up the phone, a communication path between the two is established and they talk just as two telephone users do. This type of telephony is called Net-to-Phone telephony.
2. Newsgroups are a very popular Internet service in which a person becomes a member of a predefined group. Each group relates to a specific topic, such as multimedia or physics. Members of the group exchange information, news, problems, solutions, etc. among themselves. Because groups are categorized by topic, like-minded people join the same group; this keeps the discussion on track and helps members find solutions to their questions and problems from other group members.
3. Various websites provide electronic greeting cards for all occasions and all reasons. These cards are generally colorful, attractive and often animated. People access these websites and send cards to their relatives and friends.
4. Various websites provide the facility to read newspapers, magazines and articles on the Internet.
5. Astrological websites provide horoscopes, future predictions and suggestions to their visitors.
6. Various websites offer job opportunities to people seeking jobs, and also help employers find the right candidate.
7. Music, songs and radio can be enjoyed over the Internet.
8. Various websites sell different kinds of products; these sites, often referred to as e-malls, can be visited and products purchased from them. Payment for the purchased items is made online through credit cards.
9. Numerous free and paid computer games are available on the Internet. They can be downloaded and played.
10. Advance booking in hotels, trains, aeroplanes, etc. can be done through the Internet.

Internet security and protocols

As the Internet is open to all, the issues of security, privacy, authenticity and anonymity play a vital role: only when users can be sure about these things can they use the Internet to its fullest extent. Because information and its transfer are crucial for everyone using the Internet, reliable provisions are needed before one can safely connect. Internet security can be divided into two broad categories:
1. Client-server security
2. Data and transaction security

Client-server security
This security prevents unauthorized access to restricted databases and other confidential information. It is an authorization mechanism that ensures only authenticated users can connect and access the resources they are entitled to. Password protection, encrypted smart cards, biometrics, firewalls, etc. are some of the methods adopted to ensure client-server security.

Data and transaction security
As data transmissions and transactions occur across the network, there is a fair chance that they can be intercepted, read and manipulated, and that the source and destination can be tracked. To prevent this, the data and transactions must be secured, which is usually done with data encryption, implemented through various cryptographic methods.
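The idea that encryption makes intercepted data unreadable to anyone without the key can be sketched with a toy XOR stream in Python. This is a teaching illustration only, not a secure cipher; real systems use algorithms such as DES or RSA, as discussed below. The message and key are made up for the example:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. Applying the same key again
    # restores the original, so one function both encrypts and decrypts.
    # This is a teaching toy, NOT a secure cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"transfer Rs. 5000"   # hypothetical transaction data
key = b"secret"                  # hypothetical shared key

ciphertext = xor_cipher(message, key)   # unreadable on the wire
recovered = xor_cipher(ciphertext, key) # same key restores the data

print(ciphertext != message)  # True: an eavesdropper sees gibberish
print(recovered == message)   # True: the intended receiver recovers it
```

The round trip shows the two properties the text names: confidentiality (the ciphertext is unreadable in transit) and the need for the receiver to hold the key to restore the data.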
Security methods for client/server and data/transaction security

1. Password scheme: This is an easy solution for providing security so that unauthorized users do not get access to the data; it is the first level of security that can be provided. Authorized users are assigned user names, each with an associated password, which must be provided when connecting to the site. This security measure can be broken easily if common words or proper names are used as passwords, but longer alphanumeric passwords are very difficult to break. Another problem is that, for remote logins, the password travels through the system to the server for authentication and can be trapped in transit; for this reason passwords should be encrypted before being transmitted. In spite of these threats, password schemes are still the most popular form of security.

2. Firewalls: A firewall is an accepted network protection mechanism: a barrier between the corporate network and the outside world that ensures only authentic users connect, and only harmless data enters and leaves the system. The term firewall can be defined as a device, a computer or a router placed between the network and the Internet to control and monitor the traffic between the inside and outside worlds. A firewall shields vulnerable areas from various dangers. The firewall system is located at a gateway point, which is the connecting point to the outside world. Firewalls come in many varieties and offer different features, but their basic function is to filter and control the traffic of data.

3. Encryption: This method of data and transaction security is used to retain the confidentiality and integrity of the data being transmitted. Data confidentiality is the property that keeps data contents safe from being read while in transit; it is ensured by using cryptographic algorithms to encrypt the data so that no one else can interpret it. As well as being confidential, the data has to remain unmodified while in transit, i.e. the data should arrive at its destination intact, without any modifications. This is ensured using various encryption techniques such as secret-key cryptography, the Data Encryption Standard (DES), and public-key cryptography (RSA).

Internet tools

Here is a list of some popular Internet tools. Some of these tools are discussed in more detail later in this chapter.
• E-mail: Electronic mail is one of the most popular Internet tools. Users have their own e-mail address and mailbox, which allow them to exchange typed messages. Some e-mail programs allow users to send documents as attachments to their messages.
• FTP: Short for File Transfer Protocol, FTP is a method used to transfer files from one networked computer to another. There are two ways to FTP files: privately and anonymously. If you have a private account on a machine (e.g. bullwinkle.ucdavis.edu), you can FTP to that machine with your login ID and password. To FTP anonymously, you must know the host name of the computer storing the information (e.g. itcap.ucdavis.edu); you can log in using "anonymous" as the ID and your e-mail address as the password. You can then use FTP to transfer files to your own account or hard disk.
• Newsreader: A newsreader is an application that allows you to access Usenet news. Usenet news consists of several thousand discussion groups where people can post and read articles about selected topics; you can subscribe to the newsgroups that interest you. The USENET News system (or netnews) is an international, decentralised system for the dissemination of information on almost every topic. Newsgroups are divided into hierarchies, such as:
  alt: alternative newsgroups
  aus: Australian newsgroups
  comp: computer-related newsgroups
  news: newsgroups about news
  rec: recreational newsgroups
  sci: scientific newsgroups
• Telnet: Telnet is a method used to connect to another computer on the Internet. This connection is often referred to as a terminal session. You may telnet to machines that provide access through either a private or public account.
• SLIP/PPP: Short for Serial Line Internet Protocol (SLIP) / Point-to-Point Protocol (PPP), SLIP/PPP is an Internet protocol that allows dial-in access to the Internet through a special modem pool. You can use SLIP/PPP to dial in to the campus modem pool (752-7925) and be connected to the Internet. While connected, you can run Netscape and other Internet applications.
• WWW: An intricate web of information linked by names and associations, the World Wide Web integrates text, video, photographs, graphics, and sound. Every site (or homepage) on the Web has an "http" address and can be accessed using a Web browser (e.g. Netscape). You can download information on many homepages to your own computer, and view text-based information using a command-line browser such as Lynx.
• Search engines: The first thing most people do when they start using the WWW is use a search engine to try to find information. There are many common ones; a newer one we particularly like is google.com. Search engines have to be used with care; it often helps if you can start searching in a sub-area of the WWW.
    • · Gopher- gopher is a tool that is very similar to the World- Wide Web. It predates the WWW and is basically a system for the retrieval of text documents. Only a few sites remain. It does not support graphics, a markup language or hypertext links. Any good web browser can work with gopher sites. Specific gopher clients are no longer used Internet Protocols 19: Computer Network A computer network is defined as the interconnection of 2 or more independent computers or/and peripherals. Need of Networks – Communicate and collaborate – Share information – Share resources – Sharing computer files and disk space – Sharing high-quality printers – Access to common fax machines – Access to common modems – Multiple access to the Internet Classification of Networks 1. Local Area Networks (LANs) - a computer network covering a small geographic area, like a home, office, or group of buildings. Typically within 5-mile radius. 2. Metropolitan Area Networks (MANs)- are large computer networks usually spanning a city. (within 30 miles) 3. Wide-Area Networks (WANs) - any network whose communications links cross metropolitan, regional, or national boundaries. Network Topology The way in which the computers are interconnected together is known as TOPOLOGY. Types of physical topologies • Bus/Linear • Star • Ring • Tree • Mesh Linear or bus topology • Consists of a main cable, known as backbone cable, with a terminator at each end . • All nodes (file server, workstations, and peripherals) are connected to the cable. • Ethernet and LocalTalk networks use bus topology. • Consists of a main cable, known as backbone cable, with a terminator at each end . • All nodes (file server, workstations, and peripherals) are connected to the cable.
• Ethernet and LocalTalk networks use bus topology.
Advantages of Bus Topology
• Easy to connect a computer or peripheral to a linear bus.
• Requires less cable length.
• Easy to extend.
• If one node of the network is faulty, the rest of the network can keep working.
Disadvantages of Bus Topology
• The entire network shuts down if there is a break in the main cable.
• Terminators are required at both ends of the backbone cable.
• Difficult to identify the problem if the entire network shuts down.
• Not meant to be used as a stand-alone solution in a large building.
Star topology
• A star topology is designed with each node (file server, workstations, and peripherals) connected directly to a central network hub.
• Data on a star network passes through the hub before continuing to its destination.
• The hub manages and controls all functions of the network.
• It also acts as a repeater for the data flow.
Advantages of Star Topology
• Easy to install.
• No disruptions to the network other than connecting or removing devices.
• Easy to detect faults and to remove parts.
Disadvantages of Star Topology
• Requires more cable length than a bus topology.
• If the hub fails, the attached nodes are disabled.
• More expensive than bus topology because of the cost of the hub.
Tree Topology
• A tree topology combines characteristics of bus and star topologies.
• It consists of groups of star-configured workstations connected to a bus backbone cable.
• Tree topologies allow for the expansion of an existing network.
Advantages of a Tree Topology
• Point-to-point wiring for individual segments.
• Supported by several hardware and software vendors.
Disadvantages of Tree Topology
• The overall length of each segment is limited by the type of cabling used.
• If the backbone line breaks, the entire segment goes down.
• More difficult to configure than other topologies.
Ring topology
• A type of computer network configuration in which each computer and device is connected to the next, forming a large circle.
• Data is divided into packets when transmitted.
• Each packet is sent around the ring until it reaches its final destination.
Advantages of ring topology
• Requires a smaller amount of cable, and there are few installation problems.
• All stations have equal access.
Disadvantages of ring topology
• Failure of one computer may impact the others.
• Data transfer is slow.
Mesh topology
• Requires that every node be attached to every other node.
• All the computers must have an adequate number of interfaces for the connections to be made.
• Because of this requirement, installation is somewhat difficult.
• The length of cable required is considerably greater than in other topologies.
Advantages of mesh topology
• Ease of troubleshooting.
• Data transfer is faster.
Disadvantages of mesh topology
• Uses a lot of cabling.
• Complex.
• The most expensive topology.
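The cabling differences among these topologies can be made concrete by counting the links each one needs for n nodes (a small illustrative sketch, not part of the original notes): a bus needs one drop per node onto the shared backbone, a star needs one link from each node to the hub, a ring needs n links, and a full mesh needs a link between every pair of nodes.

```python
def links_needed(topology, n):
    """Return the number of point-to-point links (or cable drops)
    needed to connect n nodes in the given physical topology."""
    if topology == "bus":
        return n                 # one drop per node onto the shared backbone
    if topology == "star":
        return n                 # one link from each node to the central hub
    if topology == "ring":
        return n                 # each node links to the next, closing the circle
    if topology == "mesh":
        return n * (n - 1) // 2  # every pair of nodes is directly connected
    raise ValueError("unknown topology")

for t in ("bus", "star", "ring", "mesh"):
    print(t, links_needed(t, 10))
```

For 10 nodes a full mesh already needs 45 separate links while the other topologies need 10, which is why the notes call mesh the most expensive topology.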
Nov 2005
1. Hardware: Hardware refers to the physical components of a computer system, including any peripheral (I/O) equipment such as keyboards, printers, modems and mice.
2. Spreadsheet: A table of values arranged in rows and columns. Each value can have a predefined relationship to the other values; if you change one value, therefore, you may need to change other values as well.
3. Any three output devices: printer, plotter, monitor.
4. Data: Data refers to a collection of facts, usually collected as the result of experience, observation or experiment, or of processes within a computer system, or a set of premises. It may consist of numbers, words, or images, particularly as measurements or observations of a set of variables. Data is often viewed as the lowest level of abstraction from which information and knowledge are derived.
5. Attributes: In computing, an attribute is a specification that defines a property of an object, element, or file. An attribute of an object usually consists of a name and a value; of an element, a type or class name; of a file, a name and extension.
6. False. CD-ROM (an initialism of "Compact Disc Read-Only Memory") is a pre-pressed Compact Disc that contains data accessible to, but not writable by, a computer.
7. A transducer is the device which converts speech into electrical form.
8. Compiler.
9. MS-DOS: Microsoft Disk Operating System. MS-DOS (short for Microsoft Disk Operating System) is an operating system commercialized by Microsoft. It was the most commonly used member of the DOS
family of operating systems and was the main operating system for personal computers during the 1980s. It was based on the Intel 8086 family of microprocessors and ran on the IBM PC and compatibles. It was gradually replaced on consumer desktop computers by operating systems offering a graphical user interface (GUI), in particular by various generations of the Microsoft Windows operating system and Linux. MS-DOS was previously known as QDOS (Quick and Dirty Operating System) and 86-DOS.
10.
11. Structured Programs: Structured programming can be seen as a subset or subdiscipline of procedural programming, one of the major programming paradigms. It is most famous for removing or reducing reliance on the GOTO statement. Historically, several different structuring techniques or methodologies have been developed for writing structured programs. The most common are:
1. Edsger Dijkstra's structured programming, where the logic of a program is a structure composed of similar sub-structures in a limited number of ways. This reduces understanding a program to understanding each structure on its own, and in relation to the one containing it, a useful separation of concerns.
2. A view derived from Dijkstra's which also advocates splitting programs into sub-sections with a single point of entry, but is strongly opposed to the concept of a single point of exit.
3. Data Structured Programming or Jackson Structured Programming, which is based on aligning data structures with program structures. This approach applies the fundamental structures proposed by Dijkstra, but as constructs that use the high-level structure of a program to be modeled on the underlying data structures being processed. There are at least three major approaches to data structured program design, proposed by Jean-Dominique Warnier, Michael A. Jackson, and Ken Orr.
The latter two meanings of the term "structured programming" are the more common, and are the senses discussed here.
Years after Dijkstra (1969), object-oriented programming (OOP) was developed to handle very large or complex programs.
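As a small illustration (not from the original notes), here is a routine written in the structured style: the control flow uses only sequence, selection, and repetition, with no GOTO-style jumps, and the loop has a single entry point:

```python
def classify_grades(scores):
    """Count passing and failing scores using only structured
    control flow: sequence, selection (if/else), repetition (for)."""
    passed = 0
    failed = 0
    for score in scores:          # repetition: one entry, one exit
        if score >= 50:           # selection
            passed += 1
        else:
            failed += 1
    return passed, failed         # sequence resumes after the loop

print(classify_grades([72, 45, 88, 30, 50]))  # → (3, 2)
```

Because each construct has one way in and one way out, the routine can be understood structure by structure, which is exactly the separation of concerns Dijkstra's approach aims for.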
Low-level structure
At a low level, structured programs are often composed of simple, hierarchical program flow structures. These are sequence, selection, and repetition:
• "Sequence" refers to an ordered execution of statements.
• In "selection" one of a number of statements is executed depending on the state of the program. This is usually expressed with keywords such as if..then..else..endif, switch, or case.
• In "repetition" a statement is executed until the program reaches a certain state, or operations are applied to every element of a collection. This is usually expressed with keywords such as while, repeat, for or do..until. It is often recommended that each loop should have only one entry point (and in the original structured programming, also only one exit point), and a few languages enforce this.
Some languages, such as Dijkstra's original Guarded Command Language, emphasise the unity of these structures with a syntax which completely encloses the structure, as in if..fi. In others, such as C, this is not the case, which increases the risk of misunderstanding and incorrect modification.
A language is described as "block-structured" when it has a syntax for enclosing structures between bracketed keywords, such as an if-statement bracketed by if..fi as in ALGOL 68, or a code section bracketed by BEGIN..END, as in PL/I. By contrast, a language is described as "comb-structured" when it has a syntax for enclosing structures within an ordered series of keywords. A "comb-structured" language has multiple structure keywords to define separate sections within a block, analogous to the multiple teeth or prongs of a comb separating its sections. For example, in Ada, a block is a 4-pronged comb with keywords DECLARE, BEGIN, EXCEPTION, END, and the if-statement in Ada is a 4-pronged comb with keywords IF, THEN, ELSE, END IF.
12.
High Level Languages: A high-level programming language is a programming language with strong abstraction from the details of the computer. In comparison to low-level programming languages, it may use natural language elements, be easier to use, or be more portable across platforms. Such languages hide the details of CPU operations such as memory access models and management of scope. This greater abstraction and hiding of details is generally intended to make the language user-friendly, as it includes concepts from the problem domain instead of those of the machine used. A high-level language isolates the execution semantics of a computer architecture from the specification of the program, making the process of developing a program simpler and more understandable with respect to a low-level
language. The amount of abstraction provided defines how "high level" a programming language is.
Assembly Languages: An assembly language is a low-level language for programming computers. It implements a symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on abbreviations (called mnemonics) that help the programmer remember individual instructions, registers, etc. An assembly language is thus specific to a certain physical or virtual computer architecture (as opposed to most high-level languages, which are usually portable).
Assembly languages were first developed in the 1950s, when they were referred to as second-generation programming languages. They eliminated much of the error-prone and time-consuming first-generation programming needed with the earliest computers, freeing the programmer from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the 1980s (1990s on small computers), their use had largely been supplanted by high-level languages, in the search for improved programming productivity. Today, assembly language is used primarily for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
A utility program called an assembler is used to translate assembly language statements into the target computer's machine code. The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic statements into machine instructions and data. (This is in contrast with high-level languages, in which a single statement generally results in many machine instructions.
This is done by one of two means: a compiler is used to translate high-level language statements, as efficiently as possible, into machine code "executable" files; an interpreter executes similar statements directly within its own application environment.)
13. Schema: The schema is the complete description of the database - the physical arrangement of the data as it appears in the DBMS.
Subschema: The subschema is a subset of the schema - the logical view of the data as it appears to the application program, i.e. the application's view of the database.
14. Functions of a CPU:
• The CPU unifies the system.
• It controls the functions performed by the other components.
• The CPU must be able to fetch instructions from memory, decode their binary contents and execute them.
• It must also be able to reference memory and I/O ports as necessary in the execution of instructions.
• In addition, the CPU should be able to recognize and respond to certain external control signals, such as INTERRUPT and WAIT requests.
• The CPU controls the system buses.
• It schedules timing for computer processes.
• The CPU accepts input.
• It executes instructions.
• It directs the other components of the computer.
15. Types of Databases: There are primarily two types: analytical databases and operational databases.
Analytic Databases
Analytic databases (a.k.a. OLAP - On-Line Analytical Processing) are primarily static, read-only databases which store archived, historical data used for analysis. For example, a company might store sales records over the last ten years in an analytic database and use that database to analyze marketing strategies in relationship to demographics.
Operational Databases
Operational databases (a.k.a. OLTP - On-Line Transaction Processing), on the other hand, are used to manage more dynamic bits of data. These types of databases allow you to do more than simply view archived data: operational databases allow you to modify that data (add, change or delete data). These types of databases are usually used to track real-time information. For example, a company might have an operational database used to track warehouse/stock quantities. As customers order products from an online web store, an operational database can be used to keep track of how many items have been sold and when the company will need to reorder stock.
Database Models
Besides differentiating databases according to function, databases can also be
differentiated according to how they model the data.
What is a data model? A data model is a "description" of both a container for data and a methodology for storing and retrieving data from that container. Actually, there isn't really a data model "thing". Data models are abstractions, oftentimes mathematical algorithms and concepts. You cannot really touch a data model. But nevertheless, they are very useful. The analysis and design of data models has been the cornerstone of the evolution of databases. As models have advanced, so has database efficiency.
Before the 1980s, the two most commonly used database models were the hierarchical and network systems. Let's take a quick look at these two models and then move on to the more current models.
Hierarchical Databases
As its name implies, the Hierarchical Database Model defines hierarchically-arranged data. The most intuitive way to visualize this type of relationship is by visualizing an upside-down tree of data. In this tree, a single table acts as the "root" of the database from which other tables "branch" out. You will be instantly familiar with this relationship because that is how all Windows-based directory management systems (like Windows Explorer) work these days.
Relationships in such a system are thought of in terms of children and parents, such that a child may only have one parent but a parent can have multiple children. Parents and children are tied together by links called "pointers" (perhaps physical addresses inside the file system). A parent will have a list of pointers to each of its children.
This child/parent rule assures that data is systematically accessible. To get to a low-level table, you start at the root and work your way down through the tree until you reach your target. Of course, as you might imagine, one problem with this system is
that the user must know how the tree is structured in order to find anything! The hierarchical model, however, is much more efficient than the flat-file model because there is not as much need for redundant data. If a change in the data is necessary, the change might only need to be processed once. Consider the student flat-file database example from our discussion of what databases are:
Name                  Address           Course                Grade
Mr. Eric Tachibana    123 Kensigton     Chemistry 102         C+
Mr. Eric Tachibana    123 Kensigton     Chinese 3             A
Mr. Eric Tachibana    122 Kensigton     Data Structures       B
Mr. Eric Tachibana    123 Kensigton     English 101           A
Ms. Tonya Lippert     88 West 1st St.   Psychology 101        A
Mrs. Tonya Ducovney   100 Capitol Ln.   Psychology 102        A
Ms. Tonya Lippert     88 West 1st St.   Human Cultures        A
Ms. Tonya Lippert     88 West 1st St.   European Governments  A
As we mentioned before, this flat-file database would store an excessive amount of redundant data. If we implemented this in a hierarchical database model, we would get much less redundant data. Consider the following hierarchical database scheme:
However, as you can imagine, the hierarchical database model has some serious problems. For one, you cannot add a record to a child table until it has already been incorporated into the parent table. This might be troublesome if, for example, you wanted to add a student who had not yet signed up for any courses. Worse yet, the hierarchical database model still creates repetition of data within the database. You might imagine that in the database system shown above, there may be a higher level that includes multiple courses. In this case, there could be redundancy because students would be enrolled in several courses and thus each "course tree" would have redundant student information.
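The redundancy argument can be made concrete in code (a small sketch using the example data above, not part of the original text): the flat file repeats each student's details on every enrollment row, while the hierarchical arrangement stores the parent record once with the enrollments as children.

```python
# Flat-file style: student details repeated on every enrollment row.
flat = [
    ("Mr. Eric Tachibana", "123 Kensigton", "Chemistry 102", "C+"),
    ("Mr. Eric Tachibana", "123 Kensigton", "Chinese 3", "A"),
    ("Mr. Eric Tachibana", "123 Kensigton", "English 101", "A"),
]

# Hierarchical style: one parent record per student, with the
# enrollments hanging off it as child records.
hierarchical = {
    "Mr. Eric Tachibana": {
        "address": "123 Kensigton",
        "courses": [("Chemistry 102", "C+"),
                    ("Chinese 3", "A"),
                    ("English 101", "A")],
    }
}

# The address is stored 3 times in the flat file but once in the tree.
flat_addresses = sum(1 for row in flat if row[1] == "123 Kensigton")
print(flat_addresses)       # 3
print(len(hierarchical))    # 1 parent record
```

A change of address in the flat file must be applied to every row; in the hierarchical form it is a single update to the parent record.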
Redundancy would occur because hierarchical databases handle one-to-many relationships well, but do not handle many-to-many relationships well. This is because a child may only have one parent. However, in many cases you will want the child to be related to more than one parent. For instance, the relationship between student and class is "many-to-many": not only can a student take many subjects, but a subject may also be taken by many students. How would you model this relationship simply and efficiently using a hierarchical database? The answer is that you wouldn't. Though this problem can be solved with multiple databases creating logical links between children, the fix is very kludgy and awkward.
Faced with these serious problems, the computer brains of the world got together and came up with the network model.
Network Databases
In many ways, the Network Database Model was designed to solve some of the more serious problems with the Hierarchical Database Model. Specifically, the network model solves the problem of data redundancy by representing relationships in terms of sets rather than hierarchy. The model had its origins in the Conference on Data Systems Languages (CODASYL), which had created the Data Base Task Group to explore and design a method to replace the hierarchical model.
The network model is actually very similar to the hierarchical model; in fact, the hierarchical model is a subset of the network model. However, instead of using a single-parent tree hierarchy, the network model uses set theory to provide a tree-like hierarchy, with the exception that child tables are allowed to have more than one parent. This allows the network model to support many-to-many relationships. Visually, a Network Database looks like a Hierarchical Database in that you can see it as a type of tree. However, in the case of a Network Database, the look is more like several trees which share branches.
Thus, children can have multiple parents and parents can have multiple children.
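A many-to-many relationship of the kind the network model supports can be sketched as a set of (student, course) pairs; the illustration below (not part of the original notes, with made-up names) shows one student in many courses and one course with many students:

```python
# Each pair links one student to one course; neither side is limited.
enrollments = {
    ("Tonya", "Psychology 101"),
    ("Tonya", "Human Cultures"),
    ("Eric", "Psychology 101"),
    ("Eric", "Chinese 3"),
}

def courses_of(student):
    """All courses a given student is enrolled in."""
    return {c for (s, c) in enrollments if s == student}

def students_in(course):
    """All students enrolled in a given course."""
    return {s for (s, c) in enrollments if c == course}

print(sorted(courses_of("Tonya")))        # a student takes many courses
print(sorted(students_in("Psychology 101")))  # a course has many students
```

Storing the link records once, rather than duplicating student details under every course tree, is exactly how the set-based network model avoids the hierarchical model's redundancy.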
Nevertheless, though it was a dramatic improvement, the network model was far from perfect. Most profoundly, the model was difficult to implement and maintain. Most implementations of the network model were used by computer programmers rather than real users. What was needed was a simple model which could be used by real end users to solve real problems.
Relational Databases
In a relational database, data elements are organized as multiple tables with rows and columns. Each table is stored as a separate file. Each table column represents a data field and each row a data record. Data in one table is related to data in another table with a common field. The relational model provides greater flexibility of data organization, and easier future enhancement of the database, as compared to the hierarchical and network models.
S.No   Name     Age   Date of joining   Salary
1      Anuj     25    2-Dec-1989        10,000
2      Bharat   27    4-July-1999       12,000
3      Sunny    25    3-June-1990       13,000
Object-Oriented Databases: The object-oriented database model was introduced to overcome the shortcomings of conventional database models. An object-oriented database is a collection of objects whose behaviour, state and relationships are defined in accordance with object-oriented concepts.
16. Satellite Communication System:
Satellite communication systems consist of Earth-orbiting communications platforms that receive and retransmit signals from earth-based stations. A typical television satellite receives a signal from a base station and broadcasts it to a large number of terrestrial receivers. Signals to satellites are called "uplinks," and signals from satellites are called "downlinks." Uplinking has also been called "shooting the bird." The downlink covers an area called the "footprint," which may be very large or cover a focused area.
Satellites use microwave frequencies. Since they are overhead, the transmissions are line of sight to the receiver. The most common frequency bands for satellites are listed here:
Band   Uplink                Downlink
L/S    1.610 to 1.625 GHz    2.483 to 2.50 GHz
C      5.924 to 6.425 GHz    3.7 to 4.2 GHz
Ku     14.0 to 14.5 GHz      11.7 to 12.2 GHz
Ka     27.5 to 30.5 GHz      17.7 to 21.7 GHz
As pictured in Figure S-2, there are "high-orbit" GEO (geosynchronous) satellites, "low-orbit" LEO (low earth orbit) satellites, and satellites in a variety of mid-orbits and elliptical orbits (some spy satellites use these orbits so they can drop in for a close look).
Geosynchronous satellites are placed in high stationary orbits about 22,300 miles (35,786 kilometers) above the earth. These satellites are typically used for video transmissions. The speed and height of these satellites allow them to stay synchronized above a specific location on the earth at all times. One problem with high-orbit geosynchronous satellites is that a typical back-and-forth transmission has a delay of about a half second, which causes problems in time-critical computer data transmissions, as discussed in a moment. Satellites in LEO orbit are low enough to minimize this problem.
LEOs are close to the earth, usually within a few hundred kilometers, and inclined to the equatorial plane. Since the satellites are near the earth, earth-based devices don't require as much power to communicate with the satellites.
Thus, they are ideal for phones and hand-held devices. However, LEOs are in fast orbits and do not stay stationary above a point on the earth. Therefore, a country-wide or global communication system requires a constellation of satellites that basically project moving footprints above the earth. As one satellite moves out of position, another takes over coverage. Calls and other transmissions are handed off from one satellite to another in this process. This is just the opposite of cellular phone systems where people move in and out of cells.
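The "about a half second" figure quoted above for GEO round trips follows directly from the orbit altitude and the speed of light; a quick back-of-the-envelope check (not in the original notes):

```python
# Round-trip delay for a geosynchronous satellite link.
ALTITUDE_KM = 35_786          # GEO altitude above the equator
C_KM_PER_S = 299_792.458      # speed of light in vacuum

one_hop = 2 * ALTITUDE_KM / C_KM_PER_S   # up to the satellite and back down
round_trip = 2 * one_hop                 # request out, response back

print(f"{one_hop:.3f} s")     # ~0.24 s for one ground-satellite-ground hop
print(f"{round_trip:.3f} s")  # ~0.48 s: the "half second" in the text
```

A LEO satellite a few hundred kilometers up cuts this delay by roughly two orders of magnitude, which is why LEOs suit time-critical and interactive traffic.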
There is debate about which system is better for data communications: GEO or LEO. While LEOs are ideal for mobile wireless devices, the current trend is to give GEOs more bandwidth. Still, the delay of GEOs is a problem for time-critical applications.
17. Generations of Computers:
The Five Generations of Computers
The history of computer development is often described in terms of the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices. Read about each generation and the developments that led to the current devices that we use today.
First Generation - 1940-1956: Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate, and in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau, in 1951.
Second Generation - 1956-1963: Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s.
The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum
tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.
Third Generation - 1964-1971: Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time, with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation - 1971-Present: Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh.
Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices. Fifth Generation - Present and Beyond: Artificial Intelligence Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.
For more details please refer to the classroom notes.
18. Computer Networks: Already discussed in the previous paper.
19. E-mail: It can take days to send a letter across the country and weeks for it to go around the world. To save time and money, more and more people are relying on electronic mail. It's fast, easy and much cheaper than using the postal service.
What is e-mail? In its simplest form, e-mail is an electronic message sent from one device to another. While most messages go from computer to computer, e-mail can also be sent and received by mobile phones, PDAs and other portable devices. With e-mail, you can send and receive personal and business-related messages with attachments, such as photos or formatted documents. You can also send music, video clips and software programs.
Let's say you have a small business with sales reps working around the country. How do you communicate without running up a huge phone bill? Or what about keeping in touch with far-flung family members? E-mail is the way to go. It's no wonder e-mail has become the Internet's most popular service.
Follow the Trail
Just as a letter makes stops at different postal stations along the way to its final destination, e-mail passes from one computer, known as a mail server, to another as it travels over the Internet. Once it arrives at the destination mail server, it's stored in an electronic mailbox until the recipient retrieves it. This whole process can take seconds, allowing you to quickly communicate with people around the world at any time of the day or night.
Sending and Receiving Messages
To receive e-mail, you need an account on a mail server. This is similar to having a postal box where you receive letters. One advantage over regular mail is that you can retrieve your e-mail from any location on earth, provided that you have Internet access. Once you connect to your mail server, you download your messages to your computer or wireless device, or read them online. To send e-mail, you need a connection to the Internet and access to a mail server that forwards your mail.
The standard protocol used for sending Internet e-mail is called SMTP, short for Simple Mail Transfer Protocol. It works in conjunction with POP (Post Office Protocol) servers. Almost all Internet service providers and all major online services offer at least one e-mail address with every account. When you send an e-mail message, your computer routes it to an SMTP server. The server looks at the e-mail address (similar to the address on an envelope), then forwards it to the recipient's mail server, where it's stored until the addressee retrieves it. You can send e-mail anywhere in the world to anyone who has an e-mail address.
At one time, you could only send text messages without attachments via the Internet. With the advent of MIME, which stands for Multipurpose Internet Mail Extensions, and other types of encoding schemes, such as UUencode, you can also send formatted documents, photos, audio and video files. Just make sure that the person to whom you send the attachment has software capable of opening it.
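As an illustration of the MIME mechanics described above (a sketch using Python's standard email library; the addresses and file are hypothetical), a message with a text body and one attachment can be built like this:

```python
from email.message import EmailMessage

# Build a MIME message: headers, a plain-text body, and one attachment.
msg = EmailMessage()
msg["From"] = "sales@example.com"      # hypothetical addresses
msg["To"] = "family@example.org"
msg["Subject"] = "Quarterly photos"
msg.set_content("Hi - the photos are attached.")

# Attaching non-text data turns the message into multipart MIME;
# the bytes are encoded so they survive text-only mail relays.
msg.add_attachment(b"\x89PNG fake image bytes",
                   maintype="image", subtype="png",
                   filename="photo.png")

print(msg.get_content_type())   # "multipart/mixed" once an attachment is added
```

Handing the resulting message to an SMTP server (for example with Python's smtplib) is the "routing" step the notes describe; the recipient's mail software uses the MIME headers to decode the attachment back into a file.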