Q.1: What is computer networking?

ANS: Users and network administrators often have different views of their networks. Often, users who share printers and some servers form a workgroup, which usually means they are in the same geographic location and on the same LAN. A community of interest has less of a connotation of being in a local area, and should be thought of as a set of arbitrarily located users who share a set of servers and possibly also communicate via peer-to-peer technologies.

Network administrators see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

Both users and administrators will be aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration, usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).

Informally, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service
Providers (ISPs). From an engineering standpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).

Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. Especially when money or sensitive information is exchanged, the communications are apt to be secured by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users, using secure Virtual Private Network (VPN) technology. When used for gaming, one computer will have to be the server while the others play through it.

History

Before the advent of computer networks based upon some type of telecommunications system, communication between calculation machines and early computers was performed by human users carrying instructions between them. Many of the social behaviors seen in today's Internet were demonstrably present in nineteenth-century telegraph networks, and arguably in even earlier networks using visual signals.
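The name-to-address translation that DNS performs for users can be sketched with Python's standard socket module. The lookup below uses the local loopback name, so it resolves without contacting a real DNS server:

```python
import socket

def resolve(hostname):
    """Return an IPv4 address for a human-readable host name.

    This is the same directory lookup DNS performs, transparently,
    whenever a user types a server's name.
    """
    return socket.gethostbyname(hostname)

# The loopback name resolves locally, without touching the network.
print(resolve("localhost"))  # 127.0.0.1
```

A browser does exactly this step before it can open a connection to a web server.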
In September 1940 George Stibitz used a teletype machine to send instructions for a problem set from his Model K at Dartmouth College in New Hampshire to his Complex Number Calculator in New York, and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANet.

In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a computer (DEC's PDP-8) to route and manage telephone connections.

Throughout the 1960s Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized and developed network systems which used datagrams or packets that could be used in a packet-switched network between computer systems.

In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN). The first widely used PSTN switch that used true computer control was the Western Electric 1ESS switch, introduced the same year.

In 1969 the University of California at Los Angeles, SRI (in Stanford), the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANet network using 50 kbit/s circuits. Commercial services using X.25, an alternative architecture to the TCP/IP suite, were deployed in 1972.
Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive the computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of network users, from the researcher to the home user.

Today, computer networks are the core of modern communication. For example, all modern aspects of the Public Switched Telephone Network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade, and this boom in communications would not have been possible without the progressively advancing computer network.

Networking methods

Networking is a complex part of computing that makes up most of the IT industry. Without networks, almost all communication in the world would cease to happen. It is because of networking that telephones, televisions, the internet, etc. work.

One way to categorize computer networks is by their geographic scope, although many real-world networks interconnect Local Area Networks (LANs) via Wide Area Networks (WANs) and Wireless Wide Area Networks (WWANs). These three (broad) types are:

Local area network (LAN)

A local area network is a network that spans a relatively small space and provides services to a small number of people.
A peer-to-peer or client-server method of networking may be used. A peer-to-peer network is one where each client shares its resources with the other workstations in the network. Examples of peer-to-peer networks are small office networks, where resource use is minimal, and home networks. A client-server network is one where every client is connected to the server and to each other. Client-server networks use servers in different capacities, which can be classified into two types: single-service servers, where the server performs one task (such as a file server or a print server); and multi-service servers, which not only perform in the capacity of file servers and print servers but also conduct calculations and use these to provide information to clients (e.g. a Web/intranet server). Computers may be connected in many different ways, including Ethernet cables, wireless networks, or other types of wires such as power lines or phone lines.

The ITU-T G.hn standard is an example of a technology that provides high-speed (up to 1 Gbit/s) local area networking over existing home wiring (power lines, phone lines and coaxial cables).

Wide area network (WAN)

A wide area network is a network where a wide variety of resources are deployed across a large domestic area or internationally. An example of this is a multinational business that uses a WAN to interconnect its offices in different countries. The largest and best example of a WAN is the Internet, which is a network composed of many smaller networks. The Internet is considered the largest network in
the world. The PSTN (Public Switched Telephone Network) is also an extremely large network that is converging to use Internet technologies, although not necessarily through the public Internet.

A wide area network involves communication through the use of a wide range of different technologies. These technologies include point-to-point WANs such as the Point-to-Point Protocol (PPP) and High-Level Data Link Control (HDLC), Frame Relay, ATM (Asynchronous Transfer Mode) and SONET (Synchronous Optical Network). The differences between the WAN technologies lie in the switching capabilities they perform and the speed at which bits of information (data) are sent and received.

Metropolitan Area Network (MAN)

A metropolitan area network is a network that is too large for even the largest of LANs but is not on the scale of a WAN. It integrates two or more LANs over a specific geographical area (usually a city) so as to increase the network and the flow of communications. The LANs in question would usually be connected via "backbone" lines.

Wireless networks (WLAN, WWAN)

A wireless network is basically the same as a LAN or a WAN, but there are no wires between hosts and servers. The data is transferred over sets of radio transceivers. These types of networks are beneficial when it is too costly or inconvenient to run the necessary cables. For more information, see Wireless LAN and Wireless wide area network. The media access protocols for LANs come from the IEEE.

The most common IEEE 802.11 WLANs cover, depending on antennas, ranges from hundreds of meters to a few
kilometers. For larger areas, communications satellites of various types, cellular radio, and wireless local loop (IEEE 802.16) all have advantages and disadvantages. Depending on the type of mobility needed, the relevant standards may come from the IETF or the ITU.

Network topology

The network topology defines the way in which computers, printers, and other devices are connected, physically and logically. A network topology describes the layout of the wire and devices as well as the paths used by data transmissions.

Network topology has two types:
 • Physical
 • Logical

Commonly used topologies include:
 • Bus
 • Star
 • Tree (hierarchical)
 • Linear
 • Ring
 • Mesh
   o partially connected
   o fully connected (sometimes known as fully redundant)

The network topologies mentioned above are only a general representation of the kinds of topologies used in computer networks, and are considered basic topologies.
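How much cabling each topology needs follows directly from its shape. A small sketch (illustrative counting formulas only, assuming one link per connection) for a few of the basic topologies above:

```python
def star_links(n):
    """Star: every node links to one central hub, so n-1 links."""
    return n - 1

def ring_links(n):
    """Ring: each node connects to its two neighbours, so n links."""
    return n

def full_mesh_links(n):
    """Fully connected (fully redundant) mesh: every pair of nodes is linked."""
    return n * (n - 1) // 2

# Link counts for networks of 5 and 10 nodes.
for n in (5, 10):
    print(n, star_links(n), ring_links(n), full_mesh_links(n))
```

The quadratic growth of the full mesh is why fully redundant topologies are rarely used beyond small core networks.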
Q.2: Describe client-server computing.

ANS: To truly understand how much of the Internet operates, including the Web, it is important to understand the concept of client/server computing. The client/server model is a form of distributed computing where one program (the client) communicates with another program (the server) for the purpose of exchanging information.

The client's responsibility is usually to:
 1. Handle the user interface.
 2. Translate the user's request into the desired protocol.
 3. Send the request to the server.
 4. Wait for the server's response.
 5. Translate the response into "human-readable" results.
 6. Present the results to the user.

The server's functions include:
 1. Listen for a client's query.
 2. Process that query.
 3. Return the results back to the client.

A typical client/server interaction goes like this:
 1. The user runs client software to create a query.
 2. The client connects to the server.
 3. The client sends the query to the server.
 4. The server analyzes the query.
 5. The server computes the results of the query.
 6. The server sends the results to the client.
 7. The client presents the results to the user.
 8. Repeat as necessary.
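The interaction above can be sketched as a toy TCP server and client on the local machine, using Python's standard socket module. The uppercasing here is a stand-in for real query processing:

```python
import socket
import threading

def run_server(sock):
    """Server side: listen for a client's query, process it, return results."""
    conn, _ = sock.accept()
    with conn:
        query = conn.recv(1024)           # listen for the query
        result = query.decode().upper()   # process it (toy "computation")
        conn.sendall(result.encode())     # return the results

def ask_server(port, text):
    """Client side: connect, send the query, wait for and return the response."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(text.encode())
        return c.recv(1024).decode()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))             # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=run_server, args=(server,), daemon=True).start()
print(ask_server(port, "hello"))          # HELLO
```

Note how the client handles presentation (decoding the reply) while the server does the work, mirroring the division of responsibilities listed above.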
A typical client/server interaction

This client/server interaction is a lot like going to a French restaurant. At the restaurant, you (the user) are presented with a menu of choices by the waiter (the client). After making your selections, the waiter takes note of your choices, translates them into French, and presents them to the French chef (the server) in the kitchen. After the chef prepares your meal, the waiter returns with your dinner (the results). Hopefully, the waiter returns with the items you selected, but not always; sometimes things get "lost in the translation."

Flexible user interface development is the most obvious advantage of client/server computing. It is possible to create an interface that is independent of the server hosting the data. Therefore, the user interface of a client/server application can be written on a Macintosh while the server is written on a mainframe. Clients could also be written for DOS- or UNIX-based computers. This allows information to be stored in a central server and disseminated to different types of remote computers. Since the user interface is the responsibility of the client, the server has more computing resources to spend on analyzing queries and disseminating information. This is another major advantage of client/server computing; it tends to use the strengths of divergent
computing platforms to create more powerful applications. Although its computing and storage capabilities are dwarfed by those of a mainframe, there is no reason why a Macintosh could not be used as a server for less demanding applications.

In short, client/server computing provides a mechanism for disparate computers to cooperate on a single computing task.

Description

Client-server describes the relationship between two computer programs in which one program, the client, makes a service request to another, the server. Standard networked functions such as email exchange, web access and database access are based on the client-server model. For example, a web browser is a client program at the user's computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client program in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program, which sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank's database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you.

The client-server model has become one of the central ideas of network computing. Most business applications being written today use the client-server model, as do the Internet's main application protocols, such as HTTP, SMTP, Telnet and DNS. In marketing, the term has been used to distinguish distributed computing by smaller dispersed
computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing.

Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.

The most basic type of client-server architecture employs only two types of hosts: clients and servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share files and resources. The two-tier architecture means that the client acts as one tier and the application in combination with the server acts as another tier.

These days, clients are most often web browsers, although that has not always been the case. Servers typically include web servers, database servers and mail servers. Online gaming is usually client-server too. In the specific case of MMORPGs, the servers are typically operated by the company selling the game; for other games one of the players will act as the host by setting his game in server mode.

The interaction between client and server is often described using sequence diagrams, which are standardized in the Unified Modeling Language.

When both the client and server software are running on the same computer, this is called a single seat setup.
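The bank-balance example above, where each middle tier is a server to the layer above and a client to the layer below, can be sketched as a chain of ordinary function calls. The account data here is, of course, made up:

```python
# Each tier acts as a server to the tier above and a client to the tier below.
ACCOUNTS = {"alice": 1250.00}  # toy database table (hypothetical data)

def database_server(account):
    """Innermost tier: the bank's database server returns the raw balance."""
    return ACCOUNTS[account]

def web_server(request):
    """Middle tier: the bank's web server, itself a database *client*,
    forwards the request inward and formats the reply outward."""
    balance = database_server(request)
    return f"Balance: {balance:.2f}"

def browser(user):
    """Outer tier: the web browser client on the user's computer."""
    return web_server(user)

print(browser("alice"))  # Balance: 1250.00
```

In a real deployment each call would cross the network, but the pattern of request forwarding and response relaying is the same.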
Specific types of clients include web browsers, email clients, and online chat clients. Specific types of servers include web servers, FTP servers, application servers, database servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

Comparison to Peer-to-Peer architecture

Another type of network architecture is known as peer-to-peer, because each host or instance of the program can simultaneously act as both a client and a server, and because each has equivalent responsibilities and status. Peer-to-peer architectures are often abbreviated using the acronym P2P.

Both client-server and P2P architectures are in wide usage today, and both work on Windows and Linux. You can find more details in Comparison of Centralized (Client-Server) and Decentralized (Peer-to-Peer) Networking.

Comparison to Client-Queue-Client architecture

While classic client-server architecture requires one of the communication endpoints to act as a server, which is much harder to implement, Client-Queue-Client allows all endpoints to be simple clients, while the server consists of some external software which acts as a passive queue (one software instance passes its query to another instance via the queue, e.g. a database; this other instance pulls it from the database, makes a response, and passes it back to the database, etc.). This architecture allows greatly simplified software implementation. Peer-to-Peer architecture was originally based on the Client-Queue-Client concept.
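The Client-Queue-Client idea can be sketched with an in-process queue standing in for the external passive-queue software; a real system would use a database or message broker instead:

```python
import queue
import threading

# Two in-process queues stand in for the external passive-queue software.
requests = queue.Queue()
responses = queue.Queue()

def worker_client():
    """Second endpoint: also just a client, pulling queries from the queue."""
    while True:
        query = requests.get()
        responses.put(query[::-1])  # toy "processing": reverse the text

threading.Thread(target=worker_client, daemon=True).start()

def ask(text):
    """First endpoint: enqueue a query, then wait for the queued response."""
    requests.put(text)
    return responses.get(timeout=5)

print(ask("hello"))  # olleh
```

Neither endpoint listens on a socket or accepts connections, which is what makes each of them a "simple client" in this architecture.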
Advantages

 • In most cases, a client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage of this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware of and unaffected by that change. This independence from change is also referred to as encapsulation.
 • All the data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
 • Since data storage is centralized, updates to that data are far easier to administer than would be possible under a P2P paradigm. Under a P2P architecture, data updates may need to be distributed and applied to each "peer" in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.
 • Many mature client-server technologies are already available which were designed to ensure security, friendliness of the user interface, and ease of use.
 • It functions with multiple clients of different capabilities.
 • It reduces the total cost of ownership.
 • It increases productivity, both end-user productivity and developer productivity.

Disadvantages
 • Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become severely overloaded. Contrast that with a P2P network, whose bandwidth actually increases as more nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.
 • The client-server paradigm lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients' requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.

Q.3: What is the Internet? Is the search engine very useful to the Internet?

ANS: The Internet is a global network of interconnected computers, enabling users to share information along multiple channels. Typically, a computer that connects to the Internet can access information from a vast array of available servers and other computers by moving information from them to the computer's local memory. The same connection allows that computer to send information to servers on the network; that information is in turn accessed and potentially modified by a variety of other interconnected computers. A majority of widely accessible information on the Internet consists of inter-linked hypertext documents and other resources of the World Wide Web (WWW). Computer
users typically manage sent and received information with web browsers; other software for interfacing with computer networks includes specialized programs for electronic mail, online chat, file transfer and file sharing.

The movement of information on the Internet is achieved via a system of interconnected computer networks that share data by packet switching using the standardized Internet Protocol Suite (TCP/IP). It is a "network of networks" that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies.

The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.

The term internet is written both with and without a capital letter, and is used both with and without the definite article.
Growth

[Figure: Graph of internet users per 100 inhabitants between 1997 and 2007, by the International Telecommunication Union]

Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organisation for particle research, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989.

An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace,
and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as to the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. Using various statistics, AMD estimated the population of internet users to be 1.5 billion as of January 2009.

Today's Internet

[Figure: The My Opera Community server rack. From the top: user file storage (content of files.myopera.com), "bigma" (the master MySQL database server), and two IBM blade centers containing multi-purpose machines (Apache front ends, Apache back ends, slave MySQL database servers, load balancers, file servers, cache servers and sync masters).]

Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies. By December 31, 2008, 1.574 billion people were using the Internet, according to Internet World Statistics.

Internet protocols

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet.

The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standard-setting work groups, open to any individual, on the various aspects of Internet architecture. The resulting discussions and final standards are published in Requests for Comments (RFCs), freely available on the IETF web site.

The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These standards describe a system known as the Internet Protocol Suite. This is a model architecture that
divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space of the software application, e.g. a web browser; just below it is the Transport Layer, which connects applications on different hosts via the network (e.g. the client-server model). The underlying network consists of two layers: the Internet Layer, which enables computers to connect to one another via intermediate (transit) networks and thus is the layer that establishes internetworking and the Internet; and, at the bottom, a software layer that provides connectivity between hosts on the same local link (therefore called the Link Layer), e.g. a local area network (LAN) or a dial-up connection. This model is also known as the TCP/IP model of networking. While other models have been developed, such as the Open Systems Interconnection (OSI) model, they are not compatible in the details of description or implementation.

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems for computers on the Internet and facilitates the internetworking of networks. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (10^9) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion. A new protocol version, IPv6, was developed which provides vastly larger addressing capabilities and more efficient routing of data traffic. IPv6 is currently in the commercial deployment phase around the world.

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not accessible with IPv4 software. This means software upgrades are necessary for
every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems have already been converted to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development.

Internet structure

There have been many analyses of the Internet and its structure. For example, it has been determined that both the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks.

Similar to the way commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as the following:
 • GEANT
 • GLORIAD
 • The Internet2 Network (formerly known as the Abilene Network)
 • JANET (the UK's national research and education network)

These in turn are built around relatively smaller networks. See also the list of academic computer network organizations. Computer network diagrams often represent the Internet using a cloud symbol from which network communications pass in and out.

E-mail

The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Even today it can be
important to distinguish between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many other networks and machines outside both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporate or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them. Today you can send pictures and attach files to e-mail, and most e-mail servers also feature the ability to send e-mail to multiple e-mail addresses.

The World Wide Web

[Figure: Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks]

Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but, as discussed above, the two terms are not synonymous.
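The Web delivers its resources over HTTP, which is a plain-text protocol; the raw GET request a user agent (browser) sends can be assembled by hand. The agent name below is made up for illustration:

```python
def build_get_request(host, path="/"):
    """Assemble the raw HTTP/1.1 GET request a user agent sends.

    Each header line ends in CRLF, and a blank line ends the request.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "User-Agent: sketch-agent/0.1\r\n"   # hypothetical agent name
        "Connection: close\r\n"
        "\r\n"
    )

print(build_get_request("example.com", "/index.html"))
```

A browser writes exactly this kind of text onto a TCP connection to port 80, then reads back the status line, headers, and document body.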
The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals of, and cached copies of, these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet. Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.

Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer, Firefox and Apple Safari, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations.

Through keyword-driven Internet research using search engines like Yahoo! and Google, millions of people worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.

Using the Web, it is also easier than ever before for individuals and organizations to publish ideas and information to an extremely large audience. Anyone can find ways to publish a web page or a blog, or build a website, for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.

Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.

Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce, or the sale of products and services directly via the Web, continues to grow.

In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organisation, or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built
into the process of taking newly entered content and makingit available to the target visitors.Remote accessThe Internet allows computer users to connect to othercomputers and information stores easily, wherever they maybe across the world. They may do this with or without theuse of security, authentication and encryption technologies,depending on the requirements.This is encouraging new ways of working from home,collaboration and information sharing in many industries. Anaccountant sitting at home can audit the books of a companybased in another country, on a server situated in a thirdcountry that is remotely maintained by IT specialists in afourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based oninformation e-mailed to them from offices all over the world.Some of these things were possible before the widespreaduse of the Internet, but the cost of private leased lines wouldhave made many of them infeasible in practice.An office worker away from his desk, perhaps on the otherside of the world on a business trip or a holiday, can open aremote desktop session into his normal office PC using asecure Virtual Private Network (VPN) connection via theInternet. This gives the worker complete access to all of hisor her normal files and data, including e-mail and otherapplications, while away from the office.This concept is also referred to by some network securitypeople as the Virtual Private Nightmare, because it extendsthe secure perimeter of a corporate network into itsemployees homes.
Collaboration

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a group cheaply communicate and share ideas, but the wide reach of the Internet allows such groups to easily form in the first place. An example of this is the free software movement, which has produced Linux, Mozilla Firefox, OpenOffice.org, etc.

Internet "chat", whether in the form of IRC chat rooms or channels, or via instant messaging systems, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via e-mail. Extensions to these systems may allow files to be exchanged, "whiteboard" drawings to be shared, or voice and video contact between team members.

Version control systems allow collaborating teams to work on shared sets of documents without either accidentally overwriting each other's work or having members wait until they get "sent" documents to be able to make their contributions. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing.

File sharing

A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be
put into a "shared location" or onto a file server for instantuse by colleagues. The load of bulk downloads to manyusers can be eased by the use of "mirror" servers or peer-to-peer networks.In any of these cases, access to the file may be controlled byuser authentication, the transit of the file over the Internetmay be obscured by encryption, and money may changehands for access to the file. The price can be paid by theremote charging of funds from, for example, a credit cardwhose details are also passed—hopefully fully encrypted—across the Internet. The origin and authenticity of the filereceived may be checked by digital signatures or by MD5 orother message digests.These simple features of the Internet, over a worldwidebasis, are changing the production, sale, and distribution ofanything that can be reduced to a computer file fortransmission. This includes all manner of print publications,software products, news, music, film, video, photography,graphics and the other arts. This in turn has caused seismicshifts in each of the existing industries that previouslycontrolled the production and distribution of these products.Streaming mediaMany existing radio and television broadcasters provideInternet "feeds" of their live audio and video streams (forexample, the BBC). They may also allow time-shift viewingor listening such as Preview, Classic Clips and Listen Againfeatures. These providers have been joined by a range ofpure Internet "broadcasters" who never had on-air licenses.This means that an Internet-connected device, such as acomputer or something more specific, can be used to accesson-line media in much the same way as was previouslypossible only with a television or radio receiver. The range of
material is much wider, from pornography to highlyspecialized, technical webcasts. Podcasting is a variation onthis theme, where—usually audio—material is downloadedand played back on a computer or shifted to a portablemedia player to be listened to on the move. Thesetechniques using simple equipment allow anybody, with littlecensorship or licensing control, to broadcast audio-visualmaterial on a worldwide basis.Webcams can be seen as an even lower-budget extensionof this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small orupdates slowly. Internet users can watch animals around anAfrican waterhole, ships in the Panama Canal, traffic at alocal roundabout or monitor their own premises, live and inreal time. Video chat rooms and video conferencing are alsopopular with many uses being found for personal webcams,with and without two-way sound.YouTube was founded on 15 February 2005 and is now theleading website for free streaming video with a vast numberof users. It uses a flash-based web player to stream andshow the video files. Users are able to watch videos withoutsigning up; however, if they do sign up, they are able toupload an unlimited amount of videos and build their ownpersonal profile. YouTube claims that its users watchhundreds of millions, and upload hundreds of thousands, ofvideos daily.Internet Telephony (VoIP)VoIP stands for Voice-over-Internet Protocol, referring to theprotocol that underlies all Internet communication. The ideabegan in the early 1990s with walkie-talkie-like voiceapplications for personal computers. In recent years manyVoIP systems have become as easy to use and as
convenient as a normal telephone. The benefit is that, as theInternet carries the voice traffic, VoIP can be free or costmuch less than a traditional telephone call, especially overlong distances and especially for those with always-onInternet connections such as cable or ADSL.VoIP is maturing into a competitive alternative to traditionaltelephone service. Interoperability between differentproviders has improved and the ability to call or receive acall from a traditional telephone is available. Simple,inexpensive VoIP network adapters are available thateliminate the need for a personal computer.Voice quality can still vary from call to call but is often equalto and can even exceed that of traditional calls.Remaining problems for VoIP include emergency telephonenumber dialling and reliability. Currently, a few VoIPproviders provide an emergency service, but it is notuniversally available. Traditional phones are line-poweredand operate during a power failure; VoIP does not do sowithout a backup power source for the phone equipment andthe Internet access devices.VoIP has also become increasingly popular for gamingapplications, as a form of communication between players.Popular VoIP clients for gaming include Ventrilo andTeamspeak, and others. PlayStation 3 and Xbox 360 alsooffer VoIP chat features.Internet accessCommon methods of home access include dial-up, landlinebroadband (over coaxial cable, fiber optic or copper wires),Wi-Fi, satellite and 3G technology cell phones.
Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based. These terminals are widely used for purposes such as ticket booking, bank deposits and online payments. Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.

Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services.

High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used.
Market

The Internet has also become a large market for companies; some of the biggest companies today have grown by taking advantage of the efficient nature of low-cost advertising and commerce through the Internet, also known as e-commerce. It is the fastest way to spread information to a vast number of people simultaneously. The Internet has also subsequently revolutionized shopping. For example, a person can order a CD online and receive it in the mail within a couple of days, or download it directly in some cases. The Internet has also greatly facilitated personalized marketing, which allows a company to market a product to a specific person or a specific group of people more effectively than any other advertising medium.

Examples of personalized marketing include online communities such as MySpace, Friendster, Orkut, Facebook and others, which thousands of Internet users join to advertise themselves and make friends online. Many of these users are young teens and adolescents ranging from 13 to 25 years old. In turn, when they advertise themselves they advertise interests and hobbies, which online marketing companies can use as information as to what those users will purchase online, and advertise their own companies' products to those users.

Q.4. Explain in brief "Transmission Media".

ANS: - Transmission media comprises the different types of cables and wireless techniques that are used to connect network devices in a Local Area Network (LAN), Wireless Local Area Network (WLAN) or Wide Area Network (WAN). Choice of the correct type of transmission media is very important for the implementation of any network. It can make
a major impact on the performance, speed, cost and reliability of the network.

Copper Wires

Conventional computer networks use copper wire because it is inexpensive, easy to install, and has low resistance to electrical current. Unfortunately, copper wire is prone to interference in the form of electromagnetic energy emitted by neighbouring wires, especially those running in parallel. To minimise interference, twisted pair wiring, as used in telephone systems, can be used as illustrated in Figure 1.

Figure 1: Twisted pair wiring

A plastic coating on each wire prevents the copper in one wire from touching the copper in another. The twist helps reduce interference by preventing electrical signals on the wire from radiating energy (causing interference) and by preventing signals on other wires from interfering with the pair.

A second type of copper wire is coaxial cable, similar to that used for TV aerials. Coaxial cable provides better protection from interference by providing a metal shield, as illustrated in Figure 2.

Figure 2: Cross-section of a coaxial cable
The metal shield forms a flexible cylinder around the inner wire, providing a barrier to electromagnetic radiation, both incoming and outgoing. The cable can run parallel to other cables and can be bent round corners.

Optical Fibres

Optical fibres use light to transmit data. A thin glass fibre is encased in a plastic jacket which allows the fibre to bend without breaking. A transmitter at one end uses a light emitting diode (LED) or laser to send pulses of light down the fibre, which are detected at the other end by a light-sensitive transistor. Figure 3 illustrates a single fibre (a) and a sheath of three fibres (b). Other configurations are possible.

Figure 3: Single fibre and a sheath of three fibres

Optical fibres have four main advantages over copper wires.
 • They use light, which neither causes electrical interference nor is susceptible to electrical interference
 • They are manufactured to reflect the light inwards, so a fibre can carry a pulse of light further than a copper wire can carry a signal
 • Light can encode more information than electrical signals, so they carry more information than a wire
 • Light can carry a signal over a single fibre, unlike electricity, which requires a pair of wires

Figure 4 illustrates the hybrid nature of neighbourhood wiring. Optical fibres carry cable TV to each street, with the houses fed by coaxial cable (a). Optical fibres also carry the Plain Old Telephone Service (POTS) to the nearest exchange, with the local loop to the house consisting of twisted pairs (b).

Figure 4: Cable television and POTS

Radio
A network that uses electromagnetic radio waves operatesat radio frequency and its transmissions are called RFtransmissions. Each host on the network attaches to anantenna, which can both send and receive RF.SatellitesRadio transmissions do not bend round the surface of theearth, but RF technology combined with satellites canprovide long-distance connections. Figure 5 illustrates asatellite link across an ocean.Figure 5: Satellite and ground stationsThe satellite contains a transponder consisting of a radioreceiver and transmitter. A ground station on one side of theocean sends a signal to the satellite, which amplifies it andtransmits the amplified signal at a different angle than itarrived at to another ground station on the other side of theocean.A single satellite contains multiple transponders (usually sixto twelve) each using a different radio frequency, making itpossible for multiple communications to proceedsimultaneously. These satellites are often geostationary, i.e.
they appear stationary in the sky. To achieve this, their orbit must be 22,236 miles (35,785 kilometres) high.

Microwave

Electromagnetic radiation beyond the frequency range of radio and television can be used to transport information. Microwave transmission is usually point-to-point, using directional antennae with a clear path between transmitter and receiver.

Infrared

Infrared transmission is usually limited to a small area, e.g. one room, with the transmitter pointed towards the receiver. The hardware is inexpensive and does not require an antenna.

Q.5. What is functionality of modem? Describe in detail.

ANS:- Modem (from modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from driven diodes to radio. The most familiar example is a voiceband modem that turns the digital 1s and 0s of a personal computer into sounds that can be transmitted over the telephone lines of Plain Old Telephone Systems (POTS), and once received on the other
side, converts those 1s and 0s back into a form used by a USB, Ethernet, serial, or network connection. Modems are generally classified by the amount of data they can send in a given time, normally measured in bits per second ("bps"). They can also be classified by baud, the number of times the modem changes its signal state per second.

Baud is not the modem's speed in bit/s but in symbols per second, and the baud rate varies depending on the modulation technique used. The original Bell 103 modems used a modulation technique that changed state 300 times per second and transmitted 1 bit per baud, so a 300 bit/s modem was also a 300-baud modem; the two terms have often been confused as a result, although the 300 bit/s modem is the only one whose bit rate matches its baud rate. A 2400 bit/s modem, by contrast, changes state only 600 times per second: it transmits 4 bits with each change of state, so 600 baud carries 2400 bit/s.

Faster modems are used by Internet users every day, notably cable modems and ADSL modems. In telecommunications, "wide band radio modems" transmit repeating frames of data at very high data rates over microwave radio links, while narrow band radio modems are used for low data rates, up to 19.2 kbit/s, mainly in private radio networks. Some microwave modems transmit more than a hundred million bits per second. Optical modems transmit data over optical fibers; most intercontinental data links now use optical modems transmitting over undersea optical fibers, routinely at data rates in excess of a billion (1×10⁹) bits per second. One kilobit per second (kbit/s, kb/s or kbps) as used in this article means 1000 bits per second, not 1024 bits per second. For example, a 56k modem can transfer data at up to 56,000 bits (7 kB) per second over the phone line.
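The relationship between baud and bit rate described above is simple arithmetic: the bit rate is the symbol rate multiplied by the bits carried per symbol. A minimal sketch, using the figures quoted above:

```python
# Bit rate = symbol rate (baud) x bits carried per symbol.
def bit_rate(baud: int, bits_per_symbol: int) -> int:
    """Return the line rate in bit/s for a given symbol rate."""
    return baud * bits_per_symbol

# Bell 103: 300 baud, 1 bit per symbol -> 300 bit/s.
print(bit_rate(300, 1))

# A 2400 bit/s modem: 600 baud, 4 bits per symbol.
print(bit_rate(600, 4))

# A 56k modem's 56,000 bit/s is 7,000 bytes/s, i.e. 7 kB/s
# (decimal kilobits, as the article uses, not 1024-based).
print(56_000 // 8)
```

The same calculation explains why "baud" and "bit/s" diverge for every modem faster than 300 bit/s: only the symbol rate changes on the wire; the extra bits come from packing more bits into each symbol.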
Narrowband/phone-line dialup modems28.8 kbit/s serial port modem from MotorolaA standard modem of today contains two functional parts: ananalog section for generating the signals and operating thephone, and a digital section for setup and control. Thisfunctionality is actually incorporated into a single chip, butthe division remains in theory. In operation the modem canbe in one of two "modes", data mode in which data is sent toand from the computer over the phone lines, and commandmode in which the modem listens to the data from thecomputer for commands, and carries them out. A typicalsession consists of powering up the modem (often inside thecomputer itself) which automatically assumes commandmode, then sending it the command for dialing a number.After the connection is established to the remote modem, themodem automatically goes into data mode, and the user cansend and receive data. When the user is finished, theescape sequence, "+++" followed by a pause of about asecond, is sent to the modem to return it to command mode,and the command ATH to hang up the phone is sent.
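The command-mode/data-mode behaviour just described can be sketched as a toy state machine. This is an illustrative model only, not a real modem driver: the command handling is reduced to the dial and hang-up commands mentioned in the text, and the timing guard around the "+++" escape sequence is omitted.

```python
class ToyModem:
    """Minimal model of the two-mode behaviour of a Hayes-style modem.

    Sketch for illustration only: a real modem parses a much larger
    command set and requires a pause before and after "+++".
    """

    def __init__(self):
        self.mode = "command"   # modems power up in command mode
        self.connected = False

    def send(self, data: str) -> str:
        if self.mode == "command":
            return self._handle_command(data.strip())
        # In data mode, the "+++" escape returns the modem to command mode.
        if data == "+++":
            self.mode = "command"
            return "OK"
        return f"sent over line: {data}"   # relayed to the remote modem

    def _handle_command(self, cmd: str) -> str:
        if cmd.upper().startswith("ATD"):   # dial, e.g. "ATD5551234"
            self.connected = True
            self.mode = "data"              # connection established -> data mode
            return "CONNECT"
        if cmd.upper() == "ATH":            # hang up the phone
            self.connected = False
            return "OK"
        return "ERROR"

modem = ToyModem()
print(modem.send("ATD5551234"))  # dial a number from command mode
print(modem.send("hello"))       # now in data mode: text goes down the line
print(modem.send("+++"))         # escape back to command mode
print(modem.send("ATH"))         # hang up
```

The sketch mirrors the session described above: power up in command mode, dial, exchange data, escape with "+++", then hang up with ATH.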
The commands themselves are typically from the Hayes command set, although that term is somewhat misleading. The original Hayes commands were useful for 300 bit/s operation only, and were then extended for their 1200 bit/s modems. Faster speeds required new commands, leading to a proliferation of command sets in the early 1990s. Things became considerably more standardized in the second half of the 1990s, when most modems were built from one of a very small number of "chip sets". We call this the Hayes command set even today, although it has three or four times the number of commands as the actual standard.

Increasing speeds (V.21, V.22, V.22bis)

2400 bit/s modem for a laptop.

The 300 bit/s modems used frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into sounds which can be easily sent on the phone lines. In the Bell 103 system the originating modem sends 0s by playing a 1070 Hz tone, and 1s at 1270 Hz, with the answering modem putting its 0s on 2025 Hz and 1s on 2225 Hz. These frequencies were chosen carefully: they are in the range that suffers minimum distortion on the phone system, and they are not harmonics of each other.

In the 1200 bit/s and faster systems, phase-shift keying was used. In this system the two tones for any one side of the connection are sent at similar frequencies to the 300 bit/s systems, but slightly out of phase. By comparing the phase of the two signals, 1s and 0s can be recovered: for instance, if the signals are 90 degrees out of phase, this represents the two bits "1,0"; at 180 degrees it is "1,1". In this way each cycle of the signal represents two bits instead of one. 1200 bit/s modems were, in effect, 600 symbols per second modems (600 baud modems) with 2 bits per symbol.

Voiceband modems generally remained at 300 and 1200 bit/s (V.21 and V.22) into the mid 1980s. A V.22bis 2400-bit/s system similar in concept to the 1200-bit/s Bell 212 signalling was introduced in the U.S., and a slightly different one in Europe. By the late 1980s, most modems could support all of these standards, and 2400-bit/s operation was becoming common.

For more information on baud rates versus bit rates, see the companion article List of device bandwidths.

Using digital lines and PCM (V.90/92)

In the late 1990s Rockwell and U.S. Robotics introduced new technology based upon the digital transmission used in modern telephony networks. The standard digital transmission in modern networks is 64 kbit/s, but some networks use a part of the bandwidth for remote office signaling (e.g. to hang up the phone), limiting the effective rate to 56 kbit/s DS0. This new technology was adopted into ITU standard V.90 and is common in modern computers. The 56 kbit/s rate is only possible from the central office to the user site (downlink), and in the United States, government regulation of the maximum permitted power output limits the downlink to 53.3 kbit/s. The uplink (from the user to the central office) still uses V.34 technology at 33.6 kbit/s.

Later, in V.92, the digital PCM technique was applied to increase the upload speed to a maximum of 48 kbit/s, but at the expense of download rates. For example, a 48 kbit/s
upstream rate would reduce the downstream to as low as 40 kbit/s, due to echo on the telephone line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a 33.6 kbit/s analog connection, in order to maintain a high digital downstream of 50 kbit/s or higher. (See the November and October 2000 updates at http://www.modemsite.com/56k/v92s.asp .) V.92 also adds two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods of time while they answer a call. The second is the ability to "quick connect" to one's ISP. This is achieved by remembering the analog and digital characteristics of the telephone line, and using this saved information to reconnect quickly.

List of dialup speeds

Note that the values given are maximum values; actual values may be lower under certain conditions (for example, noisy phone lines). For a complete list see the companion article List of device bandwidths.

Connection                                            Bitrate
Modem 110 baud                                        0.1 kbit/s
Modem 300 (300 baud) (Bell 103 or V.21)               0.3 kbit/s
Modem 1200 (600 baud) (Bell 212A or V.22)             1.2 kbit/s
Modem 2400 (600 baud) (V.22bis)                       2.4 kbit/s
Modem 2400 (1200 baud) (V.26bis)                      2.4 kbit/s
Modem 4800 (1600 baud) (V.27ter)                      4.8 kbit/s
Modem 9600 (2400 baud) (V.32)                         9.6 kbit/s
Modem 14.4 (2400 baud) (V.32bis)                      14.4 kbit/s
Modem 28.8 (3200 baud) (V.34)                         28.8 kbit/s
Modem 33.6 (3429 baud) (V.34)                         33.6 kbit/s
Modem 56k (8000/3429 baud) (V.90)                     56.0/33.6 kbit/s
Modem 56k (8000/8000 baud) (V.92)                     56.0/48.0 kbit/s
Bonding modem (two 56k modems) (V.92)                 112.0/96.0 kbit/s
Hardware compression (variable) (V.90/V.42bis)        56.0-220.0 kbit/s
Hardware compression (variable) (V.92/V.44)           56.0-320.0 kbit/s
Server-side web compression (variable) (Netscape ISP) 100.0-1000.0 kbit/s

Radio modems

Direct broadcast satellite, WiFi, and mobile phones all use modems to communicate, as do most other wireless services today. Modern telecommunications and data networks also make extensive use of radio modems where long-distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fibre is not economical.

Even where a cable is installed, it is often possible to get better performance or make other parts of the system simpler by using radio frequencies and modulation techniques through a cable. Coaxial cable has a very large bandwidth; however, signal attenuation becomes a major problem at high data rates if a digital signal is used. By using a modem, a much larger amount of digital data can be transmitted through a single piece of wire. Digital cable
television and cable Internet services use radio frequencymodems to provide the increasing bandwidth needs ofmodern households. Using a modem also allows forfrequency-division multiple access to be used, making full-duplex digital communication with many users possible usinga single wire.Wireless modems come in a variety of types, bandwidths,and speeds. Wireless modems are often referred to astransparent or smart. They transmit information that ismodulated onto a carrier frequency to allow manysimultaneous wireless communication links to worksimultaneously on different frequencies.Transparent modems operate in a manner similar to theirphone line modem cousins. Typically, they were half duplex,meaning that they could not send and receive data at thesame time. Typically transparent modems are polled in around robin manner to collect small amounts of data fromscattered locations that do not have easy access to wiredinfrastructure. Transparent modems are most commonlyused by utility companies for data collection.Smart modems come with a media access controller insidewhich prevents random data from colliding and resends datathat is not correctly received. Smart modems typicallyrequire more bandwidth than transparent modems, andtypically achieve higher data rates. The IEEE 802.11standard defines a short range modulation scheme that isused on a large scale throughout the world.WiFi and WiMaxWireless data modems are used in the WiFi and WiMaxstandards, operating at microwave frequencies.
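The round-robin polling used with transparent modems, described above, can be sketched as follows. The master polls each remote station in a fixed order, so only one station transmits at a time and transmissions never collide. The station names and meter readings here are invented for illustration.

```python
# Sketch of round-robin polling, as used with transparent radio modems:
# a master polls each remote station in turn; only the polled station replies.
def poll_round_robin(stations: dict, rounds: int = 1) -> list:
    """Poll each station in a fixed order, collecting one reading per poll."""
    collected = []
    for _ in range(rounds):
        for name, read_fn in stations.items():
            # Half-duplex link: the master transmits the poll, then listens
            # for this one station's reply before moving to the next.
            collected.append((name, read_fn()))
    return collected

# Each "station" is a callable returning its current meter reading
# (hypothetical values, standing in for remote data-collection sites).
stations = {
    "meter-A": lambda: 120,
    "meter-B": lambda: 87,
    "meter-C": lambda: 240,
}

print(poll_round_robin(stations))
```

Because the schedule is fixed, no media access controller is needed, which is why the text contrasts transparent modems with "smart" modems that arbitrate the channel themselves.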
WiFi (Wireless Fidelity) is principally used in laptops for Internet connections (wireless access points) and the wireless application protocol (WAP).

Broadband

DSL modem

ADSL modems, a more recent development, are not limited to the telephone's "voiceband" audio frequencies. Some ADSL modems use discrete multitone modulation (DMT), a form of orthogonal frequency-division modulation.

Cable modems use a range of frequencies originally intended to carry RF television channels. Multiple cable modems attached to a single cable can use the same frequency band, using a low-level media access protocol to allow them to work together within the same channel. Typically, up and down signals are kept separate using frequency-division multiple access.

New types of broadband modems are beginning to appear, such as two-way satellite and power-line modems.
Broadband modems should still be classed as modems, since they use complex waveforms to carry digital data. They are more advanced devices than traditional dial-up modems, as they are capable of modulating/demodulating hundreds of channels simultaneously. Many broadband modems include the functions of a router (with Ethernet and WiFi ports) and other features such as DHCP, NAT and firewall features.

When broadband technology was introduced, networking and routers were unfamiliar to consumers. However, many people knew what a modem was, as most Internet access was through dial-up. Due to this familiarity, companies started selling broadband modems using the familiar term "modem" rather than vaguer ones like "adapter" or "transceiver". Many broadband modems must be configured in bridge mode before they can be used with a router.

Deep-space telecommunications

Many modern modems have their origin in the deep-space telecommunications systems of the 1960s. Differences between deep-space telecom modems and landline modems:
 • digital modulation formats that have high Doppler immunity are typically used
 • waveform complexity tends to be low, typically binary phase-shift keying
 • error correction varies from mission to mission, but is typically much stronger than in most landline modems

Voice modem
Voice modems are regular modems that are capable ofrecording or playing audio over the telephone line. They areused for telephony applications. See Voice modemcommand set for more details on voice modems. This typeof modem can be used as FXO card for Private branchexchange systems (compare V.92).PopularityA CEA study in 2006 found that dial-up Internet access is ona notable decline in the U.S. In 2000, dial-up Internetconnections accounted for 74% of all U.S. residentialInternet connections. The US demographic pattern for (dial-up modem users per capita) has been more or less mirroredin Canada and Australia for the past 20 years.Dial-up modem use in the US had dropped to 60% by 2003,and in 2006 stood at 36%. Voiceband modems were oncethe most popular means of Internet access in the U.S., butwith the advent of new ways of accessing the Internet, thetraditional 56K modem is losing popularity.