Token bus network

Token bus is a network implementing the token ring protocol over a "virtual ring" on a coaxial cable. A token is passed around the network nodes and only the node possessing the token may transmit. If a node doesn't have anything to send, the token is passed on to the next node on the virtual ring. Each node must know the address of its neighbour in the ring, so a special protocol is needed to notify the other nodes of connections to, and disconnections from, the ring.

Token bus was standardized by IEEE standard 802.4. It is mainly used for industrial applications, and was adopted by GM (General Motors) for their Manufacturing Automation Protocol (MAP) standardization effort. It is an application of the concepts used in token ring networks; the main difference is that the endpoints of the bus do not meet to form a physical ring. The IEEE 802.4 Working Group has since been disbanded. In order to guarantee packet delay and transmission in the token bus protocol, a modified token bus was proposed for manufacturing automation systems and flexible manufacturing systems (FMS).
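The token-passing discipline described above can be sketched in a few lines. This is a minimal illustrative model, not an IEEE 802.4 implementation; the node addresses and frames are made up.

```python
from collections import deque

# Minimal sketch of token passing on a virtual ring: the token visits each
# node in ring order, and only the current holder may transmit.
class Node:
    def __init__(self, address):
        self.address = address
        self.outbox = deque()              # frames waiting to be sent

def circulate_token(ring, rounds=1):
    """Pass the token around the ring and log who transmits."""
    log = []
    for _ in range(rounds):
        for node in ring:                  # token arrives at this node
            if node.outbox:                # holder transmits one frame
                log.append((node.address, node.outbox.popleft()))
            # otherwise the token is passed straight to the next node
    return log

ring = [Node(a) for a in ("A", "B", "C")]
ring[0].outbox.append("hello")
ring[2].outbox.append("world")
print(circulate_token(ring))               # [('A', 'hello'), ('C', 'world')]
```

Note that a node with nothing to send simply lets the token move on, which is exactly the behaviour the paragraph above describes.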
1. Short for Carrier Sense Multiple Access / Collision Detection, a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously (called a collision). Standard Ethernet networks use CSMA/CD to physically monitor the traffic on the line at participating stations. If no transmission is taking place at the time, the particular station can transmit. If two stations attempt to transmit simultaneously, this causes a collision, which is detected by all participating stations. After a random time interval, the stations that collided attempt to transmit again. If another collision occurs, the time intervals from which the random waiting time is selected are increased step by step. This is known as exponential backoff.

CSMA/CD is a type of contention protocol. Networks using the CSMA/CD procedure are simple to implement but do not have deterministic transmission characteristics. The CSMA/CD method is internationally standardized in IEEE 802.3 and ISO 8802.3.

2. Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared transmission medium, such as an electrical bus or a band of the electromagnetic spectrum. "Carrier Sense" describes the fact that a transmitter uses feedback from a receiver that detects a carrier wave before trying to send. That is, it tries to detect the presence of an encoded signal from another station before attempting to transmit.
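The step-by-step widening of the retransmission window can be sketched as follows. The cap of 10 doublings follows the IEEE 802.3 convention for truncated binary exponential backoff; the function name is illustrative.

```python
import random

# Truncated binary exponential backoff as used by CSMA/CD: after the n-th
# consecutive collision, a station waits a random number of slot times
# drawn uniformly from 0 .. 2**min(n, 10) - 1.
def backoff_slots(collisions, cap=10):
    """Return a random wait (in slot times) after `collisions` collisions."""
    k = min(collisions, cap)
    return random.randrange(2 ** k)

# The range of possible waits doubles with each successive collision:
for n in range(1, 5):
    print(n, "collisions -> wait in 0 ..", 2 ** min(n, 10) - 1, "slots")
```

This is why the protocol is probabilistic rather than deterministic: two colliding stations are increasingly unlikely to pick the same slot again, but no upper bound on delivery time is guaranteed.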
If a carrier is sensed, the station waits for the transmission in progress to finish before initiating its own transmission. "Multiple Access" describes the fact that multiple stations send and receive on the medium. Transmissions by one node are generally received by all other stations using the medium.

Carrier sense multiple access with collision avoidance (CSMA/CA), in computer networking, is a wireless network multiple access method in which a carrier sensing scheme is used: a node wishing to transmit data has to first listen to the channel for a predetermined amount of time to determine whether or not another node is transmitting on the channel within the wireless range. If the channel is sensed "idle," then the node is permitted to begin the transmission process. If the channel is sensed as "busy," the node defers its transmission for a random period of time. Once the transmission process begins, it is still possible for the actual transmission of application data to not occur.
CSMA/CA is a modification of carrier sense multiple access. Collision avoidance is used to improve CSMA performance by not allowing wireless transmission of a node if another node is transmitting, thus reducing the probability of collision through the use of a random truncated binary exponential backoff time.

Optionally, but almost always implemented, an IEEE 802.11 RTS/CTS exchange can be required to better handle situations such as the hidden node problem in wireless networking. CSMA/CA is a layer 2 access method, not a protocol of the OSI model.
CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance) is a protocol for carrier transmission in 802.11 networks. Unlike CSMA/CD (Carrier Sense Multiple Access/Collision Detect), which deals with transmissions after a collision has occurred, CSMA/CA acts to prevent collisions before they happen.

In CSMA/CA, as soon as a node receives a packet that is to be sent, it checks to be sure the channel is clear (no other node is transmitting at the time). If the channel is clear, then the packet is sent. If the channel is not clear, the node waits for a randomly chosen period of time, and then checks again to see if the channel is clear. This period of time is called the backoff factor, and is counted down by a backoff counter. If the channel is clear when the backoff counter reaches zero, the node transmits the packet. If the channel is not clear when the backoff counter reaches zero, the backoff factor is set again, and the process is repeated.
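The check-and-count-down loop described above can be sketched as follows. This is a simplified model: `channel_busy` stands in for real carrier sensing, and the backoff range and attempt limit are illustrative assumptions, not 802.11 parameters.

```python
import random

# Sketch of the CSMA/CA backoff procedure: draw a backoff factor, count it
# down while the channel stays clear, transmit when it reaches zero, and
# re-draw whenever the channel turns out to be busy.
def send_with_backoff(channel_busy, max_backoff=7, attempts=100):
    """Return True once a transmission succeeds, False after giving up."""
    for _ in range(attempts):
        if channel_busy():
            continue                    # defer: channel is in use
        counter = random.randint(0, max_backoff)   # the backoff factor
        while counter > 0 and not channel_busy():
            counter -= 1                # backoff counter ticks down
        if counter == 0 and not channel_busy():
            return True                 # channel clear at zero: transmit
        # channel became busy: set the backoff factor again and repeat
    return False

print(send_with_backoff(lambda: False))   # idle channel: True
```

On an always-idle channel the first countdown succeeds; on a persistently busy channel the node keeps deferring, which mirrors the repeat behaviour in the paragraph above.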
FDDI

definition - FDDI (Fiber Distributed Data Interface) is a set of ANSI and ISO standards for data transmission on fiber optic lines in a local area network (LAN) that can extend in range up to 200 km (124 miles). The FDDI protocol is based on the token ring protocol. In addition to being large geographically, an FDDI local area network can support thousands of users. FDDI is frequently used on the backbone for a wide area network (WAN).

An FDDI network contains two token rings, one for possible backup in case the primary ring fails. The primary ring offers up to 100 Mbps capacity. If the secondary ring is not needed for backup, it can also carry data, extending capacity to 200 Mbps. The single ring can extend the maximum distance; a dual ring can extend 100 km (62 miles).

FDDI is a product of American National Standards Committee X3-T9 and conforms to the Open Systems Interconnection (OSI) model of functional layering. It can be used to interconnect LANs using other protocols. FDDI-II is a version of FDDI that adds the capability to add circuit-switched service to the network so that voice signals can also be handled. Work is underway to connect FDDI networks to the developing Synchronous Optical Network (SONET).

[Figure: dual-attach FDDI board]
Fiber Distributed Data Interface (FDDI) provides a 100 Mbit/s optical standard for data transmission in a local area network that can extend in range up to 200 kilometers (124 miles). Although the FDDI logical topology is a ring-based token network, it does not use the IEEE 802.5 token ring protocol as its basis; instead, its protocol is derived from the IEEE 802.4 token bus timed-token protocol. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. As a standard underlying medium it uses optical fiber, although it can use copper cable, in which case it may be referred to as CDDI (Copper Distributed Data Interface). FDDI offers both a Dual-Attached Station (DAS), counter-rotating token ring topology and a Single-Attached Station (SAS), token bus passing ring topology.

FDDI was considered an attractive campus backbone technology in the early to mid 1990s, since existing Ethernet networks only offered 10 Mbit/s transfer speeds and Token Ring networks only offered 4 Mbit/s or 16 Mbit/s speeds. Thus it was the preferred choice of that era for a high-speed backbone, but FDDI has since been effectively obsoleted by Fast Ethernet, which offered the same 100 Mbit/s speed at a much lower cost, and, since 1998, by Gigabit Ethernet, thanks to its greater speed, even lower cost, and ubiquity.

FDDI, as a product of American National Standards Institute X3T9.5 (now X3T12), conforms to the Open Systems Interconnection (OSI) model of functional layering and can interconnect LANs using other protocols. FDDI-II, a version of FDDI, adds the capability to add circuit-switched service to the network so that it can also handle voice and video signals. Work has started to connect FDDI networks to the developing Synchronous Optical Network (SONET).

An FDDI network contains two rings, one as a secondary backup in case the primary ring fails. The primary ring offers up to 100 Mbit/s capacity.
When a network has no requirement for the secondary ring to do backup, it can also carry data, extending capacity to 200 Mbit/s. The single ring can extend the maximum distance; a dual ring can extend 100 km (62 miles). FDDI has a larger maximum frame size (4,352 bytes) than standard 100 Mbit/s Ethernet, which only supports a maximum frame size of 1,500 bytes, allowing better throughput.
Designers normally construct FDDI rings in the form of a "dual ring of trees" (see network topology). A small number of devices (typically infrastructure devices such as routers and concentrators rather than host computers) connect to both rings - hence the term "dual-attached". Host computers then connect as single-attached devices to the routers or concentrators. The dual ring in its most degenerate form simply collapses into a single device. Typically, a computer room contains the whole dual ring, although some implementations have deployed FDDI as a metropolitan area network.
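The point of the counter-rotating secondary ring is fault recovery: when a span fails, the dual-attached stations adjacent to the fault "wrap" the primary ring onto the secondary ring, restoring a single closed loop. A rough sketch of the resulting traversal order, under a deliberately simplified model (station names and the break-index convention are illustrative):

```python
# Sketch of how an FDDI dual ring wraps around a failed span: traffic runs
# forward along the primary from the station after the break to the station
# before it, then returns along the counter-rotating secondary ring.
def wrapped_ring(stations, break_after):
    """Traversal order after the span following index `break_after` fails."""
    n = len(stations)
    start = (break_after + 1) % n
    forward = [stations[(start + k) % n] for k in range(n)]  # primary leg
    return forward + forward[-2::-1]                         # secondary leg back

ring = ["R1", "R2", "R3", "R4"]
print(wrapped_ring(ring, break_after=1))   # span R2 -> R3 broken
```

Every station remains reachable after a single span failure, which is why the dual-attached devices are usually the critical infrastructure nodes.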
client–server

1. definition - Client/server describes the relationship between two computer programs in which one program, the client, makes a service request from another program, the server, which fulfills the request. Although the client/server idea can be used by programs within a single computer, it is a more important idea in a network. In a network, the client/server model provides a convenient way to interconnect programs that are distributed efficiently across different locations. Computer transactions using the client/server model are very common. For example, to check your bank account from your computer, a client program in your computer forwards your request to a server program at the bank. That program may in turn forward the request to its own client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank data client, which in turn serves it back to the client in your personal computer, which displays the information for you.

The client/server model has become one of the central ideas of network computing. Most business applications being written today use the client/server model. So does the Internet's main protocol suite, TCP/IP. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client/server model and become part of network computing.

In the usual client/server model, one server, sometimes called a daemon, is activated and awaits client requests. Typically, multiple client programs share the services of a common server program. Both client programs and server programs are often part of a larger program or application.
Relative to the Internet, your Web browser is a client program that requests services (the sending of Web pages or files) from a Web server (which technically is called a Hypertext Transfer Protocol or HTTP server) in another computer somewhere on the Internet. Similarly, your computer with TCP/IP installed allows you to make client requests for files from File Transfer Protocol (FTP) servers in other computers on the Internet.

Other program relationship models include master/slave, with one program being in charge of all other programs, and peer-to-peer, with either of two programs able to initiate a transaction.
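The request/response pattern described above can be sketched with plain sockets. The "balance" payload and single-request server are illustrative assumptions, not a real banking protocol; the point is the division of roles: the server waits, the client initiates.

```python
import socket
import threading

def serve_once(server_sock):
    """The server (daemon) awaits one client request and fulfils it."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)                     # read the request
        conn.sendall(b"balance: 42 for " + request)   # send the reply

server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket()           # the client initiates the transaction
client.connect(("127.0.0.1", port))
client.sendall(b"alice")           # the service request
response = client.recv(1024).decode()
client.close()
server.close()
print(response)                    # balance: 42 for alice
```

In a real deployment the same server loop would accept many clients, which is the "multiple client programs share a common server program" point made above.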
2. The client–server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services.

Functions such as email exchange, web access and database access are built on the client–server model. Users accessing banking services from their computer use a web browser client to send a request to a web server at a bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve the account information. The balance is returned to the bank database client, which in turn serves it back to the web browser client, displaying the results to the user. The client–server model has become one of the central ideas of network computing. Many business applications being written today use the client–server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, and DNS.

The interaction between client and server is often described using sequence diagrams. Sequence diagrams are standardized in the Unified Modeling Language.

Specific types of clients include web browsers, email clients, and online chat clients. Specific types of servers include web servers, FTP servers, application servers, database servers, name servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

Comparison to peer-to-peer architecture

In peer-to-peer architectures, each host or instance of the program can simultaneously act as both a client and a server, and each has equivalent responsibilities and status. Both client–server and peer-to-peer architectures are in wide usage today.
Details may be found in Comparison of Centralized (Client-Server) and Decentralized (Peer-to-Peer) Networking.

Advantages

In most cases, a client–server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage of this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware of and unaffected by that change. All data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
Since data storage is centralized, updates to that data are far easier to administer than under a P2P paradigm. In the latter, data updates may need to be distributed and applied to each peer in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers. Many mature client–server technologies are already available which were designed to ensure security, friendliness of the user interface, and ease of use. The model also functions with multiple clients of different capabilities.

Disadvantages

As the number of simultaneous client requests to a given server increases, the server can become overloaded. Contrast that with a P2P network, whose aggregated bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network. The client–server paradigm also lacks the robustness of a good P2P network. Under client–server, should a critical server fail, clients' requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.
Architecture of P2P systems

1. definition - 1) Peer-to-peer is a communications model in which each party has the same capabilities and either party can initiate a communication session. Other models with which it might be contrasted include the client/server model and the master/slave model. In some cases, peer-to-peer communications is implemented by giving each communication node both server and client capabilities. In recent usage, peer-to-peer has come to describe applications in which users can use the Internet to exchange files with each other directly or through a mediating server. IBM's Advanced Peer-to-Peer Networking (APPN) is an example of a product that supports the peer-to-peer communication model.

2) On the Internet, peer-to-peer (referred to as P2P) is a type of transient Internet network that allows a group of computer users with the same networking program to connect with each other and directly access files from one another's hard drives. Napster and Gnutella are examples of this kind of peer-to-peer software. Major producers of content, including record companies, have shown their concern about what they consider illegal sharing of copyrighted content by suing some P2P users. Meanwhile, corporations are looking at the advantages of using P2P as a way for employees to share files without the expense involved in maintaining a centralized server, and as a way for businesses to exchange information with each other directly.

How Does Internet P2P Work?

The user must first download and execute a peer-to-peer networking program. (Gnutellanet is currently one of the most popular of these decentralized P2P programs because it allows users to exchange all types of files.) After launching the program, the user enters the IP address of another computer belonging to the network. (Typically, the Web page where the user got the download will list several IP addresses as places to begin.)
Once the computer finds another network member online, it will connect to that user's connection (who has gotten their IP address from another user's connection, and so on). Users can choose how many member connections to seek at one time and determine which files they wish to share or password protect.

2. Peer-to-peer systems often implement an abstract overlay network, built at the Application Layer, on top of the native or physical network topology. Such overlays are used for indexing
and peer discovery, and make the P2P system independent of the physical network topology. Content is typically exchanged directly over the underlying Internet Protocol (IP) network. Anonymous peer-to-peer systems are an exception, and implement extra routing layers to obscure the identity of the source or destination of queries.

In structured peer-to-peer networks, peers (and, sometimes, resources) are organized following specific criteria and algorithms, which lead to overlays with specific topologies and properties. They typically use distributed hash table-based (DHT) indexing, such as in the Chord system (MIT).

Unstructured peer-to-peer networks do not provide any algorithm for organization or optimization of network connections. In particular, three models of unstructured architecture are defined. In pure peer-to-peer systems the entire network consists solely of equipotent peers. There is only one routing layer, as there are no preferred nodes with any special infrastructure function. Hybrid peer-to-peer systems allow such infrastructure nodes to exist, often called supernodes. In centralized peer-to-peer systems, a central server is used for indexing functions and to bootstrap the entire system. Although this has similarities with a structured architecture, the connections between peers are not determined by any algorithm. The first prominent and popular peer-to-peer file sharing system, Napster, was an example of the centralized model. Gnutella and Freenet, on the other hand, are examples of the decentralized model. Kazaa is an example of the hybrid model.

P2P networks are typically used for connecting nodes via largely ad hoc connections. Data, including digital formats such as audio files, and real-time data such as telephony traffic, is passed using P2P technology.

A pure P2P network does not have the notion of clients or servers but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network.
This model of network arrangement differs from the client–server model, where communication is usually to and from a central server. A typical example of a file transfer that does not use the P2P model is the File Transfer Protocol (FTP) service, in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.

The P2P overlay network consists of all the participating peers as network nodes. There are links between any two nodes that know each other: i.e. if a participating peer knows the location of another peer in the P2P network, then there is a directed edge from the former node to the latter in the overlay network. Based on how the nodes in the overlay network are linked to each other, we can classify the P2P networks as unstructured or structured.

Structured systems

Structured P2P networks employ a globally consistent protocol to ensure that any node can efficiently route a search to some peer that has the desired file, even if the file is extremely rare. Such a guarantee necessitates a more structured pattern of overlay links. By far the most common type of structured P2P network is the distributed hash table (DHT), in which a variant
of consistent hashing is used to assign ownership of each file to a particular peer, in a way analogous to a traditional hash table's assignment of each key to a particular array slot.

Distributed hash tables

Distributed hash tables (DHTs) are a class of decentralized distributed systems that provide a lookup service similar to a hash table: (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key. Responsibility for maintaining the mapping from keys to values is distributed among the nodes, in such a way that a change in the set of participants causes a minimal amount of disruption. This allows DHTs to scale to extremely large numbers of nodes and to handle continual node arrivals, departures, and failures.

DHTs form an infrastructure that can be used to build peer-to-peer networks. Notable distributed networks that use DHTs include BitTorrent's distributed tracker, the Kad network, the Storm botnet, YaCy, and the Coral Content Distribution Network. Some prominent research projects include the Chord project, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system.

DHT-based networks have been widely utilized for accomplishing efficient resource discovery in grid computing systems, as they aid in resource management and scheduling of applications. Resource discovery activity involves searching for the appropriate resource types that match the user's application requirements. Recent advances in the domain of decentralized resource discovery have been based on extending the existing DHTs with the capability of multi-dimensional data organization and query routing.
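A minimal sketch of the consistent-hashing idea behind DHTs, assuming SHA-1 for the identifier space and illustrative node names: nodes and keys are hashed onto the same ring, and each key is owned by the first node at or after its position, wrapping around.

```python
import hashlib
from bisect import bisect_right

def ring_position(name, bits=32):
    """Hash a node or key name to a point on the identifier ring."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)

class ConsistentHashRing:
    def __init__(self, nodes):
        # Sorted (position, node) pairs form the ring.
        self.ring = sorted((ring_position(n), n) for n in nodes)

    def owner(self, key):
        """First node at or after the key's position (wrapping around)."""
        pos = ring_position(key)
        points = [p for p, _ in self.ring]
        idx = bisect_right(points, pos) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.owner("song.mp3"))   # the peer responsible for this key
```

The property the text emphasises falls out of this layout: adding or removing one node only moves the keys in that node's arc of the ring, so churn causes minimal disruption.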
The majority of these efforts have looked at embedding spatial database indices - such as Space Filling Curves (SFCs, including Hilbert curves and Z-curves), the k-d tree, the MX-CIF quadtree, and the R*-tree - for managing, routing, and indexing complex Grid resource query objects over DHT networks. Spatial indices are well suited for handling the complexity of Grid resource queries. Although some spatial indices can have issues with routing load balance in the case of a skewed data set, all of the spatial indices are more scalable in terms of the number of hops traversed and messages generated while searching and routing Grid resource queries.
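The Z-curve mentioned above is the simplest of these space-filling curves to sketch: interleaving the bits of two coordinates yields a single Morton code, so multi-dimensional resource attributes (say, CPU and memory, as illustrative examples) can be mapped onto a one-dimensional DHT key space while roughly preserving locality.

```python
# Z-order (Morton) mapping: interleave the bits of x and y so that points
# close in 2-D tend to get nearby 1-D keys.
def z_order(x, y, bits=8):
    """Interleave `bits` bits of x (even positions) and y (odd positions)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # even bit positions: x
        code |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions: y
    return code

print(z_order(3, 5))   # x=011, y=101 interleave to 100111 binary = 39
```

A range query over the 2-D attribute space then becomes a small set of 1-D key ranges, which is what makes such indices routable over a DHT.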