A bus topology is a linear LAN architecture in which transmissions from network stations propagate the length of the medium and are received by all other stations. A ring topology is a LAN architecture that consists of a series of devices connected to one another by unidirectional transmission links to form a single closed loop. A star topology is a LAN architecture in which the endpoints on a network are connected to a common central hub, or switch, by dedicated links. A tree topology is a LAN architecture that is identical to the bus topology, except that branches with multiple nodes are possible in this case.
1) Figure a shows a star topology. All the nodes are connected to a central point, and the shape formed is a star. The nodes are independent of each other, so a break in one cable will not disrupt network communications. 2) Figure b shows a ring topology. The computers are all connected to form a ring. From the figure we can see that a break in the cable will disrupt communications in the network. 3) Figure c shows a tree topology. The computers are connected in a tree fashion: there is a backbone hub with many branches from it. This can be an interconnection of star networks. 4) Figure d shows a complete, or mesh, topology. Each node in this network has a direct connection to every other node. This is not a cost-effective or physically practical network. 5) Figure e shows intersecting ring topologies. 6) Figure f shows an irregular topology. The connections do not follow any pattern.
As new and existing network applications increase the demand for high-resolution graphics, video, and other rich media data types, pressure is growing at the desktop, the server, the hub, and the switch for increased bandwidth. There are four categories of bandwidth-intensive applications:
- Scientific modeling, publication, and medical imaging applications produce multimedia and graphics files that range in size from megabytes to gigabytes to terabytes.
- Internet and intranet applications create traffic patterns composed of text, graphics, and images, and they are expected to expand in the near future to include more bandwidth-intensive audio, video, and voice.
- Data warehousing and backup applications handle gigabytes or terabytes of data distributed among hundreds of servers and storage systems.
- Mission-critical business applications, such as desktop video conferencing, interactive whiteboarding, and real-time video, not only require more raw bandwidth; they also demand low latency and limited jitter to be effective.
What does Gigabit Ethernet provide? High bandwidth and high speed. That is why Gigabit Ethernet has come to market and is used to satisfy these different kinds of applications.
Why Gigabit Ethernet?
Currently, there are two standards for Gigabit Ethernet technology: IEEE 802.3z and IEEE 802.3ab. Figure-1 shows the functional elements of these two standards.
Fiber Cabling Specifications
There are two physical layers that use fiber optics as the transmission medium. 1000BASE-SX ('S' for Short wavelength) is targeted at lowest-cost multimode fiber runs in horizontal and shorter backbone applications. 1000BASE-LX ('L' for Long wavelength) is targeted at longer multimode building fiber backbones and single-mode campus backbones. In Figure-1, SMF stands for Single-Mode Fiber, while MMF stands for Multimode Fiber. 1000BASE-LX is specified for use on either multimode or single-mode fiber; on single-mode fiber it can cover 5 km. Note that the distance Gigabit Ethernet can reach depends on the bandwidth of the fiber (measured in MHz*km): the greater the bandwidth of the fiber, the farther the distance supported. Also, IEEE specifies minimum rather than maximum ranges, and under average operating conditions the minimum specified distance can be exceeded by a factor of three or four. However, most network managers are conservative when they design networks and use the IEEE specifications as the maximum distances. Both 1000BASE-LX and 1000BASE-SX use the 8B/10B physical coding sublayer (PCS). There is another specification using fiber optics, known as 1000BASE-LH ('LH' for Long Haul). It is not an IEEE specification but a multivendor specification; each vendor has a set of transceivers covering different distances. The minimum range can be 1-49 km or 50-100 km depending on the wavelength.
Copper Cabling Specifications
There are two specifications for transmitting over copper cabling. 1000BASE-CX ('C' for Copper) defines transceivers, or physical layer devices, for shielded copper cabling. It is intended for short-haul copper connections (25 meters or less) within wiring closets.
The advantage of 1000BASE-CX is that it can be deployed quickly and is inexpensive to implement. 1000BASE-T ('T' for Twisted pair) helps network managers boost their network performance in a simple, cost-effective way. It runs over four-pair Category 5 unshielded twisted pair for distances up to 100 meters, enabling network managers to build networks with diameters of 200 meters. Category 5 copper cabling is today the dominant horizontal/floor cabling, providing connectivity to both desktops and workgroup aggregators, and it is one of the major options for building risers/backbone cabling connecting different floor wiring closets. 1000BASE-T is the most cost-effective high-speed networking technology available today.
Can Gigabit Ethernet be used in a WAN? To answer this question, we have to consider the factors that restrict Ethernet to the LAN. The distance limitation of Gigabit Ethernet, like that of Ethernet and Fast Ethernet, is due to collisions and to attenuation of the signal. To address attenuation, we can use fiber optic media to let Gigabit Ethernet run over long distances. To address collisions, we can segment the network into more segments and improve the CSMA/CD method. Collisions remain the major issue behind the distance limitation of Gigabit Ethernet. I hope to come up with a good solution to this problem in my further study and work.
A backbone is a larger transmission line that carries data gathered from smaller lines that interconnect with it. At the local level, a backbone is a line or set of lines that local area networks connect to for a wide area network connection or within a local area network to span distances efficiently (for example, between buildings). On the Internet or other wide area network, a backbone is a set of paths that local or regional networks connect to for long-distance interconnection. The connection points are known as network nodes or telecommunication data switching exchanges (DSEs).
transmission time = latency + length / data transfer rate
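The formula can be written as a small helper, e.g. in Python (the function name is illustrative):

```python
def transmission_time(latency_s, length_bits, rate_bps):
    """Time to move a message across a link: the fixed latency
    plus the time to serialize the bits onto the medium."""
    return latency_s + length_bits / rate_bps
```

Keeping latency and serialization time as separate terms makes it easy to see which one dominates for a given link and message size.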
The total system bandwidth is a way of measuring the capacity or throughput of a network. It refers to the volume of traffic that can be transferred across the network in a period of time, usually 1 second. It is frequently expressed in millions or billions of bits per second, as Mbps or Gbps.
Switching delays at routers and the time required to find and set up a communication path can cause latency delays on large networks such as the Internet that are several orders of magnitude larger than local area networks.
Most of us are familiar with the Internet 404 errors that occur when the time to access information is longer than the timeout allowed for searching. Some of this is from latency delays, although most of it is from busy hosts or missing nodes.
The original ARPANET, from which the Internet evolved, was designed as a military network to survive a nuclear attack. A key concern was reliability, in this case the ability to continue to transfer messages in the event of failures on the network. Error-tolerant and error-free communications are key reliability concerns.
Networks are also concerned with protection from unauthorized use, loss or compromise of data and with external threats to the ability to transfer messages or the integrity of messages. These topics are the subject of future lectures on distributed system security.
Personal Digital Assistants, Lap Top Computers and cellular phones are examples of mobile devices that may need access to a network at different locations. Networks must be designed to allow this to occur in an efficient, secure and productive manner.
Metrics have been established to measure the reliability and bandwidth of networks. These often focus on throughput or bandwidth adjusted for possible failures and expressed as a minimum acceptable quality of service and a desired level.
Most networks are designed for point to point transfer of information between two nodes. There may also be a requirement for one-to-many communication, and some networks are designed to do this in an efficient manner.
Network Performance (Coulouris et al., Table 3-1)

Type      Example     Range       BW Mbps   Latency ms
WWAN      GSM, 3G     worldwide   0.01-2    100-500
WMAN      WiMax       5-50 km     1.5-20    5-20
WLAN      WiFi        150-1500 m  2-54      5-20
WPAN      Bluetooth   10-30 m     0.5-2     5-20
Internet  Internet    worldwide   0.5-600   100-500
MAN       ATM         2-50 km     1-150     10
WAN       IP routing  worldwide   0.01-600  100-500
LAN       Ethernet    1-2 km      10-1000   1-10
As network applications increase the use of high-resolution graphics, video, and other rich media data types, pressure is growing at the desktop, the server, the hub, and the switch for increased bandwidth. There are various categories of bandwidth-intensive applications, for example: mission-critical business applications; data warehousing and backup applications; Internet and intranet applications; and scientific modeling, publication, and medical imaging applications.
Local Area Network Bandwidth
Due to collisions in CSMA/CD and to signal attenuation, all versions of Ethernet, including Fast Ethernet and Gigabit Ethernet, have a limited working distance, making them more suitable for local area networks than wide area networks.
Gigabit Ethernet limitations
From Table 3-1 in the text, we see that Ethernet has a latency of 5 to 10 ms. Gigabit Ethernet has a bandwidth of 1000 Mbps. How long will it take to transfer a file of 10,000 bytes? Assume no parity, so 1 byte = 8 bits. Latency is 0.005 to 0.01 seconds plus data transfer (80,000 / 1,000,000,000 = 0.00008 seconds). Since the latency is several orders of magnitude larger than the transfer time, latency dominates.
Compare the Gigabit Ethernet example to WiFi at 2 Mbps. How long will it take to transfer the same file of 10,000 bytes? Latency 0.005 to 0.02 seconds plus data transfer (80,000 / 2,000,000 = 0.04 seconds) for a total transfer time of 0.045 to 0.06 seconds.
Note that both examples ignore handshaking, connection overhead, error correction and other delays that we will discuss later in the course.
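The two examples above can be checked with a few lines of Python (1 byte = 8 bits; handshaking and other overheads ignored, as stated):

```python
file_bits = 10_000 * 8                       # 10,000-byte file

# Gigabit Ethernet: 1000 Mbps, latency 0.005-0.01 s
gig_transfer = file_bits / 1_000_000_000     # 0.00008 s; latency dominates

# WiFi at 2 Mbps, latency 0.005-0.02 s
wifi_transfer = file_bits / 2_000_000        # 0.04 s
wifi_total_min = 0.005 + wifi_transfer       # 0.045 s
wifi_total_max = 0.020 + wifi_transfer       # 0.06 s
```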
How long will it take to transfer 25 pictures from a camera to a photo printer using Bluetooth? Assume each picture is 1 MB and that an acknowledgement of 10 bytes has to go back to the camera to acknowledge each picture before the next can transfer. Bluetooth has a bandwidth of 0.5 to 2 Mbps and a latency of 5-20 ms. Don’t forget: B = byte and b = bit
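One way to set up the calculation is sketched below. This is a sketch of the arithmetic, not an official answer; it assumes 1 MB = 8,000,000 bits and uses the Bluetooth figures from Table 3-1, and the function names are illustrative.

```python
def transfer_time(latency_s, bits, rate_bps):
    """Latency plus serialization time for one message."""
    return latency_s + bits / rate_bps

def camera_to_printer(n_pics, pic_bits, ack_bits, latency_s, rate_bps):
    """Each picture must be acknowledged before the next one is sent,
    so every picture costs one transfer plus one acknowledgement."""
    per_pic = transfer_time(latency_s, pic_bits, rate_bps)   # picture to printer
    per_ack = transfer_time(latency_s, ack_bits, rate_bps)   # ack back to camera
    return n_pics * (per_pic + per_ack)

# best case: 2 Mbps bandwidth, 5 ms latency
best = camera_to_printer(25, 1_000_000 * 8, 10 * 8, 0.005, 2_000_000)
# worst case: 0.5 Mbps bandwidth, 20 ms latency
worst = camera_to_printer(25, 1_000_000 * 8, 10 * 8, 0.020, 500_000)
```

Note how the per-picture serialization time swamps both the latency and the tiny acknowledgement, so the bandwidth assumption drives the answer.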
A Wide Area Network (WAN) is a data communications network that covers a relatively broad geographic area and often uses transmission facilities provided by common carriers, such as telephone companies.
A WAN is an interconnection of LANs.
A WAN functions at the lower three layers of the OSI model.
David Willis compares sharing files over WANs to herding hippos through a garden hose. (Orfali et al., page 59)
I had a terabyte of data in Missouri and a T3 connection to my backup system in Georgia. It takes over 62 hours to send a terabyte over a T3 with a perfect connection: 100% efficiency, no other traffic, and no transmission overhead. A week is more likely. Sending tapes by FedEx was faster, so that is what we did.
In order to send many messages across a network, individual messages can be broken into smaller chunks, called packets. Packets usually have a maximum size, to allow nodes in the network to reserve enough memory buffer space to ensure that the message can be received, to allow sharing of the network, and to improve reliability and fault detection.
A basic problem in almost every network is resource contention. One of the most basic resources is connections between nodes. Unless you have a fully connected network (slide 15d ), you are likely to have a time when two or more messages want to use the same connection at the same time. A fully connected network is not practical—can you imagine spending your first few months at NJIT connecting 12,000 individual wires between your computer and every other computer on campus?
Frequency Division Multiplexing (FDM) works like radio. Each radio station uses a different carrier frequency and imposes a signal on that frequency. You tune your radio to the frequency of the station you want to hear. The PSTN used to depend heavily on FDM for analog circuits, sending 24 calls over a T1 line at different frequencies. Frequencies of light have different colors. Today’s laser optic circuits can have many different colors of laser light traveling on the same circuit at the same time with FDM.
The alternative to FDM is Time Division Multiplexing, or TDM. During the Second World War, the French Resistance got coded messages over BBC radio by TDM. For example, at 7:35 pm “The next song is dedicated to Clara,” might mean one thing while “This song request is from Harry” might be a different request. But the particular resistance unit listens for its message at specific times. There are several different forms of TDM, including CDSM, Tokens and Frame Relay.
In Collision Detection Shared Multiplexing, or CDSM, a node wishing to send a message over a connection first “listens” for traffic, and if it does not “hear” any, tries to send its message. As it tries it continues to listen so that it can detect another node that also tries at the same time, creating a collision that garbles the message. If it detects a collision, it stops sending and waits for a period of time before trying again. Each node waits a random amount of time so that the same two nodes don’t continue to collide indefinitely.
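The listen-send-back-off loop described above can be sketched as follows. The carrier-sense and collision-detect signals are stubbed out as callables, and the slot-time constant and function names are illustrative, not any real adapter's API:

```python
import random
import time

def send_with_backoff(channel_busy, collided, max_attempts=16):
    """Sketch of the CSMA/CD send loop: listen for a quiet medium, transmit,
    and on collision wait a random number of slot times before retrying."""
    slot = 51.2e-6  # classic Ethernet slot time in seconds (illustrative)
    for attempt in range(1, max_attempts + 1):
        while channel_busy():              # carrier sense: wait for quiet
            pass
        if not collided():                 # transmit, listening while sending
            return True                    # frame went out cleanly
        # binary exponential backoff: each node picks its own random wait,
        # so the same two nodes don't keep colliding indefinitely
        k = min(attempt, 10)
        wait_slots = random.randint(0, 2 ** k - 1)
        time.sleep(wait_slots * slot)
    return False                           # give up after max_attempts
```

A usage sketch: pass in functions that report the medium state; after a couple of simulated collisions the send succeeds.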
Token ring networks (slide 15b) pass a token around the network from station to station. The token can either have a message attached or not. A node that wishes to send a message must wait until it receives a token without a message; it then attaches the message to the token and passes it on. It works something like trying to find an unoccupied taxi in New York City during rush hour.
Frame relay and ATM are among several technologies that send messages across a connection with very precise timing. Messages are broken into packets; a packet from one message is sent, then a packet from the same or a different message. Messages can be reassembled from their packets at the destination.
Nodes on a network need to have an identifier, so that messages can be sent to the proper node. These identifiers are called addresses. A telephone number is an address. So is a Uniform Resource Locator (URL). The Internet uses IPv4 and IPv6 addresses as primary identifiers. IP addresses are covered in the Sockets lecture.
Several packets can be sent across a connection in the same time period by sending a byte from one message, followed by a byte from the next.
In the PSTN, a DS1 line is the digital equivalent of an analog T1 line. It sends 24 messages at a time, alternating bytes, and converts them back into individual messages at the destination by reassembly.
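The byte interleaving and reassembly described above can be sketched in a few lines of Python. This is a simplification of DS1 framing (it assumes all messages are the same length, so no padding is needed, and ignores the framing bit); the function names are illustrative:

```python
def tdm_interleave(messages):
    """Round-robin byte interleaving, as on a DS1/T1 line: one byte from
    each channel in turn (simplified: equal-length messages assumed)."""
    return bytes(b for group in zip(*messages) for b in group)

def tdm_deinterleave(stream, n_channels):
    """Reassemble the original messages at the destination by taking
    every n_channels-th byte for each channel."""
    return [stream[i::n_channels] for i in range(n_channels)]

msgs = [b"AAAA", b"BBBB", b"CCCC"]
line = tdm_interleave(msgs)              # b"ABCABCABCABC"
assert tdm_deinterleave(line, 3) == msgs
```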
In order to accomplish communications, networks need standard rules to define what is to be done and how to do it. These rules are formally specified in documents called protocols. A protocol must be specific enough that two nodes, technologies, systems or other parties can communicate without any difficulty.
In 1992, the International Standards Organization (ISO) defined a protocol suite defining a seven layer reference model for open systems. By specifying agreed upon layer boundaries, it is possible to divide network tasks into common segments such that collections of cooperating behaviors can accomplish the tasks performed by each segment. These seven layers are shown in the diagram on the next slide.
The physical layer just sends bits, which might be encoded as different voltage levels for a specified instant of time on an electric wire or as pulses of light on a fiber optic line. There are many possible ways to distinguish a 1 from a 0 on a communications medium.
The Data Link layer groups bits into frames or other units and adds additional bits of information to group the bits, indicate the beginning and end of a character, assign sequence numbers for ordering, and provide for error detection and correction with parity and checksums. Sequence numbers, parity bits and checksums are all examples of overhead.
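A toy illustration of this kind of overhead is sketched below: a parity bit computed per byte, and a frame that carries a sequence number and a simple additive checksum. The frame layout is invented for illustration and is not any real protocol's format:

```python
def parity_bit(byte):
    """Even-parity bit for one data byte: 1 if the byte has an odd number
    of 1 bits, so the total including the parity bit comes out even."""
    return bin(byte).count("1") % 2

def frame(payload, seq):
    """Toy data-link frame: 1-byte sequence number + payload + 1-byte
    additive checksum. All three extra bytes are pure overhead."""
    body = bytes([seq]) + payload
    checksum = sum(body) % 256
    return body + bytes([checksum])

def check(frame_bytes):
    """Receiver side: recompute the checksum to detect corruption."""
    body, checksum = frame_bytes[:-1], frame_bytes[-1]
    return sum(body) % 256 == checksum

f = frame(b"hello", seq=7)
assert check(f)                                 # intact frame passes
assert not check(bytes([f[0] ^ 0xFF]) + f[1:])  # corrupted byte is caught
```

Note that a simple additive checksum catches many errors but not all (for example, two errors that cancel), which is why real links use stronger codes such as CRCs.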
The Network Layer adds information that allows the receiver of a message to identify traffic that belongs to it and allows intermediate devices to route information to the proper destination. The most common form uses Internet Protocol, which uses IP addresses and ports to identify clients and servers. Each message contains the addresses and port numbers of both the client and the server as overhead.
The Transport Layer can add information for synchronization, breaking messages into chunks, acknowledgement of receipt, timeouts and retransmission of data not acknowledged. The most common transport protocols are TCP and UDP.
The Session Layer is an enhancement of the Transport Layer and can add information for dialog control, synchronization, error recovery, and similar functions. The session and lower levels are all concerned with getting a bit stream across a connection reliably.
The Presentation Layer is the lowest layer that is concerned with the meaning of the bits transmitted. It identifies collections of bits with identifiers so that they can be assigned meaning. Data can be collected into fields and records and assigned labels.
While the Application Layer was originally designed to contain a collection of standardized network applications like electronic mail and file transfer, it has become a general purpose container for applications and protocols that do not fit into the lower layers. It lacks a clear separation between applications, application specific protocols, and general purpose protocols such as File Transfer Protocol.
The Network layer is responsible for preparing packets to move across the network. One requirement is to break up messages into packets that can be no larger than the Maximum Transfer Unit (MTU), including both the header and the data field. For example, the MTU for Ethernet is 1500 bytes. The IP protocol MTU is 64 KB, although most systems are set for 8 KB to allow for smaller I/O buffers. If IP packets are sent over Ethernet, they must be fragmented to fit the Ethernet MTU size.
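Fragmentation to an MTU can be sketched as below. The 20-byte header size is an assumption for illustration (the minimum IPv4 header), and the function name is hypothetical:

```python
def fragment(message, mtu, header_size=20):
    """Split a message into data fields that, together with the header,
    fit within the MTU. Header size of 20 bytes is illustrative."""
    max_data = mtu - header_size
    return [message[i:i + max_data] for i in range(0, len(message), max_data)]

# an 8 KB IP packet carried over Ethernet (MTU 1500 bytes)
frags = fragment(b"x" * 8192, mtu=1500)
assert all(len(f) <= 1480 for f in frags)   # each fragment fits the MTU
assert len(frags) == 6                      # 8192 / 1480 rounds up to 6
```

Each fragment then gets its own header, so fragmentation itself adds overhead: six headers instead of one for this message.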
The early specifications of network layers were defined before the OSI model and only include four layers, as shown on the next slide.
This was done for the Advanced Research Projects Agency (hence ARPANET). During the Vietnam war era, the agency was renamed the Defense Advanced Research Projects Agency (DARPA).
When portions of ARPANET were opened to public use, those portions became the Internet.
Use the state chart on the previous page. What is the minimum time to transfer one packet using TCP with a latency of 10 ms? Assume small messages and fast transmission like Gigabit Ethernet, so that data transfer time is negligible compared to connection latency.
Note that actual timing is affected by other network traffic and timeout settings. There are multiple timeout settings in TCP.