Getting what you paid for: quality of service and wireless connection to the internet

Andrew H. Kemp*
School of Electronic and Electrical Engineering, University of Leeds, Leeds, LS2 9JT, UK

*Tel.: +44 113 343 2078; fax: +44 113 343 2054. E-mail address: A.H.Kemp@leeds.ac.uk (A.H. Kemp).

International Journal of Information Management 25 (2005) 107–115. © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.ijinfomgt.2004.10.011

Abstract

'Quality of service' (QoS) has risen in importance for controlling and measuring the transfer of information over communications networks as they have migrated from individual links to interconnected networks (and indeed to the Internet itself). Charging for services contributes to the requirement for prescribed levels of quality. Two different basic paradigms for communications networks are used, circuit-switched and packet-switched, and the differences between them have a significant impact on performance and hence on QoS issues. The Internet uses packet switching, and two different QoS solutions have been standardised for it: IntServ and DiffServ. The mobile communications sector currently uses circuit switching but is migrating to a packet-switched basis. This paper briefly reviews these developments and looks at possible future directions.

Keywords: QoS; DiffServ; IntServ; Multimedia data

1. Introduction

You are driving down the Autoroute du Soleil with the family and two weeks' glorious holiday ahead. It's a long drive, but there is no disquiet in the car thanks to the film the passengers are engrossed in. They selected what they wanted to view, the latest Harry Potter released only yesterday, and now it is being streamed straight to them via a satellite internet protocol (IP) connection.
You are just thankful you paid the extra for gold-star delivery (and peace while you drive).

This vision relies on the supporting system having defined levels of service which it can deliver. The important area of providing what is promised is built on contrary solutions in different networks ultimately being made to work together effectively. For instance, in this exemplar (which is taken from the MASSSIVE project proposal to the European Commission 6th framework research programme, or EC 6FP, call) the source is considered to stream the film over the Internet to a proprietary satellite-to-vehicle network for the final link. But the selection of, and payment for, the film could additionally use the cell phone infrastructure. Alternatively, the film could be received over a terrestrial digital video broadcast (DVB-T) connection, again requiring a global system for mobile communications (GSM) or universal mobile telecommunications system (UMTS) connection for the uplink traffic (user-to-backbone communications). For all these different, and to some extent competing, networks to cooperate and appear a unified system to the end user requires a complementary approach to quality of service (QoS) provision. Project SOQUET from the EC's 5th framework research programme is active in this area (Lauterjung & Kemp, 2003), particularly in the field of the user-perceived quality level.

This paper describes the need for QoS in modern communications systems, the basic methodology of providing QoS, and the methods implemented on the Internet for QoS provision. It then describes some advantages of wireless connectivity, the methodology of mobile Internet connection, and some of the state-of-the-art developments taking place in multimedia content description, before concluding.

2. Why is QoS necessary?

Different end-user services or applications make different demands of the communications systems which they use. For example, a telephone conversation requires that what one party says can be heard with little delay by the other party, i.e. there is little or no pause in the speech while the data is transferred. Consider the difficulty experienced during a phone call when a significant delay is noticed (perhaps talking to someone on a distant continent over a satellite link). At the same time, a few errors in a digitised phone conversation will go unnoticed. In contrast, the requirements for the transfer of a computer data file are the absence of any errors and a complete insensitivity to delay (computers just wait without complaining but cannot cope with errors).

Equally, different communications solutions provide differing services. If a dedicated channel (e.g. a pair of telephone wires) can be provided between the communicating parties, then once the connection has been established there will be a similar delay across the channel for each transmission, and this will be a function of the length of the interconnecting channel, i.e. the distance the transmission must travel. This has been the basic paradigm of telephone system operation. However, if the interconnected parties communicate infrequently, the channel will commonly be idle, and this results in an inefficient use of resources.
Alternatively, the channel could be shared between many users, each breaking their transmissions into packets, but then delay would vary depending on other users' traffic, and an additional overhead would be introduced in negotiating access to the channel. This inherently more efficient method is the basic paradigm of Internet operation.
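To make the contrast concrete, here is a minimal Python sketch of the two paradigms. It is illustrative only: the fixed delay, the exponential cross-traffic model and all parameter values are assumptions for the example, not figures from this paper.

```python
import random

PROP_DELAY_MS = 20.0  # one-way propagation delay, same for both paradigms (assumed)

def circuit_switched_delays(n_packets):
    # Dedicated channel: once connected, every transmission sees the same delay.
    return [PROP_DELAY_MS] * n_packets

def packet_switched_delays(n_packets, mean_queue_ms=5.0):
    # Shared channel: each packet also queues behind other users' traffic,
    # so the total delay varies packet by packet (toy exponential model).
    return [PROP_DELAY_MS + random.expovariate(1.0 / mean_queue_ms)
            for _ in range(n_packets)]

def jitter_ms(delays):
    # Report jitter simply as the spread between fastest and slowest packet.
    return max(delays) - min(delays)

if __name__ == "__main__":
    random.seed(1)
    print(f"circuit-switched jitter: {jitter_ms(circuit_switched_delays(1000)):.2f} ms")
    print(f"packet-switched jitter:  {jitter_ms(packet_switched_delays(1000)):.2f} ms")
```

The dedicated channel shows zero jitter at the cost of idle capacity; the shared channel uses the link efficiently but its delay becomes a random quantity, and this is precisely the trade-off that QoS mechanisms must manage.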
Matching the application service requirements to what the network can provide is where quality of service, or QoS, comes in. This is performed by defining application QoS parameters and network QoS parameters, and then solving the difficult task of defining a process for their negotiation and provision in response to user demands. This area is of particular importance in the many situations where users pay for services, so that predetermined quality levels need to be defined to allow determination of the charging rate and of the degree of successful delivery.

3. QoS provision

Any communications channel or network of channels can be defined by five parameters:

• The bandwidth of the channel, which defines the maximum rate at which data can be sent across the channel.
• The delay in traversing the channel. This includes the propagation delay, which is simply the length of the channel divided by the speed of propagation (3 × 10^8 m/s in free space but nearer 2 × 10^8 m/s in physical media), plus the delays in the transmitting and receiving equipment and, particularly for the Internet, in routers traversed during transmission (a minimal calculation is sketched after this list).
• The jitter, or variation in the delay which is experienced. In large networks, and where data has been split up into packets, not all packets will experience the same delay in traversing the network. This is because different packets, even those making the same source-to-destination journey, do not necessarily follow the same route or experience the same congestion conditions.
• The bit error rate (or BER) experienced over the channel. All channels experience some noise. This noise can be the result of other users causing interference, noise from natural sources such as the sun and electric storms, or noise in electrical and electronic devices. All these noise sources will result in a characteristic fraction of data being received in error.
• The multipath nature of the channel. This is particularly prevalent in radio channels, where the transmission will propagate to the receiver by numerous different paths (e.g. a direct line-of-sight path and a path reflected by an adjacent building). Each of these paths will be of a different length, and hence take a different amount of time to traverse, and will also cause the transmission to be attenuated by a different amount. At the receiver, the occurrence of multiple copies of the same transmission arriving at slightly different times can cause additional errors. This effect causes 'inter-symbol interference'.
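The delay parameter can be sketched in a few lines of Python. The propagation speeds are the ones quoted in the text; the link lengths are illustrative assumptions.

```python
SPEED_FREE_SPACE = 3.0e8  # m/s, quoted in the text for free space
SPEED_GUIDED = 2.0e8      # m/s, quoted in the text for physical media

def propagation_delay_ms(length_m, speed_m_per_s):
    """One-way propagation delay: channel length divided by propagation speed."""
    return length_m / speed_m_per_s * 1000.0

if __name__ == "__main__":
    # An illustrative geostationary satellite hop (~36,000 km) versus 100 km of fibre.
    print(f"satellite hop: {propagation_delay_ms(3.6e7, SPEED_FREE_SPACE):.0f} ms")
    print(f"100 km fibre:  {propagation_delay_ms(1.0e5, SPEED_GUIDED):.2f} ms")
```

Equipment and router delays come on top of this figure, and on the Internet they commonly dominate it.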
How a network uses these parameters determines the QoS seen by the users for the chosen application. Telephone networks have been designed to provide a connected channel between the communicating parties, i.e. resources are switched to connect the ends of the conversation, and the data (e.g. speech during a phone conversation) travels from the source to the destination as soon as it is sent. This arrangement is known as circuit-switched or connection-oriented. The term connection-oriented is used rather than connected since in reality the link might not actually be connected, but the service it provides is designed to mimic a connected link.

Conversely, the typical arrangement for data networks (and the Internet in particular) is that many users are simultaneously connected to the communications network and the data to be sent is broken into smaller units (known as packets), each of which carries the address of its destination. At the end of each section of cable as it traverses the network is a router. At each router the packet is examined and the router determines which output path to send the incoming packet to, depending on its destination and the current network state.
This arrangement is known as a packet-switched or connectionless architecture. It can be compared to the postal service, where each packet can be compared to a letter. Each letter of course carries the address of its destination, and post offices will route the letter as it travels from where it was posted to its destination. Clearly, different letters starting from the same point and with the same destination will not always take the same route or the same time, and sometimes letters get lost or damaged. These parallels all hold true for the Internet.

4. Internet QoS

The Internet developed from the interconnection of many existing networks once the transmission control protocol/internet protocol (TCP/IP) was established as the only official protocol suite at the beginning of 1983. The growth in the number of devices interconnected by the Internet since then has been exponential and continues to double each year (Tanenbaum, 2003). Correspondingly, the range of applications available over the Internet has also mushroomed, from simple file transfer to applications too numerous to mention, but as varied as Internet-based supply chain management (Rahman, 2003), Internet-based telephone services (Duck & Read, 2003) and the invidious exchange of pornographic material. Nevertheless, all traffic across the Internet is broken into packets for transmission.

With the selection of the TCP/IP protocol suite for the Internet, just two transport layer protocols were permitted (though provision does exist for the addition of others). These two protocols are the transmission control protocol (TCP) itself and the user datagram protocol (UDP). TCP is a connection-oriented protocol and UDP a connectionless protocol. TCP provides error-free, in-order delivery of packets between processes running in hosts connected to the Internet, whereas UDP provides a best-effort delivery of packets between such processes. Originally, most exchange of data required reliable, error-free, in-order delivery of packets and hence TCP could be used. For certain traffic, notably network control traffic, TCP was not suitable, since action to avoid network congestion, which is itself inherent in TCP, would delay the network control traffic: just when it most needed rapid delivery! Consequently, another transport service was needed and this was provided by UDP. It does not have the sophisticated mechanisms to avoid network congestion; UDP just keeps plugging away regardless.
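The difference between the two transport services is visible directly at the socket interface. The following self-contained Python sketch starts a small local TCP echo server and then sends one payload over each protocol; the loopback address and port are illustrative assumptions.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999  # illustrative local endpoint

def tcp_echo_server():
    # TCP gives an error-checked, in-order byte stream; the OS handles
    # retransmission and ordering on our behalf.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

if __name__ == "__main__":
    threading.Thread(target=tcp_echo_server, daemon=True).start()
    time.sleep(0.2)  # crude wait for the server to start listening

    # Connection-oriented: a handshake first, then reliable delivery.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
        tcp.connect((HOST, PORT))
        tcp.sendall(b"reliable payload")
        print("TCP echoed:", tcp.recv(1024))

    # Connectionless best effort: no handshake and no delivery report;
    # the datagram may be lost without the sender ever noticing.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
        udp.sendto(b"best-effort payload", (HOST, PORT))
```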
Now, as the Internet has developed, the initial simple exchange of data files has migrated to the exchange of a wide range of data. This varies from voice traffic, where a moderate bit error rate can be tolerated but delay in delivery is not acceptable, to bank data files, where cryptographic security, in-order delivery and correction of bit errors are paramount but delay is not significant, and to video streaming, where a constant delay is acceptable but variation in delay (jitter) is problematic and a low level of errors will go unnoticed. Additionally, Internet users are now paying for the services provided, and consequently measures of the quality of the provided service are demanded. Previously, the issue of quality over the Internet was addressed through a simple over-provision of resources, notably bandwidth of channels and speed of routers. The current level of use no longer allows that. However, an important fact is that the Internet makes no guarantees: it works on a best-effort basis!
4.1. Internet implementation of QoS

The origins of the Internet made provision of QoS unnecessary. Initially there was limited traffic and the exchange of data files predominated. Hence, existing protocols corrected errors and noticeable delays were virtually non-existent. As more users joined the Internet community and traffic levels increased, the bandwidth of the network was increased to accommodate them. Eventually the group which determines standards used over the Internet, the Internet Engineering Task Force (IETF), recognised that the level of usage and an increasing number of applications (not to mention the many groups charging for Internet services) needed a method of defining QoS over the existing network.

To meet this Internet QoS requirement, the IETF formed a workgroup which developed the Integrated Services (or 'IntServ') method of addressing QoS. This solution relies on the potential receiver of a data flow reserving resources (bandwidth and queue space) at all routers spanning the path of the flow across the Internet. It achieves this by exchanging information with each of these routers. Once the required resources have been reserved across the end-to-end path, particular levels of service can be established. IntServ defines 'Guaranteed QoS' and 'Controlled load network service' besides the standard Internet best-effort service. However, before IntServ could become widely accepted its failings were recognised; in particular, problems of scalability and inflexibility of the service models became apparent. The scalability problem was the recognition that for every data flow, every router on its path would need to reserve resources and keep a record of the flow state. Since very many flows may need to exist simultaneously, the router overhead would be prohibitive. Within the IntServ definition only two service classes were defined (Guaranteed QoS and Controlled load network service); it was felt that this did not allow enough freedom of relative distinctions of service class or enough flexibility.

Having recognised these failings, the IETF set up a new workgroup to develop an architecture which allowed different service classes to be handled in different ways, i.e. to provide scalable and flexible service differentiation (Kurose & Ross, 2003). This methodology is termed Differentiated Services or 'DiffServ'. When a packet enters the network it is marked with the class of traffic to which it belongs. In the core of the network the different classes of traffic are treated differently. This architecture addresses the problem of scalability by reducing the problem to just labelling the packets as they enter the network. From then on, the routers simply apply different service criteria to packets in response to the DiffServ class indicated, at each router-to-router hop across the network. The definition of 'per hop behaviour' (or PHB) is still being developed, but the first two defined PHBs are Assured Forwarding (AF) and Expedited Forwarding (EF). Others are sure to follow.
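From an application's point of view, DiffServ marking amounts to setting the DSCP bits in each outgoing packet's IP header, as in the hedged Python sketch below. The EF and AF41 code points are standard values, but socket.IP_TOS is platform-dependent, the endpoint is illustrative, and whether the marks are honoured is entirely up to the network operator's per-hop-behaviour configuration.

```python
import socket

# Standard DiffServ code points; the DSCP occupies the top six bits of
# the former IP type-of-service byte, hence the shift left by two.
DSCP_EF = 46    # Expedited Forwarding: low-delay, low-jitter traffic
DSCP_AF41 = 34  # Assured Forwarding, class 4, lowest drop precedence

def marked_udp_socket(dscp):
    """Return a UDP socket whose outgoing packets carry the given DSCP mark."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # socket.IP_TOS is available on Unix-like systems; behaviour elsewhere varies.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

if __name__ == "__main__":
    voice = marked_udp_socket(DSCP_EF)
    # The mark only *requests* a per-hop behaviour; core routers may
    # re-mark or ignore it if no service agreement is in place.
    voice.sendto(b"voice frame", ("192.0.2.10", 5004))  # illustrative endpoint
    voice.close()
```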
5. The simplicity of wireless connection?

In recent years wireless communications has brought about a global revolution. This has involved the widespread introduction of wireless devices where previously mechanical or wired communications had been used. The examples of this are manifold and extend from remote control units for TVs, to car-key fobs, to cordless and mobile phones. This process, some may say this progress, is set to continue as computer keyboards become untethered and numerous devices around the home and workplace are provided with wireless interconnection. Many new devices from printers to TVs have become far more user friendly as installation problems have been recognised and removed. Since people are generally rather wary of electronic devices, this trend has been particularly prevalent in the design and implementation of wireless devices. This move has largely been successful, to the extent that the majority of society is now very ready to use mobile phones, etc. and no longer has the technology phobia that until recently was so prevalent. This in itself is not a quality of service issue but a quality of installation issue. Certainly wireless connection to printers and keyboards is saving the installation chore of crawling under dusty desks. Beyond that, wireless connectivity combined with an improved level of installation consideration is leading to simplicity of wireless connection. This is set to continue as the Internet standards migrate to IPv6 (Deering & Hinden, 2003). IPv6 will define enough IP addresses for all devices to have their own, hence allowing universal device addressing and interconnection through a common interface.

The mobile phones that are currently so common are GSM phones. They are a great success story for European industry and standardisation. The phones just becoming popular, and that carry higher bit rate services including multimedia, are UMTS phones. As the roll-out of UMTS services becomes fully functional, they are forecast to provide services of up to 2 Mbps. GSM, and early UMTS, use connection-oriented links, much as the plain old telephone system (or POTS) still does. However, in the future the UMTS system will move to rely on packet switching. This improves network efficiency and enhances the capacity for carrying data.

6. Mobile connection to the Internet

In the last couple of decades communications technology has leapt forward, and so too has computing technology. The complementary advancement of both areas is perhaps illustrated most clearly by the development of wireless and mobile access to the Internet. Two contrasting types of wireless Internet access are briefly described below. The first provides limited mobility but significant data rates (typically 11 Mbps but up to 54 Mbps). The second provides extensive mobility but only very limited data rates (typically 64 kbps, possibly in excess of 384 kbps, and forecast to reach 2 Mbps).

If a laptop computer, or other computing device, uses a wireless local area network (WLAN) such as an IEEE 802.11b network, also known as a WiFi network, then the laptop will typically obtain an IP address (which is essential for operation on any TCP/IP-based network) from the service provider using the Dynamic Host Configuration Protocol (DHCP). It will do this by exchanging radio messages with the wired network via an access point. If the laptop moves but stays within the range of this access point, then it will be able to maintain its network connection and continue its contact with the applications, such as Internet access, which it is using. If however the laptop moves out of range of this access point, the connection to the network will be lost and the running network applications will crash.
To connect to another access point, or to re-connect to the original access point, the laptop must move back into range and re-associate through a further exchange of radio messages.

Alternatively, dynamic wireless mobility to the Internet can be provided through the mobile phone infrastructure. However, the 2nd generation systems (e.g. GSM) and even the 3rd generation systems (such as UMTS) have limited data rates available compared to WLANs. As a consequence, Internet access is facilitated through the wireless application protocol (WAP), predominantly used in Europe, or i-mode, predominantly used in Japan. WAP devices use a character representation scheme optimised for small screens and low data rates, called the wireless mark-up language (WML), instead of the (for wireline access) more usual hypertext mark-up language (HTML).

For all manner of roaming, the Mobile IP standard provides the required management of addresses for access to the Internet from varied locations. The basic paradigm used for Mobile IP is that data destined for a particular IP address will go to its home or permanent address; the home network maintains the device's current location and forwards the data on. This methodology is termed indirect routing, and it has the problem that if, for instance, a device has roamed to a distant location and a (new) neighbour sends a message to it, then the message will travel all the way to the original permanent address before being forwarded to the new location. To avoid this inefficiency, direct routing can be used and is being considered as an extension to the Mobile IP standard. The security problems associated with the forwarding of data and addresses (and other aspects) have been very much in the minds of the Mobile IP standardisation group, and so security is prominent in Mobile IP.
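The toy Python model below captures only the routing detour described above; real Mobile IP involves agent discovery, registration and IP-in-IP tunnelling, and all names and addresses here are invented for illustration.

```python
class HomeAgent:
    """Sits at the device's permanent (home) address and tracks its location."""

    def __init__(self, home_address):
        self.home_address = home_address
        self.care_of_address = home_address  # the device starts at home

    def register(self, care_of_address):
        # The roaming device reports its current (care-of) location.
        self.care_of_address = care_of_address

    def route(self):
        # Indirect routing: every packet visits the home address first,
        # then is forwarded to wherever the device currently is.
        hops = [self.home_address]
        if self.care_of_address != self.home_address:
            hops.append(self.care_of_address)
        return hops

if __name__ == "__main__":
    agent = HomeAgent("home-network.uk.example")
    agent.register("visited-network.fr.example")  # the device roams abroad
    # A neighbour on the visited network sends to the permanent address:
    print(" -> ".join(agent.route()))
    # The packet detours via the home network even though both ends are
    # in the same place: the inefficiency that direct routing would remove.
```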
7. Organic location

The classical way of electronically determining the location of a device depends on a backbone network of devices at accurately known positions. The range from at least three of these known points is determined by accurately measuring the propagation time (and hence range) between the new device and the known positions. With this information, trilateration is used to fix the device's position to some particular degree of precision (a worked example is sketched at the end of this section). This precision is dependent on how precise the range measurement is and on the accuracy of the known locations. However, this method is prohibitively inefficient in requiring a large backbone of accurately surveyed devices. Research and development is taking place (Ochieng, Walsh, Cooper, & Kemp, 2004) to realise methods of organically using fixed points at the edge of a network of nodes to determine the position of all the nodes. This is achieved by determining the range between all points within the network which can communicate. It is then possible to fix the location of nodes on paths across the network and progressively fix the position of more and more nodes until the solution converges on a set of locations with varying degrees of precision. The precision of each node's location will be dependent on the number and precision of the ranges used in calculating it.

As more and more devices carry wireless transceivers (i.e. combined transmitters and receivers), the possibilities for fixing the location of all points in a network grow, and this promotes the deployment of location-aware applications. In visions of the house of tomorrow, location-aware devices and applications play a key role; for example, see the European Intelligent House project (vom Bögel, 2002).
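The trilateration step itself is short. The Python sketch below fixes a 2-D position from noise-free ranges to three surveyed anchors by linearising the circle equations; the coordinates and ranges are invented for the example, and a real system would have to cope with ranging noise (e.g. by least squares over more anchors).

```python
import math

def trilaterate(anchors, ranges):
    """Fix a 2-D position from ranges to three anchors at known positions.

    Subtracting the first circle equation from the other two leaves a
    2x2 linear system in the unknown coordinates (x, y).
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    r0, r1, r2 = ranges
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = (x1**2 - x0**2) + (y1**2 - y0**2) - (r1**2 - r0**2)
    b2 = (x2**2 - x0**2) + (y2**2 - y0**2) - (r2**2 - r0**2)
    det = a11 * a22 - a12 * a21  # zero if the three anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

if __name__ == "__main__":
    anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]  # surveyed positions, metres
    true_position = (30.0, 40.0)
    ranges = [math.dist(true_position, a) for a in anchors]  # ideal, noise-free
    print(trilaterate(anchors, ranges))  # recovers (30.0, 40.0)
```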
8. The 'bits about the bits'

Many new software tools have become available which make the generation of multimedia resources a rapid and easy task. As Gunnlaugsdottir (2003) points out, the vast wealth of information (and increasingly just data) which is available makes locating the information you want increasingly difficult. Recognising this, the Moving Picture Experts Group has developed a standard, MPEG-7 (CoverPages, 2003), which aims to facilitate automatic generation of meta-data, or data describing the data, i.e. the bits about the bits. This is a significant step: soon, when multimedia material is generated, automatic generation of meta-data will also take place. This will allow subsequent easy location of exactly what is required (a schematic sketch at the end of this section illustrates the idea).

Going beyond this simple search functionality, however, is the work on MPEG-21 being pursued by the large EC-funded integrated project Enthrone (Negru, 2004). Here the meta-data is used not only to search for required material but then to use the identified material to compose and generate comprehensive, quality answers. For example, a search on Napoleon's life would lead to generation of a multimedia documentary detailing the key facts regarding his birth, the dates and scenes of his battles, his coronation as emperor and the impact of this, successes, failures and exiles, with illustrations of where these were, etc., and all these facts would generate links to further multimedia information. In this way the current difficulty in finding what you want, and then having to assemble it into a cohesive whole, would be converted to a situation where the information you require would automatically be composed into an accessible, easy to understand and pleasant to acquire form.
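As a schematic of the 'bits about the bits' idea only: the Python sketch below attaches a small descriptive record to each media file so that a later search can match on the description rather than on the media itself. The field names are invented for illustration; actual MPEG-7 descriptions are XML documents validated against the standard's description schemes.

```python
import json

def describe_clip(path, title, keywords, duration_s):
    # A toy meta-data record; real MPEG-7 descriptors are far richer and
    # can be generated automatically from the content.
    return {"media": path, "title": title,
            "keywords": keywords, "duration_seconds": duration_s}

def search(catalogue, term):
    # Searching the meta-data, not the media bits: this is what makes a
    # vast store of multimedia material locatable.
    term = term.lower()
    return [record for record in catalogue
            if term in record["title"].lower()
            or term in (k.lower() for k in record["keywords"])]

if __name__ == "__main__":
    catalogue = [
        describe_clip("clips/austerlitz.mp4", "Battle of Austerlitz",
                      ["Napoleon", "battle", "1805"], 312),
        describe_clip("clips/coronation.mp4", "Coronation of Napoleon",
                      ["Napoleon", "emperor", "1804"], 198),
    ]
    print(json.dumps(search(catalogue, "napoleon"), indent=2))
```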
9. Conclusion

The research which has taken place in the field of QoS has developed effective solutions for maintaining facets of quality for Internet traffic and for mobile cell phone traffic. This work is ongoing as new requirements occur, but the Internet still cannot provide guarantees of quality; it can only allocate resources in a differentiated way. Conversely, mobile phone traffic, which has provided QoS through circuit switching, is now becoming packet oriented. Current research effort is also unifying these two areas to provide a solution in the increasingly common case where traffic traverses multiple network types. The vision of streaming a new film to your car via a satellite link, paid for over a mobile phone link, may not be here yet, but it is currently being researched and should arrive soon.

Far-sighted QoS research is developing frameworks for provision of QoS over heterogeneous networks rather than attempting to generalise particular end-to-end solutions. The most recently released version of IP, version 6 (IPv6), has largely introduced such a framework for TCP/IP networks. Continuation of this paradigm is widely viewed as an attractive way to proceed. The common availability of small, cheap wireless devices is driving engineers to ensure that ease of installation is assured, and new organic connectivity promises to make location information increasingly available. This information and its management will lead to new applications and the enhancement of existing ones. Other interesting developments are taking place in the area of cataloguing, searching and retrieving information from the increasingly vast wealth of multimedia data which is being generated. This effort is going beyond simply retrieving key search words and is actually generating a coherent presentation in response to a search.

References

CoverPages (2003). Moving Picture Experts Group: MPEG-7 standard. Ejournal, CoverPages.

Deering, S., & Hinden, R. (2003). Internet Protocol Version 6 (IPv6) specification. Technical report, Internet Engineering Task Force, RFC 2460.

Duck, M., & Read, R. (2003). Data communication and computer networks (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall. ISBN 0130930474.

Gunnlaugsdottir, J. (2003). Seek and you will find, share and you will benefit: Organising knowledge using groupware systems. International Journal of Information Management, 23, 363–380.

Kurose, J. F., & Ross, K. W. (2003). Computer networking: A top-down approach featuring the Internet (2nd ed.). Reading, MA: Addison-Wesley. ISBN 0201976994.

Lauterjung, J., & Kemp, A. H. (2003). Perceived quality-of-service for UMTS and DVB-T traffic. In IST Summit, Aveiro, Portugal.

Negru, O. (2004). The Enthrone project, www.enthorne.org.

Ochieng, W., Walsh, D. M. A., Cooper, J., & Kemp, A. H. (2004). Free network mobile people and product location for enhanced personal and property security. EPSRC proposal ref. no. GR/S98627/01.

Rahman, Z. (2003). Internet-based supply chain management: Using the internet to revolutionize your business. International Journal of Information Management, 23, 493–505.

Tanenbaum, A. S. (2003). Computer networks (4th ed.). Englewood Cliffs, NJ: Prentice-Hall. ISBN 0130661023.

vom Bögel, G. (2002). www.inhaus-duisburg.de.

Dr. Andrew H. Kemp received a B.Sc. from the University of York, UK in 1984 and a Ph.D. from the University of Hull, UK in 1991. His doctoral studies investigated the use of complementary sequences in multi-functional architectures for use in CDMA systems. He spent several years working in Libya and South Africa assisting in seismic exploration, and worked at the University of Bradford as a research assistant investigating the use of Blum, Blum and Shub sequences for cryptographically secure third-generation systems. More recently he helped develop wireless fieldbus systems for industrial sites and is now lecturing at the University of Leeds, UK in communications. Andrew has over 30 scientific journal and conference papers and a book chapter published. His research interests are in multipath propagation studies to assist system development and wireless broadband connection to computer networks.
