The Technological Foundations of E-Government
Book Chapter for:
Electronic Government: Information, Technology, and Transformation
[Foundations of E-Government section]
Edited by Hans J. Scholl
ME Sharpe, Armonk, NY
[Advances in Management Information Systems (AMIS) series]
School of Public Administration (PCA 363B)
College of Social Work, Justice, and Public Affairs
Florida International University
Miami, FL 33199
Dr. Sukumar Ganapati is an Assistant Professor in the School of Public Administration at Florida
International University. He teaches graduate courses in Information Technology and E-
government in the school. He has also taught courses in Geographic Information Systems
(GIS). He has undertaken several IT projects at both local and international levels. These
include the Access Indonesia project sponsored by the U.S. Department of Education.
Chapter # (TBD)
The Technological Foundations of E-Government
This chapter provides an overview of the technological foundations of e-government. IT
practitioners need to be aware of alternative technological choices in their strategic decision
making. The current e-government literature has focused mainly on Web based services.
Besides the Web, four additional related areas of interest are identified in this chapter: IP based
services, Sensor based services, Location based services, and Broadband Infrastructure. The
technological principles underlying the five areas and their applications for e-government are
discussed.
Keywords: Web services; IP services; RFID; GIS; Broadband Infrastructure
This chapter provides an overview of the technological foundations of e-government. Garson
(2006, p. 19) defines e-government as the “provision of governmental services by electronic
means, usually over the Internet.” Although the Internet is indeed at the core of e-government, there
are several related technological areas that are often overlooked in considering e-government
applications. Chief Information Officers (CIOs) and Information Technology (IT) managers need
to be aware of such technological choices in their strategic decision making. Understanding the
strengths and weaknesses of these emerging technological alternatives is important for
adopting the newer technologies; otherwise, such choices are made on an ad hoc basis.
Due to its emphasis on Internet use, the e-government literature has focused on the
development of Web based services. However, besides Web based services, at least four
additional related areas of interest for e-government could be identified. These are: (i) Internet
Protocol (IP) based services; (ii) Sensor based services; (iii) Location based services; and (iv)
Broadband Infrastructure. The technology in the above areas is rapidly evolving. Practitioners,
policymakers, and researchers of e-government often have to play catch-up in dealing with
these technological developments. Yet, the literature focusing on these technologies and their
implications for e-government applications is thin. The present chapter aims to fill this gap.
Consequently, the chapter discusses the five areas with a technological view on their prospects
and problems for e-government. While the five areas hold prospects for e-government, none is
entirely standalone. Indeed, systems spanning two or more of these areas are at the cutting
edge of e-government in the 21st century. However, the interconnections between the systems
also bring up the issues of security and interoperability.
The rest of the chapter is structured as follows. The second section reviews the state of the art
of technology in e-government. Sections 3 through 7 describe the advancements in the five
technological areas mentioned before. The final section concludes with the problems and
prospects of technological developments for e-government.
2. THE STATE OF THE ART OF TECHNOLOGY IN E-GOVERNMENT
Electronic government or e-government capitalizes on the advances in computer and
communications technology since World War II. Computer technology enabled large scale
storage (i.e. memory) and mathematical (e.g. calculation, logic) capabilities. Since World
War II, computers have evolved from room-sized equipment based on vacuum tubes to small
desktop and laptop machines based on transistors in microprocessor chips.
Communications technology enabled the networking between computers, i.e. the Internet. The
technology has also evolved significantly since World War II, from copper wire based landline
phone systems to optical fiber cables and wireless based communication systems. The
combination of computer and communications technology has enhanced the ability to
disseminate information in real time, to increase efficiency of routine chores, and to collaborate
between different actors.
Advances in computer and communications technology have occurred faster than the changes
of the industrial era. Alvin Toffler (1970) referred to the experience of too much change in too
short a period of time, from the industrial to the super-industrial society, as “future shock.” Unlike the
industrial era changes, however, the technological evolution has not been at the cost of
affordability. Gordon Moore, a co-founder of Intel, predicted as early as 1965 that the number of
transistors on a microprocessor chip would double approximately every two years at little added
cost. Moore’s Law, as the prediction has come to be known, is more broadly applied to other
features of computer and communications technology too. Networks, which were for elite use in
defense (i.e. ARPANET) and research (i.e. BITNET) institutions, have become more widely
accessible through proliferation of private Internet Service Providers (ISPs). Greater affordability
of computers and communications technology has been conducive to the growth of e-
government in the public sector and e-commerce in the private sector. Although some forms of
digital divide persist (Servon, 2002), the divide has been narrowing on several aspects (e.g.
between racial, gender, and income groups). According to Pew Internet Research (Horrigan,
2007), 71 percent of American adults use the Internet from some location.
The growth of the Internet has contributed to the Web becoming the base for e-government. The
Web has facilitated information dissemination, public interactivity, electronic transactions, and
even organizational transformation. Indeed, Web based services have attracted much attention
in the e-government literature. However, there are at least four additional related areas of
technology that have impacted e-government processes. These areas include: Internet Protocol
(IP) based services; Sensor based services; Location based services; and Broadband
Infrastructure. IP based services are similar to Web-based services in being dependent on the
computer and communications technology infrastructure. However, there is a difference in the
scope between the two: while Web-based services are delivered through browsers, the IP
based services are broader (e.g. for voice communications, video conferencing, etc.). Sensor
devices like Radio Frequency Identification (RFID) tags can identify objects uniquely and can be
read automatically from a remote location. Services based on such sensor devices are used for
identification, inventory tracking, and supply chain management. Location based services are
explicitly oriented towards geographical mapping and location of persons or objects; these
services are significant for e-government since they describe the spatial characteristics.
Broadband infrastructure does not refer to services per se, but the backbone that supports e-
government services. The infrastructure is critical for augmenting e-government services.
The above five areas are not entirely stand-alone. Web and IP based services can be easily
integrated. Web services also incorporate the location based services, such as GIS and GPS.
When combined with GPS, RFID and wireless devices (e.g. cellphones) become powerful tools
for locating objects and people in the field. These devices can also be connected through the Internet to relay
information over the Web. Transportation agencies use such interconnectivity between systems
for traffic management, including real time traffic alerts. A driver with a GPS device in the car,
for example, can gauge the traffic ahead based on his or her location. Indeed, the
interconnectivity between the systems is at the cutting edge of e-government.
The interconnectivity between the systems also brings up problematic issues, such as security
and interoperability. IP based services, for example, are open to the same security threats as
the Web based services. RFID devices, if not properly managed, are vulnerable to security
threats. Moreover, the technological standards between the systems could differ; or they could
run under proprietary systems that are incompatible with other systems. This brings up the issue
of interoperability between the systems, for the devices to communicate with each other. In the
following sections, the technological foundations of the five areas are considered first, and then
their prospects and problems for e-government are explored.
3. WEB BASED SERVICES
Internet technology has enabled the growth of Web based systems for information
dissemination and email systems for communication between people since the early 1990s. At
its very basic, the Web based system consists of a server computer, which is a publicly
accessible repository of information (e.g. content, documents, databases), and a client
computer, which accesses (i.e. reads) the information in the server. The email systems allow
private transmission of messages and documents between computers over the Internet. The
systems have since evolved into complex ones. Web portals, for example, use multiple
servers to serve Web pages; content and applications can be distributed across the servers,
and one server’s content mirrored on another for contingency (e.g. if one server goes down,
another serves the same information) or for speed (e.g. to balance the load between servers
during times of excessive demand).
The Web 2.0 technologies of the 21st century are distinct from the Web 1.0
applications of the 1990s. Web 1.0 was related to basic information dissemination through static
Web pages (e.g. using Hyper Text Markup Language, HTML) and basic database manipulation
for dynamic Web pages (e.g. using Structured Query Language, SQL). This generation of Web
served customary information published and owned by the producers with hyperlinks to other
Web pages of related interest. In the Web 2.0 era, the usage of Extensible Markup Language
(XML) is more prevalent than HTML. XML facilitates the sharing of structured data and allows
for serving dynamic content over the Web. O’Reilly (2005) outlined several characteristics of
currently evolving Web 2.0 technologies:
(i) they treat the Web as a platform rather than as a base for singular applications (e.g.
mashups, which overlay information from multiple Web sources into one Web service
using Application Programming Interfaces (APIs); peer to peer networking such as
Napster and KaZaA, where client machines are used to act as servers for deploying
content);
(ii) they harness collective intelligence (e.g. through blogs, wikis, podcasts, and social
networking sites);
(iii) data ownership is a key element;
(iv) their software is not an end product but a service that is continually improved;
(v) they use lightweight programs that build on existing Web platforms, rather than using
heavyweight software stacks; and
(vi) their software can be used across devices.
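XML’s role in sharing structured data can be illustrated with a short Python sketch. The permit feed below is hypothetical; the point is that a consumer of an XML feed can query the data by its meaning, whereas HTML describes only presentation.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML feed of the kind an agency might expose for mashups.
feed = """
<permits>
  <permit id="2007-0142"><type>building</type><status>approved</status></permit>
  <permit id="2007-0143"><type>zoning</type><status>pending</status></permit>
</permits>
"""

root = ET.fromstring(feed)
# Structured tags let a consumer filter by meaning, not by page layout.
approved = [p.get("id") for p in root.findall("permit")
            if p.findtext("status") == "approved"]
```

Any application that understands the feed’s structure, not just the publishing agency’s own pages, can reuse the data this way.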
The literature on the prospects and problems of Web-based services for e-government is rich.
There are four stages of Web based services in e-government: (i) Web presence, in which
governments deploy basic information about themselves on the Web; (ii) Web interaction, which
includes interactive features between government decision-makers and citizens (e.g. through
emails, interactive dialog boxes); (iii) Web transaction, in which various government transactions
such as procurement, contracting, payment are processed through the Web; and (iv) Web
transformation, where public organizations are themselves transformed from a hierarchical,
stovepipe model to a horizontal, collaborative model.
Government agencies have made remarkable strides in the usage of Web services. Although
information dissemination is at the core of the Web services, interactivity is also increasingly
being used. Email has displaced traditional snail mail to become a dominant mode of contacting
senators and other policymakers. Social networking sites like MySpace are used by political
candidates for outreach to the younger constituency for votes. Federal sites such as
regulations.gov allow for public participation in the federal rulemaking process by enabling
citizens to view and comment on regulations and other actions of federal agencies. Web
transactions such as renewal of driving license, payment of fines, checking on the status of an
application, etc. could be done online. According to West (2007), 86 percent of federal and state
websites have fully executable online services. In terms of Web transformation, spillover
activities that span several departments are centralized through the Web using cross agency
portals. The first government portal, usa.gov, combined the services from different departments
under one portal. Other examples include: grants.gov, which is a central repository for federal
grants and for streamlining grants management; usajobs.gov, which is dedicated to federal jobs.
Such sites eschew the traditional stovepipe models of government agencies and transform them
into horizontal networks. Government websites are also converging toward an arrangement by
audience needs rather than by specific department functions; websites are typically arranged
for four audience categories: citizens, businesses, employees, and visitors.
Web services could be classified into four categories: government to citizen (G2C) services;
government to business (G2B) services; government to government (G2G) services; and intra-
governmental services (IG). G2C services focus on citizen demanded government services (e.g.
driving licenses, birth certificates, etc.) which could be carried out through the Web, rather than
face to face interactions with the bureaucracy. G2B services focus on business oriented
government services (e.g. business licenses, local taxes) that can be carried out over the Web.
G2G services act between different levels of government for intergovernmental transactions, to
meet reporting requirements, and for performance measurement. IG services are back office
employee oriented and internal management services specific to the agency.
A few government agencies—especially at the federal level—have adapted to the Web 2.0
environment. For example, podcasts and RSS (Really Simple Syndication) feeds are available
from most federal websites. The podcasts provide department specific videos, and the RSS feeds
provide the latest announcements or policy developments (similar to news headlines). Blogs have
increasingly gained significance in American politics. They have become powerful tools for
political activism, public participation, and campaign communication (Lawson-Borders and Kirk,
2005). A few government sponsored blogs have also emerged (e.g. U.S. Department of State’s
Dipnote, which provides an alternative source to mainstream media reporting on American
foreign policy). A potent aspect of e-government in terms of Web 2.0 is the proprietary
ownership of large amounts of data that is collected by different agencies from individual
citizens and businesses on a mandatory basis. The data enables governments to create profiles
of different entities through data mining techniques. Indeed, data mining has emerged as a key
concern of federal government agencies for different purposes, ranging from improving service
to analyzing and detecting terrorism activities (GAO, 2004).
The use of Web-based services is prone to hacking, snooping, and other security breaches, and
to attacks by spyware, worms, and viruses. Consequently, sensitive government data could be
compromised. Although email has become a staple for communications, phishing emails
mislead citizens and government officials alike; unwanted email (i.e. spam) is also a standing
problem. Moreover, public sector email communications are generally not private; hence,
private emails could enter the public domain.
4. IP BASED SERVICES
Similar to Web based services, IP based services draw on Internet technology.
Fundamentally, these services rest on the Transmission Control Protocol and Internet Protocol
(TCP/IP) standard, the most widely used standard for networking between computers. TCP/IP
represents a significant advancement over traditional phone networks. Traditional phones use
circuit switching, which uses a dedicated circuit (or channel) between nodes and terminals for
communication. The circuit is not available to other users until it is released. TCP/IP
systems, however, use packet switching, which does not require dedicated circuits. Rather, data
from the sending device is broken down into small packets and transmitted over the Internet
using the best available route; the packets are then reassembled at the receiver’s device.
TCP/IP thus represents a more efficient use of network bandwidth. The network can balance
the transmission load across various pieces of equipment, and if a piece of equipment fails,
the data can be re-routed through another part of the network. Packet
switching has lowered the cost of communications, enabled new services and features,
expanded network resiliency, and enhanced consumer choice. As a result, various IP based
services beyond the Web have emerged, including Voice over Internet Protocol (VoIP) and
Internet Protocol Television (IPTV). These services, however,
require broadband (i.e. high bandwidth, like DSL, cable, or T1) rather than dial-up connections
to perform efficiently.
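The packet-switching principle described above can be simulated in a few lines of Python. This is only a toy model of the idea, not an implementation of TCP itself: the message is split into sequence-numbered packets, which may arrive out of order, and the receiver reassembles them.

```python
import random

def packetize(message, size=4):
    """Break a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """The receiver reorders packets by sequence number, regardless of
    the route or order in which they arrived."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("voice and data share one network")
random.shuffle(packets)  # packets may take different routes through the network
message = reassemble(packets)
```

Because each packet carries its own sequence number, no dedicated circuit is needed; any available route can deliver any packet.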
The most significant among IP based services is the VoIP, which has become a popular
alternative to traditional phone systems. According to Telegeography (2007), the number of
VoIP subscribers increased from 6.5 million in mid-2006 to 11.8 million in mid-2007. The
increase in popularity is due to cost and other advantages. For residential consumers, VoIP
rates are generally lower than traditional phones. For enterprises, deploying VoIP is estimated
to be one-third the cost of traditional phone systems; operating costs could be 50-60 percent
less. VoIP also has other advantages, such as sophisticated messaging and conferencing
applications and simplified management. VoIP represents a convergence of data, voice, and
video (Triple Play) using the same broadband network. Email and phone conversations are thus
transmitted on the same network. Consequently, email and voice mail queues could be merged
to make either type of message retrievable by phone or computer. VoIP’s voice services work
over a computer, a special VoIP phone, or a traditional phone. When using a computer, one
requires a microphone and software to process the voice; the special VoIP phones plug into the
broadband connection directly; traditional phones require a VoIP adapter. Skype, which is
used by residential customers and businesses, provides both voice and video connections using
peer to peer networking. People with Skype accounts can call each other for free regardless of
their location; they pay a fee when calling a phone. VoIP services for residential and business
use are also provided by ISPs, phone, and cable companies. Several vendors of VoIP have
emerged for enterprise wide solutions, notable among them Alcatel, Avaya, Cisco, Nortel,
Mitel, and Siemens. Federal agencies such as the Department of Defense, Department
of Commerce, and the Social Security Administration have adopted VoIP to provide a unified and
comprehensive range of services. Several states, counties, and cities have also jumped on the
bandwagon of VoIP technology. In particular, government agencies with call centers (e.g. 311,
511, 911 systems) that have to field many phone inquiries could find it expedient to implement
VoIP. Indeed, following incidents like the September 11, 2001 terrorist attacks and Hurricane
Katrina, when emergency communications between first responders failed, there have been
calls for an IP-based nationwide 911 system. Unlike traditional telephone systems that fall under
state regulation, VoIP falls directly under the Federal Communications Commission’s (FCC)
jurisdiction; hence, VoIP norms are uniform nationwide.
The implementation of VoIP, however, is not without its problems. Reviewing the security
considerations of VoIP, the National Institute of Standards and Technology (NIST) observed,
“Because of the integration of voice and data in a single network, establishing a secure VOIP
and data network is a complex process that requires greater effort than that required for data-
only networks” (Kuhn et al, 2005, p. 5). VoIP is time-critical: lag between the voice packets
transmitted from source to destination results in a lower Quality of Service (QoS) than the
same lag would cause in data transmission. At the same time, VoIP is vulnerable to the same security
problems as other systems that depend on the Internet (e.g. worms, which can compromise
servers). Hence, similar to data networks, the VoIP services also need to be protected with
software and hardware devices (e.g. firewalls, antivirus protection, and intrusion detection
systems). However, the implementation of security measures could degrade VoIP’s QoS through
latency (greater time taken for a voice transmission to travel from source to destination),
jitter (non-uniform packet delays, particularly due to low bandwidth), packet loss, and Denial of
Service (DoS). Another major issue is that not all VoIP services connect directly to 911
emergency services. Skype, for example, cannot be used to call 911. The FCC imposed 911
obligations on providers of VoIP services, particularly those that allow users to make calls to
and receive calls from the regular telephone network. In addition, the FCC requires
interconnected VoIP providers to comply with the Communications Assistance for Law
Enforcement Act of 1994 (CALEA), which allows law enforcement officials to wiretap digital
communications.
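The QoS measures discussed above, latency and jitter, can be quantified with a simple sketch. The thresholds below (roughly 150 ms one-way latency and 30 ms jitter) are commonly cited rules of thumb rather than figures from this chapter’s sources, and jitter is estimated here as the mean difference between successive packet delays.

```python
from statistics import mean

def jitter(delays_ms):
    """Estimate jitter as the mean difference between successive
    one-way packet delays (non-uniform delays degrade voice quality)."""
    gaps = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return mean(gaps)

def voice_quality_ok(delays_ms, max_latency=150, max_jitter=30):
    """Rule-of-thumb check (assumed thresholds): acceptable VoIP QoS
    requires both low one-way latency and low jitter."""
    return max(delays_ms) <= max_latency and jitter(delays_ms) <= max_jitter
```

A data transfer could tolerate a 200 ms delay unnoticed; a voice call with the same delay would fail this check, which is what makes VoIP time-critical.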
Similar to VoIP, IPTV also uses the IP network, but delivers television and video services. Being
IP based, IPTV is unlike traditional TV and cable. With traditional TV, users have to tune in to
channels. The content is constantly delivered by the provider to each customer, who then
selects the content to watch. In an IP network, only the content selected by the consumer is
delivered. This selective delivery frees up bandwidth, thus allowing for significantly more content
and functionality. For example, IPTV provides picture-in-picture functionality for channel surfing
without leaving an existing program; it allows downloading of photos or music from personal
computers. IPTV is related to Internet TV (ITV) since both are Internet based. However, ITV
consists mainly of Web-based video streaming. IPTV is not solely Web based and often
requires additional software and hardware (e.g. a set-top box) for high quality video. YouTube is a
prime example of ITV, where videos uploaded by users are accessible to the general public.
IPTV is used for both live TV and Video On Demand (VOD). Live TV (including synchronous
communications like web-conferences, distance learning, and corporate communications) uses
multicasting, which is a bandwidth-conserving technology to reduce traffic by simultaneously
delivering a single stream of information to many recipients. VOD uses streaming of content for
real time viewing, or downloading the content for later viewing. In live TV, the size of data is not
known a priori and could be infinite; in VOD, the video is a pre-recorded finite file. Motorola,
Seachange, Tut, and Verimatrix are some of the leading IPTV vendors (Multimedia Research Group).
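The bandwidth economics of multicasting versus per-viewer unicast delivery can be sketched with simple arithmetic. The stream rate and audience size below are hypothetical, chosen only to show why selective, multicast delivery frees up bandwidth at the source.

```python
def unicast_bandwidth(stream_kbps, viewers):
    """Unicast: the source must send one full copy of the stream per viewer."""
    return stream_kbps * viewers

def multicast_bandwidth(stream_kbps, viewers):
    """Multicast: the source emits a single stream and routers replicate
    it toward subscribers, so source bandwidth stays constant."""
    return stream_kbps if viewers > 0 else 0
```

With a hypothetical 2,000 kbps stream and 16,000 viewers, unicast requires 32 Gbps at the source, while multicast still requires only the bandwidth of a single stream.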
In combination with VoIP, the consumer base of IPTV is projected to grow exponentially,
according to various industry analysts such as Infonetics, Insight Research Corporation, and
Multimedia Research Group (New Millennium Research Council, 2006). IPTV has become a
staple medium for viewing games. Major League Baseball has been offering streaming video
since 2002; the 2006 FIFA World Cup was also viewed using IPTV throughout the
world. In 2006, the Earth Day Network and Communications Technology (ComTek) partnered to
offer a live, two-way IPTV broadcast to 16,000 high school and college classrooms in the U.S.
Students could view the broadcast over the Web, email questions to environmental experts
and religious leaders, and have two-way communications through VoIP. In 2005, the IEEE
Spectrum magazine predicted IPTV to be a technology winner since it can use relatively low-
speed broadband through telephone wires, rather than requiring a more costly optical fiber
upgrade (Alfonsi, 2005). Public sector enterprises that depend on legacy copper network for
telephones could thus use IPTV with little loss of QoS.
IPTV has much potential for government applications. First responder solutions could be
distributed community wide through video and interactive communications for public safety in
emergency situations. IPTV could be used as interactive channels for community broadcasting
of municipal meetings. Interactive video, voice, and data could be used for distance learning
and off-site training sessions (e.g. Webinars). Special interest virtual conferences (e.g.
Webcasts) could be held using IPTV. The scope of political debates could be also enhanced
through the interactive capabilities. Youtube, for example, was used to field questions for
Democratic Presidential candidates in the debate held in Charleston in July, 2007. Lastly, IPTV
could be used for telemedicine, wherein doctors can monitor and treat patients interactively from
remote locations.
5. SENSOR BASED SERVICES
Sensors are devices that respond to an environmental stimulus (such as heat, light, sound,
pressure, magnetism, or motion). For example, motion detectors are sensors that respond to
any movement in the area of their coverage, and issue security alerts in case of an
unauthorized intrusion. Other common sensors include cameras, scanners, lasers, radar
systems, thermal devices, seismographs, etc. Among these, Radio Frequency Identification
(RFID) systems have gained significance for e-government, and hold much potential for future
use. Hence, this section focuses mainly on RFID devices. Although the technology of using
radio frequencies is not new, RFIDs gained popularity in commercial and government
applications only in the late 1990s. RFID has grown by leaps and bounds since then. The
number of RFID devices doubled to 1.2 billion units between 2005 and 2006; they are expected
to reach 700 billion units by 2015 (Bevan, 2007).
RFID is an automatic identification technology using tags and readers to capture data about
objects. The RFID tag typically contains a unique identification code that can be attached to
objects and living beings. The code can be read by the reader using radio waves. The RFID tag
represents a revolutionary change over the traditional bar codes that are used to identify objects
in retail stores. Bar codes, which use the Universal Product Code (UPC), cannot uniquely
identify objects—they identify a class of objects. They require line of sight for reading them; they
can be read only one at a time; they cannot be read if they are dirty or damaged; their
information cannot be updated. Unlike bar codes, RFID tags, which use the Electronic Product
Code (EPC), can be used to uniquely identify objects. RFID tags do not require line of sight for
reading them; they can be batch processed since many tags can be read instantaneously; they
are more durable; and their information can be overwritten and updated (Wyld, 2005, p. 12).
Since RFID tags are read by radio waves, they do not need to be swiped like magnetic stripe
cards.
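The class-versus-instance distinction between UPC and EPC can be made concrete. The UPC-A check digit calculation below is the standard algorithm; the EPC layout, by contrast, is a simplified, hypothetical stand-in, intended only to show that adding a serial number makes each individual item identifiable.

```python
def upc_check_digit(digits11):
    """Standard UPC-A check digit: triple the sum of the digits in odd
    positions, add the digits in even positions, and take the amount
    needed to reach the next multiple of 10."""
    odd = sum(int(d) for d in digits11[0::2])
    even = sum(int(d) for d in digits11[1::2])
    return (10 - (3 * odd + even) % 10) % 10

# A UPC identifies a *class* of objects: every unit of this product
# carries the same twelve digits.
upc = "03600029145" + str(upc_check_digit("03600029145"))

# A simplified, hypothetical EPC-style code adds a serial number,
# so each individual item is distinguishable.
def make_epc(company, product, serial):
    return f"{company}.{product}.{serial}"

tag_a = make_epc("0360002", "91452", 1)
tag_b = make_epc("0360002", "91452", 2)
```

Two boxes of the same product share one UPC, but the two EPC-style tags above differ in their serial component, which is what enables item-level tracking.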
From a technological perspective, an RFID system consists mainly of tags and readers;
middleware is used to process the data from the tags. An RFID tag has an integrated circuit (IC) chip, which
contains the unique EPC data. The chip’s memory could be read-only (i.e. data cannot be
changed), read-write (i.e. data can be changed), or a combination of both. The chip is linked to
an antenna, which is a small coil of wires. The tag could be packaged in different forms and
sizes, depending on the function. It could be packaged in smart cards (e.g. identification cards
serving multiple purposes, credit cards that can be scanned instead of swiping, etc.), smart
labels (that can be attached to books, packages, etc.), disks (which can be attached to an object
with a screw), glass cases (for implantation in animals and human beings), and so on. The tags
could be as small as a grain of rice (e.g. Hitachi’s mu chip). The tags could also be passive (which
have no power source, and are activated when they are in the vicinity of a reader at short
distance), active (which have a power source and emit radio waves continuously, so that they
can be read at greater distances), and semi-passive (which have a battery, but are activated
based on a sensor that automatically responds to an environmental stimulus such as
temperature, movement, or vibration). Passive tags are typically used where they need to be
read at very short distance (e.g. e-passport, credit cards); active tags are used when they need
to be read at longer distances (e.g. electronic toll collection); semi-passive tags are used to
monitor the environment (e.g. sense earthquake tremors or changes in temperature in a remote
location).
RFID readers can be small, portable hand-held devices or large, fixed installations. A
reader comprises an antenna, a transceiver, and a decoder. The range of the reader depends
on the size and efficiency of the antenna and the power of the transceiver. There could be one or
more antennas, depending on the desired read range. The RFID transceiver sends out radio
waves either on demand (in case of small hand-held devices) or continuously (in the case
of a fixed reader). If an RFID tag is in the transceiver’s active range, the tag’s unique code is
read by the reader. The radio frequency of the transceiver determines the intensity of the radio
waves used to transmit information: the higher the frequency, the more powerful the reader. Low
frequency (125–134 KHz) readers read up to 18 inches; high frequency (13.553–13.567
MHz) readers, 3 to 10 feet; ultra-high frequency (400–1,000 MHz) readers, 10 to 30 feet;
and microwave frequency (2.45 GHz) readers, greater distances (Wyld, 2005, p. 20).
Typically, when a reader receives a tag’s signal, it passes that information to the decoder, which
then forwards the unique code for processing to the back-end system (e.g. looking up or adding
to a computer database).
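The frequency bands and read ranges cited above (Wyld, 2005) suggest a simple selection rule: choose the lowest-frequency band whose read range covers the application’s needs. The sketch below encodes those figures; the microwave cap is an assumption, since the source gives no upper bound for that band.

```python
# Approximate maximum read ranges in feet, per the figures cited in the
# text (Wyld, 2005); the microwave figure is an assumed cap.
BANDS = [
    ("low frequency (125-134 KHz)", 1.5),            # up to 18 inches
    ("high frequency (13.553-13.567 MHz)", 10.0),    # 3 to 10 feet
    ("ultra-high frequency (400-1,000 MHz)", 30.0),  # 10 to 30 feet
    ("microwave (2.45 GHz)", 100.0),                 # greater distances
]

def pick_band(required_range_ft):
    """Return the lowest-frequency band that covers the required range."""
    for name, max_range in BANDS:
        if required_range_ft <= max_range:
            return name
    raise ValueError("no band covers that range")
```

An e-passport gate needing inches of range would use the low band, while a toll gantry reading vehicles at 25 feet would need the ultra-high band.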
RFIDs have gained much popularity in the private and public sector for supply chain
management, which is the tracking of materials and products from a supplier to manufacturer to
wholesaler to retailer to consumer. The principal goal of a supply chain management system
is to reduce inventory (i.e. shelf life), so that goods move down the chain efficiently.
RFIDs enable greater visibility in supply chain management since the
inventory of any particular node in the chain could be centrally read and managed. Wal-Mart
was among the early adopters of RFID in requiring its suppliers to provide RFID-tagged pallets
and cases to the distribution centers. Wal-Mart could centrally monitor the inventory of its
stores and replenish goods in a timely way based on demand in each store. The
consequent efficiencies of RFID implementation were expected to save Wal-Mart up to $8.35
billion annually (Wyld, 2005).
Several federal agencies have also undertaken RFID initiatives, the notable ones being the
Department of Defense (DoD), the Food and Drug Administration (FDA), the Department of
Agriculture (USDA), and the Social Security Administration (SSA). DoD has a complex supply chain
management, with many domestic and overseas locations. RFID is used for logistic support
through fully automated visibility and management of assets, hands-off processing of materiel
transactions, and to streamline business processes. Since 2005, DoD has phased in RFID
tagging of pallets by DoD manufacturers and suppliers of shipments. FDA has required
pharmaceutical companies to use RFID to have better control over the prescription drug supply
chain. USDA’s National Animal Identification System (NAIS) envisages the management and
tracking of individual animals in order to trace and control animal diseases. RFID ear tags or
implanted devices are used in the NAIS for identifying large animals like cattle. SSA has been
using RFID since 2003 for its internal office supply store, wherein tagged items are scanned at
checkout for inventory management. A few state and local governments have also adopted
RFID for inventory management. Electronic toll collection is a prime example at the state and
local levels, wherein drivers do not have to stop and pay tolls at the toll booth. Overhead
readers in the booth automatically read RFID transponders in the vehicle, and the appropriate
amount is deducted from the transponder’s account. Hospitals use implanted RFID chips (e.g.
VeriChip) to monitor patients’ health (especially for senior citizens).
The use of RFID is not without controversy. The principal concern with government’s use of
RFID is privacy: the fear that Big Brother is watching every move. The concern is more acute when
RFIDs are used to track human movements or are implanted in human beings. Such fears have
deterred people from installing transponders in their cars. A consumer group called Consumers
Against Supermarket Privacy Invasion and Numbering (CASPIAN) has been at the forefront of
raising awareness about the downside of RFID chip implantation by corporations and
government. The founders of the movement call RFID tags “spy chips,” since they can invade
one’s privacy, allow snooping by others, and increase government surveillance (Albrecht and
McIntyre, 2005). Security is also a major concern, since RFID tags can be read for
illegitimate purposes. Thus, credit cards and e-passports could be compromised by rogue
readers that eavesdrop and make unauthorized use of the data. The National Institute of Standards and
Technology (NIST) highlighted four major types of risk with RFIDs: business process risk, business
intelligence risk, privacy risk, and externality risk (Karygiannis et al., 2007). The report set
guidelines for security and privacy. These guidelines include: implementation of firewalls to
separate RFID databases from other databases in the organization; usage of encrypted radio
signals; authentication of approved users of RFID systems; shielding RFID tags and tag reading
areas to prevent unauthorized access; implementation of procedures for auditing, logging, and
time stamping to help in detecting security breaches; and disposal of tags and recycling
procedures to permanently disable or destroy sensitive data.
6. LOCATION BASED SERVICES
Broadly, location based services relate to spatial descriptions of persons or objects. These
services assist in determining precise geographical locations and describe the spatial attributes
of a jurisdiction. At its most basic, a location based service is a graphical map that represents
geographical boundaries (e.g. political, physical, climatic), linear elements (e.g. river, streets),
and point objects or living beings (e.g. buildings, people). Although maps have existed
for centuries, the evolution of computers and communication systems has
revolutionized location based systems, enabling spatial descriptions in real time (e.g. spatial
movements). Location based services have benefited government processes in several
areas, including natural resource management, health management, disaster management, law
enforcement, real property services, land management, and planning and economic
development. There are two major components of location based technologies in this respect:
the Geographic Information Systems (GIS) and the Global Positioning System (GPS).
GIS and GPS are two distinct technologies: while GIS is oriented toward mapping a
geographical space, GPS is oriented toward locating an object or living being within that space.
From a technological perspective, GIS is commonly understood as “a system of hardware,
software, data, people, organizations and institutional arrangements for collecting, storing,
analyzing, and disseminating information about areas of the earth” (Dueker and Kjerne, 1989, p.
7-8). It helps manipulate, analyze and present information that is tied to a spatial location.
Fundamentally, GIS comprises three data components: spatial, attribute, and raster. Spatial
data represent locations and shapes (i.e. polygons, lines, and points) of geographic features
(e.g. boundaries of census tracts, zip codes, counties, states, etc.). Attribute data (qualitative or
quantitative) provide the spatial characteristics that describe a geographical feature (e.g.
population of a jurisdiction). Raster data consist of images (e.g. aerial photographs). GIS
combines the three data types to provide a graphical representation of geographical features. Several
attribute layers are combined to give a composite depiction of the feature. The power of GIS for
e-government is in the ease of condensing vast amounts of attribute data from various sources
into graphic visuals in order to display spatial relationships. Moreover, the GIS data can be
analyzed (e.g. topographical analysis of a site to achieve optimum drainage configuration;
forecast of hurricane paths) and queried (e.g. location of hospitals within a given distance from
an accident location) interactively. Lastly, GIS enables building “what-if” scenarios with
alternative data projections and can be useful for simulation. Graphic visuals like thematic maps
of population distribution can be manipulated on the fly to make complex data projections
understandable to lay people and experts alike (a picture is worth a thousand words).
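An interactive query of the kind mentioned above (hospitals within a given distance of an accident location) can be sketched as a join of point spatial data with attribute data. The hospital names, coordinates, and bed counts below are invented for illustration.

```python
# Illustrative spatial query: which hospitals (point features with an
# attribute table) lie within a given radius of an accident location?
# Hospital names, coordinates, and bed counts are invented.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Spatial component (points) joined with an attribute component (beds):
hospitals = [
    {"name": "General", "lat": 25.79, "lon": -80.21, "beds": 550},
    {"name": "Mercy",   "lat": 25.74, "lon": -80.22, "beds": 300},
    {"name": "North",   "lat": 26.10, "lon": -80.14, "beds": 200},
]

def within(lat, lon, radius_miles):
    """Names of hospitals inside the radius, in stored order."""
    return [h["name"] for h in hospitals
            if haversine_miles(lat, lon, h["lat"], h["lon"]) <= radius_miles]

print(within(25.77, -80.20, 5))   # hospitals within 5 miles of the accident
```

A full GIS would evaluate such queries against spatial indexes over millions of features, but the logic is the same: a distance predicate applied to point geometry, returning the joined attributes.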
GIS technology has been refined significantly since the 1980s. Traditional desktop
GIS has since evolved into Web-based GIS, so that spatial information is deployed over the
Internet. Web-GIS is more dynamic than a static map display. Unlike static maps, Web-GIS
allows for pan and zoom to obtain maps based on user defined parameters. It combines the
three data components (which could be distributed across servers) with search and query
interfaces to provide maps and reports interactively. Thus, lay users with an Internet connection
can also access GIS, without having to go through steep learning curves or expensive GIS
software. Mapquest.com, for example, has become common for two dimensional route
mapping. Google Earth enables three-dimensional GIS, where users can fly over terrains
virtually. With mashups, Web-GIS is a powerful tool for real time mapping applications, like
traffic alerting systems (e.g. sigalert.com).
GIS has gained popularity across federal, state, and municipal governments for deploying Web-based
spatial information. The federal government has facilitated the use of GIS by developing
geospatial standards and by providing spatial and attribute data. For example, the Federal
Geographic Data Committee (FGDC) was established in 1990 as an inter-agency committee to
promote the National Spatial Data Infrastructure (NSDI) for the coordinated development, use,
sharing, and dissemination of geospatial data on a national basis. FGDC develops the
geospatial data standards in cooperation with other public, private, and academic institutions.
Government organizations have also emerged as important sources of spatial, attribute, and
raster data. Geodata.gov purports to be a one-stop site for federal, state, and local spatial data;
many states also have geospatial data clearinghouses (Goodchild et al., 2007). The U.S. Census
Bureau originally developed the Topologically Integrated Geographic Encoding and
Referencing system (TIGER) for spatial data. The bureau has also emerged as the principal
resource for attribute data (population, housing, economic data). The Centers for Disease Control and Prevention
(CDC) uses GIS to better portray geographic relationships that affect public health outcomes
and risks, disease transmission, access to health care, and so on
(http://www.cdc.gov/nchs/gis.htm). DoD uses GIS in the military for intelligence gathering,
terrain analysis, mission planning, and facilities management. State and local governments
have increasingly adopted GIS for a wide variety of purposes. The uses include: land
management, transportation planning, parks and recreation, environmental monitoring,
infrastructure services, and promoting citizen participation. Many city and county governments
make public domain property data (e.g. transactions, property taxes, ownership) available
through Web-GIS. According to Kaylor (2005), over 60 percent of the municipal websites
surveyed in the Municipal e-Government Assessment Project (MeGAP) had “data rich, highly
interactive GIS features.”
In contrast to GIS’s mapping function, GPS is used to determine location in geographical space
using satellites. GPS consists of three segments: the space segment; the control segment; and
the user segment. The space segment comprises the satellites, which were initially placed in orbit by the
U.S. Department of Defense (DoD) for military applications but have been
available for civilian use since the 1980s. A constellation of 24 satellites (called NAVSTAR) orbits
at about 12,000 miles above the earth, making about two orbits in a 24-hour cycle. These
GPS satellites emit two radio signals carrying three pieces of information: the pseudorandom
code (an identification code for the satellite), ephemeris data (the location of the satellite, sent
periodically), and almanac data (the status of the satellite and the current date and time, sent continuously).
The control segment comprises the master control (located in Colorado) and a network of five
ground stations located around the world. The ground controls monitor the paths of the satellites
and update the ephemeris and almanac data. The GPS unit consists of a receiver and an
antenna capable of reading the signals emitted by the satellites. The receiver essentially
determines its location (latitude, longitude, altitude) by calculating its distance from satellites.
The GPS unit requires at least three satellites in view to locate its position in two dimensions
and at least four satellites to locate in three dimensions (locating a point in three dimensional
space requires at least four distances from other known locations). The distance from each satellite
is calculated from the signal's travel time, which the receiver measures by matching the
pseudorandom code against its own copy; the ephemeris data supply the satellite's position for the calculation.
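A simplified two-dimensional analogue of this positioning calculation shows why three known distances suffice to fix a point in a plane: subtracting the circle equations pairwise leaves a linear system in (x, y). The anchor coordinates below are invented for illustration; a real receiver solves the three-dimensional version, with a fourth satellite's measurement resolving its clock error.

```python
# 2-D trilateration sketch: distances r1, r2, r3 to three known points.
# Subtracting the circle equations pairwise gives a 2x2 linear system,
# solved here with Cramer's rule. Anchor positions are invented.
def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linearized system: A @ [x, y] = b
    a11, a12 = 2 * (x1 - x2), 2 * (y1 - y2)
    a21, a22 = 2 * (x1 - x3), 2 * (y1 - y3)
    b1 = r2**2 - r1**2 + x1**2 - x2**2 + y1**2 - y2**2
    b2 = r3**2 - r1**2 + x1**2 - x3**2 + y1**2 - y3**2
    det = a11 * a22 - a12 * a21  # zero if the anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A receiver 5, sqrt(65), and sqrt(45) units from three known anchors:
x, y = trilaterate_2d((0, 0), 5.0, (10, 0), 65**0.5, (0, 10), 45**0.5)
print(round(x, 6), round(y, 6))  # recovers the point (3, 4)
```

Note that the three anchors must not be collinear (the determinant would vanish), just as GPS accuracy degrades when the visible satellites are poorly spread across the sky.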
Since the original scope of the US GPS program was for military purposes, the DoD has
regulated its civilian use. For example, DoD used the Selective Availability (SA) feature of
GPS to introduce random errors of several hundred feet into civilian receivers, so that the
degraded signals would confound the accuracy of long-range missiles. The federal government disabled the SA
features in 2000 and discontinued the procurement of satellites with SA capabilities in 2007.
Yet, the DoD can still restrict GPS use in the case of a national emergency. In a direct challenge
to the US GPS monopoly, the European Union and the European Space Agency began to
develop Galileo as a global navigation satellite system (GNSS) for civilian purposes. The
system, which is expected to be operational by 2008, will comprise a constellation of 30
satellites. Several non-European countries, including China, India, and Saudi Arabia, have also
joined the program. Galileo is expected to offer five navigation service groups
available worldwide: open service (available freely for mass market applications with reduced
accuracy); safety of life service (available for safety critical transport applications, with the same
accuracy as open service, but implemented on frequency bands reserved for Aeronautical
Radio-Navigation services); commercial service (encrypted fee based services with high
accuracy); public regulated service (robust signals protected against jamming and spoofing, and
available during crisis periods; for government authorized applications, including police,
coastguards and customs officials); and search and rescue service (for quick reception of
distress messages from anywhere on earth, precise location of alerts, return link to reduce false
alerts). Galileo is also expected to be interoperable with the US GPS system.
The most common use of GPS is in navigation systems for ships, airplanes, and cars.
GPS is increasingly used for land surveys, since it yields more accurate results than
traditional theodolite methods. Meteorologists use GPS for weather forecasting;
seismographers use GPS to study tectonic motion in earthquake studies. Combined with
GIS, objects can be located in real time on a map using GPS. Local governments use the
technology for dispatching first responder vehicles. For example, in case of a 911 call of a crime
event, the dispatcher can identify and dispatch the police vehicle nearest to the event. Tourism
oriented data (e.g. location of restaurants, recreational facilities) can also be accessed through
the integrated GIS/GPS services.
7. BROADBAND INFRASTRUCTURE
As identified in Section 2, advances in computers as well as communications technologies
enabled the growth in e-government services. Unlike computers, which are private goods, the
communications infrastructure is a public good. Hence, governments typically have a stake in
developing the infrastructure. Studies show that the communications infrastructure investments
are significant for economic growth and development. The telephone lines (e.g. copper wires)
form the basic infrastructure component for communications, but dial-up computer modems are
not sufficient in the rapidly evolving world of broadband requirements. Broadband refers to
high speed Internet communications, typically faster than the 56.6 kilobits per
second (kbps) offered by dial-up modems (the FCC defined the first generation threshold of
broadband as 200 kbps). With the increase in demand for high bandwidth due to IP based
services, the demand for broadband infrastructure has escalated. According to the Pew
Internet’s 2007 survey, 47 percent of adults have broadband at home, up 5 percentage points from 2006
(Horrigan, 2007). While there is extensive coverage of basic infrastructure of telephone lines
and power lines (overhead or underground) across the country, the availability of more
advanced broadband infrastructure is uneven and has yet to catch up. The catch-up game is an
interminable one, since broadband technology is also evolving quickly. Technology
policymakers in the state and local governments need to be aware of the evolving technologies
to make judicious infrastructure choices.
The evolving broadband communications infrastructure includes both wired and wireless
technologies. Wired infrastructure is based on a cable connection (e.g. telephone, optical fiber,
or coaxial); wireless infrastructure is based on radio wave signals that do not require a physical
cable connection. Examples of wired broadband include Digital Subscriber Line (DSL), Cable,
Fiber to the Home (FTTH), and Broadband over Powerline (BPL). Wireless infrastructure
includes Wi-Fi hotspots, Ultra Wide Band, and Mesh networks. The choice between a wired and
wireless infrastructure is a paradoxical one for many local governments. Wired connections
have better quality of service (QoS), but are less flexible due to the requirement of physical connectivity; hence,
extensive infrastructure has to be laid to enable such connections. Wireless connections are
flexible, but may have lower QoS and be more prone to dropped calls and security lapses.
Wired infrastructure requires only marginal investments in urban areas where the infrastructure
may already have been installed; wireless infrastructure may be more advantageous in rural
areas where it is expensive to lay the wired infrastructure. Of course, the choices are not
mutually exclusive among the various wired and wireless systems; hybrid systems have also
evolved. Solutions for “last mile” problems (i.e. the final leg of connectivity to a customer from a
hub), for example, could be based on such hybrid systems.
DSL and Cable broadband build on existing copper wire connections of telephone and cable TV
respectively. The communications are based on transmission of electrical signals over the
copper network. Since this infrastructure already exists in most urban areas, additional
infrastructure investments are usually minimal. DSL and Cable are both widely available for
urban consumers at more affordable rates than other systems. The additional capacity in the
existing copper network comes from using frequencies above the voice band, together with
packet switching, which frees up capacity for routing more communications. Routers, modems,
and filters need to be added at the user end to separate
voice and data. DSL and Cable dominate broadband Internet penetration: in 2006, DSL
constituted 50 percent of home broadband connections and Cable constituted 41 percent.
According to FCC (2007), the number of DSL and Cable lines increased exponentially from 4
million in June 2000 to nearly 53 million in June 2006. Theoretically, DSL and Cable could offer
speeds of up to 10 and 30 megabits per second (mbps), respectively; however, the actual speeds
are lower and decline as more users share the network at the same time. Although DSL and
Cable speeds represent significant improvement for data transfer, the QoS may deteriorate for
voice and multimedia services.
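The contention effect noted above can be illustrated with a back-of-the-envelope calculation: a shared node's capacity divided among simultaneously active subscribers. The node capacity below is the theoretical Cable peak from the text; the user counts are purely illustrative.

```python
# Back-of-the-envelope sketch of contention on a shared access network:
# the node's capacity is divided among simultaneously active users.
# The 30 mbps figure is the theoretical Cable peak cited in the text;
# the user counts are illustrative.
def per_user_mbps(node_capacity_mbps, active_users):
    """Best-case effective throughput per simultaneously active user."""
    return node_capacity_mbps / max(active_users, 1)

for users in (1, 10, 50):
    print(users, "active users ->", per_user_mbps(30.0, users), "mbps each")
```

Real networks are more forgiving than this worst case, since subscribers are rarely all active at once, but the inverse relationship between simultaneous users and effective speed is the same.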
Unlike DSL and Cable, communication in optical fiber networks is through light signals, which
provides several advantages. Signal degradation and interference are lower in optical fiber than in
copper. Optical fiber cables are thinner than copper, allowing more lines in a cable of the same
diameter. Moreover, optical fibers provide broadband speeds of up to 10 gigabits per second (gbps)
(the T-carrier and Optical Carrier lines). Installing optical fiber cable provides cost savings
over the long run due to higher reliability and lower maintenance. Yet, optical fiber has
not become as popular as DSL and Cable, because the copper wire networks are more
extensive and the initial costs of laying fiber are higher. The number of optical fiber
based lines increased from nearly 0.4 million to about 0.7 million in June 2006 (FCC, 2007).
Two types of network connectivity are based on the optical fibers: Fiber to the Home (FTTH),
which delivers communication to the end user; and Fiber to the Curb (FTTC), which delivers to a
platform, with the last mile served by other modes (e.g. DSL, Cable, or wireless
systems). According to the FTTH Council (2006), FTTH served 936 communities in 47 states.
Broadband over Powerline (BPL) provides yet another prospect for wired broadband access
through the existing infrastructure network of electric power lines. When transmitting electricity,
power lines use a limited range of frequencies. BPL takes advantage of the unused
transmission capability of the power lines for communications, without disrupting the power
output. Hence, it is also called Power Line Communication (PLC). BPL is an emerging
technology, which can provide broadband speeds between 500 kbps and 3 mbps. FCC
identifies two components of BPL systems (NTIA, 2004, p. 1-1): Access and In-house. Access
BPL systems are the outdoor network of devices that use electrical power lines for transmitting
broadband data to, from, and within the geographic area. In-house BPL systems are the indoor
wiring and power outlets for networking within a building, and for connecting end-user devices to
the access BPL network. A basic BPL network is illustrated in Figure 1. Access BPL equipment
consists of injectors, repeaters, and extractors. BPL injectors (also, couplers) interface between
high speed optical fiber or other high speed broadband and the power lines (overhead or
underground). Repeaters are required at periodic distances on long power lines to keep the
signals from attenuating or distorting. Extractors provide the interface between the power line
carrying the BPL signals and the user’s building. The user could then connect a device (e.g.
computer, IPTV) with a BPL modem in a power outlet to have high-speed Internet access. Early
problems with BPL have included radio interference over the utility line, which negatively affects
ham radio operators (the American Radio Relay League, ARRL, protested BPL
implementation to the FCC). BPL deployment in the U.S. has lagged behind Europe due to the
peculiarity of power lines: unlike Europe, US utilities have differing standards of power systems
and grids. While European distribution transformers feed several homes (100 to 200), US
utilities typically have only a few (4 to 8) homes per transformer (Tongia, 2004). The number of BPL
based lines increased from nearly 4,000 in 2005 to a little over 5,000 in 2006 (FCC, 2007).
According to the United Power Line Council (UPLC, 2007), which is the FCC certified BPL
database manager, there were 35 BPL deployments across United States, ranging from small
pilot projects to large scale commercial deployments. These BPL deployments include
Cincinnati (catering to over 50,000 homes) and Manassas (catering to over 700 households).
BPL holds potential particularly for multihousing units (e.g. apartment complexes) where there
are scale efficiencies in using the power lines for providing Internet to several households.
[Insert Figure 1 around here]
Unlike the wired communications infrastructure described above, wireless infrastructure does
not require physical cables for making broadband connections. Wireless communications are
based on radio waves, where signals emitted by radio base stations, towers, and radio
devices are read by wireless devices using antennas. Although wired systems make up the major
portion of broadband penetration, wireless services have emerged as a significant contender in
the market. Satellite and other wireless based lines increased from over 0.65 million in June
2000 to 23 million in June 2006 (FCC, 2007). According to CTIA-Wireless Association (2006),
the number of wireless subscribers increased from 109.5 million in 2000 to 233 million in 2006;
about 12.8 percent of households in 2006 were wireless only. Wireless based communication
devices (e.g. cell phones, Personal Digital Assistants, PDAs) have also grown exponentially in
the 21st century. Government enterprises have increasingly adopted the wireless devices in the
work place—a Government Computer News (GCN) survey revealed that 86 percent of agency
managers use wireless technologies for conducting agency business (Walker, 2004). Thus,
wireless is a major technological development to contend with for e-government, both from
citizen and agency’s perspective. Wireless provides mobile Internet access to citizens and
government officials (e.g., coffee shops, cars); a single device can be used to make phone calls,
pay bills electronically, and access entertainment and data.
The transmission and reception of electromagnetic radio frequency is at the core of wireless
communications; hence, wireless infrastructure needs to address the management of frequency
spectrum. The FCC and the National Telecommunications and Information Administration
(NTIA) share responsibility for managing the spectrum. While FCC manages the spectrum used
by individuals (e.g., garage door openers), private sector (e.g., radio and television
broadcasters), and public safety and health officials (e.g., police and emergency medical
technicians), NTIA manages the spectrum used by the federal government (e.g., air traffic
control and national defense). Generally, devices using a particular radio frequency require FCC
license or NTIA authorization; these devices are protected from interference since other devices
are prohibited from using the frequency. Unlicensed devices do not require such a license or
authorization, but they are not protected from interference.
Wireless can be analog or digital. Analog refers to modulation (amplitude or frequency) of
sinusoidal radio wave forms for communications delivery and reception; cellular phones and
FM/AM radios are typically analog devices. Digital refers to binary (0 or 1) radio wave
transmission and reception. Digital wireless offers several advantages over analog: it can
accommodate more users (due to packet switching on channels), reduces background noise,
offers better sound quality, and provides more security. Moreover, digital wireless is IP based, so it can
support Internet communications (Web browsing, emailing, etc.). Indeed, analog devices are
becoming outmoded: from mid-February 2009, analog television services will be terminated under
the Digital Television Transition and Public Safety Act of 2005. Contentious debates have
followed in the auction process and usage of the recovered analog spectrum (700 MHz). Yet,
the auction is expected to raise over $10 billion to be put into the Digital Television Transition
and Public Safety Fund. From an e-government perspective, the fund will pay for emergency
and essential services, such as public safety interoperable communications, a national tsunami
warning program, enhanced 911, and essential air services.
Analog cellphones are also giving way to 3G (third generation) digital mobile phones (e.g.
Personal Communications Service, PCS, phones). The FCC discontinued the requirement that cellphone
providers offer analog services from mid-February 2009. Wireless companies have
already started to deploy broadband technologies on their mobile cellular networks operating on
licensed spectrum. Smart phones use broadband to integrate voice (VoIP), data
(documents), and the Internet (Web browsing, email). Newer 3G technologies, such as Evolution
Data Only (EVDO) and Universal Mobile Telecommunications System (UMTS) provide wireless
broadband services at speeds ranging from 300 kbps to 1 mbps.
Wireless communications infrastructure has also significantly advanced from the traditional cell
phone infrastructure, where communications between two phones are enabled through radio
communication with a tower in the geographic area. In contrast to analog cell phones, digital
wireless devices can be short-, medium-, or long-range broadband devices. Short-range
Personal Area Networks (PANs) span about 30 feet (they comply with IEEE 802.15 family of
standards). For example, Bluetooth and Ultra Wide Band (UWB) are PAN technologies.
Bluetooth equipment uses the unlicensed 2.4 GHz frequency and offers speeds of up to 720 kbps;
it can be used for home security, streaming audio, and ad-hoc file sharing. UWB uses low-powered
pulse modulation across a very wide bandwidth (often exceeding 1 GHz) and can reach much higher speeds of up to
100 mbps; the higher speeds allow it to be used for wireless monitors and faster data transfer
between devices. Medium range wireless is used for point-to-point communications up to
300 feet (they comply with IEEE 802.11 family of standards). Wireless Fidelity (Wi-Fi) devices
(e.g. network cards used in laptops) are typically medium range. Wi-Fi hotspots are venues
equipped with Wi-Fi antennas, enabling access to wireless Internet; as of this writing,
JiWire.com (2007), which tracks hotspots around the world, identified over 63,700
hotspots in the United States. Several mobile service providers use Wi-Fi hotspots to
complement their cellular services. Longer range networks are point-to-point or point-to-
multipoint networks that can span up to 30 miles. Wireless Metropolitan Area Networks (WMANs) are such
long-range networks, which can provide last mile connectivity. WMANs are vendor specific or
comply with IEEE (802.16) standards and often use Local Multipoint Distribution Service
(LMDS) for data speeds of up to 155 mbps within a 2 mile range. A more recent long-range
technology is WiMax, which is based on the improved 802.16 standards. WiMax networks
employ Orthogonal Frequency Division Multiplexing (OFDM) to provide data speeds of up to 75
mbps. OFDM, unlike LMDS, does not require line of sight for data transfer and can penetrate
obstructions like buildings and trees. Thus, WiMax represents an improvement over
WMANs. Mesh networks represent another recent development in long-range wireless
networks. They consist of several nodes (antennas) at short distances (i.e. there is no central
tower), enabling each antenna to act as an access point that broadcasts at lower power with less interference.
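The range classes described above can be summarized in a small lookup table. The figures are those cited in this chapter; actual coverage varies considerably with equipment and environment.

```python
# Nominal wireless range classes as described in this chapter; real
# coverage varies with equipment and environment.
CLASSES = [
    # (family, IEEE standard, nominal max range in feet, examples)
    ("PAN", "802.15", 30, "Bluetooth, UWB"),
    ("Wi-Fi", "802.11", 300, "hotspots, laptop network cards"),
    ("WMAN/WiMax", "802.16", 30 * 5280, "municipal broadband"),
]

def suggest(distance_feet):
    """Smallest class whose nominal range covers the link, else None."""
    for family, standard, max_feet, _examples in CLASSES:
        if distance_feet <= max_feet:
            return family, standard
    return None

print(suggest(25))          # a Bluetooth-scale link
print(suggest(2000))        # beyond Wi-Fi range; needs a WMAN
```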
From a government perspective, the provision of medium-range and long-range wireless
infrastructure has gained significance. In an effort to increase digital connectivity for tourism and
economic development, several cities have provided municipal broadband through Wi-Fi,
WiMax, or mesh networks. The infrastructure is either fully municipality-owned (e.g. Coffman Cove,
Alaska; Scottsburg, Indiana) or a joint venture with commercial operators (e.g. with Earthlink in
Philadelphia). Debates rage over whether municipalities should provide such wireless
services; industry advocates have argued that such services may be better provided by private
agencies (Gillett, 2006; New Millennium Research Council, 2005). Notwithstanding these
debates, deployment of city or region wide wireless services holds potential for e-government
processes, particularly for first responder services (e.g. police, fire, paramedics) and for field
work (e.g., on-site data processing by inspectors).
8. TECHNOLOGICAL PROSPECTS AND PROBLEMS FOR E-GOVERNMENT
The above review shows significant recent technological developments with
respect to e-government. Progress in computer and communications technology has facilitated e-
government processes. E-government is generally considered to be the services provided through the
Web, over the Internet. Yet, other technologies are also of significance to e-government. Four
such technological areas were identified in this chapter. IP based services such as VoIP and
IPTV provide both voice and video communications enhancement. Sensor devices like RFIDs
are used for unique identification of objects. GIS and GPS provide location based services,
including mapping and location in real time. Broadband infrastructure—wired as well as wireless
—provides the backbone for e-government processes.
The above technological developments offer several prospects for e-government. Web based
government services provide information, interactivity, and transactions; they are also
transforming government organizations. Recent studies (e.g. West, 2007) show that Web
services are approaching saturation across federal, state, and local government organizations. Yet,
there is much room for development with the evolution of Web 2.0 technologies. Blogs and
podcasts increase the capacity for public participation and discussion, and act as alternative
news forums. Since governments have vast amounts of demographic, geographic, economic,
health, agriculture, and other public domain data, they have the potential for becoming primary
sources for such data.
Although Web based systems have been at the core of e-government, other related
technologies have also facilitated e-government processes. Similar to Web services, IP based
services such as VoIP and IPTV are also based on packet switching technology. While the
implementation of VoIP and IPTV holds cost advantages for government enterprises, these
technologies are particularly beneficial for organizations with call centers that interact with the public (e.g.
311, 511, 911 systems). RFID devices are used for inventory and supply chain management,
electronic toll collection, tracking animal movements, smart ID cards, and so on. GIS is a
powerful tool for managing land, environmental resources, transportation, and other services.
GPS is used for real time tracking and dispatching of first response emergency vehicles such as
police cars, ambulances, and fire tenders. Wired and wireless broadband infrastructure enables
better communications, and is particularly useful for delivering audio and video. DSL, Cable,
optical fibers, BPL provide high bandwidth connections; Wi-Fi hotspots (e.g. in airports, coffee
shops) and Wi-Max provide medium- and long-range wireless connections.
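The GPS-based dispatching mentioned above can be reduced to a simple computation: given real-time vehicle coordinates, the dispatcher selects the available unit nearest an incident by great-circle distance. The sketch below is only an illustration; the vehicle IDs, coordinates, and field names are invented.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearest_vehicle(incident, vehicles):
    """Pick the available vehicle closest to the incident location."""
    available = [v for v in vehicles if v["available"]]
    return min(available,
               key=lambda v: haversine_km(incident[0], incident[1], v["lat"], v["lon"]))

# Hypothetical GPS positions of three ambulances in Miami
fleet = [
    {"id": "A1", "lat": 25.7617, "lon": -80.1918, "available": True},
    {"id": "A2", "lat": 25.7907, "lon": -80.1300, "available": True},
    {"id": "A3", "lat": 25.7563, "lon": -80.3700, "available": False},
]
print(nearest_vehicle((25.7743, -80.1937), fleet)["id"])  # A1 is nearest and available
```

In practice the vehicle positions would be refreshed continuously from GPS receivers over a wireless link, but the selection logic remains this simple distance minimization.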
Integration of two or more of the above technologies holds prospects for enhancing efficiency of
e-government. Sensor networks, for example, combine the Web, sensors like RFIDs, GPS, and
wireless communications for several purposes. For example, SensorNet, based in Oakridge
National Laboratory, combines such technologies for high risk incident management (e.g. near-
real-time detection, identification, and assessment of chemical, biological, radiological, nuclear,
and explosive (CBRNE) threats). SensorNet (http://www.sensornet.gov/) aims to provide a
common data highway for the processing and dissemination of data from CBRNE,
meteorological, video and other sensors in order to provide near-real-time information to
emergency management decision makers and first responders. The WaterWatch program of US
Geological Survey (USGS) similarly uses a network of sensors across the country to provide a
“real-time streamflow” map to track short-term changes in rivers and streams. The changes are
updated periodically and can be viewed in Google Earth. Mobile digital phones allow one to
speak over the phone, check emails, surf the Web, take photographs, and geocode (with GPS).
These multipurpose smart phones allow site inspectors and other field officials to carry out
their work directly on-site.
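The sensor-network pattern that WaterWatch exemplifies can be sketched as a two-step loop: collect periodic gauge readings, then flag stations whose readings exceed a threshold for decision makers. The station IDs and flood-stage values below are invented purely for illustration, not actual USGS data.

```python
# Hypothetical gauge readings: (station_id, streamflow in cubic feet per second)
readings = [("02231000", 95.0), ("02232000", 1450.0), ("02233000", 12.0)]

# Illustrative flood-stage thresholds per station (not real USGS values)
flood_stage = {"02231000": 800.0, "02232000": 1200.0, "02233000": 300.0}

def flag_alerts(readings, thresholds):
    """Return the ids of stations whose latest reading exceeds the threshold."""
    return [sid for sid, flow in readings
            if flow > thresholds.get(sid, float("inf"))]

print(flag_alerts(readings, flood_stage))  # only the second station exceeds its threshold
```

A production system adds the hard parts (sensor calibration, unreliable links, data fusion across sensor types), but the alerting core is this threshold comparison run at each update interval.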
The application of the above technologies, however, is not without problems. Security and
privacy are common concerns in implementing these technologies. In Web based systems, while hacking,
snooping, worms, and viruses could compromise the system integrity, spam and phishing
emails could burden email inboxes. IP based systems are also prone to similar security
breaches. RFIDs are looked upon as “spy chips” that could invade one’s privacy. With the
spread of GIS and GPS systems, locative spam is expected to become a common phenomenon
(Scharl, 2007). Usage of Web based GIS also raises security issues, ranging from the privacy
concerns of individual citizens (e.g. Google Street View, which shows street level photographs in
Google Maps) to the national security of countries (e.g. Google Earth’s mapping of defense
facilities). Wireless devices are also prone to snooping and other security problems. Security
and privacy are of particular concern in e-government for two reasons. First, governments host
sensitive data, such as that related to national security, financial transactions, personal data,
etc. Second, government organizations could themselves misuse the data, unless laws explicitly
guarantee privacy of citizens and circumscribe the use of such data.
A second concern with interconnecting the different technologies is interoperability. Systems
may be based on different standards or be proprietary. The standards of legacy systems in
government enterprises may differ from those of newer systems, or different departments within
the same organization may have different preferences, rendering their systems incompatible.
Moreover, the evolution of competing standards could hinder interoperability.
Interoperability between proprietary systems is rendered difficult when the hardware systems
are not compatible, or software codes are not compatible (e.g. back-end databases that cannot
communicate with each other). Proprietary systems can also create vendor lock-in; the iPhone,
for example, does not allow alternative carriers or unapproved programs. Klischewski (2004)
identifies two dimensions of
interoperability: information integration and process integration. He argues that interoperability
requires a guiding vision of integration and both technical and inter-organizational cooperation.
Technically, establishing open systems into which individual units can “plug and play” and
establishing standards across different units could facilitate interoperability. The IP standards,
for example, have facilitated the integration of data and voice (e.g. Web browsing, emailing,
VoIP, IPTV). XML and FGDC have become de facto standards for data sharing and geospatial
metadata, respectively.
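XML's role as a data-sharing standard can be shown with a minimal example: one agency publishes a record as plain XML, and another parses it with only a standard library, without any knowledge of the publisher's internal systems. The permit record and element names here are hypothetical.

```python
import xml.etree.ElementTree as ET

# A hypothetical permit record exchanged between two agencies as XML
record = """
<permit id="2007-0415">
  <applicant>Acme Builders</applicant>
  <parcel>12-3456-789</parcel>
  <status>approved</status>
</permit>
"""

root = ET.fromstring(record)          # parse the shared document
print(root.get("id"))                 # read an attribute: 2007-0415
print(root.findtext("status"))        # read an element's text: approved
```

Because both sides agree only on the document structure, not on software, either agency can replace its back-end systems without breaking the exchange; this decoupling is the practical benefit of standards-based interoperability.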
REFERENCES
Albrecht, K. and McIntyre, L. Spychips: How Major Corporations and Government Plan
to Track Your Every Purchase and Watch Your Every Move. Nashville: Nelson Current Books,
2005.
Alfonsi, Benjamin. I Want My IPTV: Internet Protocol Television Predicted a Winner,
IEEE Distributed Systems Online, 6, 2 (2005) (available at
http://ieeexplore.ieee.org/iel5/8968/30522/01407761.pdf, accessed on September 29, 2007).
Bevan, J. M. Track & Trace, Paper, Film and Foil Converter, 81, 8 (2007): pp. 32-36.
CTIA-The Wireless Association. CTIA Semi-Annual Wireless Industry Survey, 2006
(excerpt available at http://files.ctia.org/pdf/CTIA_Survey_Year_End_2006_Graphics.pdf,
accessed on September 30, 2007).
Federal Communications Commission (FCC). High-Speed Services for Internet Access:
Status as of June 30, 2006. Industry Analysis and Technology Division (Wireline Competition
Bureau), Washington, D.C., 2007 (available at http://fjallfoss.fcc.gov/edocs_public/attachmatch/
DOC-270128A1.doc, accessed on September 29, 2007).
FTTH Council. U.S. Optical Fiber Communities – 2006, 2006 (available at
http://www.ftthcouncil.org/documents/959055.pdf, accessed on September 29, 2007).
Garson, G.D. Public Information Technology and E-Governance: Managing the Virtual
State. Sudbury, MA: Jones and Bartlett Publishers, 2006.
Gillett, S.E. Municipal Wireless Broadband: Hype or Harbinger? Southern California Law
Review, 79, (2006) pp. 561-594.
Goodchild, M. F., Fu, P., and Rich P. Sharing Geographic Information: An Assessment
of the Geospatial One-Stop, Annals of the Association of American Geographers, 97, 2 (2007).
Government Accounting Office (GAO). Data Mining: Federal Efforts Cover a Wide
Range of Uses. GAO-04-548, Report to the Ranking Minority Member, Subcommittee on
Financial Management, the Budget, and International Security, Committee on Governmental
Affairs, U.S. Senate, 2004.
Horrigan, J. B. Home Broadband Adoption 2007, Pew Internet & American Life Project,
2007 (available at http://www.pewinternet.org/pdfs/PIP_Broadband%202007.pdf, accessed on
September 29, 2007).
JiWire.com. Wi-Fi Locations in United States, 2007 (available at
http://www.jiwire.com/browse-hotspot-united-states-us.htm, accessed on September 29, 2007).
Karygiannis, T., Eydt, B., Barber, G., Bunn, L., and Phillips, T. Guidelines for Securing Radio
Frequency Identification (RFID) Systems. Recommendations of the National Institute of
Standards and Technology (NIST). NIST Special Publication 800-98, 2007.
Kaylor, C.H. The Next Wave of E-Government: The Challenges of Data Architecture,
Bulletin of the American Society for Information Science and Technology, December/January
(2005): pp. 18-22.
Klischewski, R. Information Integration or Process Integration? How to Achieve
Interoperability in Administration, Lecture Notes in Computer Science (LNCS), R. Traunmüller
(Ed.): EGOV 2004, LNCS 3183 (2004): pp. 57–65.
Kuhn, D. R., Walsh, T. J., and Fries, S. Security Considerations for Voice Over IP
Systems. Recommendations of the National Institute of Standards and Technology, NIST
Special Publication 800-58, 2005.
Lawson-Borders, G. and Kirk, R. Blogs in Campaign Communication, American
Behavioral Scientist, 49, 4 (2005): pp. 548-559.
Lewan, T. Chip Implants Linked to Animal Tumors, Washington Post, September 8,
2007 (available at http://www.washingtonpost.com/wp-
dyn/content/article/2007/09/08/AR2007090800997_pf.html, accessed on September 29, 2007).
Multimedia Research Group, Inc. IPTV Market Leaders Report: March 2007, 2007
(available at http://www.mrgco.com/TOC_IPTV_MLR0307.html, accessed on September 29, 2007).
National Telecommunications and Information Administration (NTIA). Potential
Interference from Broadband over Power Line (BPL) Systems To Federal Government
Radiocommunications at 1.7 - 80 MHz. Phase 1 Study, 1, NTIA Report 04-413, 2004 (available
at http://www.ntia.doc.gov/ntiahome/fccfilings/2004/bpl/index.html, accessed on September 29, 2007).
New Millennium Research Council. ‘Not in the Public Interest—The Myth of Municipal
Wi-Fi Networks’: Why Municipal Schemes to Provide Wi-Fi Broadband Service with Public
Funds are Ill-Advised, Washington, D.C., 2005.
New Millennium Research Council. The State of IPTV 2006: The Advent of Personalized
Programming, Washington, D.C., 2006 (available at
http://www.newmillenniumresearch.org/archive/IPTV_Report_060706.pdf, accessed on
September 29, 2007).
O’Reilly, T. What is Web 2.0: Design Patterns and Business Models for the Next
Generation of Software. O'Reilly Media, Inc., 2005 (available at
http://www.oreilly.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html, accessed on
September 29, 2007).
Scharl, A. Towards the GeospatialWeb: Media Platforms for Managing Geotagged
Knowledge Repositories, In A. Scharl and K. Tochtermann (Eds), The GeospatialWeb - How
Geo-Browsers, Social Software and the Web 2.0 are Shaping the Network Society. London:
Springer, 2007.
Servon, L. Bridging the Digital Divide: Technology, Community, and Public Policy.
Malden, MA: Blackwell Publishing, 2002.
TeleGeography. U.S. VoIP Research Service, Washington, D.C., San Diego, Exeter:
TeleGeography Research (A Division of PriMetrica, Inc.), 2007 (available at
http://www.telegeography.com/products/voip/pdf/USVoIP_Exec_Summ.pdf, accessed on
September 29, 2007).
Toffler, A. Future Shock. New York: Random House, 1970.
Tongia, R. Can broadband over powerline carrier (PLC) compete? A techno-economic
analysis, Telecommunications Policy, 28 (2004) pp. 559–578.
United Power Line Council (UPLC). Status of Broadband over Power Line 2007,
Washington, D.C., 2007.
Walker, R.W. Government users are wild for wireless devices, Government Computer
News (GCN), 2004 (available at http://www.gcn.com/print/23_20/26652-1.html, accessed on
September 29, 2007).
West, D. M. State and Federal E-Government in the United States, 2007 (available at
http://www.insidepolitics.org/egovt07us.pdf, accessed on September 29, 2007).
Wyld, D.C. RFID: The Right Frequency for Government. E-Government Series, IBM
Center for E-Government, 2005.
Figure 1. A basic BPL system
[Figure omitted: a fiber/T1 backhaul feeds BPL equipment at the transformer, reaching a BPL
modem and PC at the customer premises]
Source: NTIA (2004)