Technology trends that will shape our future
2. Technology Trends that will endure
1. Long Term Evolution (LTE)
2. Smart Grids
3. Software Defined Networks
4. NoSQL
5. Near Field Communication (NFC)
6. Internet of Things (IoT)/Machine to Machine (M2M)
7. Big Data
8. Social Networks
9. Cloud Computing
03/14/12 Tinniam V Ganesh tvganesh.85@gm
2
3. The technology landscape
[Diagram: the trends above — LTE, Smart Grids, SDNs, NFC, IoT, Cloud Computing, Social Networks, Big Data and NoSQL — grouped under communication networks, ubiquitous computing, and scalable, distributed systems.]
4. Technology then, now and beyond
Then
• "It is conceivable that cables of telephone wires could be laid underground, or
suspended overhead, communicating by branch wires with private dwellings,
country houses, shops, manufactories, etc." – Alexander Graham Bell, 1877
• "I think there is a world market for maybe five computers." – IBM Chairman
Thomas Watson, 1943
• "Computers in the future may have only 1,000 vacuum tubes and perhaps only
weigh 1 1/2 tons." – Popular Mechanics, 1949
• "640K ought to be enough for anybody." – Microsoft Chairman Bill Gates, 1981
Now
• "IBM's Deep Blue computer defeats world chess champion in game one" – ABC
News, 1996
• "Computer Wins on 'Jeopardy!'" – New York Times, 2011
Future
Gesture recognition, cognitive intelligence, ubiquitous computing etc.
6. The forces behind LTE (4G)
There are two main drivers for LTE:
• Mobile data growth
• Smart devices
7. Mobile data growth
Mobile data is growing at an exponential rate
Mobile data traffic in the US & Europe is expected to grow at a CAGR of 55% & 42% respectively
Mobile data revenues are expected to grow at a rate of 18%
Mobile broadband connections will reach 1 billion by 2012, segmented between 3G &
4G technologies
"Customers Angered as iPhones Overload AT&T" – New York Times, 2009
8. Mobile devices
In the last 2 years
• 1 billion new mobile subscriptions added
• 2 billion wireless devices sold
Devices range from mobile phones, smartphones, netbooks, PDAs, wireless dongles and
tablets
• These devices are bandwidth hogs
• Currently there are 3.5 billion subscribers worldwide
9. The rise and rise of data
10. The birth of LTE (4G)
To handle the growth in mobile data traffic, a feasibility study on the
UTRA & UTRAN Long Term Evolution was started in December 2004.
The objective was "to develop a framework for the evolution of the 3GPP radio-access
technology towards a high-data-rate, low-latency and packet-optimized radio-access
technology"
3GPP Release 8 created the specifications of the EPS (Evolved Packet System), which
consists of the E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) and the
EPC (Evolved Packet Core).
11. The evolution to 4G
• 1G Advanced Mobile Phone Service (AMPS)
Analog, Circuit Switched, FDMA, FDD
US trials 1978; deployed in Japan (’79) & US (’83)
• 2G : Global System for Mobile (GSM)
Digital, Circuit Switched, FDMA and TDMA, FDD
• 2G : Code Division Multiple Access (CDMA)
Digital, Circuit Switched, FDMA, SS, FDD
Introduced in 1994 in North America
12. 2.5G, 3G
2.5G (GPRS)
– 2.5G GPRS is an enhancement over GSM
– Provides packet switched services (56 Kbps, 114 Kbps)
– Commercially available in 2001/2002
3G (IMT-2000)
– IMT-2000 was formed to handle higher network capacity
– 144 Kbps for mobile service
– 2 Mbps for fixed access
– Operates in the 2 GHz band
– The main technologies selected were
– Wideband CDMA (WCDMA)
– CDMA 2000 (an evolution of IS-95 CDMA)
– Commercially deployed in 2003/2004
13. 3.5G & 4G
3.5G
– HSD(U)PA (High Speed Downlink/Uplink Packet Access)
– HSPA (High Speed Packet Access)
– Current HSDPA deployments support downlink speeds of 1.8, 3.6, 7.2 and
14.0 Mbps
4G (LTE)
To handle even higher data throughputs we have the 4G technologies
– Long Term Evolution (LTE)
– Worldwide Interoperability for Microwave Access (WiMAX)
– Uses an all-IP core network
– Data rates up to 100 Mbps
– Introduced in 2010–2011 by Sprint/AT&T in the US
14. GSM Evolution for Data Access
15. Evolution from 1G to 4G
[Diagram: migration from 3.5G (HSDPA at 7.2 Mbps, HSUPA, HSPA at 14 Mbps, around 2008) to 3.99G/4G (LTE, up to 100 Mbps, 2010+).]
16. Elements of the LTE System
LTE encompasses the evolution of
• Radio access through E-UTRAN (eNodeB)
• Non-radio aspects under the term System Architecture Evolution (SAE)
The entire system composed of LTE & SAE is called the Evolved Packet System (EPS)
At a high level an LTE network is composed of
• an access network, the E-UTRAN
• a core network, the Evolved Packet Core (EPC)
18. LTE Network Elements
UE – User Equipment used to connect to the EPS (Evolved Packet System). This is
an LTE capable UE
The LTE network is comprised of a) Access Network b) Core Network
Access network
eNodeB (ENB) – The evolved RAN consists of a single node, the eNodeB, that
interfaces with the UE. The eNodeB hosts the PHY, MAC, RLC & RRC layers. It handles
radio resource management & scheduling.
Core Network (Evolved Packet Core – EPC)
MME (Mobility Management Entity) – Performs paging and chooses the S-GW
during UE attach
S-GW (Serving Gateway) – routes and forwards user data packets
P-GW (Packet Gateway) – provides connectivity between the UE and
external packet networks.
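The split of responsibilities above can be sketched as a toy model (illustrative class and method names only, not the 3GPP interfaces or signalling):

```python
# Toy model of the LTE/EPC elements described above (illustrative only;
# the real attach procedure is defined in 3GPP TS 23.401).
class PGW:
    """Packet Gateway: connects the UE to external packet networks."""
    def route_out(self, packet):
        return f"internet<-{packet}"

class SGW:
    """Serving Gateway: routes and forwards user data packets."""
    def __init__(self, pgw):
        self.pgw = pgw
    def forward(self, packet):
        return self.pgw.route_out(packet)

class MME:
    """Mobility Management Entity: chooses an S-GW during UE attach."""
    def __init__(self, sgws):
        self.sgws = sgws
    def attach(self, ue_id):
        # Trivial selection policy: hash the UE id onto one of the S-GWs.
        return self.sgws[hash(ue_id) % len(self.sgws)]

pgw = PGW()
mme = MME([SGW(pgw), SGW(pgw)])
sgw = mme.attach("imsi-404-45-1234")   # UE attach: the MME picks the S-GW
print(sgw.forward("uplink-data"))      # data path: UE -> eNodeB -> S-GW -> P-GW
```

The point of the sketch is the separation of concerns: the MME sits only in the control path, while user data flows through the S-GW and P-GW.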
20. LTE Technologies
LTE uses OFDM (Orthogonal Frequency Division Multiplexing) for lower latency and
better spectral efficiency
LTE uses MIMO (Multiple Input, Multiple Output): several transmit & receive paths
reduce interference while increasing spectral efficiency and throughput.
Flatter architecture – fewer network elements in the LTE Evolved Packet Core (EPC).
This results in lower latency because of a smaller number of hops as compared to 3G,
and the absence of an RNC-like Network Element (NE).
21. Operator strategies for LTE deployment
Voice & SMS are the main sources of revenue for telecom companies. An LTE strategy
can be one of the following:
1. Data-only services on LTE
2. Data-only services on LTE with 2G–3G voice
3. Data services on LTE and IMS-based VoIP (VoLTE)
4. Voice and data services on LTE (VoLGA)
These strategies are not alternatives but a path of evolution.
1. The data-only option is targeted at customers with LTE dongles, netbooks, air cards etc.
2. Voice will remain a major revenue generator for CSPs: LTE is used for data and the
2G–3G network for voice calls. Operators will need to implement a CS Fallback option
(3GPP TS 23.272) to support voice on 2G–3G networks; the call falls back to the
GSM/UMTS network for voice.
3. Voice and data service on LTE uses IMS for voice services and LTE for data, based on
an all-IP network. On Feb 15, 2010 the One Voice profile was adopted by the GSMA as
Voice over LTE (VoLTE). This is an end-to-end LTE ecosystem.
22. LTE benefits
LTE provides peak data rates of 100 Mbps downlink and
50 Mbps uplink
Reduced latency (<10 ms) for a better user experience
LTE has better spectral efficiency (can squeeze more data into the same spectrum)
Lower cost per bit (flatter architecture)
LTE base station cost is about 1/5th of the HSPA cost
Backwards compatible
o Works with GSM/EDGE/UMTS
o Utilizes existing 2G, 3G and new spectrum
Reduced CAPEX/OPEX via a simpler architecture
In short, LTE provides high throughput, low latency, better
spectral efficiency and lower cost-per-bit.
LTE provides for a better QoE (Quality of Experience)
23. LTE Services
Based on the features of LTE, newer and richer services can be provided:
– Video services (high throughput)
– Gaming (low latency)
– Real time video conferencing (high quality)
– M2M services
24. Telecom in the Indian context
• India will be re-auctioning the 2G spectrum from the 122 cancelled licenses
• The auction of the 3G spectrum by the government concluded in May 2010
after 34 days and 183 rounds of intense bidding by nine private operators for 22
circles. The 3G spectrum auction was a bonanza for the Indian Government, which
netted Rs. 66,000 crores
• India is to auction 4G spectrum later this year (29 Feb 2012)
• The Empowered Group of Ministers (EGoM) has decided to earmark the 700 MHz band
for offering fourth generation (4G) mobile services (March 5, 2012)
26. The Electric Grid
The Grid
“The Grid,” refers to the electric grid, a network of transmission lines, substations,
transformers and more that deliver electricity from the power plant to your home or
business.
• It’s what you plug into when you flip on your light switch or power up your computer.
Issue
• The issue with the traditional energy grid is that there are enormous losses in
transmission, and the grid is strained during peak usage. Moreover, any outage of
the energy grid can have a domino effect and effectively cause a blackout over
large areas.
– e.g. the 2003 blackout in the US, the largest in US history
27. What makes the Grid “smart”
• The Smart Grid will have intelligent sensors and smart meters
• The digital technology that allows for two-way communication between the utility and
its customers, and the sensing along the transmission lines is what makes the grid
smart
• These technologies will work with the electrical grid to respond digitally to our
quickly changing electric demand.
28. Aspects of a Smart Grid
The Smart Grid is made up of the following 4 aspects
Smart Home
Distribution Intelligence
Operation Centers
Plug-in vehicles
29. Smart Home
Smart Homes will be equipped with smart meters instead of traditional meters.
These meters will have two-way communication with your energy utility.
All the appliances in your home will be networked into an "Energy Management
System" (EMS).
Through the EMS you will be able to monitor your energy usage and save
money by running your appliances during off-peak hours.
Smart appliances will be able to communicate with the energy utility and
automatically turn off during peak periods and turn on when the cost of
energy is low.
This is known as "demand response": consumers change their consumption
patterns based on lower cost or other incentives offered by the utility companies.
The homes of the future will have solar panels or wind turbines that will generate power
and sell the excess power back to the Smart Grid.
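The demand-response idea above can be sketched as follows (the price feed, hours and threshold are made-up illustrations, not any real utility interface):

```python
# Illustrative demand-response sketch (hypothetical price signal and
# threshold, not a real utility API): a smart appliance defers its run
# to the cheapest upcoming off-peak hour signalled by the utility.
hourly_price = {          # price per kWh signalled by the utility (assumed data)
    "17:00": 0.30, "18:00": 0.35, "19:00": 0.28,   # peak hours
    "23:00": 0.09, "02:00": 0.07, "05:00": 0.08,   # off-peak hours
}

def schedule_run(prices, threshold=0.15):
    """Pick the cheapest hour below the threshold, else keep waiting."""
    cheap = {h: p for h, p in prices.items() if p < threshold}
    if not cheap:
        return None               # every slot is peak-priced: defer
    return min(cheap, key=cheap.get)

slot = schedule_run(hourly_price)
print(f"Dishwasher scheduled for {slot}")   # cheapest off-peak hour
```
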
30. Distribution Intelligence
The smart grid's transformers, switches and substations will be fitted with sensors that
measure and monitor the energy flow through the grid.
These sensors will be able to quickly detect faults and isolate the faulty section from
the rest of the network.
The Smart Grid will have software that gives the grid the capacity to self-heal after
outages, providing better resiliency to the network.
Besides this, security systems will play a key role in the Smart Grid.
31. Grid Operation Centers
• The energy grid consists of transformers, power lines and transmission towers. It is
absolutely essential that only as much power as needed is generated.
• Otherwise, like water sloshing through pipes, excess generated power can cause
oscillations and make the grid unstable, eventually leading to a blackout.
• The Smart Grid will have sensors all along the way that measure and monitor
energy usage and can respond quickly to any instability.
• It will have the power to self-heal.
32. Plug-in Electric Vehicles (PEV)
• Plug-in Electric Vehicles like the Chevy Volt, Ford Focus Electric, Nissan Leaf and
Tesla's electric vehicles are now appearing.
• These vehicles run entirely on electricity and will eventually reduce carbon
emissions, leading to a greener future.
• PEVs will plug into the grid and charge during off-peak periods.
• The advantage of PEVs is that the Smart Grid can route the energy stored in them to
the parts of the network that need it most.
• PEVs can serve as a distributed source of stored energy, supplying energy to
isolated regions during blackouts.
33. Key benefits from Smart Grids
Some of the key advantages of smart grids
- Better resiliency to failures and quicker recovery times
- Automatic re-routing of energy transmission in case of network failures
- Faster response to outages with the ability to isolate the faults
- Better integration with renewable energy like wind, solar energy
- Reduced losses and more efficiency built into the grid.
35. Smart Grid in Indian context
• India has recently launched a Smart Grid Task Force and a Smart Grid Forum,
coupled with $900B in planned investment for generation, transmission, distribution
and power quality.
• Research from Zpryme indicates that in 2015 India’s smart grid market will be $1.9
billion.
• Further, Zpryme predicts the country’s basic electrical infrastructure needs will grow
beyond that, totaling $5.9 billion in the same year.
36. Smart Grid in the Indian context
• Electricity generation in June of 2010 was 162 gigawatts, and is predicted to rise seven
to ten percent until 2018. By 2032, energy generation is expected to be 800
gigawatts.
• Technical issues also plague India’s grid, and line losses are averaging 26 percent with
some states as high as 62 percent.
• Electrical theft is widespread: when energy theft is factored in, transmission line
losses average 50 percent.
37. Guiding principles for Smart Grids – MoP
Based on an Indian model, developed indigenously
Focuses on power shortage problems
Addresses theft prevention and loss reduction
Provides power to rural areas
Develops alternative sources of power
Ensures that power is affordable
38. Agencies under MoP
The Ministry of Power (MoP) is the umbrella under which an array of agencies
operate, each in charge of a separate policy area
The four agencies under the MoP are
1. R-APDRP
2. DRUM
3. Indian Smart Grid Task Force
4. BIS
39. R-APDRP
R-APDRP - The Restructured-Accelerated Power Development and Reform Program (R-
APDRP) of 2008 intends to implement distribution reform and strengthen IT
innovation in India.
Split into two phases:
– the first concentrates on information and communications technology (ICT) and
investments in power infrastructure, to first measure and mitigate inefficiencies and
theft;
– the second phase will focus on implementing changes based upon that data and on
power transfer systems, with the broader goal of modernizing the electrical system as
it is built.
40. Agencies under MoP (contd.)
DRUM: Established just after the APDRP, the goal of the Distribution Reform, Upgrades
and Management (DRUM) program is to work in tandem with the RAPDRP to create
three “Centers of Excellence” that will serve as guides to improving the electrification
of the rest of India.
Smart Grid Task Force & Forum: The Smart Grid Task Force and Smart Grid Forum
were created in 2010 to coordinate smart grid activities in India.
Bureau of Indian Standards (BIS): BIS has taken the lead in adopting international
standards (particularly IEC 62056 and IEC 61850) common in the global smart grid
community.
41. Indian Smart Grid Task Force (ISGTF)
Under the ISGTF
• The eight smart grid pilots, announced on July 19, 2011 by Sam Pitroda,
chairman of the Smart Grid India Task Force and adviser to the Prime
Minister, are set to be rolled out in the coming months.
• The Power Ministry is going to finalize these 8 projects, worth Rs. 500 crore (US $
9.69 million) – Economic Times
43. Paradigm shift in networking
• Networks and networking as we know them are on the verge of a momentous change,
thanks to a path-breaking technological concept known as Software Defined Networks
(SDN).
• SDN is the result of pioneering efforts by Stanford University and the University of
California, Berkeley. It is based on the OpenFlow protocol and represents a
paradigm shift in the way networking elements operate.
OpenFlow link: http://www.openflow.org/
44. The philosophy behind SDNs
• Software Defined Networks (SDN) decouples the routing and switching of the data
flows and moves the control of the flow to a separate network element namely, the
flow controller.
• The motivation for this is that the flow of data packets through the network can be
controlled in a programmatic manner.
45. SDN architecture
The OpenFlow architecture has 3 components:
• the Flow Controller that controls the flows,
• the OpenFlow switch,
• the Flow Table.
A secure channel connects the Flow Controller to the OpenFlow switch.
46. OpenFlow protocol
• The OpenFlow protocol is an open specification of an API for modifying the flow
table that exists in routers and Ethernet switches.
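A flow table is essentially a prioritized list of match-action rules. A minimal sketch of that idea (simplified field names and actions, not the OpenFlow wire format):

```python
# Minimal sketch of the match-action idea behind an OpenFlow-style flow
# table (illustrative; real match fields are defined in the OpenFlow spec).
flow_table = [
    # (match fields, action) — the first matching rule wins
    ({"dst_ip": "10.0.0.2"},                ("forward", "port2")),
    ({"dst_ip": "10.0.0.3", "tcp_dst": 80}, ("forward", "port3")),
    ({},                                    ("send_to_controller", None)),  # table-miss
]

def lookup(packet):
    """Return the action of the first rule whose fields all match the packet."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return ("drop", None)

print(lookup({"dst_ip": "10.0.0.2"}))      # ('forward', 'port2')
print(lookup({"dst_ip": "192.168.1.9"}))   # table miss -> sent to the controller
```

The table-miss rule is what gives the controller programmatic control: unknown flows are punted to it, and it installs new rules in response.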
48. SDN benefits
Advantages of SDNs:
• separating the control and data planes of network routers
• the ability to modify and control different traffic flows through a set of network
resources
• the ability to virtualize the network resources.
Virtualized network resources are known as a "network slice". A slice can span several
network elements including the network backbone, routers and hosts.
50. NoSQL
In large web applications where performance and scalability are key concerns, a
non-relational (NoSQL) database is often a better choice than traditional relational
databases like MySQL, Oracle, PostgreSQL etc.
51. ACID property DBs
• While traditional databases are designed to preserve the ACID (atomic, consistent,
isolated and durable) properties of data, they are suited only to small, frequent
reads/writes.
52. Scalability criteria
However, when an application needs to scale to handle
millions of transactions, the NoSQL model works better.
There are several examples of such databases – the better known are Google's
BigTable, HBase, Amazon's Dynamo, CouchDB & MongoDB.
These databases run on large numbers of regular commodity servers. Accesses
to the data are based on get(key) or set(key, value) style APIs.
53. Distributed Hash Table (DHT)
The database itself is distributed across several commodity servers.
Accesses to the data are based on a consistent hashing scheme, for example the
Distributed Hash Table (DHT) method.
In this method the key is hashed efficiently to one of the servers, which can be
visualized as lying on the circumference of a circle.
The Chord system is one example of a DHT algorithm.
Once the destination server is identified, that server does a local search of its data for
the key's value.
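The hashing-onto-a-circle scheme described above can be sketched like this (a bare ring without the virtual nodes and replication that production DHTs add; the server names are made up):

```python
# Sketch of consistent hashing as used for DHT-style key placement.
import bisect
import hashlib

def h(value: str) -> int:
    """Hash a string onto a point on the ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers):
        # Each server sits at the point its name hashes to.
        self.points = sorted((h(s), s) for s in servers)

    def server_for(self, key: str) -> str:
        """Walk clockwise from the key's point to the first server."""
        keys = [p for p, _ in self.points]
        i = bisect.bisect(keys, h(key)) % len(self.points)  # wrap around
        return self.points[i][1]

ring = Ring(["server-a", "server-b", "server-c"])
owner = ring.server_for("user:42")   # the key is hashed onto the ring;
# the owning server then does a local lookup for the key's value
```

The appeal of the circle layout is that adding or removing a server moves only the keys on the adjacent arc, not the whole keyspace.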
56. Distributing data
• The ability to distribute data and queries across several servers provides the key
benefit of scalability.
• Clearly, a single database handling an enormous number of transactions will
suffer performance degradation as the number of transactions increases.
• Applications that have to frequently access and manage petabytes of data will clearly
have to move to the NoSQL paradigm of databases.
57. CAP theorem
• While NoSQL databases like HBase and Dynamo do not have ACID properties, they
generally follow the CAP theorem.
• The CAP (Consistency, Availability and Partition tolerance) theorem states that a
distributed system cannot achieve all 3 CAP properties simultaneously.
• NoSQL databases, in order to provide availability, typically also replicate data
across servers so they can handle server crashes.
• Since data is replicated across servers, there is the issue of maintaining consistency
across the servers. Amazon's Dynamo system is based on a concept called "eventual
consistency", where the data becomes consistent after a few seconds.
• What this signifies is that there is a small interval in which the data is not consistent.
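That small window of inconsistency can be illustrated with a toy replicated store (loosely modelled on the eventual-consistency idea described above; the sync function stands in for background anti-entropy):

```python
# Toy illustration of eventual consistency (assumed behaviour, not any
# real database's API): a write is acknowledged after reaching one
# replica and propagates to the others later, so reads can be stale.
replicas = [{"x": 1}, {"x": 1}, {"x": 1}]

def write(key, value):
    replicas[0][key] = value          # acknowledged after one replica

def anti_entropy():
    for r in replicas[1:]:            # background sync; in a real system
        r.update(replicas[0])         # this runs "a few seconds" later

write("x", 2)
stale = replicas[2]["x"]              # 1: this replica has not synced yet
anti_entropy()
fresh = replicas[2]["x"]              # 2: now all replicas agree
```
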
58. More on NoSQL
• Since NoSQL databases are non-relational, they do not provide the entire spectrum of
SQL queries.
• Since NoSQL is not based on the relational model, queries based on JOINs
must be implemented by iteration in the application.
• Hence the design of any application that needs to leverage the benefits of such non-
relational databases must clearly separate
– the data management layer from
– the data storage layer.
• By separating the data management layer from how the data is stored we can easily
accrue the benefits of databases like NoSQL.
59. Challenges
• However, distributing data across several commodity servers brings its own
challenges, beyond choosing an appropriate function for distributing the queries.
• For example, the NoSQL database has to be able to handle new servers
joining the system.
• Similarly, since the NoSQL database runs on general purpose commodity servers,
the DHT algorithm must be able to handle server crashes and failures.
• In a distributed system this is usually done as follows: the servers
periodically exchange messages in order to update and maintain their list
of the active servers in the database system. This is performed through a method
known as a "gossip protocol".
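The membership exchange can be sketched as a toy gossip round over three nodes (illustrative only; real protocols such as SWIM or Cassandra's gossiper add heartbeat timeouts and failure detection):

```python
# Toy gossip-style membership exchange: each node keeps a view of the
# cluster (name -> heartbeat counter) and merges views with a random peer.
import random

views = {
    "n1": {"n1": 0, "n2": 0},
    "n2": {"n2": 0, "n3": 0},
    "n3": {"n3": 0},
}

def gossip_round():
    """Every node bumps its own heartbeat and merges views with one peer."""
    for node, view in views.items():
        view[node] += 1                               # "I am alive"
        peer = random.choice([n for n in views if n != node])
        merged = {k: max(view.get(k, -1), views[peer].get(k, -1))
                  for k in set(view) | set(views[peer])}
        views[node] = dict(merged)                    # exchange is two-way
        views[peer] = dict(merged)

for _ in range(5):
    gossip_round()
# after a few rounds every node has learned of every member
assert all(set(v) == {"n1", "n2", "n3"} for v in views.values())
```

Taking the per-node maximum of the heartbeat counters is what lets stale entries be detected: a member whose counter stops rising everywhere can be declared dead.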
62. Near Field Communication (NFC)
• Near field communication (NFC) is a set of standards for smartphones and similar
devices to establish radio communication with each other by touching them together
or bringing them into close proximity, usually no more than a few centimetres.
• Communication is also possible between an NFC device and an unpowered NFC chip,
called a "tag".
• NFC standards cover communications protocols and data exchange formats, and are
based on existing radio-frequency identification (RFID) standards including ISO/IEC
14443
63. Near Field Communication (NFC)
• Near Field Communications (NFC) is a technology whose time has come. Mobile
phones enabled with NFC technology can be used for a variety of purposes.
• One such purpose is integrating credit card functionality into mobile phones
using NFC.
64. NFC players
• Already the major players in mobile are integrating NFC into their newer versions of
mobile phones including Apple’s iPhone, Google’s Android, and Nokia.
• We will never again have to carry a stack of credit cards in our wallets. Our
mobile phone will double up as a Visa, MasterCard, etc.
65. NFC applications
• NFC allows retail stores to send promotional coupons to subscribers in the
vicinity of the shopping mall, and lets friends in close proximity exchange
contact lists.
• Posters or trailers of movies running in a theatre can be sent as multi-media clips
when traveling near a movie hall.
• Exchange of information such as schedules, maps, business cards and coupons
in a few hundred milliseconds
• Transferring images and posters for display and printing
68. Connected world
• We are progressively moving towards a more connected world, using a variety of
devices to connect to each other and to the Net.
• We are connected to the network through the mundane telephone, mobile phone,
desktop, laptop or iPads.
• We use the devices for sending, receiving, communicating or for our entertainment.
69. Internet of Things (IoT)
• In 2005, the International Telecommunication Union's Telecommunication
Standardization Sector (ITU-T), which coordinates standards for telecommunications
on behalf of the ITU, came up with a seminal report, "The Internet of Things."
• The report visualizes a highly interconnected world made of tiny passive or
intelligent devices that connect to large databases and to the "network of
networks", the Internet.
70. Adding another dimension
This ‘Internet of Things’ or M2M (machine-to-machine) network adds another dimension
to the existing notions of networks. It envisages an anytime, anywhere, anyone,
anything network bringing about a complete ubiquity to computing.
71. The IoT Eco-system
The devices in this M2M network will be made up of passive elements, sensors and
actuators that communicate with the network.
Soon everyday articles from tires to toasters will have these intelligent devices
embedded in them.
72. Tomorrow’s ubiquitous world of tags,
sensors and smart systems
(Figure source: Info Day, Budapest, 22 January 2009)
73. The future
“We are heading into a new era of ubiquity, where the users of the Internet will be
counted in billions, and where humans may become the minority as generators and
receivers of traffic. Changes brought about by the Internet will be dwarfed by those
prompted by the networking of everyday objects “ – UN report
74. RFID tags
• Radio Frequency Identification (RFID) was an early and pivotal enabler of this
technology: a tiny tag responds in the presence of a receiver which emits a
signal. Retailers keep track of the goods going out of warehouses to their stores with
this technology.
– (e.g. JustBooks)
75. IoT applications
• In a typical scenario one can imagine a retail store in which all items are RFID tagged.
• A shopping cart fitted with a receiver can automatically track all items placed in the
cart for immediate payment and check-out.
• Another interesting application is in the payment of highway tolls.
• Similarly, plans are already afoot for embedding intelligent devices in the tires of
automobiles.
• The devices will be used for measuring tire pressure, speed etc., warning
drivers of low pressure or tire wear and tear. The devices will send data to the
network, where it can be processed.
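The RFID shopping-cart scenario can be sketched in a few lines (the tag ids, items and prices are invented for illustration; a real cart would stream EPC tag reads from its receiver):

```python
# Toy sketch of the RFID-tagged shopping cart described above.
catalog = {                 # tag id -> (item, price) known to the store
    "tag-001": ("milk", 3.50),
    "tag-002": ("bread", 2.25),
    "tag-003": ("coffee", 7.80),
}

class SmartCart:
    def __init__(self):
        self.tags = set()

    def on_tag_read(self, tag_id):
        self.tags.add(tag_id)          # the cart's receiver sensed a tag

    def checkout(self):
        """Total every tagged item for immediate payment and check-out."""
        return sum(catalog[t][1] for t in self.tags)

cart = SmartCart()
for tag in ["tag-001", "tag-003"]:     # items dropped into the cart
    cart.on_tag_read(tag)
print(f"total: ${cart.checkout():.2f}")   # total: $11.30
```
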
76. IoT applications (contd.)
• This technology is also well suited for insurance companies which can give discounts
to safe drivers based on the data sent by these sensors. Other promising applications
include an implantable device capable of remote monitoring of patients with heart
problems. It can warn the physician when it detects an irregularity in the patient’s
heart rhythm.
77. IoT (Bridges & Mines)
• The ‘Internet of Things’ can also play an important role in monitoring the stress and
the load on bridges and forewarn when the stress is too great and a collapse is
imminent.
• In mines, the sensors can send real-time info on the toxicity of the air, the structural
strength of the walls or the possibility of flooding.
79. Data explosion
• Global mobile data traffic grew 2.3-fold in 2011, more than doubling for the fourth
year in a row.
• The 2011 mobile data traffic growth rate was higher than anticipated: last year's
forecast projected a growth rate of 131 percent, while this year's estimate is that
global mobile data traffic grew 133 percent in 2011.
• The devices connected to the net range from mobiles, laptops, tablets and sensors to
the millions of devices based on the "Internet of Things".
• All these devices will constantly spew data onto the internet.
80. Data explosion
Monthly global mobile data traffic will surpass 10 exabytes (10^18 bytes) in 2016.
Over 100 million smartphone users will belong to the "gigabyte club" (over 1 GB per
month) by 2012.
Mobile data traffic will grow at a compound annual growth rate (CAGR) of 78 percent
from 2011 to 2016, reaching 10.8 exabytes per month by 2016.
Global mobile data traffic will increase 18-fold between 2011 and 2016.
Source: Cisco Visual Networking Index –
http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-520862.html
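The 18-fold growth and the 78 percent CAGR quoted above are mutually consistent, as a one-line check confirms:

```python
# Consistency check of the Cisco figures quoted above: an 18-fold
# increase over the 5 years from 2011 to 2016 corresponds to a
# compound annual growth rate of 18**(1/5) - 1, i.e. about 78 percent.
growth_multiple = 18
years = 2016 - 2011
cagr = growth_multiple ** (1 / years) - 1
print(f"CAGR = {cagr:.0%}")            # CAGR = 78%
```
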
81. Devices galore
• By the end of 2012, the number of mobile-connected devices will exceed the number
of people on earth, and by 2016 there will be 1.4 mobile devices per capita.
• There will be over 10 billion mobile-connected devices in 2016, including
machine-to-machine (M2M) modules, exceeding the world's population at that time
(7.3 billion).
82. Region wise contribution
The number of mobile-connected devices will exceed the world's population in
2012.
• The average mobile connection speed will surpass 1 Mbps in 2014.
83. Bandwidth hungry devices
Due to increased usage on smartphones, handsets will exceed 50 percent of mobile
data traffic in 2014.
• Monthly mobile tablet traffic will surpass 1 exabyte per month in 2016.
• Tablets will exceed 10 percent of global mobile data traffic in 2016.
85. Data, data everywhere …
• Fortunately, the explosion in data has been accompanied by falling storage prices
and extraordinary increases in processing capacity.
• The data by themselves are useless.
• However, if processed, they can provide insights into trends and patterns that can be
used to make key decisions.
• For example, the data exhaust from a user's browsing trail and click stream provides
important insight into user behavior, which can be mined to make important
decisions.
• Similarly, inputs from social media like Twitter and Facebook provide businesses
with key inputs that can be used for making business decisions.
86. What is BigData?
• BigData involves determining patterns, trends and outliers among mountains of data
• The patterns are used to make business and strategic decisions
• Analyzing large data sets—so-called big data—will become a key basis of competition,
underpinning new waves of productivity growth and innovation
87. The 3 V’s of Big Data
• This is where Big Data enters the picture. Big Data enables the management of the 3
V's of data:
– volume,
– velocity and
– variety.
There is a virtual explosion in the volume of data.
The rate at which data is generated, or its velocity, is also growing phenomenally with
the variety and number of devices connected to the network.
Besides, there is tremendous variety to the data: it can be structured, semi-structured
or unstructured. Logs could be in plain text, CSV, XML, JSON and so on.
These 3 V's make Big Data techniques the most suitable way of crunching this
enormous proliferation of data at the velocity at which it is generated.
03/14/12 Tinniam V Ganesh tvganesh.85@gm
87
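The "variety" problem can be made concrete with a minimal Python sketch that normalizes mixed-format log records into a common shape (the JSON/CSV formats and the field names here are illustrative assumptions, not from any particular system):

```python
import csv
import io
import json

def parse_log_line(line):
    """Normalize one log record (JSON or CSV) into a common dict."""
    line = line.strip()
    if line.startswith("{"):                       # semi-structured JSON record
        rec = json.loads(line)
    else:                                          # structured CSV record: ts,user,action
        ts, user, action = next(csv.reader(io.StringIO(line)))
        rec = {"ts": ts, "user": user, "action": action}
    return rec

mixed_logs = [
    '{"ts": "2012-03-14T10:00:00", "user": "alice", "action": "click"}',
    "2012-03-14T10:00:05,bob,view",
]
records = [parse_log_line(l) for l in mixed_logs]
print(records)
```

Real pipelines face far messier inputs, but the pattern is the same: map every source format onto one schema before analysis.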
88. Case study - Walmart
• Wal-Mart, the large US retailer, has 460
terabytes of data stored on Teradata
mainframes.
• Hurricane Frances was on its way,
barreling across the Caribbean, threatening a
direct hit on Florida's Atlantic coast.
• A week ahead of the storm's landfall, Wal-
Mart's CIO pressed her staff to come up with
forecasts based on the available terabytes of
shopper history data.
• The experts mined the data and found that
the stores would indeed need certain
products, and not just the usual flashlights:
for example, Pop-Tarts and beer.
• Trucks filled with toaster pastries and six-
packs were sent to Florida and sold quickly.
89. Where is Big Data being used?
• Big Data has been used by energy companies to identify key locations for
positioning their wind turbines. Identifying the precise location requires that
petabytes (10^15 bytes) of data be crunched rapidly and appropriate patterns be identified.
• Used in predicting traffic gridlocks.
• Other uses of Big Data include identifying brand sentiment from social media,
customer behavior from click exhaust, and optimal power usage by consumers.
90. What makes Big Data different?
• The key differences between Big Data and traditional processing methods are the
volume of data that has to be processed and the speed with which it has to be processed.
As mentioned before, the 3 V’s of volume, velocity and variety make traditional
methods unsuitable for handling this data.
Associated technologies for processing
– Hadoop
– Cloud
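Hadoop's processing model is MapReduce. A minimal Python sketch of the map, shuffle and reduce phases (a toy in-memory word count, not Hadoop's actual Java API) shows the idea:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Emit (word, 1) pairs, one per word in the document
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Group intermediate pairs by key, as Hadoop's shuffle step does
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big insights", "data at rest data in motion"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts["data"])  # 3
```

Because map tasks are independent per document and reduce tasks are independent per key, Hadoop can run both phases across thousands of machines, which is what makes it suitable for the 3 V's.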
92. The enterprise & social networking
• We are in the midst of a Social
Networking revolution as we progress
into the next decade.
• As technology becomes more complex
in a flatter world, cooperating and
collaborating will be not merely
necessary but imperative.
• McKinsey, in its recent report “Wiring
the Open Source Enterprise”, talks
of the future “networked
enterprise”, which will require the
enterprise to integrate Web 2.0
technologies into its enterprise
computing fabric.
93. The networked enterprise
• Social Software utilizing Web 2.0
technologies will soon become the new
reality if organizations want to stay
agile.
• Social Software includes those
technologies that enable the enterprise
to collaborate through blogs, wikis,
podcasts, and communities.
• A collaborative environment will
unleash greater fusion of ideas and can
trigger enormous creative processes in
the organization.
• According to Prof. Clay Shirky of New
York University the underused human
potential at companies represents an
immense “cognitive surplus” which can
be tapped by participatory tools such
as Social Software.
94. Key benefits of Social Networking
A fully operational social network in the
organization will
• enable quicker decision making,
• trigger creative collaboration and
• deliver a faster ROI for the
enterprise.
A shared knowledge pool enables
• easier access to key information from
across the enterprise and
• faster decision making.
95. The Social Network community
The social network community could include
• Employees
• Customers
• Contractors
• Suppliers
96. Organizations using Social Networking
• American Express has a blog entitled OPEN Forum
• Bank of America has two social networking pages: Small Business Community on
Clearspace and Medal Me! on Facebook
• Ford has a social network, FORDSocial
• IBM has a social network, developerWorks
• Starbucks has My Starbucks Idea
And many more.
98. What is Cloud Computing?
• Cloud computing is Internet-based
computing, whereby shared resources,
software, and information are provided
on demand, like the electricity grid.
• Cloud Computing refers to both the
applications delivered as services over the
Internet and the hardware and systems
software in the datacenters that provide
those services.
• Allows for utility-style usage of
computing resources.
• The cloud promises real cost savings and
agility to the customer.
• Applications can expand and contract with
the natural ebb and flow of the business life
cycle.
• Uses virtualization technologies.
99. Why the cloud?
Cloud Architectures address key difficulties surrounding large-scale data processing.
1. In traditional data processing it is difficult to get as many machines as an application
needs.
2. It is difficult to get the machines when one needs them.
3. It is difficult to distribute and coordinate a large-scale job on different
machines, run processes on them, and provision another machine to recover if one
machine fails.
4. It is difficult to auto-scale up and down based on dynamic workloads.
5. It is difficult to get rid of all those machines when the job is done.
Cloud Architectures solve such difficulties.
100. The cloud perspective
• From an engineering perspective
the cloud is a computing architecture
characterized by a large number of
interconnected identical computing
devices that can scale on demand and
that communicate via an IP network.
• From a business perspective it is
computing services that are scalable
and billed on a usage basis.
• Allows customers to shift traditional
Capital Expenditures (CapEx) into
their Operating Expenditure (OpEx)
budgets
101. What is virtualization?
• Transforms “one server, one application” into multiple virtual
machines on each physical machine.
• Runs multiple OSes on a single machine.
• A thin layer is introduced either over the hardware or on top of the OS.
• Uses a technology known as a hypervisor.
• Leaders in virtualization: VMware, Citrix (Xen).
102. What is a hypervisor?
The hypervisor is a layer of software running directly on the computer
hardware, replacing the operating system and thereby allowing the
hardware to run multiple guest operating systems concurrently.
The hypervisor presents each guest OS with a virtual operating
platform and manages the execution of the guest OS.
Multiple instances of a variety of OSes can share the virtualized resources.
103. Benefits of Hypervisor
• Increase server utilization
• Consolidate server farms
• Decrease complexity
• Decrease Total Cost of Ownership (TCO)
105. Cloud based Services
Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS) is the
• delivery of hardware (server, storage and network), and
• delivery of associated software (operating system, virtualization technology, file
system) as a service.
• Users must deploy and manage the software services themselves.
• Amazon Web Services Elastic Compute Cloud (EC2) and Simple Storage Service (S3)
are examples of IaaS offerings.
106. Cloud based services
Platform as a Service (PaaS)
Platform as a Service (PaaS) is
• an application development and deployment platform delivered as a service over the
Web.
• The platform consists of infrastructure software, and typically includes a database,
middleware and development tools.
• Some PaaS offerings have a specific programming language or API. For example,
Google AppEngine is a PaaS offering where developers write in Python or Java.
Software as a Service (SaaS)
• A SaaS provider typically hosts and manages a given application in their own data
center and makes it available to multiple tenants and users over the Web.
• Oracle CRM On Demand, Salesforce.com, and NetSuite are some well-known
SaaS examples.
108. Types of Cloud Architectures
There are 3 types of Cloud Architecture:
1. Public clouds
2. Private clouds
3. Hybrid clouds
109. Public Clouds
• External organizations provide the infrastructure and management required to
implement the cloud.
• Typically billed based on usage.
• Transfers the cost from a capital expenditure to an operational expense.
• Temporary applications, or applications with burst resource requirements, typically
benefit from the public cloud’s elasticity.
• Public clouds have the disadvantage of hosting your data in an offsite organization,
outside the legal and regulatory umbrella of your organization.
• In addition, most public clouds leverage a worldwide network of data centers, so
data may be stored in any jurisdiction.
• These issues create potential regulatory compliance problems which preclude the use
of public clouds for certain organizations or business applications.
110. Private Clouds
• The infrastructure is controlled completely by the enterprise.
• Typically, private clouds are implemented in the enterprise’s data center and
managed by internal resources.
• A private cloud maintains all corporate data in resources under the control of the
legal and contractual umbrella of the organization. This eliminates the regulatory,
legal and security concerns associated with information being processed on third-party
computing resources.
• Private clouds require both capital and operational expenditure.
• Require highly skilled labor to ensure that business service levels can be met.
• The private cloud can also be used by existing legacy IT departments to dramatically
reduce their costs, and as an opportunity to shift from a cost center to a value center
in the eyes of the business.
111. Hybrid Clouds
• To realize the benefits of both approaches, newer execution models have been
developed that combine public and private clouds into a unified solution.
• Applications with significant legal, regulatory or service level concerns for
information can be directed to a private cloud.
• Other applications, with less stringent regulatory or service level requirements, can
leverage a public cloud infrastructure.
• Implementing a hybrid model requires additional coordination between the
private and public service management systems.
• Involves a federated policy management tool, seamless hybrid integration, federated
security, information asset management, coordinated provisioning control, and
unified monitoring systems.
112. Business Benefits of Cloud Architectures
There are some clear business benefits to building applications using Cloud Architectures:
• Almost zero upfront infrastructure investment: Building a large-scale
system can cost a fortune in real estate, hardware (racks, machines,
routers, backup power supplies), hardware management (power management,
cooling), and operations personnel. With utility-style computing, there is no
fixed cost or startup cost.
• Just-in-time infrastructure: the solutions are low risk because you scale only as you grow.
114. Major Cloud providers
Amazon EC2
• Amazon Elastic Compute Cloud (EC2) is at one end of the spectrum. An EC2 instance
looks much like physical hardware, and users can control nearly the entire software
stack, from the kernel upwards. This low level makes it inherently difficult for
Amazon to offer automatic scalability and failover, because the semantics associated
with replication and other state management issues are highly application-dependent.
Google’s AppEngine
• AppEngine is targeted exclusively at traditional web applications, enforcing an
application structure of clean separation between a stateless computation tier and a
stateful storage tier. AppEngine’s impressive automatic scaling and high-availability
mechanisms, and the proprietary MegaStore data storage available to AppEngine
applications, all rely on these constraints.
Microsoft Azure
• Applications for Microsoft’s Azure are written using the .NET libraries, and compiled
to the Common Language Runtime, a language-independent managed environment.
Thus, Azure is intermediate between application frameworks like AppEngine and
hardware virtual machines like EC2.
115. When to use the Cloud?
• A first case is when demand for a service varies with time. Provisioning a data center
for the peak load it must sustain a few days per month leads to underutilization at
other times, for example.
Instead, Cloud Computing lets an organization pay by the hour for computing
resources, potentially leading to cost savings even if the hourly rate to rent a machine
from a cloud provider is higher than the rate to own one.
• A second case is when demand is unknown in advance. For example, a web startup
will need to support a spike in demand when it becomes popular, followed potentially
by a reduction once some of the visitors turn away.
• Finally, organizations that perform batch analytics can use cloud computing to
finish computations faster: using 1000 EC2 machines for 1 hour costs the same as
using 1 machine for 1000 hours.
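This "cost associativity" of usage-based billing can be sketched in a few lines of Python (the $0.10 per machine-hour rate is an assumed illustrative figure):

```python
def job_cost(machines, hours_per_machine, rate_per_hour):
    # Usage-based billing: cost depends only on total machine-hours consumed
    return machines * hours_per_machine * rate_per_hour

RATE = 0.10  # assumed rate of $0.10 per machine-hour
fast = job_cost(machines=1000, hours_per_machine=1, rate_per_hour=RATE)
slow = job_cost(machines=1, hours_per_machine=1000, rate_per_hour=RATE)
assert fast == slow  # same bill, but the first schedule finishes 1000x sooner
print(f"Either way the job costs about ${fast:.2f}")
```

In practice the equivalence is only approximate: startup overhead per instance and imperfect parallelism shave some of the gain, but the incentive to parallelize remains.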
116. Typical charges on the Cloud
For example, the Elastic Compute Cloud (EC2) from Amazon Web Services (AWS) sells
1. 1.0-GHz x86 ISA “slices” for 10 cents per hour.
2. A new “slice”, or instance, can be added in 2 to 5 minutes.
3. Amazon’s Simple Storage Service (S3) charges $0.12 to $0.15 per gigabyte-month.

                   Windows Azure                        Google App Engine
CPU                $0.12/hr                             $0.10/hr
Bandwidth (out)    $0.15/GB                             $0.12/GB
Bandwidth (in)     $0.10/GB                             $0.10/GB
Storage (Files)    $0.15/GB/month                       Pricing unavailable (Blobstore)
Storage (DB)       $9.99/month (SQL Server up to 1 GB)  $0.15/GB/month ($0.005/GB/day)
117. Amazon EC2 pricing
Standard on- Linux/UNIX Windows Data Transfer IN
demand Usage Usage
instance All data transfer in $0.000 per GB
Small (Default) $0.085 per $0.12 per
hour hour
Data Transfer OUT
First 1 GB / month $0.000 per GB
Large $0.34 per $0.48 per
hour hour
Up to 10 TB / month $0.120 per GB
Extra Large $0.68 per $0.96 per
hour hour Next 40 TB / month $0.090 per GB
03/14/12 Tinniam V Ganesh tvganesh.85@gm
117
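A minimal Python sketch of estimating a monthly bill from the table above (the instance-type keys and the single outbound-transfer tier are simplifying assumptions):

```python
# On-demand Linux/UNIX rates from the EC2 price table above
HOURLY_RATE = {"small": 0.085, "large": 0.34, "xlarge": 0.68}
OUT_RATE_PER_GB = 0.12   # "up to 10 TB/month" tier; all transfer in is free

def monthly_bill(instance_type, hours, gb_out):
    """Estimate one month's charge for a single instance plus outbound data."""
    compute = HOURLY_RATE[instance_type] * hours
    transfer = OUT_RATE_PER_GB * max(gb_out - 1, 0)  # first 1 GB/month is free
    return compute + transfer

# One small instance running around the clock for 30 days, serving 100 GB out
print(round(monthly_bill("small", 24 * 30, 100), 2))
```

The point of such a back-of-the-envelope model is that cloud costs are fully itemized: compute, storage and bandwidth can each be budgeted and optimized separately.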
118. What does the cloud look like?
Data center of Facebook, Oregon, US
119. Cloud Worthiness
Processing Pipelines
• Document processing pipelines – convert hundreds of thousands of documents from
Microsoft Word to PDF, OCR millions of pages/images into raw searchable text
• Image processing pipelines – create thumbnails or low-resolution variants of an
image, resize millions of images
• Video transcoding pipelines – transcode AVI to MPEG movies
• Indexing – create an index of web crawl data
• Data mining – perform search over millions of records
Batch Processing Systems
• Back-office applications (in financial, insurance or retail sectors)
• Log analysis – analyze and generate daily/weekly reports
• Nightly builds – perform automated builds of the source code repository every
night in parallel
• Automated Unit Testing and Deployment Testing – deploy and perform
automated unit testing (functional, load, quality) on different deployment
configurations every night
120. Cloud worthiness
Websites
• Websites that “sleep” at night and auto-scale during the day
• Instant websites – websites for conferences or events (Super Bowl, sports
tournaments)
• Promotion websites
• “Seasonal” websites – websites that only run during the tax season or the holiday
season
Mobile interactive applications (M2M)
M2M services will be attracted to the cloud because these services generally rely on
large data sets that are most conveniently hosted in large datacenters.
Parallel batch processing
Cloud Computing presents a unique opportunity for batch-processing and analytics jobs
that analyze terabytes of data and can take hours to finish. If there is enough data
parallelism in the application, users can take advantage of using hundreds of
computers for a short time.
121. Cloud worthiness
The rise of analytics
A special case of compute-intensive batch processing is business analytics. A growing
share of computing resources is now spent on understanding customers, supply
chains, buying habits, ranking, and so on.
Extension of compute-intensive desktop applications
The latest versions of the mathematics software packages Matlab and Mathematica are
capable of using Cloud Computing to perform expensive evaluations.
122. Security in the Cloud
Data Confidentiality and Auditability
• Current cloud offerings are essentially public (rather than private) networks, exposing
the system to more attacks. There are also requirements for auditability, in the sense
of the Sarbanes-Oxley Act and the Health and Human Services Health Insurance
Portability and Accountability Act (HIPAA) regulations, that must be satisfied before
corporate data can be moved to the cloud.
• Other relevant standards include PCI DSS and SOX.
123. Key benefits of the Cloud
• More efficient resource utilization: better infrastructure utilization.
• Usage-based costing: utility-style pricing bills the customer only for the
infrastructure that has been used. The customer is not liable for the entire
infrastructure that may be in place.
• Potential for shrinking the processing time: parallelization is one of the
best ways to speed up processing. If a compute-intensive or data-intensive job that
can be run in parallel takes 500 hours on one machine, with Cloud
Architectures it is possible to spawn 500 instances and process
the same job in 1 hour. An elastic infrastructure gives the
application the ability to exploit parallelization in a cost-effective manner,
reducing the total processing time.
124. The future
A recent study by IDC predicts that
• revenue from cloud innovation will reach $1.1 trillion a year by 2015, and
• cloud computing will generate over two million jobs in India and nearly 14 million
new jobs worldwide by 2015.
125. Cloud providers in India
Source: www.techno-pulse.com
126. Tinniam V Ganesh
Feel free to get in touch with me at tvganesh.85@gmail.com
Do read my technical blog Giga thoughts : http://gigadom.wordpress.com/
129. Bandwidth shortage
• A key issue in today's computing infrastructure is data affinity, which results
from the dual issues of data latency and the economics of data transfer.
• Jim Gray (Turing Award, 1998), in his paper “Distributed Computing
Economics”, states that
$1
≈ 1 GB sent over the WAN
≈ 10 Tops (tera CPU operations)
≈ 8 hours of CPU time
≈ 1 GB disk space
≈ 10 M database accesses
≈ 10 TB of disk bandwidth
≈ 10 TB of LAN bandwidth
WAN bandwidth thus makes a disproportionate contribution to cost, which may lead to a
bandwidth shortage.
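Gray's equivalences suggest a simple rule of thumb for deciding whether to ship data to the computation or keep the computation near the data. A minimal Python sketch using the $1 equivalences above (the helper name and scenario figures are illustrative):

```python
# Rough cost equivalences from Gray's "Distributed Computing Economics":
# $1 buys ~1 GB sent over the WAN, or ~8 hours of CPU time.
COST_PER_GB_WAN = 1.0
COST_PER_CPU_HOUR = 1.0 / 8

def cheaper_to_ship_data(gb_to_move, cpu_hours_of_work):
    """True if moving the data over the WAN costs less than the compute it feeds."""
    wan_cost = gb_to_move * COST_PER_GB_WAN
    cpu_cost = cpu_hours_of_work * COST_PER_CPU_HOUR
    return wan_cost < cpu_cost

# 1 TB of data feeding only 100 CPU-hours of work: keep the compute near the data
print(cheaper_to_ship_data(gb_to_move=1000, cpu_hours_of_work=100))  # False
```

Because data so rarely justifies its WAN transfer cost, computation tends to migrate toward the data, which is exactly the data-affinity issue the slide describes.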
130. Spectrum crunch
• The growth in mobile data traffic has been exponential.
• Given current usage trends, coupled with the theoretical limits of available
spectrum, the world will run out of available spectrum for the growing army of mobile
users.
• The demand for wireless capacity will outstrip spectrum availability by the middle of
this decade, around 2014.
131. IPv4 exhaustion
• The issue is that IPv4 can address only 2^32, or 4.3 billion, devices.
• The pool has already been exhausted because of new technologies like IMS, which
uses an all-IP core, and the Internet of Things, with ever more devices and sensors
connected to the internet, each identified by an IP address.
• The solution was worked out long ago: the Internet must adopt the IPv6 addressing
scheme. IPv6 uses 128-bit addresses and allows 3.4 × 10^38, or 340 trillion
trillion trillion, unique addresses.
• IPv6 adoption is not happening at the pace it should.
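The address-space arithmetic above can be checked directly with a minimal Python sketch:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,}")    # about 4.3 billion
print(f"IPv6: {ipv6_addresses:.3e}")  # about 3.4e+38
```

Each extra bit doubles the space, so the jump from 32 to 128 bits multiplies the address pool by 2^96, far outpacing any plausible growth in connected devices.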