An attempt to clear the confusion that engulfs Cloud Computing




On the LinkedIn professional network there are currently two discussion threads running in the
Telecom Professionals Group, and some others running in the IT Next Group. The ones in
the former group are:




Clinton:

   1. I hear a lot about the 'Cloud' and 'Cloud Computing'. Can someone
      explain to me what that is?




Ramiro:

   2. Telecom trends 2011- What do you think?


A lot of views have been expressed in these two discussion threads by both proponents and
opponents of Cloud Computing.

Philippe Portes, participating in the first of these discussion threads, brought out some hard
truths about his experience with Cloud Computing about two days ago, which led to some
invigorating discussion from Ariel Gollon and Tim Templeton.

Although Ramiro Gonzales suggested various topics when opening the second discussion
thread, according to the statistics he has compiled, Cloud Computing / Communications has
held centre stage amongst all the other possible topics of discussion on Telecom Trends 2011.

I have been chipping in with comments in both these discussion threads to clear some of the
confusion that exists.

Finally, Dirk de Vos' post 23 hours ago, on 5th May 2011, has prompted me to prepare this
note, which aims to touch on some fundamental aspects of Cloud Computing and network
security with respect to an organisation's internal databases.

To do this I will address the fundamentals of Cloud Computing, and also the fundamentals
of network security issues.




Cloud Computing

Let me reproduce here the following extracts from the Wikipedia information on Cloud
Computing, with my comments along the way.

Cloud computing refers to the provision of computational resources on demand via a
computer network. Because the cloud is an underlying delivery mechanism, cloud based
applications and services may support any type of software application or service in use
today. Before the advent of computer networks, both data and software were stored and
processed on or near the computer. The development of Local Area Networks (LANs) allowed
for a tiered architecture in which multiple CPUs and storage devices may be organized to
increase the performance of the entire system. LANs were widely deployed in corporate
environments in the 1990s, and were notable for vendor-specific connectivity limitations.
These limitations gave rise to the marketing term "Islands of Information", which was widely
used within the computing industry. The widespread implementation of the TCP/IP protocol
stack and the subsequent popularization of the web has led to multi-vendor networks that
are no longer limited by company walls.

Cloud computing fundamentally allows for a functional separation between the resources
used and the user's computer. The computing resources may or may not reside outside the
local network, for example in an internet connected datacenter. What is important to the
individual user is that they 'simply work'. This separation between the resources used and
the user's computer also has allowed for the development of new business models. All of the
development and maintenance tasks involved in provisioning the application are performed
by the service provider. The user's computer may contain very little software or data
(perhaps a minimal operating system and web browser only), serving as little more than a
display terminal for processes occurring on a network of computers far away. Consumers
now routinely use data intensive applications driven by cloud technology which were
previously unavailable due to cost and deployment complexity. In many companies
employees and company departments are bringing a flood of consumer technology into the
workplace and this raises legal compliance and security concerns for the corporation.

The common shorthand for a provided cloud computing service (or even an aggregation of
all existing cloud services) is "The Cloud". The most common analogy to explain cloud
computing is that of public utilities such as electricity, gas, and water. Just as centralized and
standardized utilities free individuals from the difficulties of generating electricity or pumping
water, cloud computing frees users from certain hardware and software installation and
maintenance tasks through the use of simpler hardware that accesses a vast network of
computing resources (processors, hard drives, etc.). The sharing of resources reduces the
cost to individuals.

The phrase “cloud computing” originated from the cloud symbol that is usually used by flow
charts and diagrams to symbolize the internet. The principle behind the cloud is that any
computer connected to the internet is connected to the same pool of computing power,
applications, and files. Users can store and access personal files such as music, pictures,
videos, and bookmarks or play games or use productivity applications on a remote server
rather than physically carrying around a storage medium such as a DVD or thumb drive.
Almost all users of the internet may be using a form of cloud computing, though few realize it.
Those who use web-based email such as Gmail, Hotmail or Yahoo!, a company-owned email
service, or even an e-mail client program such as Outlook, Evolution, Mozilla Thunderbird or
Entourage are making use of cloud email servers. Hence, desktop applications which
connect to cloud email would be considered cloud applications.



Cloud computing utilizes the network as a means to connect user end point devices (end
points) to resources that are centralized in a data center. The data center may be accessed
via the internet or a company network, or both. In many cases a cloud service may allow
access from a variety of end points such as a mobile phone, a PC or a tablet. Cloud services
may be designed to be vendor agnostic, working equally well with Linux, Mac and PC
platforms. They also can allow access from any internet connected location, allowing mobile
workers to access business systems remotely as in Telecommuting, and extending the
reach of business services provided by Outsourcing.

A user endpoint with minimal software requirements may submit a task for processing. The
service provider may pool the processing power of multiple remote computers in "the cloud"
to achieve the task, such as data warehousing of hundreds of terabytes, managing and
synchronizing multiple documents online, or computationally intensive work. These tasks
would normally be difficult, time consuming, or expensive for an individual user or a small
company to accomplish. The outcome of the processing task is returned to the client over
the network. In essence, the heavy lifting of a task is outsourced to an external entity with
more resources and expertise.

The services - such as data storage and processing - and software are provided by the
company hosting the remote computers. The clients are only responsible for having a simple
computer with a connection to the Internet, or a company network, in order to make requests
to and receive data from the cloud. Computation and storage is divided among the remote
computers in order to handle large volumes of both, thus the client need not purchase
expensive hardware to handle the task.

Technical description

The National Institute of Standards and Technology (NIST) provides a concise and specific
definition:

Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.

Cloud computing provides computation, software, data access, and storage services that do
not require end-user knowledge of the physical location and configuration of the system that
delivers the services. Parallels to this concept can be drawn with the electricity grid, where
end-users consume power without needing to understand the component devices or
infrastructure required to provide the service.

Cloud computing describes a new supplement, consumption, and delivery model for IT
services based on Internet protocols, and it typically involves provisioning of dynamically
scalable and often virtualized resources. It is a byproduct and consequence of the ease-of-
access to remote computing sites provided by the Internet. This may take the form of web-
based tools or applications that users can access and use through a web browser as if they
were programs installed locally on their own computers.

Cloud computing providers deliver applications via the internet, which are accessed from a
Web browser, while the business software and data are stored on servers at a remote
location. In some cases, legacy applications (line of business applications which until now
have been prevalent in thick client Windows computing) are delivered via a screen sharing
technology such as Citrix XenApp, while the compute resources are consolidated at a

remote data center location; in other cases entire business applications have been coded
using web based technologies such as AJAX.

Most cloud computing infrastructures consist of services delivered through shared data-
centers. The Cloud may appear as a single point of access for consumers' computing needs;
notable examples include the iTunes Store and the iPhone App Store. Commercial offerings
may be required to meet service level agreements (SLAs), but specific terms are less often
negotiated by smaller companies.

Characteristics

The key characteristic of cloud computing is that the computing is "in the cloud"; that is, the
processing (and the related data) is not in a specified, known or static place(s). This is in
contrast to a model in which the processing takes place in one or more specific servers that
are known. All the other concepts mentioned are supplementary or complementary to this
concept.

Architecture




Cloud computing sample architecture

Cloud architecture, the systems architecture of the software systems involved in the
delivery of cloud computing, typically involves multiple cloud components communicating
with each other over application programming interfaces (APIs), usually web services and 3-
tier architecture. This resembles the Unix philosophy of having multiple programs each doing
one thing well and working together over universal interfaces. Complexity is controlled and
the resulting systems are more manageable than their monolithic counterparts.

The two most significant components of cloud computing architecture are known as the front
end and the back end. The front end is the part seen by the client, i.e. the computer user.
This includes the client’s network (or computer) and the applications used to access the
cloud via a user interface such as a web browser. The back end of the cloud computing
architecture is the ‘cloud’ itself, comprising various computers, servers and data storage
devices.
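
To make the front end / back end split concrete, here is a minimal sketch in Python of a thin client calling a cloud back end over HTTP. The endpoint URL and response format are hypothetical placeholders; a real cloud service would publish its own API.

```python
# Minimal sketch of the front end / back end split described above.
# The endpoint URL is hypothetical; only the standard library is used.
import json
import urllib.request

API = "https://api.example-cloud.test/v1/reports"  # hypothetical back end

def fetch_report(report_id: str) -> dict:
    """Front end: send a request to the back end and parse the reply."""
    with urllib.request.urlopen(f"{API}?id={report_id}", timeout=10) as resp:
        return json.load(resp)  # all heavy processing happened in the data centre

if __name__ == "__main__":
    # The browser-like client holds no business logic or data of its own.
    print(fetch_report("2011-04"))
```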




Deployment models




Cloud computing types

Public cloud

Public cloud or external cloud describes cloud computing in the traditional mainstream
sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis
over the Internet, via web applications / web services, from an off-site third-party provider
who bills on a fine-grained utility computing basis.

The best examples of public clouds are the various search engines like Google, which
serve as our information bank for all information available in the public domain, and
the email applications of email service providers like Gmail, Yahoo and Hotmail.

Community cloud

A community cloud may be established where several organizations have similar
requirements and seek to share infrastructure so as to realize some of the benefits of cloud
computing. The costs are spread over fewer users than a public cloud (but more than a
single tenant). This option may offer a higher level of privacy, security and/or policy
compliance. In addition it can be economically attractive, as the resources (storage,
workstations) utilized and shared in the community are already exploited and have reached
their return on investment. Examples of community clouds include Google's "Gov Cloud".

Hybrid cloud and hybrid IT delivery

The main responsibility of the IT department is to deliver services to the business. With the
proliferation of cloud computing (both private and public) and the fact that IT departments
must also deliver services via traditional, in-house methods, the newest catch-phrase has
become "hybrid cloud computing". Hybrid cloud is also called hybrid delivery by the major
vendors, including HP, IBM, Oracle and VMware, who offer technology to manage the
complexity in managing the performance, security and privacy concerns that result from the
mixed delivery methods of IT services.

A hybrid storage cloud uses a combination of public and private storage clouds. Hybrid
storage clouds are often useful for archiving and backup functions, allowing local data to be
replicated to a public cloud.

Another perspective on deploying a web application in the cloud is using Hybrid Web
Hosting, where the hosting infrastructure is a mix between cloud hosting and managed
dedicated servers – this is most commonly achieved as part of a web cluster in which some
of the nodes are running on real physical hardware and some are running on cloud server
instances.

Combined cloud

Two clouds that have been joined together are more correctly called a "combined cloud". A
combined cloud environment consisting of multiple internal and/or external providers "will be
typical for most enterprises". By integrating multiple cloud services users may be able to
ease the transition to public cloud services while avoiding issues such as PCI compliance.

Private cloud

Douglas Parkhill first described the concept of a "private computer utility" in his 1966 book
The Challenge of the Computer Utility. The idea was based upon direct comparison with
other industries (e.g. the electricity industry) and the extensive use of hybrid supply models
to balance and mitigate risks.

"Private cloud" and "internal cloud" have been described as neologisms, but the concepts
themselves pre-date the term cloud by 40 years. Even within modern utility industries, hybrid
models still exist despite the formation of reasonably well-functioning markets and the ability
to combine multiple providers.

Some vendors have used the terms to describe offerings that emulate cloud computing on
private networks. These (typically virtualization automation) products offer the ability to host
applications or virtual machines in a company's own set of hosts. These provide the benefits
of utility computing – shared hardware costs, the ability to recover from failure, and the ability
to scale up or down depending upon demand.

Private clouds have attracted criticism because users "still have to buy, build, and manage
them" and thus do not benefit from lower up-front capital costs and less hands-on
management, essentially "[lacking] the economic model that makes cloud computing such
an intriguing concept". Enterprise IT organizations use their own private cloud(s) for mission
critical and other operational systems to protect their critical infrastructure. Therefore, for all
intents and purposes, "private clouds" are not an implementation of cloud computing at all,
but are in fact an implementation of a technology subset: the basic concept of virtualized
computing.

However, as will be seen from the notes on network security issues to follow, private
clouds are absolutely essential for 100% security of an organization’s or enterprise’s
internal databases.

Cloud engineering

Cloud engineering is the application of a systematic, disciplined, quantifiable, and
interdisciplinary approach to the ideation, conceptualization, development, operation, and
maintenance of cloud computing, as well as the study and applied research of the approach,
i.e., the application of engineering to cloud. It is a maturing and evolving discipline to
facilitate the adoption, strategisation, operationalisation, industrialisation, standardization,
productisation, commoditisation, and governance of cloud solutions, leading towards a cloud
ecosystem. Cloud engineering is also known as cloud service engineering.


Cloud storage

Cloud storage is a model of networked computer data storage where data is stored on
multiple virtual servers, generally hosted by third parties, rather than being hosted on
dedicated servers. Hosting companies operate large data centers; and people who require
their data to be hosted buy or lease storage capacity from them and use it for their storage
needs. The data centre operators, in the background, virtualise the resources according to
the requirements of the customer and expose them as virtual servers, which the customers
can themselves manage. Physically, the resource may span across multiple servers.

Again from the notes on network security issues to follow, you will see that it is not
advisable to leave sensitive business data on such hosted storage.

The Intercloud

The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet
"network of networks" on which it is based. The term was first used in the context of cloud
computing in 2007 when Kevin Kelly stated that "eventually we'll have the Intercloud, the
cloud of clouds. This Intercloud will have the dimensions of one machine comprising all
servers and attendant cloud-books on the planet". It became popular in 2009 and has also
been used to describe the data centre of the future.

The Intercloud scenario is based on the key concept that each single cloud does not have
infinite physical resources. If a cloud saturates the computational and storage resources of
its virtualization infrastructure, it would not be able to satisfy further requests for service
allocations from its clients. The Intercloud scenario aims to address such situations: in
theory, each cloud can use the computational and storage resources of the virtualization
infrastructures of other clouds. Such a form of pay-for-use may introduce new business
opportunities among cloud providers if they manage to go beyond the theoretical framework.
Nevertheless, the Intercloud raises many more challenges than solutions concerning cloud
federation, security, interoperability, quality of service, vendor lock-in, trust, legal issues,
monitoring and billing.

The concept of a competitive utility computing market which combined many computer
utilities together was originally described by Douglas Parkhill in his 1966 book, the
"Challenge of the Computer Utility". This concept has been subsequently used many times
over the last 40 years and is identical to the Intercloud.

Issues

Privacy

The cloud model has been criticized by privacy advocates for the greater ease with which the
companies hosting the cloud services can control, and thus monitor at will, lawfully or
unlawfully, the communication and data stored between the user and the host company.
Instances such as the secret NSA programme which, working with AT&T and Verizon,
recorded over 10 million phone calls between American citizens, cause uncertainty among
privacy advocates, as do the greater powers it gives to telecommunication companies to
monitor user activity. While there have been efforts (such as the US-EU Safe Harbour) to
"harmonise" the legal environment, providers such as Amazon still cater to major markets


(typically the United States and the European Union) by deploying local infrastructure and
allowing customers to select "availability zones".

Compliance

In order to obtain compliance with regulations including FISMA, HIPAA and SOX in the
United States, the Data Protection Directive in the EU and the credit card industry's PCI
DSS, users may have to adopt community or hybrid deployment modes, which are typically
more expensive and may offer restricted benefits. This is how Google is able to "manage
and meet additional government policy requirements beyond FISMA" and Rackspace Cloud
is able to claim PCI compliance. Customers in the EU contracting with cloud providers
established outside the EU/EEA have to adhere to the EU regulations on export of personal
data.

Many providers also obtain SAS 70 Type II certification (e.g., Amazon, Salesforce.com,
Google and Microsoft), but this has been criticised on the grounds that the hand-picked set of
goals and standards determined by the auditor and the audited are often not disclosed and
can vary widely. Providers typically make this information available on request, under non-
disclosure agreement.

Legal

In March 2007, Dell applied to trademark the term "cloud computing" (U. S. Trademark
77,139,082) in the United States. The "Notice of Allowance" the company received in July
2008 was cancelled in August, resulting in a formal rejection of the trademark application
less than a week later. Since 2007, the number of trademark filings covering cloud
computing brands, goods and services has increased at an almost exponential rate. As
companies sought to better position themselves for cloud computing branding and marketing
efforts, cloud computing trademark filings increased by 483% between 2008 and 2009. In
2009, 116 cloud computing trademarks were filed, and trademark analysts predict that over
500 such marks could be filed during 2010.

Other legal cases may shape the use of cloud computing by the public sector. On October
29, 2010, Google filed a lawsuit against the U.S. Department of Interior, which opened up a
bid for software that required that bidders use Microsoft's Business Productivity Online Suite.
Google sued, calling the requirement "unduly restrictive of competition”. Scholars have
pointed out that, beginning in 2005, the prevalence of open standards and open source may
have an impact on the way that public entities choose to select vendors.

Open source

Open source software has provided the foundation for many cloud computing
implementations. In November 2007, the Free Software Foundation released the Affero
General Public License, a version of GPLv3 intended to close a perceived legal loophole
associated with free software designed to be run over a network.

Open standards

Most cloud providers expose APIs which are typically well-documented (often under a
Creative Commons license) but also unique to their implementation and thus not
interoperable. Some vendors have adopted others' APIs and there are a number of open
standards under development, including the OGF’s Open Cloud Computing Interface. The
Open Cloud Consortium (OCC) is working to develop consensus on early cloud computing
standards and practices.

Security

The relative security of cloud computing services is a contentious issue which may
be delaying their adoption. The issues barring the adoption of cloud computing stem in large
part from the private and public sectors' unease surrounding the external management of
security-based services. It is the very nature of cloud computing based services, private or
public, that they promote external management of the services provided. This gives cloud
computing service providers a strong incentive to prioritise building and maintaining strong
management of secure services.

Organizations have been formed in order to provide standards for a better future in cloud
computing services. One organization in particular, the Cloud Security Alliance, is a non-profit
organization formed to promote the use of best practices for providing security assurance
within cloud computing.

The notes that follow on network security issues will expose the folly of
endeavouring to arrange security of cloud computing in general, although measures
like PCI DSS, and some measures taken to ensure the security of online banking
transactions, are relevant.

Availability and performance

In addition to concerns about security, businesses are also worried about acceptable levels
of availability and performance of applications hosted in the cloud.

There are also concerns about a cloud provider shutting down for financial or legal reasons,
which has happened in a number of cases.

Strong network connectivity is an essential requirement for availability and
performance of cloud computing.

Sustainability and Siting

Although cloud computing is often assumed to be a form of "green computing", there is as
yet no published study to substantiate this assumption. The siting of the servers affects the
environmental effects of cloud computing. In areas where climate favours natural cooling
and renewable electricity is readily available, the environmental effects will be more
moderate. Thus countries with favourable conditions, such as Finland, Sweden and
Switzerland, are trying to attract cloud computing data centres.

SmartBay, a marine research infrastructure of sensors and computational technology, is being
developed using cloud computing, an emerging approach to shared infrastructure in which
large pools of systems are linked together to provide IT services.

Research

A number of universities, vendors and government organizations are investing in research
around the topic of cloud computing. Academic institutions include University of Melbourne
(Australia), Georgia Tech, Yale, Wayne State, Virginia Tech, University of Wisconsin–
Madison, Carnegie Mellon, MIT, Indiana University, University of Massachusetts, University
of Maryland, IIT Bombay, North Carolina State University, Purdue University, University of

California, University of Washington, University of Virginia, University of Utah, University of
Minnesota, among others.

Joint government, academic and vendor collaborative research projects include the
IBM/Google Academic Cloud Computing Initiative (ACCI). In October 2007 IBM and Google
announced the multi-university project designed to enhance students' technical knowledge
to address the challenges of cloud computing. In April 2009, the National Science
Foundation joined the ACCI and awarded approximately $5 million in grants to 14 academic
institutions.

In July 2008, HP, Intel Corporation and Yahoo announced the creation of a global, multi-data
centre, open source test bed, called Open Cirrus, designed to encourage research into all
aspects of cloud computing, service and data centre management. Open Cirrus partners
include the NSF, the University of Illinois (UIUC), Karlsruhe Institute of Technology, the
Infocomm Development Authority (IDA) of Singapore, the Electronics and
Telecommunications Research Institute (ETRI) in Korea, the Malaysian Institute for
Microelectronic Systems (MIMOS), and the Institute for System Programming at the Russian
Academy of Sciences (ISPRAS). In Sept. 2010, more researchers joined the HP/Intel/Yahoo
Open Cirrus project for cloud computing research. The new researchers are China Mobile
Research Institute (CMRI), Spain's Supercomputing Center of Galicia (CESGA by its
Spanish acronym), and Georgia Tech's Center for Experimental Research in Computer
Systems (CERCS) and China Telecom.

In July 2010, HP Labs India announced a new cloud-based technology designed to simplify
taking content and making it mobile-enabled, even from low-end devices. Called
SiteonMobile, the new technology is designed for emerging markets where people are more
likely to access the internet via mobile phones rather than computers. In November 2010,
HP formally opened its Government Cloud Theatre, located at the HP Labs site in Bristol,
England. The demonstration facility highlights high-security, highly flexible cloud computing
based on intellectual property developed at HP Labs. The aim of the facility is to lessen fears
about the security of the cloud. HP Labs Bristol is HP’s second-largest central research
location and currently is responsible for researching cloud computing and security.

The IEEE Technical Committee on Services Computing in IEEE Computer Society sponsors
the IEEE International Conference on Cloud Computing (CLOUD). CLOUD 2010 was held
on July 5–10, 2010 in Miami, Florida.

On March 23, 2011, Google, Microsoft, HP, Yahoo, Verizon, Deutsche Telekom and 17
other companies formed a non-profit organisation called the Open Networking Foundation, focused
on providing support for a new cloud initiative called Software-Defined Networking. The
initiative is meant to speed innovation through simple software changes in
telecommunications networks, wireless networks, data centres and other networking areas.

Criticism of the term

Some have come to criticize the term as being either too unspecific or even misleading.
CEO Larry Ellison of Oracle Corporation asserts that cloud computing is "everything that we
already do", claiming that the company could simply "change the wording on some of our
ads" to deploy their cloud-based services. Forrester Research VP Frank Gillett questions the
very nature of and motivation behind the push for cloud computing, describing what he calls
"cloud washing"—companies simply relabeling their products as "cloud computing", resulting
in mere marketing innovation instead of "real" innovation. GNU’s Richard Stallman insists
that the industry will only use the model to deliver services at ever increasing rates over
proprietary systems, otherwise likening it to a "marketing hype campaign".

I could not agree more with the critics of the term. Oracle has pioneered Web-based
computing with centralised data centres for organisations and enterprises for many years
now; they can well change the labels of their products to be "Cloud Compliant". Basically,
companies are taking advantage of the new movement to re-package old wine in new
bottles.

The availability of higher-speed Internet access from both stationary and mobile devices
today, compared to what was available a few years back, is what has given the push to
Cloud Computing – the utilisation of public cloud services.

Conclusion

There is absolutely no doubt that the public cloud will thrive with the search engines, the
emailing services, the payment services, online banking services, and other person-to-person
applications which are normally carried out over the Internet.

However, organisations and enterprises will have to be circumspect about how much
of their business operations they can offload to Cloud applications, if at all.

The next section of this note, for my fellow contributors and the data communications and IT
fraternity across the world, addresses this issue.

Network Security Issues
Each organisation has its own security perceptions and requirements. To get a grip on this
they need to ask themselves the following very pertinent questions to determine how they
should lay out their IT Infrastructure.

Is your organisation's internal data important and exclusive to you? If so, how secure is this
information?

If leaked, what would result?
- Business loss?
- Revenue loss?
- Erosion of profit?
- Erosion of corporate value?

If damaged, what would result?
- Breakdown of business operations?
- Un-fulfilled delivery commitments?
- Un-fulfilled commercial commitments?
- Erosion of corporate value?

Would you like to protect your organisation from such eventuality?

If the answers to all the above questions are positive, then they need to understand where
the security threats emanate from. To enable them to do this, I reproduce for ready
reference the series of discussions started by me in the IT Next Group of LinkedIn, available
at the URL http://www.linkedin.com/groups?search=&answerCategory=myq&gid=2261770.

How secure is VPN (MPLS or otherwise) for MLO (multi-locational
organisation) INTRANET connectivity? The MLOs may be banks, corporate
organisations, and Govt. Organisations.

Before we address this question it is necessary to bring to the fore some basic facts about
VPN connectivity, MPLS or otherwise, which may or may not be known to the readers of this
post.

All VPNs, irrespective of the protocols being used (MPLS, Frame Relay, ATM), are laid out
over the IP backbones of the different telephone service providers (TSPs). These IP
backbones not only serve the VPN networks of different subscribers, but also serve the
public data networks through PSTN, ISDN, PDN and Broadband, and are also connected to
the National Internet Exchange Gateways (NIEX). All these networks connect to the national
IP backbone of the TSP through a Tier 1 switch at each city / town.

As is known by all those who are aware of the functioning of IP networks, in such a network
all routers connected through these Tier 1 switches have continuous physical access to
each other. A further characteristic of IP networks is that they support concurrent,
simultaneous communications between all routers connected to the network through the
Tier 1 switches at each POP (point of presence). Thus while routers A and B are in
communication, a router C in the network could simultaneously be communicating with A or
B or both. This is the beauty and also the bane of IP networks. Beauty, because unlike
circuit-switched networks, there is no blocking of communications between any pair of
routers even if one of them is already engaged in communication with another router; in the
circuit-switched scenario the third communication device would be blocked from
communicating with either of the devices already engaged in communication, resulting in a
busy tone.

This feature is a bane since, in networks which have public domain access, as do the IP
backbones of all TSPs, the third router could be that of a hacker sitting in the public domain
who is provided continuous physical access to the VPN router port of an organisation
through the Tier 1 switches in each city / town. Once this continuous physical access is
available, a hacker / cracker can get into the LAN associated with the VPN router through
the process of snooping and spoofing, and thence to the internal databases residing in the
INTRANET.

Thus while a VPN facilitates the secure transport of data between points A and B in the
network through the TSP IP backbone using security protocols like IPsec, it exposes the
internal databases of the organisation to outside intrusion, since they have public domain
access from the TSP IP backbone. We therefore see that the internal databases of an
organisation are vulnerable when the INTRANET connectivity of an MLO is arranged
through VPN (MPLS or otherwise).

To see how a VPN is connected through a typical TSP's IP backbone, I would refer the
reader to the first two slides of VPN.ppt, which show schematics of the topology and
architecture of a typical TSP IP backbone. This is available at
http://www.slideshare.net/pankajmitra
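
To make the point about continuous physical access concrete, the sketch below shows that any host on a routable shared backbone can at least attempt a TCP connection to any other address on it; whether the attempt succeeds is then a matter of filtering, not of physical reachability. The address (from a documentation range) and port are hypothetical placeholders.

```python
# Minimal sketch: on a shared IP backbone, any host can *attempt* a TCP
# connection to any routable address -- the "continuous physical access"
# described above. The address and port below are hypothetical.
import socket

VPN_ROUTER = "203.0.113.10"  # documentation-range address standing in for a VPN router port
PORT = 23                    # e.g. a Telnet management port left reachable

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True      # the port answered: physical access exists
    except OSError:
        return False         # filtered, closed, or unreachable

if __name__ == "__main__":
    print("router port reachable:", reachable(VPN_ROUTER, PORT))
```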





Why is VPN growing in popularity in the IT world despite the inherent
vulnerability of internal databases connected through VPN-based
INTRANETs?

VPNs are themselves laid out over telecom service providers' IP networks – see the
PowerPoint presentation VPN.ppt – along with all other public data services and the Internet.
Thus internal databases connected through such VPN / MPLS VPN networks can be
accessed from the public domain networks, for the reasons explained in Slide 3 of that
presentation. However, most IT consultants and System Integrators lead their customers to
believe that their databases are secure when connected through VPN / MPLS VPN
networks. They do it for the following reasons:

A. It means less work for them – they do not have to write router tables as is required for
point-to-point leased lines.

B. They lead customers to believe that it is cheaper to have VPN / MPLS VPN networks than
point-to-point leased line networks. This is again a myth, as is shown in the document
MPLS-P2P.doc. See http://www.slideshare.net/pankajmitra.

C. Customer IT managers also find this convenient, as their work is reduced: they are
connected to the service provider through a single- or two-WAN-port router to the nearest
VPN node of the service provider. For any network problem they haul up the service provider
and sit back themselves.

D. Thus customer IT managers choose the easy way. This is fine as long as there is no
intrusion into the databases from hackers sitting in the public domain, who have continuous
physical access to the VPN router ports. The trouble will start if and when the databases get
hacked. They will get into a nightmarish situation in trying to retrieve the databases, if there
is anything left to retrieve. The easy way is the hard way.

E. If, on the other hand, the Consultant, the System Integrator and the IT managers of the
company took the trouble of setting up a point-to-point leased line network by configuring the
router tables of their private network – the hard way – the network would then be free from
any intrusion from hackers, as such a network denies physical access to the public domain
and consequently to hackers. There will be no hacking, and the Network Administrators and
IT managers will have a trouble-free life – the easy way. Thus the hard way is the easy way.

“The hard way is the easy way, and the easy way is the hard way”


Are firewalls breakable?

A firewall is a dedicated appliance with embedded software, or software running on a
computer, which inspects network traffic passing through it and denies or permits passage
based on a set of rules.

It is normally placed between a protected network and an unprotected network, and acts like
a gate to protect assets, ensuring that nothing private goes out and nothing malicious comes
in.

A firewall's basic task is to regulate the flow of traffic between computer networks of different
trust levels. Typical examples are the Internet, which is a zone with no trust, and an internal
network, which is a zone of higher trust. A zone with an intermediate trust level, situated
between the Internet and a trusted internal network, is often referred to as a "perimeter
network" or De-militarised Zone (DMZ).

There are several types of firewall techniques:

1. Packet filter: Packet filtering inspects each packet passing through the network and
accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly
effective and mostly transparent to its users. It is susceptible to IP spoofing.

2. Application gateway: Applies security mechanisms to specific applications, such as FTP
and Telnet servers. This is very effective, but can impose performance degradation.

3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is
established. Once the connection has been made, packets can flow between the hosts
without further checking.

4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server
effectively hides the true network addresses.
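
As an illustration of the first technique, a packet filter is essentially an ordered, first-match rule table. The sketch below is a toy version; the rules and addresses are invented for illustration, and the closing comment shows the spoofing weakness noted above.

```python
# Illustrative sketch of technique 1, packet filtering: each packet is
# checked against an ordered, user-defined rule table and the first match
# decides its fate. Rules and addresses here are made up for illustration.
from ipaddress import ip_address, ip_network

# (action, source network, destination port) -- first match wins
RULES = [
    ("allow", ip_network("198.51.100.0/24"), 443),  # partner network -> HTTPS
    ("deny",  ip_network("0.0.0.0/0"),       23),   # no Telnet from anywhere
    ("allow", ip_network("0.0.0.0/0"),       80),   # public web traffic
]
DEFAULT = "deny"  # anything unmatched is dropped

def filter_packet(src_ip: str, dst_port: int) -> str:
    src = ip_address(src_ip)
    for action, net, port in RULES:
        if src in net and dst_port == port:
            return action
    return DEFAULT

# The weakness mentioned above: the filter trusts the claimed source address,
# so a spoofed packet pretending to come from 198.51.100.5 would be allowed.
print(filter_packet("198.51.100.5", 443))  # allow
print(filter_packet("192.0.2.77", 23))     # deny
```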

For all types of firewall, the rules and filters are set using software algorithms.
Hackers / crackers have a technique of masking their data packets to conform to the filters
and rules for access to the network; this is also known as spoofing. Once through into the
network, they can seize the computers or the proxy through Telnet / SSH access, go about
disabling the software algorithms which were used to set the rules and filters, and open up
the system to do whatever they wish. The IDS (intrusion detection systems) and IDP
(intrusion detection and prevention) systems available today can make it difficult for hackers
to get into the network, but not impossible: it will take them more time, but they can
eventually get through. To be able to do this, the only thing the hackers need is continuous
physical connectivity to the routers between the Internet connection and the proxy server or
the direct bank of computers, which is available to them through a broadband Internet
connection. The race between the protector and the spoofer is a continuing process. You
may go on spending money to increase the deterrence, depending on the value you ascribe
to the asset (the databases) you are trying to protect; but if the asset is also perceived to be
valuable by the hacker / cracker, or by those who have employed them, then they too will be
willing to devote more time to the job at hand. A high-end security system with IDS / IDP
built in and several layers of firewall would cost multiples of crores of rupees, and yet it
would not be totally invincible to hackers.

The only real way to prevent hackers / crackers from getting in is to segregate the corporate
network (INTRANET) from any form of public domain access – the Internet or VPN (which
also provides continuous access to the corporate network from the public domain). See
VPN.ppt in http://www.slideshare.net/pankajmitra. This way we can disable the continuous
physical access to the INTRANET from the Internet and from intruders.

By adding systems like IDS / IDP you can delay the cracking, but not prevent it totally. Thus
all firewalls are breakable, although the effort required depends on the degree of protection
built in. This is what prompts one to say, "For every firewall there is a water hose".


If we isolate our INTRANET from the Internet to achieve 100% security of
internal databases, how do we communicate with clients, vendors, the general
public, consultants, and participate in e-Commerce activity and online banking
activity (in case of banks)?

This may be done by placing the MLO's (multi-locational organisation's) Web server,
together with its storage, on a separate Internet LAN at the organisation's central location.
This LAN may be connected through a standard firewall from Cisco or others, and a
point-to-point (p2p) leased line of appropriate bandwidth (depending on the busyness of the
link and the number of hits on the server within a certain time), to the nearest POP
(point of presence) of the Internet Service Provider (ISP). The Web server, designated PS,
will house the e-Commerce or online banking client services, all the publishable information
of the organisation, and the Internet email gateway for the organisation. The various fields in
the PS are replicated in an Intermediate Server (IS) and a Company Communications
Server (CS), which resides in the INTRANET. All in- and out-bound communications
between the INTRANET and the Internet are routed through the CS, to and from the various
databases and the organisation's mail server. The IS is connected to a three-position
electro-mechanical, microcontroller-driven RJ45 switch which connects it to either the
INTRANET LAN or the Internet LAN, never to both together. Two-way synchronisation takes
place between the CS and the IS when the IS is connected to the INTRANET LAN, and
between the IS and the PS when it is connected to the Internet LAN. This ensures a free
flow of information between the INTRANET and the Internet without impairing the 100%
security of the databases residing in the INTRANET. This system is designated STS (the
acronym for Secure Transfer System) and may be seen in slides 6 to 10 of PVDTN
Presentation 1.ppt at http://www.slideshare.net/pankajmitra. The denial of direct Internet
access to the INTRANET ensures the 100% security of the databases connected to it.

The MLO, and the people who wish to communicate or interact with it, do so through the PS
and its associated storage, which are connected to the Internet, through the Internet from
anywhere in the world.
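
Here is a minimal sketch of the STS switching cycle just described, assuming the PS / IS / CS roles as named in this note; the synchronisation steps are reduced to placeholders for the real two-way replication of mail queues, e-Commerce transactions, and so on.

```python
# Minimal sketch of the STS cycle: the Intermediate Server (IS) is
# connected to exactly one LAN at a time -- never both -- and synchronises
# with the CS on the INTRANET side, then the PS on the Internet side.
import time
from enum import Enum

class Position(Enum):
    OFF = "parked"          # switch parked between the two LANs
    INTRANET = "intranet"   # IS on the INTRANET LAN only
    INTERNET = "internet"   # IS on the Internet LAN only

class SecureTransferSwitch:
    def __init__(self) -> None:
        self.position = Position.OFF

    def move(self, target: Position) -> None:
        self.position = Position.OFF  # always break before make
        time.sleep(0.1)               # settling time of the electro-mechanical switch
        self.position = target

    def cycle(self) -> None:
        """One full transfer cycle; the two LANs are never bridged."""
        self.move(Position.INTRANET)
        print("IS <-> CS: two-way sync on the INTRANET LAN")  # placeholder
        self.move(Position.INTERNET)
        print("IS <-> PS: two-way sync on the Internet LAN")  # placeholder

if __name__ == "__main__":
    SecureTransferSwitch().cycle()
```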


Integrate voice and fax over your 100% secure, 100% uptime INTRANET to
save inter-locational telecom and travelling costs.

There are two ways in which voice and fax may be integrated over INTRANETs built with
p2p leased lines: conventional VoIP, or the path-breaking patented PVDTN system. In the
former, voice / fax packets are sent along with data packets through the p2p links of the
WAN. In the latter, using channel splitters, two parallel networks are run over the same p2p
leased-line backbone – an IP packet-switched network (for data communications and all IP
services) and a circuit-switched network (for voice and fax communications). The latter is
more bandwidth efficient – see FAQ3 in the presentation PVDTN FAQs.ppt at
http://www.slideshare.net/pankajmitra.

In the latter system, by adding between 30 and 40% to the bandwidth required for data
communications over your IP data network built over point-to-point (p2p) leased lines, the
total inter-locational voice and fax communications – including multiple simultaneous NET
meetings using voice and voice-data conferencing for different work groups, with officers at
their respective work places, at a moment's notice – can be carried over the secure
INTRANET. This will eliminate the PSTN and other public telephony costs currently being
incurred for such communications, and also save the travelling costs and time involved in
conducting such meetings. How the savings take place is shown in FAQ1 of the presentation
PVDTN FAQs.ppt at http://www.slideshare.net/pankajmitra.

The money so saved will help to recover the capital expenditure within a year or two.
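
To see how the payback claim works, here is a small worked example with purely hypothetical figures; the actual numbers are in FAQ1 of the cited presentation.

```python
# Worked example of the payback claim, with purely hypothetical figures;
# the real numbers are in FAQ1 of the cited PVDTN FAQs.ppt.
extra_bandwidth_capex = 4_000_000  # Rs: one-time cost of +30-40% link bandwidth
monthly_pstn_bill     = 250_000    # Rs: current inter-locational telephony
monthly_travel_saved  = 150_000    # Rs: meetings moved to NET conferences

monthly_saving = monthly_pstn_bill + monthly_travel_saved
payback_months = extra_bandwidth_capex / monthly_saving
print(f"payback in about {payback_months:.1f} months")  # 10.0 months here
```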

Since the inter-locational voice / fax communications are carried over the 100% secure
INTRANET, they are free from the eavesdropping that is possible in public telephone networks.


Inferences

The following inferences may be drawn from the above discussions.

A. Any network which is laid out over the IP backbone of a Telephone Service
   Provider shares this space with various public domain networks like the
   PSTN, ISDN, PDN and Broadband services, and hence provides continuous
   physical access from the public domain, being part of the same IP backbone.

B. Such networks like VPN networks, if used to connect an organisation’s
   internal databases will expose them to hacker / cracker attacks from the
   public domain.

C. Firewalls are breakable and hence cannot give 100% security to the
   organisation’s internal databases as long as there is continuous physical
   access from the public domain.

D. The only way to ensure 100% security of an organisation’s internal databases
   is to connect them through an INTRANET which has no physical access from
   the public domain. This is done by building the INTRANET WAN using point-
   to-point leased lines, and ensuring that there is no public domain access
   (Internet, ISDN, PSTN, and other shared networks like VPN) to the
   INTRANET LAN at each organisation location.

E. Since all organisations have to have their presence on the Internet and
   participate in the World Wide Web (www), this is enabled through Web based
   Proxy Servers (PS) connected to the Internet. The PS will house the
   organisation’s external mail gateway, the collaboration tools, the e-Commerce
   applications, the online banking applications (in case of Banks), and will be
   open to access from all members of the Internet community, with different
   levels of access based on their relationship with the organisation.

F. To facilitate e-Commerce, e-Banking, etc, there will have to be a free flow of
   information back and forth between this PS and its associated storage and
   the INTRANET Communications Gateway Server (CS), through a secure
   transfer system which ensures that there is no direct contact between the
   INTRANET LAN and the Internet LAN, ensuring that there is no impairment of
   the 100% security of the INTRANET.

G. Since the INTRANET provides a dedicated, 100% secure network to the
   organisation, technology now exists to integrate voice, fax and conferencing
   (voice and voice-data) cost-effectively by increasing the link bandwidths by 40
   to 50%, thereby eliminating public telephony between all company locations
   and saving approximately 50 to 75% of present telephone costs, plus a
   substantial part of the inter-locational travelling costs. The savings thus made
   can pay back the infrastructure for 100% security of the organisation's internal
   databases within a few years.

H. Thus all data and information which is important and exclusive to the
   organisation should reside in the Private Cloud connected through the
   organisation’s INTRANET.

I.   The e-Commerce activity and transactions like online banking
     transactions may be secured by taking measures like dynamic passwords
     sent to the user's dedicated mobile number for every transaction. Thus
     while hackers and crackers may view your accounts with the bank by hacking
     your static username and password, they will not be able to make any
     transactions without the account holder's mobile phone. This can of course be
     overcome if they can get the messages coming into the user's mobile
     phone diverted to theirs. To do this they will need to have access to this
     dedicated phone number, or have a contact in the mobile service provider who
     can help to get this done. Arduous processes, no doubt. (A minimal sketch of
     such per-transaction dynamic passwords follows this list.)

J. Your computer in the work place may be connected either to the Internet for
   browsing activity or the INTRANET for other computing activity, but never to
   both together. This may be achieved through a Secure Switch which connects
   your machine to either of the INTRANET or Internet LANs.

K. The public Clouds like the search engines and the email services are meant
   for universal access and do not need any security measures. The individual
   applications can determine what may be done on these Clouds. By hacking
   through your static usernames and passwords, hackers / crackers may have
   access to your Web mail. It is best to delete all emails sensitive to your
   business from the Web mail storage if you do not want intruders to have
   access to these.

L. Individual Organisation / Bank Web Proxy Servers may be accessed by all
   people in the Internet community, who will get information / access
   depending on their relationship with the Organisation / Bank. Hackers /
   crackers may hack through to the static usernames and passwords of any
   person and acquire their access rights to these Web sites. However, the
   additional precautions mentioned in (I) above will prevent them from carrying
   out transactions, unless they go further and get the messages sent to the
   individual's dedicated mobile phone diverted to theirs to access the dynamic
   passwords associated with each transaction. This is definitely a more difficult
   task, though not impossible.

M. Claims by Cloud Storage, Cloud Infrastructure or SaaS service providers
   that they will be able to secure your data are belied, as indicated in
   A, B and C above. Hackers / crackers will find a way to get into your data
   and applications residing in the Public Cloud through the continuous physical
   access they enjoy through the common IP backbone of the Telephone
   Service Provider(s). Hence take such assurances with a pinch of salt.

N. The answers to the security questions posed above, and the above
   inferences drawn from the series of discussions listed, should, we hope, help
   each organisation's IT Infrastructure planners to choose the way they wish to
   lay out their IT Infrastructure.

O. If you still need further help and / or clarifications, you may contact us at
   midautel@bsnl.in or pankajmitra@gmail.com
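
For the dynamic passwords mentioned in (I), here is a minimal sketch of a per-transaction one-time password along the lines of RFC 4226 (HOTP); the shared secret and the SMS delivery step are placeholders for the bank's own provisioning and messaging systems.

```python
# Minimal sketch of the "dynamic password" idea in (I): an HMAC-based
# one-time password tied to a transaction counter, along the lines of
# RFC 4226 (HOTP). The shared secret is a placeholder; a real deployment
# would use the bank's provisioning and SMS delivery systems.
import hashlib
import hmac
import struct

SECRET = b"per-customer-shared-secret"  # hypothetical; provisioned by the bank

def transaction_otp(counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a monotonically increasing counter."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(SECRET, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The bank computes the OTP, sends it to the registered mobile number,
# and completes the transaction only if the customer echoes it back.
print(transaction_otp(1))  # a 6-digit code for transaction 1
print(transaction_otp(2))  # a different code for the next transaction
```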





Cloud computing notes unit I as per RGPV syllabusNANDINI SHARMA
 
Cloud computing writeup
Cloud computing writeupCloud computing writeup
Cloud computing writeupselvavijay1987
 
Historical development of cloud computing
Historical development of cloud computingHistorical development of cloud computing
Historical development of cloud computinggaurav jain
 

Similar to Cloud computing (20)

Cloud computing
Cloud computingCloud computing
Cloud computing
 
SURVEY OF CLOUD COMPUTING
SURVEY OF CLOUD COMPUTINGSURVEY OF CLOUD COMPUTING
SURVEY OF CLOUD COMPUTING
 
SURVEY OF CLOUD COMPUTING
SURVEY OF CLOUD COMPUTINGSURVEY OF CLOUD COMPUTING
SURVEY OF CLOUD COMPUTING
 
Cloud computing
Cloud computingCloud computing
Cloud computing
 
Cloud computing
Cloud computingCloud computing
Cloud computing
 
Cloud computing applicatio
Cloud  computing  applicatioCloud  computing  applicatio
Cloud computing applicatio
 
Cloud computing
Cloud computingCloud computing
Cloud computing
 
cc.doc
cc.doccc.doc
cc.doc
 
Cloud computing
Cloud computingCloud computing
Cloud computing
 
Cloud computing
Cloud computingCloud computing
Cloud computing
 
Cloud Computing Security Issues in Infrastructure as a Service” report
Cloud Computing Security Issues in Infrastructure as a Service” reportCloud Computing Security Issues in Infrastructure as a Service” report
Cloud Computing Security Issues in Infrastructure as a Service” report
 
Cloud computing report
Cloud computing reportCloud computing report
Cloud computing report
 
What Is Cloud Computing
What Is Cloud ComputingWhat Is Cloud Computing
What Is Cloud Computing
 
The Nitty Gritty of Cloud Computing
The Nitty Gritty of Cloud ComputingThe Nitty Gritty of Cloud Computing
The Nitty Gritty of Cloud Computing
 
Cloud Computing
Cloud ComputingCloud Computing
Cloud Computing
 
Cloud computing notes unit I as per RGPV syllabus
Cloud computing notes unit I as per RGPV syllabusCloud computing notes unit I as per RGPV syllabus
Cloud computing notes unit I as per RGPV syllabus
 
Cloud computing writeup
Cloud computing writeupCloud computing writeup
Cloud computing writeup
 
Cloud Computing Essay
Cloud Computing EssayCloud Computing Essay
Cloud Computing Essay
 
Cloud Computing Essay
Cloud Computing EssayCloud Computing Essay
Cloud Computing Essay
 
Historical development of cloud computing
Historical development of cloud computingHistorical development of cloud computing
Historical development of cloud computing
 

More from MIDAS Automation & Telecommunications Pvt. Ltd. (MIDAUTEL) (15)

Resurrection of isdn
Resurrection of isdnResurrection of isdn
Resurrection of isdn
 
Pvdtn
PvdtnPvdtn
Pvdtn
 
Smsdg layout & functioning
Smsdg layout & functioningSmsdg layout & functioning
Smsdg layout & functioning
 
Nwan
NwanNwan
Nwan
 
Llbu
LlbuLlbu
Llbu
 
Ngn
NgnNgn
Ngn
 
Vo p pstn
Vo p   pstnVo p   pstn
Vo p pstn
 
Mobile
MobileMobile
Mobile
 
Telephony
TelephonyTelephony
Telephony
 
Sts presentation
Sts presentationSts presentation
Sts presentation
 
Vpn1 a
Vpn1 aVpn1 a
Vpn1 a
 
Vpn1
Vpn1Vpn1
Vpn1
 
Mpls p2 p
Mpls   p2 pMpls   p2 p
Mpls p2 p
 
Pvdtn fa qs
Pvdtn fa qsPvdtn fa qs
Pvdtn fa qs
 
Pvdtn presentation
Pvdtn presentationPvdtn presentation
Pvdtn presentation
 

Recently uploaded

UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7DianaGray10
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXTarek Kalaji
 
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdfUiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdfDianaGray10
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?IES VE
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding TeamAdam Moalla
 
Videogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfVideogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfinfogdgmi
 
Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.YounusS2
 
Bird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemBird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemAsko Soukka
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopBachir Benyammi
 
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...DianaGray10
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1DianaGray10
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8DianaGray10
 
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Will Schroeder
 
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just MinutesAI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just MinutesMd Hossain Ali
 
Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )Brian Pichman
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024D Cloud Solutions
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfAijun Zhang
 
Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™Adtran
 

Recently uploaded (20)

20150722 - AGV
20150722 - AGV20150722 - AGV
20150722 - AGV
 
UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7UiPath Studio Web workshop series - Day 7
UiPath Studio Web workshop series - Day 7
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBX
 
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdfUiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
UiPath Solutions Management Preview - Northern CA Chapter - March 22.pdf
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team
 
Videogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfVideogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdf
 
Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.
 
Bird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemBird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystem
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 Workshop
 
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
 
201610817 - edge part1
201610817 - edge part1201610817 - edge part1
201610817 - edge part1
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8
 
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
Apres-Cyber - The Data Dilemma: Bridging Offensive Operations and Machine Lea...
 
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just MinutesAI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
 
Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdf
 
Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™Meet the new FSP 3000 M-Flex800™
Meet the new FSP 3000 M-Flex800™
 

Cloud computing

  • 1. An attempt to clear the confusion that engulfs Cloud Computing 1
  • 2. On the LinkedIn professional network there two discussion threads running currently in the Telecom Professionals Group and some that are running on the IT Next Group. The ones in the former Group are ClintonStop Following Follow Clinton 1. I hear a lot about the 'Cloud' and 'Cloud Computing'. Can someone explain to me what that is? RamiroStop Following Follow Ramiro 2. Telecom trends 2011- What do you think? A lot of views have been expressed by the participants in these two discussion threads by the proponents and opponents of Cloud Computing. Philippe Portes participating in the first these discussion threads brought out some hard truths about his experience with Cloud Computing about 2 days ago .which led to some invigorating discussions by Ariel Gollon and Tim Templeton. Although Ramiro Gonzales had suggested various topics while opening this second discussion thread, according to the statistics he has compiled, Cloud Computing Communications has held centre stage amongst all the other possible topics of discussions on Telecom Trends 2011. I have been chipping in with comments in both these discussion threads to clear some confusions that exist. Finally Dirk de Vos’ post 23 Hrs. ago on the 5th.May, 2011 has provoked me to try and prepare this note which is aimed at touching on some fundamental aspects of Cloud Computing and Network Security with respect to an organisation’s internal databases. To do this I will need to address the fundamentals of Cloud computing, and also the fundamentals of network security issues. 2
display terminal for processes occurring on a network of computers far away. Consumers now routinely use data-intensive applications driven by cloud technology that were previously unavailable owing to cost and deployment complexity. In many companies, employees and departments are bringing a flood of consumer technology into the workplace, and this raises legal compliance and security concerns for the corporation.

The common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is "The Cloud". The most common analogy used to explain cloud computing is that of public utilities such as electricity, gas and water. Just as centralized and standardized utilities free individuals from the difficulties of generating electricity or pumping water, cloud computing frees users from certain hardware and software installation and maintenance tasks through the use of simpler hardware that accesses a vast network of computing resources (processors, hard drives, etc.). The sharing of resources reduces the cost to individuals.

The phrase "cloud computing" originated from the cloud symbol that is usually used in flow charts and diagrams to symbolize the Internet. The principle behind the cloud is that any computer connected to the Internet is connected to the same pool of computing power, applications and files. Users can store and access personal files such as music, pictures, videos and bookmarks, or play games, or use productivity applications on a remote server, rather than physically carrying around a storage medium such as a DVD or thumb drive.

Almost all users of the Internet may be using a form of cloud computing, though few realize it. Anyone who uses web-based email such as Gmail, Hotmail or Yahoo, a company-owned email service, or even an email client program such as Outlook, Evolution, Mozilla Thunderbird or Entourage is making use of cloud email servers. Hence, desktop applications which connect to cloud email would be considered cloud applications.
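To make the "files on a remote server" idea concrete, the short sketch below fetches a document from a remote web server using nothing but the Python standard library. It is purely illustrative: the URL is a placeholder standing in for any cloud-hosted file, not a real storage service.

import urllib.request

# The file lives on a remote server, not on a DVD or thumb drive;
# the URL below is a placeholder, not a real cloud storage service.
url = "https://example.com/"
with urllib.request.urlopen(url, timeout=10) as response:
    data = response.read()
print(f"Fetched {len(data)} bytes from the remote server")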
Cloud computing utilizes the network as a means to connect user end-point devices (end points) to resources that are centralized in a data center. The data center may be accessed via the Internet or a company network, or both. In many cases a cloud service may allow access from a variety of end points such as a mobile phone, a PC or a tablet. Cloud services may be designed to be vendor agnostic, working equally well with Linux, Mac and PC platforms. They can also allow access from any Internet-connected location, allowing mobile workers to access business systems remotely, as in telecommuting, and extending the reach of business services provided by outsourcing.

A user end point with minimal software requirements may submit a task for processing. The service provider may pool the processing power of multiple remote computers in "the cloud" to achieve the task, such as data warehousing of hundreds of terabytes, managing and synchronizing multiple documents online, or computationally intensive work. These tasks would normally be difficult, time-consuming or expensive for an individual user or a small company to accomplish. The outcome of the processing task is returned to the client over the network. In essence, the heavy lifting of a task is outsourced to an external entity with more resources and expertise.

The services, such as data storage and processing, and the software are provided by the company hosting the remote computers. The clients are only responsible for having a simple computer with a connection to the Internet, or a company network, in order to make requests to and receive data from the cloud. Computation and storage are divided among the remote computers in order to handle large volumes of both, so the client need not purchase expensive hardware to handle the task.

Technical description

The National Institute of Standards and Technology (NIST) provides a concise and specific definition: Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Cloud computing provides computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid, where end users consume power without needing to understand the component devices or infrastructure required to provide the service.

Cloud computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols, and it typically involves provisioning of dynamically scalable and often virtualized resources. It is a byproduct and consequence of the ease of access to remote computing sites provided by the Internet. This may take the form of web-based tools or applications that users can access and use through a web browser as if they were programs installed locally on their own computers. Cloud computing providers deliver applications via the Internet, which are accessed from a web browser, while the business software and data are stored on servers at a remote location.

In some cases, legacy applications (line-of-business applications which until now have been prevalent in thick-client Windows computing) are delivered via a screen-sharing technology such as Citrix XenApp, while the compute resources are consolidated at a remote data center location; in other cases entire business applications have been coded using web-based technologies such as AJAX.

Most cloud computing infrastructures consist of services delivered through shared data centers. The cloud may appear as a single point of access for consumers' computing needs; notable examples include the iTunes Store and the iPhone App Store. Commercial offerings may be required to meet service level agreements (SLAs), but specific terms are less often negotiated by smaller companies.

Characteristics

The key characteristic of cloud computing is that the computing is "in the cloud"; that is, the processing (and the related data) is not in a specified, known or static place. This is in contrast to a model in which the processing takes place in one or more specific servers that are known. All the other concepts mentioned are supplementary or complementary to this one.

Architecture

(Figure: cloud computing sample architecture)

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over application programming interfaces (APIs), usually web services within a 3-tier architecture. This resembles the Unix philosophy of having multiple programs each doing one thing well and working together over universal interfaces. Complexity is controlled, and the resulting systems are more manageable than their monolithic counterparts.

The two most significant components of cloud computing architecture are known as the front end and the back end. The front end is the part seen by the client, i.e. the computer user. This includes the client's network (or computer) and the applications used to access the cloud via a user interface such as a web browser. The back end of the cloud computing architecture is the 'cloud' itself, comprising various computers, servers and data storage devices.
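The front-end / back-end split described above is easy to demonstrate in code. The sketch below is a minimal illustration using only the Python standard library: a toy "back end" HTTP service runs in a thread, and a thin "front end" client calls it over an API. The endpoint path and payload are invented for the example; real cloud APIs are of course far richer.

import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy "back end": the part of the cloud the user never sees.
class BackEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        # A made-up API endpoint returning a made-up task result.
        body = json.dumps({"task": "word-count", "result": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging for the demo
        pass

server = HTTPServer(("127.0.0.1", 0), BackEnd)  # port 0: pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Toy "front end": a thin client that only needs a network path and a URL.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/task") as resp:
    print(json.loads(resp.read()))  # -> {'task': 'word-count', 'result': 42}

server.shutdown()

The point of the exercise is the division of labour: all the processing happens behind the API, and the front end could equally be a phone, a PC or a tablet.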
Deployment models

(Figure: cloud computing types)

Public cloud

A public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications / web services, from an off-site third-party provider who bills on a fine-grained utility computing basis. The best-known examples of public clouds are the various search engines, such as Google, which serve as our information bank for all information available in the public domain, and the email applications of email service providers such as Gmail, Yahoo and Hotmail.

Community cloud

A community cloud may be established where several organizations have similar requirements and seek to share infrastructure so as to realize some of the benefits of cloud computing. The costs are spread over fewer users than a public cloud (but more than a single tenant). This option may offer a higher level of privacy, security and/or policy compliance. In addition it can be economically attractive, as the resources (storage, workstations) utilized and shared in the community have already been exploited and have reached their return on investment. Examples of community clouds include Google's "Gov Cloud".

Hybrid cloud and hybrid IT delivery

The main responsibility of the IT department is to deliver services to the business. With the proliferation of cloud computing (both private and public), and the fact that IT departments must also deliver services via traditional in-house methods, the newest catch-phrase has become "hybrid cloud computing". Hybrid cloud is also called hybrid delivery by the major vendors, including HP, IBM, Oracle and VMware, who offer technology to manage the complexity of the performance, security and privacy concerns that result from the mixed delivery methods of IT services.

A hybrid storage cloud uses a combination of public and private storage clouds. Hybrid storage clouds are often useful for archiving and backup functions, allowing local data to be replicated to a public cloud.

Another perspective on deploying a web application in the cloud is Hybrid Web Hosting, where the hosting infrastructure is a mix of cloud hosting and managed dedicated servers. This is most commonly achieved as part of a web cluster in which some of the nodes run on real physical hardware and some run on cloud server instances.

Combined cloud

Two clouds that have been joined together are more correctly called a "combined cloud". A combined cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises". By integrating multiple cloud services, users may be able to ease the transition to public cloud services while avoiding issues such as PCI compliance.

Private cloud

Douglas Parkhill first described the concept of a "private computer utility" in his 1966 book The Challenge of the Computer Utility. The idea was based upon direct comparison with other industries (e.g. the electricity industry) and the extensive use of hybrid supply models to balance and mitigate risks. "Private cloud" and "internal cloud" have been described as neologisms, but the concepts themselves pre-date the term "cloud" by 40 years. Even within modern utility industries, hybrid models still exist despite the formation of reasonably well-functioning markets and the ability to combine multiple providers.

Some vendors have used the terms to describe offerings that emulate cloud computing on private networks. These (typically virtualization automation) products offer the ability to host applications or virtual machines in a company's own set of hosts. They provide the benefits of utility computing: shared hardware costs, the ability to recover from failure, and the ability to scale up or down depending upon demand.

Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept". Enterprise IT organizations use their own private cloud(s) for mission-critical and other operational systems to protect their critical infrastructure. Therefore, for all intents and purposes, "private clouds" are not an implementation of cloud computing at all, but are in fact an implementation of a technology subset: the basic concept of virtualized computing. However, as will be seen from the notes on network security issues to follow, private clouds are absolutely essential for 100% security of an organization's or enterprise's internal databases.

Cloud engineering

Cloud engineering is the application of a systematic, disciplined, quantifiable, and interdisciplinary approach to the ideation, conceptualization, development, operation, and maintenance of cloud computing, as well as the study and applied research of the approach, i.e., the application of engineering to the cloud. It is a maturing and evolving discipline that facilitates the adoption, strategisation, operationalisation, industrialisation, standardization, productisation, commoditisation, and governance of cloud solutions, leading towards a cloud ecosystem. Cloud engineering is also known as cloud service engineering.
Cloud storage

Cloud storage is a model of networked computer data storage where data is stored on multiple virtual servers, generally hosted by third parties, rather than on dedicated servers. Hosting companies operate large data centers, and people who require their data to be hosted buy or lease storage capacity from them and use it for their storage needs. The data center operators, in the background, virtualize the resources according to the requirements of the customer and expose them as virtual servers, which the customers can themselves manage. Physically, the resource may span multiple servers. Again, from the notes on network security issues to follow, you will see that it is not advisable to leave sensitive business data on such hosted storage.

The Intercloud

The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based. The term was first used in the context of cloud computing in 2007, when Kevin Kelly stated that "eventually we'll have the Intercloud, the cloud of clouds. This Intercloud will have the dimensions of one machine comprising all servers and attendant cloud-books on the planet". It became popular in 2009 and has also been used to describe the data center of the future.

The Intercloud scenario is based on the key observation that no single cloud has infinite physical resources. If a cloud saturates the computational and storage resources of its virtualization infrastructure, it will not be able to satisfy further requests for service allocations from its clients. The Intercloud scenario aims to address this situation: in theory, each cloud can use the computational and storage resources of the virtualization infrastructures of other clouds. Such a form of pay-for-use may introduce new business opportunities among cloud providers, if they manage to move beyond the theoretical framework. Nevertheless, the Intercloud raises many more challenges than solutions concerning cloud federation, security, interoperability, quality of service, vendor lock-in, trust, legal issues, monitoring and billing.

The concept of a competitive utility computing market which combined many computer utilities together was originally described by Douglas Parkhill in his 1966 book The Challenge of the Computer Utility. This concept has been used many times over the last 40 years and is identical to the Intercloud.
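Returning to cloud storage for a moment: the statement that a stored object "may span multiple servers" can be illustrated with a toy replicator. The sketch below is purely conceptual; the nodes are in-memory dictionaries standing in for real storage servers, and the replication factor of two is an arbitrary choice for the example, not how any particular provider works.

import hashlib

NODES = {f"node-{i}": {} for i in range(4)}  # four pretend storage servers
REPLICAS = 2  # keep each object on two nodes

def place(key: str) -> list[str]:
    """Pick REPLICAS nodes for a key by hashing it (a crude placement rule)."""
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(NODES)
    names = sorted(NODES)
    return [names[(start + i) % len(names)] for i in range(REPLICAS)]

def put(key: str, blob: bytes) -> None:
    for name in place(key):
        NODES[name][key] = blob  # the same object lands on several servers

def get(key: str) -> bytes:
    for name in place(key):
        if key in NODES[name]:
            return NODES[name][key]  # any surviving replica will do
    raise KeyError(key)

put("holiday-photo.jpg", b"...bytes...")
print(place("holiday-photo.jpg"), get("holiday-photo.jpg"))

The customer sees one logical store; the operator is free to spread, move and duplicate the data across machines behind the scenes.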
Issues

Privacy

The cloud model has been criticized by privacy advocates for the greater ease with which the companies hosting the cloud services can control, and thus monitor at will, lawfully or unlawfully, the communication and data stored between the user and the host company. Instances such as the secret NSA programme which, working with AT&T and Verizon, recorded over 10 million phone calls between American citizens cause uncertainty among privacy advocates, as do the greater powers this gives telecommunication companies to monitor user activity. While there have been efforts (such as the US-EU Safe Harbor framework) to "harmonise" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones".

Compliance

In order to obtain compliance with regulations such as FISMA, HIPAA and SOX in the United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt community or hybrid deployment modes, which are typically more expensive and may offer restricted benefits. This is how Google is able to "manage and meet additional government policy requirements beyond FISMA" and Rackspace Cloud is able to claim PCI compliance. Customers in the EU contracting with cloud providers established outside the EU/EEA have to adhere to the EU regulations on export of personal data.

Many providers also obtain SAS 70 Type II certification (e.g. Amazon, Salesforce.com, Google and Microsoft), but this has been criticised on the grounds that the hand-picked set of goals and standards determined by the auditor and the audited are often not disclosed and can vary widely. Providers typically make this information available on request, under a non-disclosure agreement.

Legal

In March 2007, Dell applied to trademark the term "cloud computing" (U.S. Trademark 77,139,082) in the United States. The "Notice of Allowance" the company received in July 2008 was cancelled in August, resulting in a formal rejection of the trademark application less than a week later. Since 2007, the number of trademark filings covering cloud computing brands, goods and services has increased at an almost exponential rate. As companies sought to better position themselves for cloud computing branding and marketing efforts, cloud computing trademark filings increased by 483% between 2008 and 2009. In 2009, 116 cloud computing trademarks were filed, and trademark analysts predicted that over 500 such marks could be filed during 2010.

Other legal cases may shape the use of cloud computing by the public sector. On October 29, 2010, Google filed a lawsuit against the U.S. Department of the Interior, which had opened a bid for software requiring that bidders use Microsoft's Business Productivity Online Suite. Google sued, calling the requirement "unduly restrictive of competition". Scholars have pointed out that, beginning in 2005, the prevalence of open standards and open source may have an impact on the way that public entities choose to select vendors.

Open source

Open source software has provided the foundation for many cloud computing implementations. In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 intended to close a perceived legal loophole associated with free software designed to be run over a network.

Open standards

Most cloud providers expose APIs which are typically well documented (often under a Creative Commons license) but also unique to their implementation and thus not interoperable. Some vendors have adopted others' APIs, and there are a number of open standards under development, including the OGF's Open Cloud Computing Interface. The Open Cloud Consortium (OCC) is working to develop consensus on early cloud computing standards and practices.
Security

The relative security of cloud computing services is a contentious issue which may be delaying their adoption. Issues barring the adoption of cloud computing are due in large part to the private and public sectors' unease surrounding the external management of security-based services. It is the very nature of cloud computing services, private or public, to promote the external management of the services provided. This gives cloud computing service providers a strong incentive to prioritise building and maintaining strong management of secure services. Organizations have been formed to provide standards for a better future in cloud computing services. One organization in particular, the Cloud Security Alliance, is a non-profit organization formed to promote the use of best practices for providing security assurance within cloud computing.

The notes that follow on network security issues will expose the folly of endeavouring to arrange security of cloud computing in general, although measures like PCI DSS, and some of the measures taken to ensure the security of online banking transactions, are relevant.

Availability and performance

In addition to concerns about security, businesses are also worried about acceptable levels of availability and performance of applications hosted in the cloud. There are also concerns about cloud providers shutting down for financial or legal reasons, which has happened in a number of cases. Strong network connectivity is an essential requirement for the availability and performance of cloud computing.

Sustainability and siting

Although cloud computing is often assumed to be a form of "green computing", there is as yet no published study to substantiate this assumption. The siting of the servers affects the environmental impact of cloud computing. In areas where the climate favours natural cooling and renewable electricity is readily available, the environmental effects will be more moderate. Thus countries with favourable conditions, such as Finland, Sweden and Switzerland, are trying to attract cloud computing data centres. SmartBay, a marine research infrastructure of sensors and computational technology, is being developed using cloud computing, an emerging approach to shared infrastructure in which large pools of systems are linked together to provide IT services.

Research

A number of universities, vendors and government organizations are investing in research around the topic of cloud computing. Academic institutions include the University of Melbourne (Australia), Georgia Tech, Yale, Wayne State, Virginia Tech, University of Wisconsin–Madison, Carnegie Mellon, MIT, Indiana University, University of Massachusetts, University of Maryland, IIT Bombay, North Carolina State University, Purdue University, University of California, University of Washington, University of Virginia, University of Utah and University of Minnesota, among others.

Joint government, academic and vendor collaborative research projects include the IBM/Google Academic Cloud Computing Initiative (ACCI). In October 2007, IBM and Google announced the multi-university project designed to enhance students' technical knowledge to address the challenges of cloud computing. In April 2009, the National Science Foundation joined the ACCI and awarded approximately $5 million in grants to 14 academic institutions.

In July 2008, HP, Intel Corporation and Yahoo announced the creation of a global, multi-data-centre, open source test bed, called Open Cirrus, designed to encourage research into all aspects of cloud computing, service and data centre management. Open Cirrus partners include the NSF, the University of Illinois (UIUC), Karlsruhe Institute of Technology, the Infocomm Development Authority (IDA) of Singapore, the Electronics and Telecommunications Research Institute (ETRI) in Korea, the Malaysian Institute for Microelectronic Systems (MIMOS), and the Institute for System Programming at the Russian Academy of Sciences (ISPRAS). In September 2010, more researchers joined the HP/Intel/Yahoo Open Cirrus project for cloud computing research: the China Mobile Research Institute (CMRI), Spain's Supercomputing Center of Galicia (CESGA by its Spanish acronym), Georgia Tech's Center for Experimental Research in Computer Systems (CERCS), and China Telecom.

In July 2010, HP Labs India announced a new cloud-based technology designed to simplify taking content and making it mobile-enabled, even from low-end devices. Called SiteonMobile, the new technology is designed for emerging markets where people are more likely to access the Internet via mobile phones than via computers. In November 2010, HP formally opened its Government Cloud Theatre, located at the HP Labs site in Bristol, England. The demonstration facility highlights high-security, highly flexible cloud computing based on intellectual property developed at HP Labs. The aim of the facility is to lessen fears about the security of the cloud. HP Labs Bristol is HP's second-largest central research location and is currently responsible for researching cloud computing and security.

The IEEE Technical Committee on Services Computing in the IEEE Computer Society sponsors the IEEE International Conference on Cloud Computing (CLOUD). CLOUD 2010 was held on July 5–10, 2010 in Miami, Florida.

On March 23, 2011, Google, Microsoft, HP, Yahoo, Verizon, Deutsche Telekom and 17 other companies formed a non-profit organisation called the Open Networking Foundation, focused on providing support for a new cloud initiative called Software-Defined Networking. The initiative is meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centres and other networking areas.

Criticism of the term

Some have come to criticize the term as being either too unspecific or even misleading. CEO Larry Ellison of Oracle Corporation asserts that cloud computing is "everything that we already do", claiming that the company could simply "change the wording on some of our ads" to deploy their cloud-based services. Forrester Research VP Frank Gillett questions the very nature of, and motivation behind, the push for cloud computing, describing what he calls "cloud washing": companies simply relabeling their products as "cloud computing", resulting in mere marketing innovation instead of "real" innovation. GNU's Richard Stallman insists that the industry will only use the model to deliver services at ever-increasing rates over proprietary systems, likening it to a "marketing hype campaign".
I could not agree more with the critics of the term. Oracle has pioneered web-based computing with centralised data centres for organisations and enterprises for many years now; it can well change the label of its products to be "cloud compliant". Basically, companies are taking advantage of the new movement to repackage old wine in new bottles. The availability of higher-speed Internet access from both stationary and mobile devices today, compared with what was available a few years back, is what has given the push to cloud computing: the utilisation of public cloud services.

Conclusion

There is absolutely no doubt that the public cloud will thrive with the search engines, the emailing services, the payment services, online banking services, and other person-to-person applications which are normally carried out over the Internet. However, organisations and enterprises will have to be circumspect about how much of their business operations they can offload to cloud applications, if at all. The next section of this note, for my fellow contributors and the data communications and IT fraternity across the world, addresses this issue.

Network Security Issues

Each organisation has its own security perceptions and requirements. To get a grip on these, it needs to ask itself the following very pertinent questions to determine how it should lay out its IT infrastructure.

Is your organisation's internal data important and exclusive to you? If so, how secure is this information?

If leaked, what would result?
..... Business loss?
..... Revenue loss?
..... Erosion of profit?
..... Erosion of corporate value?

If damaged, what would result?
..... Breakdown of business operations?
..... Unfulfilled delivery commitments?
..... Unfulfilled commercial commitments?
..... Erosion of corporate value?

Would you like to protect your organisation from such an eventuality?
If the answers to all the above questions are positive, then they need to understand where the security threats emanate from. To enable them to do this, I reproduce for ready reference the series of discussions started by me in the IT Next Group of LinkedIn, available at the URL http://www.linkedin.com/groups?search=&answerCategory=myq&gid=2261770.

How secure is VPN (MPLS or otherwise) for MLO (multi-locational organisation) INTRANET connectivity?

The MLOs may be banks, corporate organisations and government organisations. Before we address this question it is necessary to bring to the fore some basic facts about VPN connectivity, MPLS or otherwise, which may or may not be known to the readers of this post.

All VPNs, irrespective of the protocols being used (MPLS, Frame Relay, ATM), are laid out over the IP backbones of the different telephone service providers (TSPs). These IP backbones of the TSPs not only serve the VPN networks of different subscribers, but also serve the public data networks through PSTN, ISDN, PDN and broadband, and are also connected to the National Internet Exchange Gateways (NIEX). All these networks connect to the national IP backbone of the TSP through a Tier 1 switch at each city / town.

As is known by all those who are aware of the functioning of IP networks, in such a network all routers connected to the network through these Tier 1 switches have continuous physical access to each other. A further characteristic of IP networks is that they support concurrent, simultaneous communications between all routers connected to the network through the Tier 1 switches at each POP (point of presence). Thus, while routers A and B are in communication, a router C in the network can simultaneously be communicating with A or B or both. This is the beauty and also the bane of IP networks. Beauty, because unlike circuit-switched networks there is no blocking of communications between any pair of routers, even if one of them is already engaged in communication with another router; in the circuit-switched scenario the third device would be blocked from communicating with either of the devices already engaged, resulting in a busy tone. The feature is a bane because, in networks which have public domain access, as do the IP backbones of all TSPs, the third router could be that of a hacker sitting in the public domain, who is provided continuous physical access to the VPN router port of an organisation through the Tier 1 switches in each city / town. Once this continuous physical access is available to a hacker / cracker, he or she can get into the LAN associated with the VPN router through the process of snooping and spoofing, and thence to the internal databases residing in the INTRANET.

Thus, while a VPN facilitates the secure transport of data between points A and B in the network through the TSP IP backbone using security protocols like IPsec, it exposes the internal databases of the organisation to outside intrusion, since they have public domain access from the TSP IP backbone. We therefore see that the internal databases of an organisation are vulnerable when the INTRANET connectivity of an MLO is arranged through VPN (MPLS or otherwise). To give you a view of how a VPN is connected through a typical TSP's IP backbone, I would refer the reader to the first two slides of VPN.ppt, which show schematics of the topology and architecture of a typical TSP IP backbone. This is available at http://www.slideshare.net/pankajmitra
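The "no blocking" property described above is easy to observe in code. The sketch below is a minimal illustration, not a model of a TSP backbone: a single TCP listener stands in for a router port, and three clients hold connections to it at the same time, something a circuit-switched line could not offer a third party.

import socket
import threading

# One TCP listener stands in for a router port on an IP backbone.
listener = socket.create_server(("127.0.0.1", 0))
host, port = listener.getsockname()

def serve():
    while True:
        conn, _ = listener.accept()
        conn.sendall(b"connected\n")  # every caller is answered; none is refused
        conn.close()

threading.Thread(target=serve, daemon=True).start()

# "Routers" A, B and C reach the same port concurrently; none of them
# hears a busy tone, as it would on a circuit-switched line.
conns = {name: socket.create_connection((host, port)) for name in "ABC"}
for name, s in conns.items():
    print(name, s.recv(64).decode().strip())
    s.close()

The same openness that lets A, B and C all get through is what gives an attacker in the public domain a standing path to the port.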
Why is VPN growing in popularity in the IT world despite the inherent vulnerability of internal databases connected through VPN-based INTRANETs?

VPNs are themselves laid out over telecom service providers' IP networks (see the PowerPoint presentation VPN.ppt) along with all other public data services and the Internet. Thus internal databases connected through such VPN / MPLS VPN networks can be accessed from the public domain networks, for the reasons explained in Slide 3 of that presentation. However, most IT consultants and system integrators lead their customers to believe that their databases are secure when connected through VPN / MPLS VPN networks. They do it for the following reasons.

A. It means less work for them: they do not have to write router tables, as is required for point-to-point leased lines.

B. They lead customers to believe that it is cheaper to have VPN / MPLS VPN networks than point-to-point leased-line networks. This again is a myth, as is shown in the document MPLS-P2P.doc. See http://www.slideshare.net/pankajmitra.

C. Customer IT managers also find this convenient, as their work is likewise reduced: they are connected to the service provider through a single- or two-WAN-port router to the nearest VPN node of the service provider. For any network problem they haul up the service provider and sit back themselves.

D. Thus customer IT managers choose the easy way. This is fine as long as there is no intrusion on the databases from hackers sitting in the public domain, who have continuous physical access to the VPN router ports. The trouble will start if and when the databases get hacked. They will then be in a nightmarish situation trying to retrieve the databases, if there is anything left to retrieve. The easy way is the hard way.

E. If, on the other hand, the consultant, the system integrator and the IT managers of the company took the trouble of setting up a point-to-point leased-line network by configuring the router tables of their private network (the hard way), the network would be free from any intrusion by hackers, as such a network denies physical access to the public domain and consequently to hackers. There will be no hacking, and the network administrators and the IT managers will have a trouble-free life (the easy way). Thus the hard way is the easy way.

"The hard way is the easy way, and the easy way is the hard way"

Are firewalls breakable?

A firewall is a dedicated appliance with embedded software, or software running on a computer, which inspects network traffic passing through it and denies or permits passage based on a set of rules. It is normally placed between a protected network and an unprotected network, and acts like a gate to protect assets, ensuring that nothing private goes out and nothing malicious comes in. A firewall's basic task is to regulate the flow of traffic between computer networks of different trust levels. Typical examples are the Internet, which is a zone with no trust, and an internal network, which is a zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a "perimeter network" or de-militarised zone (DMZ). There are several types of firewall technique, illustrated further in the sketch after this list:

1. Packet filter: inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. It is susceptible to IP spoofing.

2. Application gateway: applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose performance degradation.

3. Circuit-level gateway: applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.

4. Proxy server: intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.

For all types of firewall, the rules and filters are set using software algorithms. Hackers / crackers have techniques for masking their data packets to conform to the filters and rules for access to the network; this is also known as spoofing. Once through into the network, they can seize the computers or the proxy through Telnet / SSH access, go about disabling the software algorithms which were used to set the rules and filters, and open up the system to do whatever they wish. The IDS (intrusion detection systems) and IDP (intrusion detection and protection) systems available today can make it difficult for hackers to get into the network, but not impossible: it will take them more time, but they can eventually get through. To be able to do this, the only thing the hackers need is continuous physical connectivity to the routers between the Internet connection and the proxy server, or the direct bank of computers, which is available to them through a broadband Internet connection.

The race between the protector and the spoofer is a continuing process. You may go on spending money to increase the deterrence, depending on the value you ascribe to the asset (the databases) you are trying to protect. If the asset is also perceived to be valuable by the hackers / crackers, or by those who have employed them, then they too will be willing to devote more time to the job at hand. A high-end security system with IDS / IDP built in and several layers of firewall would cost multiples of crores of rupees, and yet it would not be totally invincible to hackers. The only real way to prevent the hackers / crackers from getting in is to segregate the corporate network (INTRANET) from any form of public domain access: the Internet, or a VPN (which also provides continuous access to the corporate network from the public domain). See VPN.ppt at http://www.slideshare.net/pankajmitra. This way we disable the continuous physical access to the INTRANET from the Internet and from the intruders. By adding systems like IDS / IDP you can delay the cracking, but not prevent it totally. Thus all firewalls are breakable, although the effort required depends on the degree of protection built in. This is what prompts one to say:

"For every firewall there is a water hose"
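To make the packet-filter idea concrete, here is a toy first-match rule evaluator. The rule set and packet fields are invented for the illustration, and the last line shows why a filter keyed on source addresses is vulnerable to spoofing: it trusts whatever address the packet header claims.

# Each rule: (source_prefix, destination_port, action); first match wins.
RULES = [
    ("10.0.0.", 22, "allow"),   # management subnet may reach SSH
    ("", 80, "allow"),          # anyone may reach the web server
    ("", None, "deny"),         # default: drop everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and port in (None, dst_port):
            return action
    return "deny"

print(filter_packet("203.0.113.9", 80))   # allow: public web traffic
print(filter_packet("203.0.113.9", 22))   # deny: outsider probing SSH
# Spoofing: the attacker forges a 10.0.0.x source address, and the
# filter, trusting the header, lets the packet through.
print(filter_packet("10.0.0.99", 22))     # allow: spoofed packet passes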
If we isolate our INTRANET from the Internet to achieve 100% security of internal databases, how do we communicate with clients, vendors, the general public, and consultants, and participate in e-Commerce and online banking activity (in the case of banks)?

This may be done by placing the MLO's (multi-locational organisation's) Web server, together with its storage, on a separate Internet LAN at the organisation's central location. This LAN may be connected through a standard firewall from Cisco or others, and a point-to-point (p2p) leased line of appropriate bandwidth (depending on the busyness of the link and the number of hits on the server within a certain time), to the nearest POP (point of presence) of the Internet Service Provider (ISP). The Web server, designated the PS, will carry the e-Commerce or online banking client services, all the publishable information of the organisation, and the Internet email gateway for the organisation.

The various fields in the PS are replicated in an Intermediate Server (IS) and in a Company Communications Server (CS), the latter residing in the INTRANET. All in-and-out communications between the INTRANET and the Internet are routed through the CS, to and from the various databases and the organisation's mail server. The IS is connected to a three-position, electro-mechanical, microcontroller-driven RJ45 switch which connects it to either the INTRANET LAN or the Internet LAN, never to both together. Two-way synchronisation takes place between the CS and the IS when the switch is in the INTRANET position, and between the IS and the PS when it is in the Internet position. This ensures a free flow of information between the INTRANET and the Internet without impairing the 100% security of the databases residing in the INTRANET. This system is designated the STS (Secure Transfer System) and may be seen in slides 6 to 10 of PVDTN Presentation 1.ppt in http://www.slideshare.net/pankajmitra. The denial of direct Internet access to the INTRANET ensures the 100% security of the databases connected to it. People who wish to communicate or interact with the MLO do so through the PS and its associated storage, which are connected to the Internet, from anywhere in the world.
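The essential invariant of the STS – the IS touches one LAN at a time, never both – can be sketched as a small state machine. The class and method names below are hypothetical, as is the assumption that the third switch position connects to neither LAN; the real system is electro-mechanical, but the control logic it enforces is of this shape.

```python
# A minimal sketch (hypothetical names) of the STS control logic:
# the Intermediate Server is switched to one LAN at a time, syncing
# with the CS on the INTRANET side and the PS on the Internet side,
# so the two LANs are never bridged.

from enum import Enum

class Position(Enum):
    OFF = 0        # centre position: connected to neither LAN (assumed)
    INTRANET = 1   # IS <-> CS synchronisation
    INTERNET = 2   # IS <-> PS synchronisation

class SecureTransferSwitch:
    def __init__(self):
        self.position = Position.OFF

    def move(self, target):
        # The switch passes through OFF between positions, so both
        # LANs are never connected at once.
        self.position = Position.OFF
        self.position = target

    def sync_cycle(self, intranet_sync, internet_sync):
        """One transfer cycle: pick up on the INTRANET side,
        then deliver on the Internet side (and vice versa)."""
        self.move(Position.INTRANET)
        intranet_sync()            # two-way sync with the CS
        self.move(Position.INTERNET)
        internet_sync()            # two-way sync with the PS
        self.move(Position.OFF)

if __name__ == "__main__":
    sts = SecureTransferSwitch()
    sts.sync_cycle(lambda: print("synced with CS"),
                   lambda: print("synced with PS"))
```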
Integrate voice and fax over your 100% secure, 100% uptime INTRANET to save inter-locational telecom and travelling costs

There are two ways in which voice and fax may be integrated over INTRANETs built with p2p leased lines: conventional VoIP, or the path-breaking, patented PVDTN system. In the former, voice / fax packets are sent along with the data packets through the p2p links of the WAN. In the latter, using channel splitters, two parallel networks are run over the same p2p leased line backbone – an IP packet-switched network (for data communications and all IP services) and a circuit-switched network (for voice and fax communications). The latter is more bandwidth-efficient – see FAQ 3 in the presentation PVDTN FAQs.ppt in http://www.slideshare.net/pankajmitra.

In the PVDTN system, by adding between 30 and 40% of the bandwidth required for data communications over your IP data network built on point-to-point (p2p) leased lines, the total inter-locational voice and fax communications – including multiple simultaneous NET meetings, using voice and voice-data conferencing, for different work groups with officers joining from their respective workplaces at a moment's notice – can be carried over the secure INTRANET. This will eliminate the PSTN and other public telephony costs currently incurred for such communications, and also save the travelling costs and time of conducting such meetings. How the savings take place is shown in FAQ 1 in the presentation PVDTN FAQs.ppt in http://www.slideshare.net/pankajmitra. The money so saved will help recover the capital expenditure within a year or two. And since the inter-locational voice / fax communications are carried over the 100% secure INTRANET, they are free from the eavesdropping possible in public telephone networks.
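As a rough worked example of the payback arithmetic, the sketch below uses the extra-bandwidth figure from above. The link size, tariff, telephony bill, and equipment cost are illustrative assumptions, not figures from any actual deployment; only the 30 to 40% ratio comes from the discussion above.

```python
# Illustrative payback arithmetic for PVDTN voice/fax integration.
# All rupee figures and the 2 Mbps link are assumptions for this
# sketch; only the 30-40% extra-bandwidth ratio is from the text.

data_bandwidth_kbps    = 2048          # existing p2p data link (assumed)
extra_ratio            = 0.40          # upper end of the 30-40% figure
extra_bandwidth_kbps   = data_bandwidth_kbps * extra_ratio

lease_cost_per_kbps_pa = 500           # assumed annual tariff, Rs per kbps
extra_lease_cost_pa    = extra_bandwidth_kbps * lease_cost_per_kbps_pa

telephony_bill_pa      = 1_500_000     # assumed inter-locational PSTN bill, Rs
annual_saving          = telephony_bill_pa - extra_lease_cost_pa

capital_expenditure    = 2_000_000     # assumed one-time equipment cost, Rs
payback_years          = capital_expenditure / annual_saving

print(f"Extra bandwidth needed : {extra_bandwidth_kbps:.0f} kbps")
print(f"Net annual saving      : Rs {annual_saving:,.0f}")
print(f"Payback period         : {payback_years:.1f} years")
```

Under these assumed figures the payback works out to under two years, consistent with the claim above that the capital expenditure is recovered within a year or two.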
Inferences

The following inferences may be drawn from the above discussions:

A. Any network which is laid out over the IP backbone of a telephone service provider shares this space with various public domain networks like the PSTN, ISDN, PDN, and broadband services, and hence provides continuous physical access from the public domain, being part of the same IP backbone.

B. Such networks, like VPN networks, if used to connect an organisation's internal databases, will expose them to hacker / cracker attacks from the public domain.

C. Firewalls are breakable and hence cannot give 100% security to the organisation's internal databases as long as there is continuous physical access from the public domain.

D. The only way to ensure 100% security of an organisation's internal databases is to connect them through an INTRANET which has no physical access from the public domain. This is done by building the INTRANET WAN using point-to-point leased lines, and ensuring that there is no public domain access (Internet, ISDN, PSTN, or other shared networks like VPN) to the INTRANET LAN at each organisation location.

E. Since all organisations have to have their presence on the Internet and participate in the World Wide Web (www), this is enabled through Web-based Proxy Servers (PS) connected to the Internet. The PS will house the organisation's external mail gateway, the collaboration tools, the e-Commerce applications, and the online banking applications (in the case of banks), and will be open to access by all members of the Internet community, with different levels of access based on their relationship with the organisation.

F. To facilitate e-Commerce, e-Banking, etc., there will have to be a free flow of information back and forth between this PS and its associated storage and the INTRANET Communications Gateway Server (CS), through a secure transfer system which ensures that there is no direct contact between the INTRANET LAN and the Internet LAN, so that there is no impairment of the 100% security of the INTRANET.

G. Since the INTRANET provides a dedicated, 100% secure network to the organisation, technology now exists to integrate voice, fax, and conferencing (voice and voice-data) cost-effectively by increasing the link bandwidths by 40 to 50%, thereby eliminating public telephony between all company locations and saving approximately 50 to 75% of present telephone costs, as well as a substantial part of the inter-locational travelling costs. The savings thus made can pay back the infrastructure for 100% security of the organisation's internal databases within a few years.

H. Thus all data and information which is important and exclusive to the organisation should reside in the Private Cloud connected through the organisation's INTRANET.

I. E-Commerce activity and transactions like online banking transactions may be secured by measures like dynamic passwords passed on to the user's dedicated mobile number for every transaction; a sketch of how such per-transaction passwords can be generated is given after this list. Thus while hackers and crackers may view your accounts with the bank by hacking your static username and password, they will not be able to make any transactions without the account holder's mobile phone. This can of course be overcome by them if they can get the messages coming into the user's mobile phone diverted to theirs; but to do this they will need access to this dedicated phone number, or a contact in the mobile service provider who can help to get it done. Arduous processes, no doubt.
J. Your computer in the workplace may be connected either to the Internet, for browsing activity, or to the INTRANET, for other computing activity, but never to both together. This may be achieved through a Secure Switch which connects your machine to either the INTRANET or the Internet LAN.

K. The public Clouds, like the search engines and the email services, are meant for universal access and do not need any security measures; the individual applications can determine what may be done on these Clouds. By hacking through your static username and password, hackers / crackers may gain access to your Web mail. It is best to delete from the Web mail storage all emails sensitive to your business if you do not want intruders to have access to them.

L. Individual organisation / bank Web Proxy Servers may be accessed by all people in the Internet community, who will get information / access depending on their relationship with the organisation / bank. Hackers / crackers may hack through to the static usernames and passwords of any person and acquire their access rights to these Web sites. However, the additional precautions mentioned in (I) above will prevent them from carrying out transactions, unless they go further and get the messages sent to the individual's dedicated mobile phone diverted to theirs, to access the dynamic passwords associated with each transaction. This is definitely a more difficult task, though not impossible.

M. Claims by Cloud Storage, Cloud Infrastructure, or SAAS service providers that they will be able to provide security to your data are belied, as indicated in A, B, and C above. Hackers / crackers will find a way to get into your data and applications residing in the Public Cloud through the continuous physical access they enjoy via the common IP backbone of the telephone service provider(s). Hence take such assurances with a pinch of salt.

N. The answers to the security questions posed above, and the inferences drawn from the series of discussions listed, should, we hope, help each organisation's IT infrastructure planners to choose the way they wish to lay out their IT infrastructure.

O. If you still need further help and / or clarifications, you may contact us at midautel@bsnl.in or pankajmitra@gmail.com.
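As promised under inference (I), here is a minimal sketch of how a per-transaction dynamic password of the kind described there can be generated. It follows the widely used HMAC-based one-time password (HOTP) construction; the shared secret and transaction counter are hypothetical, and a real bank would wrap its own provisioning and SMS-delivery machinery around this.

```python
# A minimal HOTP sketch (RFC 4226 style): a fresh numeric password
# for every transaction counter value. The secret is hypothetical.

import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time numeric password from a shared secret
    and a per-transaction counter."""
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret held by the bank and bound to the account;
# the counter advances with every transaction attempt.
SECRET = b"per-account-shared-secret"

for txn_counter in range(3):
    # In practice this value would be sent by SMS to the account
    # holder's registered mobile number and verified before the
    # transaction is allowed to proceed.
    print(f"transaction {txn_counter}: dynamic password {hotp(SECRET, txn_counter)}")
```

Because each password depends on the transaction counter, a password captured by a hacker is useless for the next transaction, which is what forces the attacker into the far harder task, noted in (I) and (L) above, of diverting the victim's mobile messages.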