ISSN: 1694-2507 (Print)
ISSN: 1694-2108 (Online)
International Journal of Computer Science
and Business Informatics
(IJCSBI.ORG)
VOL 10, NO 1
FEBRUARY 2014
Table of Contents VOL 10, NO 1 FEBRUARY 2014
Cloud Architecture for Search Engine Application ...............................................................................1
A. L. Saranya and B. Senthil Murugan
Efficient Numerical Integration and Table Lookup Techniques for Real Time Flight Simulation.............. 8
P. Lathasree and Abhay A. Pashilkar
A Review of Literature on Cloud Brokerage Services................................................................................ 25
Dr. J. Akilandeswari and C. Sushanth
Improving Recommendation Quality with Enhanced Correlation Similarity in Modified Weighted Sum
.................................................................................................................................................................... 41
Khin Nila Win and Thiri Haymar Kyaw
Bounded Area Estimation of Internet Traffic Share Curve ...................................................................... 54
Dr. Sharad Gangele, Kapil Verma and Dr. Diwakar Shukla
Information Systems Projects for Sustainable Development and Social Change ................................... 68
James K. Ho and Isha Shah
Software Architectural Pattern to Improve the Performance and Reliability of a Business Application
using the Model View Controller .............................................................................................................. 83
G. Manjula and Dr. G. Mahadevan
Cloud Architecture for Search
Engine Application
A. L. Saranya
School of Information Technology & Engineering,
VIT University, Vellore-632014, Tamil Nadu, India
B. Senthil Murugan
Assistant Professor (Senior)
School of Information Technology & Engineering,
VIT University, Vellore-632014, Tamil Nadu, India
ABSTRACT
Cloud computing has become popular because of its on-demand self-service capability and
business benefits. This paper presents the design of a search engine application developed and
deployed using Google App Engine. The application applies pattern matching with regular
expressions across millions of web documents and returns the matching documents. To
facilitate large dataset processing, the application makes use of the Apache Hadoop suite, a
distributed data processing framework that brings up hundreds of virtual servers on demand,
runs a parallel computation on them, and then shuts down all the virtual servers, releasing
their resources back to the cloud. The MapReduce model is used to perform the parallel
computation and return results to the user efficiently. The application is efficient and scales
to any number of users with quick response times. The Google App Engine application uses a
Cloud SQL instance to store data in a cloud database.
Keywords
MapReduce, pattern matching, SQL instance, Google App Engine, Apache Hadoop suite.
1. INTRODUCTION
Using cloud architecture, software applications can be designed effectively and online
databases can be used on demand. The cloud infrastructure used by an application is utilized
when needed and returned to the cloud provider afterwards, making it available to other
applications. Cloud architecture can handle large volumes of data easily. The physical
location of the application infrastructure is determined by the provider. Cloud architecture
therefore offers many business benefits: there is no need to invest in infrastructure up front,
infrastructure is provisioned quickly when needed, resources are utilized efficiently, payment
is only for what is used, and parallelization reduces the processing time of a job. The main
objective of this paper is to develop an efficient, scalable search engine application based on
cloud architecture which responds to many users. The application should be loosely coupled
so that it is available to the whole user community and can be accessed concurrently.
2. BACKGROUND STUDY
Cloud computing provides resources, storage and online applications as services to the user.
Cloud computing is dynamic, reliable, scalable, low cost and secure, so it can provide virtual
services to any number of users. Cloud computing offers three types of service: Software as a
Service, where application software can be used by anyone on demand; Platform as a Service;
and Infrastructure as a Service. Internet users are mainly interested in searching data and
retrieving the information they need. Quick and efficient results require large computing
resources. Cloud infrastructure is used to obtain the resources needed; after the data has been
processed, the resources are returned. This paper explains the implementation of a search
engine cloud application using Google App Engine. The application uses the Hadoop
MapReduce model to fetch a large dataset from the cloud, map the processing request over
that data and reduce the result set to the matched results. Millions of records are mapped in
parallel and a quick response to each request is generated, which makes the application more
efficient.
3. RELATED WORKS
Chunzhi Wang and Zhuang Yang [1] of Hubei University of Technology explain a cloud
search engine process based on user interest. They showed that user demand can be captured
by introducing a user interest model. A push mechanism is used to deliver search results, and
the servers started on demand are closed once the request is served, letting the user obtain
relevant information on time. They compare the traditional search model with the
user-interest-based search model and show that the user interest model delivers relevant
information on user demand with a higher accuracy rate.
Lingyging Zeng and Hao Wen Lin [2] of Harbin Institute of Technology explain the existing
MapReduce model and a modified MapReduce that performs parallel computing to collect
hardware performance information from virtual machines. The existing MapReduce follows a
master/slave process: when a client request arrives, the master node creates a new job and
assigns it to a processor that is ready to perform it. The master node continually checks the
status of the slave processes and, based on that, splits and assigns work to all available
processes and then combines all the tasks. They apply this concept in a dynamic cloud
computing environment in which the server generates requests to a persistent, independent
storage device to collect the information.
Jinesh Varia [3], Technology Evangelist at Amazon Web Services, explained cloud
architectures in June 2008. Varia described how to develop an efficient, reliable, scalable,
loosely coupled, distributed parallel application using Amazon Web Services. He presented
the development of GrepTheWeb, a Hadoop-based search application deployed on Amazon
Web Services, and explained the services involved: Amazon S3 for input and output, Amazon
SQS for message passing, Amazon SimpleDB as a database for status, and Amazon EC2 as
the controller.
Gaizhen Yang [4] in 2011 explained the application of MapReduce in cloud computing.
Hadoop is a framework for cloud programmers and MapReduce is a parallel programming
model for large-scale computation. He analysed Hadoop and the MapReduce model and
described how the two can work together, that is, how MapReduce programs run in
distributed cloud computing.
Kejiang Ye, Xiaohong Jiang, Yanzhang He, Xiang Li, Haiming Yan and Peng Huang [5] in
2012 discuss vHadoop, a scalable Hadoop virtual cluster platform for MapReduce-based
parallel machine learning with performance consideration. Big data processing is growing in
importance because of ever-increasing data volumes, yet how to process large data efficiently
on virtual infrastructure is not clear at present. They evaluate the performance of Hadoop
and vHadoop, measuring performance with clustering workloads such as k-means on
vHadoop.
Zhiqiang Liu, Hongyan Liu and Gaoshan Miao [6] in 2010 proposed a MapReduce-based
backpropagation neural network over large-scale mobile data, used to perform classification
on such data. A MapReduce-based framework on a cloud computing platform is discussed to
improve efficiency and scalability over large-scale mobile data. The MapReduce framework is
well known as a parallel programming model for cloud computing; it supports the
parallelization of data processing on large clusters and is built on top of a distributed file
system. However, the question of how to design a neural network on the MapReduce
framework, especially over large-scale mobile data, has rarely been addressed.
Closed frequent itemset mining [7] plays an important role in many real-world applications;
the cost of handling large datasets is a challenging issue in such data mining. A parallelized
AFOPT-close algorithm was proposed and implemented on the MapReduce cloud computing
framework in 2012 by Su Qi Wang, Yu Bin Yang, Guang Peng Chen, Yang Gao and Yao
Zhang.
4. METHODOLOGY
4.1 OVERVIEW OF SYSTEM
The overview of the proposed system is shown in Figure 1.
Figure 1. Overview of Engine for Extraction of Similar Resultant Web Document
The design of the search engine is divided into modules. The process of sending a request and
getting a result has four steps. The first is launching the request: the input query is validated
and Hadoop is initiated. The second is mapping the data and reducing it based on the
matched input data from the cloud database. The third is billing of the data used for the
Hadoop processing and stopping the Hadoop process. The fourth is giving the resources back
to the cloud database by cleaning all data used by the application.
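The following is a minimal, self-contained Python sketch of this four-phase flow; it only
illustrates the idea locally and does not start a real Hadoop cluster.

import re

def handle_request(query, cloud_documents):
    # Phase 1: launch - validate the regular expression before any processing starts.
    pattern = re.compile(query)              # raises re.error for an invalid expression

    # Phase 2: map and reduce - match every document and collect the hits
    # (stands in for the Hadoop MapReduce job run against the cloud database).
    matches = [doc for doc in cloud_documents if pattern.search(doc)]

    # Phase 3: billing - account for the amount of data scanned, then stop the job.
    bytes_processed = sum(len(doc) for doc in cloud_documents)

    # Phase 4: clean-up - in the real system the Hadoop servers and temporary
    # storage would be released back to the cloud at this point.
    return matches, bytes_processed

print(handle_request("cloud", ["cloud search engine", "flight simulation"]))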
4.2 SEARCH ENGINE ARCHITECTURE
The system architecture depicted in Figure 2 shows that the GAE application takes the query
from the user as a simple regular expression and passes the request to the MapReduce phase,
which splits the dataset into small subsets and sends the request to the different database
machines. After extraction of the resultant web documents that match the expression, the
results are combined into a single result set and presented to the user as web documents.
Figure 2. Search engine system architecture using Cloud SQL: the input query (a regular
expression) is passed to the search engine application, processed in the MapReduce phase
against the Cloud SQL database storage, and the matching web documents are returned as
output.
The search engine application is developed to provide the user with Software as a Service
(SaaS) and to give the user an efficient web search. The search engine uses a regular
expression as the query to search the cloud database. This regular expression is run over
millions of web documents using the Hadoop MapReduce model, and pattern matching is
used to retrieve the documents that best match the regular expression entered by the user.
The challenges in designing such a search engine are complex regular expressions, queries
that match very many web documents, and unknown patterns. This application overcomes
these difficulties and returns results to many users even with a large dataset, with quick
response and low usage cost. This is achieved because mapping is done in parallel across a
number of processors and the results are then reduced and combined into the smaller set of
needed information.
4.3 HADOOP MAPREDUCE IMPLEMENTATION
The Mapreduce implementation is pictorially shown in Figure 3.
Figure 3. Mapreduce phase implementation in cloud database
Hadoop splits the dataset into manageable chunks and distributes them to many machines;
jobs are launched and processed on different machines, which may be physically distributed,
because Hadoop is an open source, distributed framework that can manage large datasets.
The results from all machines are then aggregated as the final output of the job. The
implementation works in three phases. The map phase maps the data that matches the
regular expression from the cloud database. The reduce phase produces the intermediate
result of the web documents. Map and reduce are executed independently of each other on
separate processors. The combine phase combines all the extracted data from the different
machines. Thus the needed data is computed from all over the cloud database and processed
in parallel to give an efficient search result.
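This flow can be illustrated with a small, self-contained Python sketch; it only mimics the
three phases described above with ordinary functions and local data, whereas the actual
application expresses the same logic through the Hadoop MapReduce API.

import re
from collections import defaultdict

def map_phase(doc_id, text, pattern):
    # Map: emit the document lines that match the regular expression.
    return [(doc_id, line) for line in text.splitlines() if re.search(pattern, line)]

def reduce_phase(pairs):
    # Reduce: build the intermediate result for one machine, grouping matches by document.
    grouped = defaultdict(list)
    for doc_id, line in pairs:
        grouped[doc_id].append(line)
    return dict(grouped)

def combine_phase(partial_results):
    # Combine: merge the intermediate results extracted on the different machines.
    final = defaultdict(list)
    for partial in partial_results:
        for doc_id, lines in partial.items():
            final[doc_id].extend(lines)
    return dict(final)

# Two "machines", each holding part of the document set.
machine1 = {"doc1": "cloud search engine\nhadoop mapreduce"}
machine2 = {"doc2": "numerical integration", "doc3": "cloud sql instance"}
partials = [reduce_phase(sum((map_phase(d, t, r"cloud") for d, t in m.items()), []))
            for m in (machine1, machine2)]
print(combine_phase(partials))   # {'doc1': ['cloud search engine'], 'doc3': ['cloud sql instance']}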
Hadoop uses a master/slave process model: the master process runs on a separate node and
oversees all the slave processes, which run on other nodes. The slave processes are the workers
that extract data from the different machines; any worker failure or other problem is taken
care of by the master process.
5. RESULTS
The application implementation in Figure 4 shows the start-up page of the search engine
application, which is developed and deployed using Google App Engine and the web toolkit.
The application asks the user to enter a search string and shows the web documents that
match it, based on the MapReduce model. Because MapReduce uses parallel computation, the
search results are mapped and computed quickly, so the response time of the application
improves. A Cloud SQL instance is used to access the cloud database and obtain all the
resources needed for the result; after processing, the resources are released back to the cloud.
Figure 4. Search engine application start up page
6. CONCLUSION
In this paper a search engine application is designed, developed and deployed using Google
App Engine and a Cloud SQL instance database. The search engine performs pattern
matching across millions of web documents using Apache Hadoop MapReduce on the regular
expression entered by the user as the query. Because the MapReduce model is used, millions
of documents are pattern matched in parallel and the combined result is returned to the user
as web documents. The parallel, distributed processing across many datasets gives quick
responses to the user and scales to any number of users. The application uses a Cloud SQL
database through an instance created for the application, so that billing of the resources used
from the cloud database can be easily maintained.
References
[1] Wang, C., Yan, Z. and Chen, H., 2010. Search engine concept based on user interest model
and information push mechanism. 8th International Conference on Computer Science and
Education, Sri Lanka.
[2] Zeng, L. and Lin, H. W., 2012. A modified MapReduce for cloud computing. International
Conference on Computing, Measurement, Control and Sensor Networks.
[3] Varia, J., 2008. Cloud Architectures. Technology Evangelist, Amazon Web Services, June
2008.
[4] Yang, G., 2011. The Application of MapReduce in the Cloud Computing. International
Symposium on Intelligence Information Processing and Trusted Computing.
[5] Ye, K., Jiang, X., He, Y., Li, X., Yan, H. and Huang, P., 2012. vHadoop: A Scalable
Hadoop Virtual Cluster Platform for MapReduce-Based Parallel Machine Learning with
Performance Consideration. IEEE International Conference on Cluster Computing
Workshops.
[6] Liu, Z., Li, H. and Miao, G., 2010. MapReduce-based Backpropagation Neural Network
over Large Scale Mobile Data. Sixth International Conference on Natural Computation
(ICNC 2010).
[7] Wang, S. Q., Yang, Y. B., Chen, G. P., Gao, Y. and Zhang, Y., 2012. MapReduce-based
Closed Frequent Itemset Mining with Efficient Redundancy Filtering. IEEE 12th
International Conference on Data Mining Workshops.
This paper may be cited as:
Saranya, A. L. and Murugan, B. S., 2014. Cloud Architecture for Search
Engine Application. International Journal of Computer Science and
Business Informatics, Vol. 10, No. 1, pp. 1-7.
Efficient Numerical Integration
and Table Lookup Techniques for
Real Time Flight Simulation
P. Lathasree
CSIR-National Aerospace Laboratories
Old Airport Road, PB No 1779, Bangalore-560017
Abhay A. Pashilkar
CSIR-National Aerospace Laboratories
Old Airport Road, PB No 1779, Bangalore-560017
ABSTRACT
A typical flight simulator consists of models of various elements, such as the flight dynamic
model, filters and actuators, which have fast and slow eigenvalues in the overall system.
This results in an electromechanical control system described by stiff ordinary differential
equations. Stability, accuracy and speed of computation are the parameters of interest while
selecting numerical integration schemes for use in flight simulators. Similarly, accessing
huge aerodynamic and engine database in table look-up format at high speed is an essential
requirement for high fidelity real time flight simulation. A study was carried out by
implementing well known numerical integration and table lookup techniques in a real time
flight simulator facility designed and developed in house. Table lookup techniques such as
linear search and index computation methodology using novel Virtual Equi-Spacing
concept were also studied. It is seen that the multi-rate integration technique and the table
look up using Virtual Equi-Spacing concept have the best performance amongst the
techniques studied.
Keywords
Real-Time Flight Simulation, Aerodynamic and Engine database, Virtual Equi-Spacing
concept, table look up and interpolation, Runge-Kutta integration, multi-rate integration.
1. INTRODUCTION
Flight simulation has a vital role in the design of aircraft and can benefit all
phases of the aircraft development program: the early conceptual and design
phase, systems design and testing, and flight test support and envelope
expansion [1]. Simulation helps in predicting the flight behavior prior to
flight tests. It helps in certification of the aircraft under demanding
scenarios. Flight Simulation is widely used for training purposes in both
fighter and transport aircraft programs [2]. Therefore, Modeling &
Simulation is one of the enabling technologies for aircraft design.
The fidelity of the simulation largely depends on the accuracy of the
simulation models used and on the quality of the data that goes into the
model. A faithful simulation requires an adequate model in the form of
mathematical equations, a means of solving these equations in real-time and
finally a means of presenting the output of this solution to the pilot by
means of visual, motion, tactile and aural cues [3].
The Real-Time Flight Simulator implies the existence of a Man-In-the-Loop
operating the cockpit controls [4]. Because of the presence of the pilot-in-
the-loop, the digital computer executing the flight model in the simulator
must solve the aircraft equations of motion in 'real-time' [5]. Real-Time
implies the occurrence of events at the same time in the simulation as seen
in the physical system. All the associated computations should be completed
within the cycle update time [6].
The basis of a flight simulator is the mathematical model, including the
database package, describing the characteristic features of the aircraft to be
simulated. The block schematic of flight simulator is shown in Figure 1
with constituent modules such as aerodynamic, engine, atmosphere (static
and dynamic), actuator etc. The simulation model for atmosphere includes
the static and dynamic atmosphere components. Dynamic atmosphere model
caters for turbulence, wind shear and cross wind. Dryden and Von Karman
models are generally used for the simulation of atmospheric turbulence [7].
Figure 1. Block schematic of flight simulation, showing pilot commands, flight control, the
actuator model (elevons, rudder, throttle, slats), the engine model and engine database, the
atmosphere model, mass, c.g. and inertia data, the aero model and aero data, the flight
model, the aircraft responses (position, velocity, acceleration, flight path, AOA, AOS) and the
visuals and display.
Mathematical models, used to simulate modern aircraft, consist of a set of
non-linear differential equations with large amounts of aerodynamic
function data (tables), sometimes depending on 4 to 5 independent
variables. These aerodynamic data tables result in force and moment
coefficients which contribute to the total forces and moments. The equations
of motion are dependent on these forces and moments. They are solved by
the digital computer using a suitable numerical integration algorithm. This
allows the designer to create the complete range of static and dynamic
aircraft operating conditions, including landing and takeoff [6].
The type of method used for the integration of ordinary differential
equations is critical for real time simulation. The choice of an integrating
algorithm is a trade-off between simplicity, which affects calculation speed,
and accuracy. Also, real-time simulation needs high-speed data access. The
aerodynamic and engine database used for real-time simulation are huge and
complex. Hence, the types of table look-up methods used for access of data
from aerodynamic and engine database also become critical.
This paper discusses the efficient table look up and interpolation schemes
and numerical integration techniques which can be used for ensuring
accurate real-time computations in a flight simulator.
2. REVIEW OF EXISTING TECHNIQUES
The existing numerical integration techniques and table lookup and
interpolation methods for real time implementation are discussed in this
section.
2.1 Numerical Integration
Many numerical integration techniques, both single-step and multi-step, are available; they
can also be classified into implicit and explicit numerical integration techniques [8]. With
respect to stability and accuracy, each
of these numerical integration techniques has advantages and disadvantages
[8]. Depending on the performance, these methods can be suitably used for
stiff and non-stiff systems. Methods not designed for stiff problems must
use time steps small enough to resolve the fastest possible changes, which
makes them rather ineffective on intervals where the solution changes
slowly. The most popular numerical integration methods are listed below.
 Taylor Series Methods
 Runge-Kutta Methods
 Linear Multi Step Methods
 Extrapolation methods
The linear multistep methods (LMMs) require past values of the state. They
are therefore not self-starting and do not directly solve the initial-value
problem [9]. The simplest Runge-Kutta (RK) method is Euler integration,
which merely truncates the Taylor series after the first derivative and is therefore not very
accurate [9]. An RK method (e.g., Euler) could be used to generate the
starting values for LMMs.
Higher order RK algorithms are an extension of the Taylor series expansion to
higher orders. An important feature of the RK methods is that the only value
of the state vector that is needed is the value at the beginning of the time
step; this makes them well suited to the Ordinary Differential Equations
initial value problem [1].
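As an illustration (not taken from the cited references), a single step of the classical
fourth-order RK method needs only the state at the start of the step:

def rk4_step(f, t, x, h):
    # One classical RK4 step for dx/dt = f(t, x); only x at the start of the step is required.
    k1 = f(t, x)
    k2 = f(t + 0.5 * h, x + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, x + 0.5 * h * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: dx/dt = -2x with x(0) = 1, advanced by one 0.025 s step.
x_next = rk4_step(lambda t, x: -2.0 * x, 0.0, 1.0, 0.025)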
2.1.1 Stability, Accuracy and Speed of Computation
While choosing the numerical integration technique, one frequently has to
strike a compromise between three aspects [10-11].
• Speed of the method
• Accuracy of the method
• Stability of the method
Speed of the method becomes an essential feature especially for real time
simulation.
Accuracy of the method is also an important aspect and needs to be
considered when choosing a method to integrate the equations of motion
[12]. Accuracy of the numerical integration technique can be determined
from step size, number of steps to be executed and truncation error terms
[10-11]. Generally, two types of errors will be introduced by the numerical
integration methods viz. round-off errors and discretisation errors. Round-
off errors are a property of the computer and the program that is used and
occur due to the finite number of digits used in the calculations [13-14].
Discretisation (truncation) errors are a property of the numerical integration method.
Stability can be defined as the property of an integration method that keeps
the integration errors bounded at subsequent time steps [12]. An unstable
numerical integration method will make the integration errors grow
exponentially resulting in possible arithmetic overflow just after a few time
steps.
Stability of numerical integration technique generally depends on the system
dynamics, step size and order of the chosen technique and is harder to assess
[10-11]. Impact of numerical integration method in terms of stability can be
assessed by applying it to a well-conditioned differential equation and then
investigating the limits of the onset of instability [10-11]. In the context of
stability of numerical integration, it is understood that a stable continuous
system results in a stable discrete-time system. Numerical stability is
important for fixed-step Runge-Kutta integrators because of the limitations
imposed on the integration step size. Generally, selection of the integration
step size will be carried out based on an analysis of the stability of the numerical integration
technique [15]. Numerical stability will be an issue
when the chosen integration step size produces z-plane poles close to the
Unit Circle.
If the poles are located inside the Unit circle, then the system will be stable.
Increasing T (step size) eventually causes one of the z-plane poles to be on
the Unit Circle where the system becomes marginally stable. Depending on
the location of T (product of characteristic root and step size) on the
stability boundary of respective integrator, it is possible to estimate the
maximum allowable integration step size (Tmax) for the system solution to
be at least marginally stable. Beyond Tmax, the system solution will
become unstable. Hence, it is very essential to consider stability boundaries
for different numerical integrators while selecting the integration step size.
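As a simple illustration of how these boundaries arise (a standard result, not derived in [15]),
applying explicit Euler integration to the scalar test equation dx/dt = λx with step size T
gives x(n+1) = (1 + λT) x(n); the numerical solution stays bounded only if |1 + λT| ≤ 1,
which for a real, stable root (λ < 0) limits the step size to T ≤ 2/|λ|. Higher order RK
methods enlarge this stable region of the λT plane.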
Figure 2 shows the stability boundaries for Runge-Kutta methods [15].
Figure 2. Stability boundaries for RK-2 through RK-4 integrators in the T plane (Re(T)
versus Im(T), where T denotes the product of the characteristic root and the step size).
2.1.2 Numerical Integration techniques for Stiff Systems
‘Stiffness’ of the differential equations may be defined as the existence of
one or more fast decay processes in time, with a time constant that is small
compared to the time-span of interest [13].
One has to consider the following two points while choosing the numerical
integration technique [10-11]:
 The integration technique should be chosen such that any error it
introduces is small in comparison to the errors associated with the
main terms of the model equations;
 The numerical integration techniques should be able to solve the
system of differential equations within the real-time frame rate.
Many integration techniques, for non-real time simulation applications, are
available that work well with the stiff systems [16-17]. Two approaches that
can be used for simulating stiff systems with respect to real time and non-
real time simulation will be discussed here. The first approach considers
selection of numerical integration technique that works well in the presence
of stiffness.
The second approach involves the use of multi-rate integration to simulate
stiff systems. In multi-rate simulations, the simulation is split into multiple
tasks that are executed with different integration step times. The inverse of
the integration step time is termed as frame rate and expressed in frames per
second. This multi-rate integration technique is useful for real-time
applications as well as non real-time applications.
Of the two approaches discussed for the simulation of stiff systems, only the
multi-rate integration technique is applicable for real time applications.
Control systems with electrical and mechanical components, referred to as
electromechanical control systems, are composed of fast and slow
subsystems. Generally, the mechanical systems being controlled are much
slower when compared to the components in electronic controllers and
sensors. This results in an electromechanical control system with fast and
slow dynamics. The aircraft pitch control system is an example of a system of stiff ordinary
differential equations comprising aircraft dynamics and actuators [15].
Kunovsky et al have established the need for multi-rate integration for real
time flight simulation [18] with an example of aircraft pitch control system
comprising of slow aircraft dynamics and fast actuator dynamics using
Runge-Kutta and Adams-Bashforth numerical integration techniques. The
airframe module of aircraft pitch control system is modeled as a linear
second-order system to account for the short-period longitudinal dynamics.
Generally, selection of the step size for numerical integration will be carried out
based on the analysis of stability and dynamic accuracy. Ts and Tf are the
integration step sizes of slow and fast systems respectively.
The numerical integrator used to update the slow system is termed the ‘‘master’’
routine, and the integration method used to update the fast system is called
the ‘‘slave’’ routine. It is common to use conventional numerical integration
schemes such as Runge-Kutta methods for both ‘master’ and ‘slave’
systems. For the example studied here, the multi-rate integration scheme
with RK-4 is chosen for master and slave routines. The implementation is
carried out in the Matlab environment. For a pitch command of 2deg,
simulation is carried out for the state-space based Simulink model. This result is
compared with the analytical solution and the response obtained using a
multi-rate integration scheme. The comparison of theta and elevator
responses for three methods is shown in Figure 3 and Figure 4 respectively.
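A minimal Python sketch of the master/slave idea is given below; it is not the
Matlab/Simulink implementation used for Figures 3 and 4, and it assumes the slow state is
simply held constant while the fast subsystem is sub-stepped, which is one common coupling
choice.

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + 0.5 * h, x + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, x + 0.5 * h * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def multirate_step(f_slow, f_fast, t, x_slow, x_fast, Ts, frame_ratio):
    # Slave routine: integrate the fast (e.g. actuator) dynamics with the small step Tf,
    # holding the slow state constant over the frame.
    Tf = Ts / frame_ratio
    for i in range(frame_ratio):
        x_fast = rk4_step(lambda tau, xf: f_fast(tau, xf, x_slow), t + i * Tf, x_fast, Tf)
    # Master routine: integrate the slow (e.g. airframe) dynamics once per frame with Ts.
    x_slow = rk4_step(lambda tau, xs: f_slow(tau, xs, x_fast), t, x_slow, Ts)
    return x_slow, x_fast

# Example with frame ratio 10: Ts = 0.025 s for the slow system, Tf = 0.0025 s for the fast one.
xs, xf = multirate_step(lambda t, xs, xf: -0.5 * xs + xf,
                        lambda t, xf, xs: -40.0 * (xf - xs),
                        0.0, 1.0, 0.0, 0.025, 10)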
Figure 3. Comparison of theta response (deg) versus time (sec) for the analytical solution,
multi-rate RK4 and RK4 at 0.0025 sec sampling.
Figure 4. Comparison of elevator response (deg) versus time (sec) for the analytical solution,
multi-rate RK4 and RK4 at 0.0025 sec sampling.
The responses obtained from the analytical solution are taken as the reference. From the
figures, it can be seen that the response obtained using the Simulink model at a step size of
0.0025 sec matches the reference well, whereas the response obtained using multi-rate
integration exhibits some loss of accuracy. Nevertheless, the multi-rate integration scheme is
recommended for real-time simulation, since running the whole simulation at the smaller step
size would degrade real-time performance.
2.2 Table look-up and Interpolation
Generally, an index search or look-up process will be performed first to
locate the data and this is followed by linear interpolation. Following steps
need to be performed for table look-up process [3]:
1. First we should decide between which pair of values in the table the
current input value of independent variable (X) lies
2. Next, calculate the local slope
3. Finally, apply the linear interpolation formula
For real-time simulation, it is always important to save the processing time.
One of the techniques to save the processing time is to remember the index
of the lower pair of the interpolation range used in the previous iteration. The
value of the independent variable (X) is unlikely to have changed
substantially from one time step to the next, and hence it is a good first try
to use the same interval as before and thus save time in searching from one
end of the table each time.
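A minimal Python sketch of a one-dimensional look-up with a remembered index is shown
below (using the PLA/thrust data of Table 1 further down as an example); it illustrates the
idea rather than reproducing the simulator's actual code.

class Lookup1D:
    # Linear search that starts from the breakpoint interval used in the previous call.
    def __init__(self, x_tab, y_tab):
        self.x_tab, self.y_tab = x_tab, y_tab
        self.last = 0                       # lower index of the interval used last time

    def interp(self, x):
        i = self.last
        # Step 1: find the pair of breakpoints bracketing x, starting from the previous interval.
        while i > 0 and x < self.x_tab[i]:
            i -= 1
        while i < len(self.x_tab) - 2 and x >= self.x_tab[i + 1]:
            i += 1
        self.last = i
        # Step 2: local slope; Step 3: linear interpolation.
        slope = (self.y_tab[i + 1] - self.y_tab[i]) / (self.x_tab[i + 1] - self.x_tab[i])
        return self.y_tab[i] + slope * (x - self.x_tab[i])

pla    = [28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0]
thrust = [-0.63, 3.21, 8.7, 13.81, 20.24, 26.32, 28.09, 30.26, 44.84]
table = Lookup1D(pla, thrust)
print(table.interp(54.0), table.interp(70.5))   # 8.7 kN and about 16.22 kN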
The huge and complex aerodynamic and engine databases have to be handled in such a way
that they can be easily read and interpolated for a given set of input
conditions. One way of ensuring the speed required for real-time simulation,
is to have uniformly spaced database. For this, the normal practice is to
convert the supplied database with non-uniform break points for
independent variables to equi-spaced format. It is necessary to choose an
appropriate step size for independent variables such as Angle of Attack,
Mach number, Elevator, Angle of Sideslip, Power Lever Angle (PLA) etc to
convert this non-uniform database to equi-spaced format. This is normally
termed as conventional equi-spacing concept. We propose a new concept
called Virtual Equi-Spacing where the original database with non-uniform
break points is retained. With the assumption of virtual equi-spacing, the
search process can be eliminated [19] as the index is directly computed.
The computation of index in Virtual Equi-Spacing concept is explained in
the following section.
2.2.1 Virtual Equi-Spacing Concept
A novel method is proposed which would retain original data with unevenly
spaced break points and satisfies real-time constraint without loss of
accuracy. In this method, an evenly spaced breakpoint array that is a
superset of the unevenly spaced break points will be created for the
independent variables and shall be referred as ‘Address Map’. The index
into this evenly spaced array can be directly computed (Refer Figure 5).
This index is then used in an equivalent breakpoint index array that provides
pointers to the appropriate interpolation equation.
Figure 5. Indexing scheme in the Virtual Equi-Spacing concept: an evenly spaced address map
(0.0 to 0.8 in steps of 0.1, with breakpoint indices 0, 0, 0, 1, 2, 3, 4, 4, 5) pointing into the
unevenly spaced breakpoints 0.0, 0.3, 0.4, 0.5, 0.6 and 0.8.
The Virtual Equi-Spacing concept satisfies real-time speed constraint
without loss of accuracy for the real time flight simulators. This technique
eliminates search process and directly computes the index of data tables.
The Virtual Equi-Spacing concept works as follows. A division of the
desired input value by the step size chosen for the creation of Address map
table gives the location ‘K’. The value of address map [K], say ‘i’ is used as
a pointer in data table to get the final data component value i.e. Table[i] for
the desired input. This is now demonstrated with a typical example.
Aircraft engine database is a three dimensional dataset where thrust is a
function of three independent variables viz. Mach number, PLA and
altitude. The technique of computing index values in address maps and the
index values in data arrays is explained with PLA dimension.
Let pla_val = 54.0 deg for the Mach number 0.4 and Altitude 4500.0m.
The PLA and Thrust relationship at these conditions is given in Table 1.
The computation of index values and thereby data values is presented in
Appendix along with the pseudo code. The engine database of a high
performance fighter aircraft is used to demonstrate the table look-up and
interpolation schemes. This database consists of engine parameters such as
thrust, specific fuel consumption, N1 rpm, N2 rpm etc. supplied as function
of Mach number, PLA and altitude. This index computation methodology
using Virtual Equi-Spacing concept is extended to multi-dimension tables of
wind tunnel database.
Table 1. PLA and Thrust relationship
PLA(deg) Thrust(kN)
28. -0.63
42. 3.21
54. 8.7
66. 13.81
78. 20.24
90. 26.32
104. 28.09
107. 30.26
130. 44.84
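The address-map idea can be sketched in a few lines of Python (a simplified rendering of the
Fortran-style pseudo code given in the Appendix, with a 1.0 deg virtual step for PLA and
zero-based indices):

pla    = [28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0]
thrust = [-0.63, 3.21, 8.7, 13.81, 20.24, 26.32, 28.09, 30.26, 44.84]
step = 1.0                                      # virtual equi-spacing step for PLA

# Build the address map once: for every virtual grid point between 28 and 130 deg,
# store the index of the unevenly spaced breakpoint at or below it.
address_map = []
i = 0
v = pla[0]
while v <= pla[-1]:
    while i < len(pla) - 2 and v >= pla[i + 1]:
        i += 1
    address_map.append(i)
    v += step

def thrust_lookup(pla_val):
    # Direct index computation: no search is needed at run time.
    k = int((pla_val - pla[0]) / step)
    i = address_map[k]
    frac = (pla_val - pla[i]) / (pla[i + 1] - pla[i])
    return thrust[i] + frac * (thrust[i + 1] - thrust[i])

print(thrust_lookup(54.0), thrust_lookup(70.5))  # 8.7 kN and about 16.22 kN, as in the Appendix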
The next section presents a study on efficient table look up algorithms and
numerical integration algorithms suitable for real time implementation in
flight simulators.
3. RESULTS
From the survey of existing techniques for numerical integration and table look-up, the
multi-rate integration and Virtual Equi-Spacing concepts were selected, implemented for
real-time flight simulation and studied. This
implementation is carried out in the real-time flight simulation facility
designed and developed at CSIR-NAL.
Figure 6 shows the conceptual flowchart of real time flight simulation. The
simulation is typically started from an equilibrium / trim condition. For the
given set of pilot inputs, flight dynamic module solves the equations of
motion using the chosen numerical integration method. It is necessary that
all the associated computations should be completed within the cycle update
time for real-time simulation. These computations are completed ahead of
cycle update time and the beginning of the next cycle is delayed till the
internal clock signals the next cycle update as shown in Figure 6.
Figure 6. Conceptual flowchart of real-time flight simulation: starting from an initial/trim
condition, each cycle reads the external inputs (pilot inputs, disturbances, control laws,
hardware models), obtains the surface positions, computes the aerodynamic, engine and
landing-gear forces and moments, integrates the rigid-body equations of motion, waits until
cycletime = deltat, then advances simtime by deltat until the simulation is stopped.
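The cycle in Figure 6 can be sketched as a fixed-step loop in Python (an illustration only, not
the facility's actual code); the sleep at the end of each frame corresponds to the wait for the
next cycle update.

import time

def run_realtime(step_model, deltat, stop_time):
    # Fixed-cycle real-time loop: integrate one frame, then wait for the next cycle update.
    simtime = 0.0
    next_cycle = time.perf_counter()
    while simtime < stop_time:
        step_model(simtime, deltat)       # inputs, forces and moments, integration for one frame
        simtime += deltat
        next_cycle += deltat
        remaining = next_cycle - time.perf_counter()
        if remaining > 0.0:               # computations finished early: wait for the internal clock
            time.sleep(remaining)
        # if remaining <= 0.0 the frame overran and real time was violated for this cycle

# Example: a 0.025 s cycle run for 1 s of simulated flight with a do-nothing model.
run_realtime(lambda t, dt: None, 0.025, 1.0)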
3.1 Timing Analysis
The timing analysis is carried out for the numerical integration and table
look-up techniques and the results are presented.
3.1.1 Numerical Integration
The concept of multi-rate integration is adopted for the real-time flight
simulation facility designed and developed at CSIR-NAL. The full
nonlinear model of the aircraft dynamics along with the actuator dynamics
for a light transport aircraft is considered for this real-time flight simulation
environment. The aircraft dynamics of the light transport aircraft constitute the slow
dynamics, while the fast dynamics are composed of the actuator dynamics. The
nominal integration step size of 0.025sec is chosen for the airframe
simulation purpose. Similarly, for the actuator dynamics 0.0025sec is
chosen as integration step size based on the analysis of stability and
dynamic accuracy. It can be seen that the ratio of step sizes of slow system
to fast system (frame ratio) is 10 indicating a stiff system. The multi-rate
integration scheme with frame ratio 10 and simulation cycle update time
0.025sec ensures the handling of slow and fast subsystems. The Runge-
Kutta pair of Bogacki and Shampine [20] is currently being used for the
numerical integration of slow and fast dynamics. Table 2 presents the timing
analysis for the simulation (off-line) carried out using a Windows-based timer function with
microsecond resolution.
Table 2. Timing analysis for multi-rate and mono-rate integration techniques

Duration of simulation   Description                                            Time (sec)
35 sec                   Multi-rate integration with ts = 0.025 & tf = 0.0025   0.4271
35 sec                   Mono-rate integration with deltat = 0.0025 sec         1.8518
50 sec                   Multi-rate integration with ts = 0.025 & tf = 0.0025   1.1058
50 sec                   Mono-rate integration with deltat = 0.0025 sec         2.7181
100 sec                  Multi-rate integration with ts = 0.025 & tf = 0.0025   1.1077
100 sec                  Mono-rate integration with deltat = 0.0025 sec         4.9378
Figure 7 shows the plots of aircraft responses obtained with pitch stick
doublet for mono-rate integration with 0.0025sec sampling time and
multi-rate integration with 0.025 / 0.0025sec sampling times. From the
plots, it can be seen that the mismatch between the multi-rate integration
scheme and the mono-rate solution is negligible.
Figure 7. Comparison plots for aircraft response variables (Alpha, Q, Vtot, Alt, Theta, Dle)
with the mono-rate (0.0025 sec) and multi-rate (0.025 / 0.0025 sec) integration schemes.
For real-time applications, accuracy is the feature that may have to be sacrificed when it
conflicts with other properties. It is better to obtain a solution with some
small error than not be able to obtain it at all in the allowed time. Moreover,
many real time applications incorporate a feedback control. Feedback
control helps to compensate errors and disturbances, including integration
errors. For real time flight simulation, the multi-rate integration scheme may
be adopted for better computational time.
3.1.2 Table Look-Up and Interpolation
This flight simulator facility is using the aerodynamic and engine database
with unevenly spaced break points. It is proposed to use the original dataset
with unevenly spaced breakpoints and facilitate a faster table look up and
interpolation.
As already discussed, a technique to save time is to remember the index of
the lower pair of the interpolation range used last time. From one time step
to the next, the value of the independent variable X is unlikely to have
changed substantially and so it would be a good first try to use the same
interval as before and thus avoid waste of time in searching from one end of
the table each time. Hence, linear search with option of remembering
previous used index is used for the timing analysis.
Timing analysis is carried out for linear search with option of remembering
previously used index and the novel Virtual Equi-Spacing concept proposed
in the previous section. A windows based timer function with the resolution
in micro seconds is used to obtain the time taken for the table look up and
interpolation. Generally, this process includes computing the location of the data component
value in the corresponding data table, followed by the interpolation.
Table 3. Timing studies for different search and interpolation techniques (Mach number 0.4,
altitude 4500 m; times in microseconds)

Test point   Linear search (remembering the previously used index)   Virtual Equi-Spacing concept
PLA / 50     18.08                                                   12.65
PLA / 90     18                                                      12.5
PLA / 107    17.2                                                    12.75
PLA / 110    19.1                                                    12.44
The recommended Virtual Equi-Spacing technique has been used for the table lookup and
interpolation of the aerodynamic and engine database consisting of around two lakh (200,000)
data points, representing a high performance fighter aircraft. The engine database of 20,000
data points is taken as an example to carry out the study. Table 3 gives the timing of the two
different techniques studied at different PLA conditions while the Mach number and altitude
are kept the same. From the table, it is found that the Virtual Equi-Spacing technique takes
less time. Accuracy is maintained, as the actual data tables are not affected.
4. CONCLUSIONS
A study was carried out to recommend efficient numerical integration and
table look-up techniques suitable for real-time flight simulation of a system of stiff ordinary
differential equations. Numerical integration and
table lookup techniques available in literature were implemented in a real
time flight simulator facility designed and developed in house. Aircraft pitch
control system representing the slow and fast subsystems was considered for
the study on numerical integration techniques. Table lookup techniques such
as linear search and index computation methodology using Virtual Equi-
Spacing concept have been studied for an example of the engine database of
a high performance fighter aircraft. The Virtual Equi-Spacing is a new
concept developed for interpolation of large multi-dimensional tables
frequently used in flight simulation. With excessively small step size, it is
possible to solve the stiff differential equations, but this incurs a performance penalty, an
important consideration in real-time simulation. Hence, it
is recommended to opt for multi-rate simulation, where it is necessary to use
a step size for the actuator simulation that is sufficiently small to ensure an
accurate and stable actuator solution and a larger step size for simulating the
slower dynamics of the airframe. The Virtual Equi-Spacing concept for
table lookup and interpolation leads to faster and accurate data access, an
essential feature of real-time simulation while handling larger databases.
From the results, it is found that the recommended multi-rate integration
technique and the table look up using Virtual Equi-Spacing concept perform
better.
5. ACKNOWLEDGMENTS
The authors would like to thank Mr Shyam Chetty, Director, CSIR-NAL
and Dr (Mrs) Girija Gopalratnam, Head, Flight Mechanics and Control
Division, CSIR-NAL for their guidance and support.
REFERENCES
[1] Ken A Norlin, Flight Simulation Software at NASA Dryden Flight Research Center,
NASA TM 104315, October 1995
[2] David Allerton, Flight Simulation- past, present and future, The Aeronautical Journal,
Vol 104, Issue No. 1042, pp 651-663, December 2000
[3] J M Rolfe and K J Staples, Flight Simulation, Cambridge University Press, Year of
publication 1991
[4] Flight Mechanics & Control Division, CSIR-National Aerospace Laboratories,
NAL-ASTE Lecture Series, May 2003
[5] Max Baarspul, A review of Flight Simulation Techniques, Progress in Aerospace
Sciences, (An International Review Journal), Vol. 27, Issue No. 1, pp 1-120,
March 1990
[6] Joseph S. Rosko, Digital Simulation of Physical systems, Addison-Wesley Publishing
Company. Year of publication 1972
[7] Beal, T.R., Digital simulation of atmospheric turbulence for Dryden and Von Karman
models, Journal of Guidance Control and Dynamics, Vol 16, Issue No. 1,
pp132–138, February 1993.
[8] http://qucs.sourceforge.net/tech/node24.html Accessed on 8.1.2014
[9] Brian L Stevens and Frank L Lewis, Aircraft and Control and Simulation, John Wiley
& Sons Inc. Year of Publication 1992
[10] David Allerton, Principles of Flight Simulation, John Wiley & Sons Ltd. Year of
Publication 2009
[11]http://www.scribd.com/doc/121445651/PRINICIPLES-OF-FLIGHT-SIMULATION
Accessed on 8.1.2014
[12] http://mat21.etsii.upm.es/mbs/bookPDFs/Chapter07.pdf, Numerical Integration of
Equations of Motion Accessed on 26.6.2012
[13] Marc Rauw, FDC 1.4 – A SIMULINK Toolbox for Flight Dynamics and Control
Analysis, Draft Version 7, May 25, 2005
[14] John W Wilson and George Steinmetz, Analysis of numerical integration techniques
for real-time digital flight simulation, NASA-TN-D-4900 dated November 1968,
Langley Research Center, Langley Station, NASA, Hampton, VA
[15] Harold Klee and Randel Allen, Simulation of Dynamic Systems with Matlab and
Simulink, Second Edition, CRC Press, Taylor and Francis Group, 2011
[16] Jim Ledin, Simulation Engineering: Build better embedded systems faster, CMP
books, Publication Year 2001
[17]http://www.embedded.com/design/real-world-applications/4023325/Dynamic-System-
Simulation
[18] Jiří Kunovský et al, Multi-rate integration and Modern Taylor Series Method, Tenth
International conference on Computer Modeling and simulation, 2008, IEEE Computer
Society.
[19] Donald E. Knuth, The art of computer programming – Volume 3 / Sorting and
Searching, Addison-Wesley Publishing Company. Year of publication 1973
[20] http://en.wikipedia.org/wiki/Bogacki%E2%80%93Shampine_method Accessed on
12/10/2008
This paper may be cited as:
Lathasree, P. and Pashilkar, A. A., 2014. Efficient Numerical Integration
and Table Lookup Techniques for Real Time Flight Simulation.
International Journal of Computer Science and Business Informatics, Vol.
10, No. 1, pp. 8-24.
Appendix
Computing index values and data values:
data pladata /
* 28.0,42.0,54.0,66.0,78.0,90.0,104.0,107.0, 130/
The Address Map assumes the virtual equi-spaced data with 1.0deg step.
For the PLA value 28 to 41, the index number will be 1. For the PLA value
42.0 to 53.0, the index number will be 2. Similarly, for the PLA value 54.0
to 65.0, the index number will be 3 and so on.
data (plamap(it), it=1,103) /
* 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
* 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
* 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
* 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
* 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
* 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
* 7, 7, 7,
* 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
* 8, 8, 8, 8, 8, 8, 8, 8, 8,
* 9 /
The index into these address maps can be directly computed based on the step
size.
iplav = int((pla_val-28.0)/1.0) + 1 = 27
iplax = plamap(iplav) = 3
Based on this index number corresponding to the independent variable PLA,
it is possible to obtain thrust value in the table.
thrust_val11 = thrust_tab(iplax) = 8.7
thrust_val12 = thrust_tab(iplax+1) = 13.81
thrust_val = thrust_val11 + ((thrust_val12 - thrust_val11)/(pladata(iplax+1) -
pladata(iplax))) * (pla_val - pladata(iplax)) = 8.7
If the PLA value lies between two break points, e.g. pla_val = 70.5:
iplav = int((70.5-28)/1) + 1 = 43
iplax = plamap(iplav) = 4
thrust_val11 = thrust_tab(iplax) = 13.81
thrust_val12 = thrust_tab(iplax+1) = 20.24
thrust_val = thrust_val11 + ((thrust_val12 - thrust_val11)/(pladata(iplax+1) -
pladata(iplax))) * (pla_val - pladata(iplax)) = 16.2213
A Review of Literature on Cloud
Brokerage Services
Dr. J. Akilandeswari
Professor and Head, Department of Information Technology
Sona College of Technology,
Salem, India.
C. Sushanth
PG Scholar, Department of Information Technology
Sona College of Technology,
Salem, India.
ABSTRACT
Cloud computing is a dynamically evolving area which offers large potential for agencies of all
sizes to increase efficiency. A cloud broker acts as a mediator between cloud users and cloud
service providers. The main functionality of the cloud broker lies in selecting the best Cloud
Service Provider (CSP) for the requirement set defined by the cloud user. Requests from cloud
users are processed by the cloud broker and suitable providers are allocated to them. This
paper gives a detailed review of cloud brokerage services and their methods of negotiating
with service providers. Once the SLA is specified by the cloud service provider, the cloud
broker negotiates the terms according to the user's specification. The negotiation can be
modeled as a middleware, and its services can be provided as an application programming
interface.
Keywords
Cloud computing, broker, mediator, service provider, middleware.
1. INTRODUCTION
A cloud refers to the interconnection of a huge number of computer systems in a network.
The cloud provider delivers services to the cloud user through virtualization technologies.
Client credentials are stored on the company's servers at a remote location. Every action
initiated by the client is executed in a distributed environment and, as a result, the
complexity of maintaining the software or infrastructure is minimized. The services provided
by cloud providers are classified into three types: Infrastructure-as-a-Service (IaaS),
Software-as-a-Service (SaaS), and Platform-as-a-Service (PaaS). Cloud computing allows the
client to store information at a remote site, so there is no need for local storage
infrastructure. A web browser acts as the interface between the client and the remote
machine; the client accesses the data by logging into his or her account. The intent of every
customer is to use cloud resources at low cost with high efficiency in terms of time and space.
If a number of cloud
service providers offer almost the same type of service, customers will have difficulty choosing
the right provider. To handle this situation of negotiating with multiple service providers,
Cloud Broker Services (CBS) play a major role as a middleware. The cloud broker acts as a
negotiator between the cloud user and the cloud service provider. Initially, the cloud provider
registers its offerings with the cloud broker and the user submits a request to the broker.
Based on the type of service and the requirements, the best provider is suggested to the cloud
user. Upon confirmation from the user, the broker establishes the connection to the provider.
2. CLOUD BROKERAGE SERVICES (CBS)
Foued Jrad et al [1] introduced an Intercloud Gateway and the Open Cloud Computing
Interface (OCCI) cloud API to overcome the lack of interoperability and the heterogeneity of
clouds. Cloud users cannot identify appropriate cloud providers with the assistance of the
existing Cloud Service Brokers (CSB). With OCCI implemented in the Intercloud Gateway,
the gateway acts as a server for service providers, while OCCI acts as a client in the abstract
cloud API. The cloud broker satisfies both the functional and non-functional requirements of
users through Service Level Agreements (SLA). The Intercloud Gateway acts as a front end
for cloud providers and interacts with the cloud broker. Figure 2.1 shows a generic
architecture of the service broker.
Figure 2.1. A generic architecture for a Cloud Service Broker: the user interacts through a
GUI/UI with the broker, which comprises a workflow engine, identity manager, persistence,
SLA manager, match maker, monitoring and discovery manager, deployment manager and an
abstract cloud API; the abstract cloud API communicates with Intercloud Gateways in front
of the vendor cloud platforms of providers A and B, while access without the broker is also
possible.
The Identity Manager handles user authentication through a unique ID. The SLA Manager is
responsible for SLA negotiation, creation and storage. The Match Maker takes care of
selecting suitable resources for cloud users. The Monitoring and Discovery Manager monitors
SLA metrics across the various resource allocations. The Deployment Manager is in charge of
deploying services to the cloud user. The abstract cloud API provides interoperability.
The user submits a request to the SLA Manager, which parses the request into SLA
parameters that are passed to the Match Maker. By applying a matching algorithm, the
Match Maker finds the best suited solution and the response is passed to the user. Upon user
acceptance, a connection is provided by the service provider.
Table 2.1 Sample SLA parameters for IaaS

Functional      Non-functional
CPU speed       Response time
OS type         Completion time
Storage size    Availability
Image URL       Budget
Memory size     Data transfer time
Through this architecture, interoperability is achieved, but it cannot assure the best matching
cloud service provider for the client.
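The paper does not give the Match Maker's algorithm; as an illustration only, a simple
matcher could filter providers on the functional SLA parameters of Table 2.1 and rank the
remaining candidates on a non-functional parameter such as budget:

def match_provider(request, offers):
    # Functional filtering on CPU speed, memory size and OS type (cf. Table 2.1).
    candidates = [o for o in offers
                  if o["cpu_speed"] >= request["cpu_speed"]
                  and o["memory_size"] >= request["memory_size"]
                  and o["os_type"] == request["os_type"]]
    # Non-functional ranking: cheapest offer within the user's budget.
    affordable = [o for o in candidates if o["price"] <= request["budget"]]
    return min(affordable, key=lambda o: o["price"]) if affordable else None

offers = [
    {"provider": "A", "cpu_speed": 2.4, "memory_size": 8,  "os_type": "linux", "price": 40},
    {"provider": "B", "cpu_speed": 3.0, "memory_size": 16, "os_type": "linux", "price": 55},
]
request = {"cpu_speed": 2.0, "memory_size": 8, "os_type": "linux", "budget": 50}
print(match_provider(request, offers))   # the offer from provider A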
Tao Yu and Kwei-Jay Lin [2] introduce a Quality of Service (QoS) broker module between
cloud service providers and cloud users. The role of the QoS broker is to collect information
about active servers, suggest appropriate servers to clients, and negotiate with servers to
reach QoS agreements. The QoS information manager collects the information required for
QoS negotiation and analysis. It checks the Universal Description Discovery and Integration
(UDDI) registry to get the server information and contacts the servers for QoS information
such as their QoS load and service levels. After receiving the client's functional and QoS
requirements, the QoS negotiation manager searches the broker's database for qualified
services. If more than one candidate is found, a decision algorithm is used to select the most
suitable one. The QoS information from both the server and the QoS analyzer is used to
make the decision. With this architecture, load balancing across servers is maintained for a
large number of users, but it is not efficient in delivering the best suited provider to the
client.
Figure 2.2 QoS-based architecture (the client sends service and QoS requests to the QoS broker, whose QoS information manager, QoS negotiation manager and QoS analyzer consult a database and the UDDI registry of Web-service servers; QoS admission and enforcement is applied at the servers, and the QoS result is returned to the user)
HQ and RQ allocation algorithms are proposed to maximize server resource usage
while minimizing QoS instability for each client. The HQ allocation
algorithm evenly divides the available resource among the currently active
clients, while RQ assigns a different service level to each client based on its
requirements.
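A minimal sketch of the two allocation ideas just described, under the assumption that the shared resource and the clients' requested service levels can be represented as plain numbers; the proportional rule used for RQ is an illustrative reading of "assigns a different service level based on requirements", not the paper's exact algorithm.

```python
# HQ: equal share of the available capacity for every active client.
def hq_allocate(total_capacity, active_clients):
    share = total_capacity / len(active_clients)
    return {client: share for client in active_clients}

# RQ (illustrative): allocate in proportion to each client's requested level.
def rq_allocate(total_capacity, requirements):
    total_requested = sum(requirements.values())
    return {client: total_capacity * req / total_requested
            for client, req in requirements.items()}

print(hq_allocate(100.0, ["c1", "c2", "c3", "c4"]))    # 25.0 each
print(rq_allocate(100.0, {"c1": 1, "c2": 2, "c3": 5}))  # 12.5, 25.0, 62.5
```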
Josef Spillner et al. [3] propose a solution that subdivides resource reservations
into either serial or parallel segments.
Figure 2.3 Nested cloud with virtual machines (recursive virtualization across level L0 provider hardware, level L1 broker/market VM and level L2 user-provided VM; Kernel Virtual Machine (KVM) with a KVM monitor, and a broker configurator using EC2 tools and minicom; policies cover authentication as the user, loading images, when to switch VMs on and off, which resources and how much, and port forwarding)
Nested virtualization provides services to the cloud user; the outcome is a
highly virtualizing cloud resource broker. The system supports
hierarchically nested virtualization with dynamically reallocatable
resources. A base virtual machine is dedicated to enabling the nested cloud,
and the other virtual machines, referred to as sub-virtual machines, run at a
higher virtualization level. The nested-cloud virtual machine is deployed by
the broker and offers control facilities through the broker configurator,
which turns it into a lightweight infrastructure manager. The proposed
solution yields higher reselling power for unused resources, but the hardware
cost of running the nested virtual machines is high if the desired performance
is to be obtained.
Chao Chen et al. [4] state the objectives of negotiation as minimizing price
with guaranteed QoS within the expected timeline, maximizing profit from the
margin between the customer's financial plan and the provider's negotiated
price, and maximizing profit by accepting as many requests as possible to
enlarge market share. The proposed automated negotiation framework uses a
Software-as-a-Service (SaaS) broker which is utilized as the storage unit for
customers. This helps the user to save time when selecting among multiple
providers. The negotiation framework assists in establishing a mutual
agreement between provider and client through the SaaS broker. The main
objective of the broker is to maintain the SLA parameters of cloud providers
and to suggest the best provider to the customer.
Figure 2.4 Negotiation Framework (a customer agent interacts with the SaaS broker, which comprises a coordinator agent, a negotiation policy translator, a negotiation engine with a decision making system, policy and strategy databases, a knowledge base, an SLA generator working from an SLA template, and a directory; the broker creates and sends SLAs to the SaaS provider agent, which is backed by IaaS)
The negotiation policy translator maps the customer's QoS parameters to
provider-specific parameters. The negotiation engine includes the workflows
that apply the negotiation policy during the negotiation process, and the
decision-making system uses decision criteria to update the negotiation
status. Only a minimal cost is incurred for resource utilization, but
renegotiation for dynamically changing customer needs is not solved.
Wei Wang et al. [5] propose a new cloud brokerage service that reserves a
large pool of instances from cloud providers and serves users with price
discounts. A practical problem facing cloud users is how to minimize their
costs by choosing among different pricing options based on their own
demands. The broker optimally exploits both the pricing benefits of long-term
instance reservations and multiplexing gains. A dynamic approach allows the
broker to make instant reservations with the objective of minimizing its
service cost; the strategy uses dynamic programming and efficient algorithms
to quickly handle large demands.
Figure 2.5 Cloud Broker Model (users 1-3 submit demands to the broker, which serves them from reserved and on-demand instances acquired from IaaS cloud providers; the broker's cost is contrasted with the users' cost of buying "on-demand" instances directly)
The result is a smart cloud brokerage service that serves cloud-user demands
from a large pool of computing instances dynamically launched on demand from
IaaS clouds. Partial usage of a billing cycle normally incurs a full-cycle
charge, which makes users pay for more than they actually use; the broker
instead serves many users from a single instance by time-multiplexing usage,
reducing the cost for cloud users.
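The following toy calculation (with made-up prices) illustrates why this broker model can reduce user cost: buying on-demand instances individually rounds every partial billing cycle up to a full cycle, while the broker time-multiplexes the same demand onto shared, cheaper reserved capacity.

```python
import math

ON_DEMAND_PER_HOUR = 0.10   # assumed on-demand price per billing cycle (hour)
RESERVED_PER_HOUR = 0.06    # assumed effective hourly price of reserved capacity

def direct_cost(usage_minutes):
    """Each user pays separately; partial hours are charged as full hours."""
    return sum(math.ceil(m / 60) * ON_DEMAND_PER_HOUR for m in usage_minutes)

def broker_cost(usage_minutes):
    """Broker packs all demand onto shared reserved instances (ideal packing)."""
    total_hours = math.ceil(sum(usage_minutes) / 60)
    return total_hours * RESERVED_PER_HOUR

demand = [10, 20, 15, 25, 30]   # five users, minutes of compute each
print(direct_cost(demand))      # 0.50 (five full hours billed individually)
print(broker_cost(demand))      # 0.12 (two shared reserved hours)
```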
Dharmesh Mistry [6] proposed a cloud-based analytics solution as a service
from a cloud broker which could considerably minimize costs for the client,
while assisting the Independent Software Vendor (ISV) to maximize profit.
As data arrive, they are divided, an index is created, and the index is finally
mapped back to the original values through analysis. Large organizations
increasingly purchase such software as SaaS instead of obtaining and hosting
it internally, which is a challenge for ISVs that built their business on the
traditional model. The cloud broker acts as middleware between the ISV and
the cloud providers, and the ISV composes solutions from existing services to
meet customer demands. The broker provides services such as entitlement,
analytics, billing and payment, security and context provisioning. ISVs
usually rely on per-module licensing models and software audits to confirm
that the appropriate number of users access the modules and functions for
which the customer has paid.
Figure 2.6 Mapping in Cloud Broker (on-boarding, provisioning, metering, billing, payments and collections, analytics, and demand generation)
An ISV can drive faster profit growth, while maintaining margins, and
respond to market demand more quickly.
Lori MacVittie [7] introduces the broker as a solution for integrating hybrid
policies without losing control over services. Integration between the cloud
and the datacenter is done with cloud broker integration at the process layer.
Brokers deploy vast numbers of applications for customers through
infrastructure governed by corporate-enforced policies. An identity-broker
module communicates with the datacenter through authorization and
authentication mechanisms. The real-time implementation of a cloud broker is
achieved with two types of architectures: the full-proxy broker and the
half-proxy broker. In a full-proxy broker, requests are processed through a
tunnel, which can be implemented in many ways such as a VPN. In a half-proxy
broker only validation of the request is done by the broker, and the
subsequent communication is established directly. This model defines how the
request can be handled in
late binding. A cloud delivery broker can make decisions such as where to
direct a user upon request. The hybrid cloud must be able to describe
capabilities such as bandwidth, location, cost and type of environment.
Sigma Systems [8] introduces a cloud service broker responsible for
order management, provisioning, billing integration and Single Sign-On
(SSO). In the proposed architecture the Cloud Service Broker allows
service providers to offer their own SLAs, providing a single source for
all applications to customers. Providers can establish and grow a combined
collection of services that matches their own portfolio and allows unique
groupings to meet their customers' needs. Cloud brokerage from Sigma Systems
is available either as a managed service or deployed on the provider's
premises.
Figure 2.7 Sigma Systems model (ordering, an enterprise product catalog, billing and single sign-on front a cloud-services brokerage that combines off-net SaaS, PaaS and IaaS services such as backup, office productivity, security, collaboration, CRM/SFA and financial applications with on-net services such as video, VPN, managed voice, unified messaging, mobile and high-speed data under on-net service management)
The Sigma model allows service providers to create single, highly attractive
packages by combining high-speed data and other complex network services with
business- and productivity-enhancing SaaS-based services.
Vordel [9] developed a cloud service broker that allows organizations to
apply a layer of confidence to their cloud computing applications. It brokers
the connection to the cloud infrastructure, applying governance controls for
service usage and service uptime.
Figure 2.8 Services provided by Vordel
It records the service type, the time of day, and the identity of the user.
All information sent to cloud services is examined for disclosed data in
order to enable Data Loss Prevention (DLP). Caching protects the enterprise
from the latency associated with connecting to the cloud service, and Service
Level Agreement (SLA) monitoring observes the whole transaction throughput
time. The Cloud Service Broker contains a pluggable structure which allows
modules to be added, such as modules providing additional encryption
algorithms.
Apostol T. Vassilev [10] introduced personal brokerage of Web service access,
which becomes part of the Web authentication structure through network
smart cards. This enables new Web services based on the cards' characteristic
properties of tamper resistance, strong cryptography, connectivity and
computing power, and enhances network smart-card capabilities, particularly
in the critical area of human-to-card interaction evidence, to bring further
accessibility and personalization to Web security and privacy. In Single-
Sign-On (SSO) systems users attempt to access services offered by a
connected service provider using a web browser on the client system. The
provider redirects the service request by directing the user's browser to the
Identity Provider's (IDP) authentication page. To facilitate the redirection,
the service provider issues a ticket that carries the user's digital identity once
the authentication is complete.
Figure 2.9 Personal Brokerage extensions of Federated service
Users authenticate with an IDP-enforced method to prove their identity.
If the authentication succeeds, the IDP asserts the user's identity in the
ticket sent back to the browser, which in turn sends it to the service
provider, and users can then access the requested services. The existing
IDP-enforced authentication method is a user name and password. Because the
entire federated system of Web services requires only one username and
password credential, SSO systems are convenient for the user. At the same
time, such credentials become a major target for hackers because they give
access to many private user resources at once. Presently, network traffic
between users' browsers and remote servers is secured by ubiquitous standard
security protocols for information exchange based on Secure Socket Layer
(SSL) and Transport Layer Security (TLS).
Muhammad Zakarya and Ayaz Ali Khan [11] identify the Distributed Denial of
Service (DDoS) attack as a major present-day threat and address it with a new
cloud environment architecture and an Anomaly Detection System (ADS). The ADS
improves computation time, QoS and availability. Each cloud is separated into
regional areas known as GS, and each GS is protected by an AS/GL. The
developed ADS is installed in the cloud nodes or AS and in the routers. A
tree is maintained at every router by marking every
packet with a path-modification strategy, so the attacking node is easily
found. The ADS has two phases: detection of malicious flows and a
confirmation algorithm that decides whether to drop the attack traffic or pass it.
Randomness or entropy is given by
H(X) = − Σx∈X p(x) log p(x)   (2.1)
where 0 < H(X) < log(n) and p(x) is the probability of x,
p(x) = mi / m   (2.2)
where mi is the number of packets with value x and m is the total number of packets.
Normalized entropy is calculated to get the overall probability of the captured packets in a specific time window:
Normalized entropy = H / log(n0)   (2.3)
For detection of a DDoS attack, a threshold value is decided. An edge router
collects the traffic flow for a specific time window w, the probability p(x)
is found for each packet value, and the link entropy of each active node is
calculated separately. H(X) is then calculated at the router; if the
normalized entropy falls below the threshold, a malicious attack flow is
identified and the system is considered compromised. For confirmation of
attack flows, a second threshold value is decided and compared with the
entropy rate.
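A minimal sketch of the detection step of Eqs. (2.1)-(2.3), assuming the edge router has collected one packet attribute (here, source addresses) over a window w; the threshold value is illustrative and would be chosen per deployment.

```python
import math
from collections import Counter

def normalized_entropy(packets):
    """H(X) = -sum p(x) log p(x), normalized by log(n) as in Eq. (2.3)."""
    counts = Counter(packets)
    m = len(packets)
    n = len(counts)
    if n <= 1:
        return 0.0
    h = -sum((mi / m) * math.log(mi / m) for mi in counts.values())
    return h / math.log(n)

THRESHOLD = 0.5   # assumed threshold value

window = ["10.0.0.1"] * 95 + ["10.0.0.2"] * 3 + ["10.0.0.3"] * 2
if normalized_entropy(window) < THRESHOLD:
    print("possible malicious flow: traffic concentrated on few sources")
```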
Srijith K. Nair et al. [12] describe the concepts of cloud bursting and cloud
brokerage and a brokerage framework based on the OPTIMIS service. When a
private cloud needs to access an external cloud for a certain time for
computation, the process is called cloud bursting; the company's internal
cloud needs to verify the SLA requirements to measure performance. The
cloud-bursting architecture being developed by OPTIMIS requires a common
management interface, a set of monitoring tools, a global load balancer, and
categorized providers. The cloud brokerage model was created by cloud service
providers around a cloud management platform, which is responsible for
activities such as policy enforcement, usage monitoring, network security and
platform security. A cloud API mediates consumer interaction with the cloud
broker. The SLA monitoring unit is responsible for monitoring all SLAs and
violations. The identity and access module records serviced customers and
generates one-time tokens, the audit unit inspects the broker platform and its
capabilities, risk management prioritizes risks based on events, and
network/platform security provides overall security through an IDS. The user
sends a storage request to the cloud portal. The portal forwards the id and
password to Identity and Access Management (IAM), which verifies them and
grants access along with the access criteria. The cloud portal converts the
identity and access rights into an external token containing the criteria and
the request, which is encrypted and sent
to the Broker IAM. The Broker IAM decrypts it using the portal's public key,
verifies its integrity, and generates a one-time access token.
Figure 2.10 Functional Requirements for Cloud Service Broker (API, deployment services, staging/pooling service, scaling, IAM, SLA monitoring, capability management and matching, audit, gateway/application firewalls, risk management, network/platform security, multi-cloud support, VM/service placement, security compliance, performance, usage monitoring, SLA management, cost, and IT policy enforcement)
This token contains a Uniform Resource Identifier (URI); it is forwarded back
to the portal and the old token is discarded. The cloud portal decrypts it
using the private key of the broker and forwards it to the respective user.
The user sends data to the Application Programming Interface (API), which
checks the validity of the token and grants access to upload the data. The
service provider then returns the location of the uploaded data protected by
a secret key. This ensures confidentiality and integrity.
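A highly simplified sketch of the one-time token idea, not the paper's actual protocol: a broker-side key signs a token carrying the user, the granted URI and an expiry, and the storage API rejects expired or replayed tokens. The encryption of the portal-to-broker exchange and the key distribution are omitted.

```python
import hmac, hashlib, time, secrets

BROKER_KEY = secrets.token_bytes(32)   # assumed shared verification key
used_nonces = set()                    # remembers tokens already redeemed

def issue_token(user, uri, ttl=60):
    """Sign user|uri|expiry|nonce so the API can verify it later."""
    nonce = secrets.token_hex(8)
    payload = f"{user}|{uri}|{int(time.time()) + ttl}|{nonce}"
    sig = hmac.new(BROKER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token):
    """Return (user, uri) once; reject bad signatures, expiry, or replay."""
    payload, sig = token.rsplit("|", 1)
    expected = hmac.new(BROKER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    user, uri, expiry, nonce = payload.split("|")
    if not hmac.compare_digest(sig, expected) or int(expiry) < time.time():
        return None
    if nonce in used_nonces:           # one-time: a replayed token is discarded
        return None
    used_nonces.add(nonce)
    return user, uri

tok = issue_token("alice", "/storage/bucket-1")
print(verify_token(tok))   # granted once
print(verify_token(tok))   # None on replay
```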
Mark Shtern et al. [13] describe the AERIE architecture. When an organization
moves to public cloud infrastructure it has problems with control and
security and needs a sound deployment model. The project suggests a
reference architecture for a virtual private cloud built on cross-provider,
on-demand compute instances that reduces the level of trust placed in
infrastructure providers. An inner instance is started from an outer
instance; together the inner and outer instances form a nested instance. The
outer instance runs an agent which ensures that it has not been modified, and
these agents establish a connection with the controller using a novel key
exchange algorithm. A standard security application is implemented to
preserve the integrity of the outer instance, and traffic from the public
internet is made to pass through a security bulwark. A load-balancing DNS
service is capable of removing inaccessible hosts from the available set.
Each outer instance holds an encrypted image used to launch the inner
instance. The Trusted
Instance Agent (TIA) conducts the key exchange with the controller to
establish an HTTPS connection using the novel algorithm, and the controller
checks the validity of the certificate in the image. To maintain integrity it
employs an Intrusion Detection System; if any violations are detected, the
virtual channel is terminated.
Figure 2.11 AERIE Architecture
Przemyslaw Pawluk et al. [14] introduce a cloud broker service that enables
the deployment and runtime management of cloud applications using multiple
providers. The Service Measurement Index (SMI) is a possible approach to
facilitate the comparison of cloud offerings; each attribute is expressed as
a set of Key Performance Indicators (KPI) that specify the data requested
from every metric. After the initial deployment, the decision to add or
remove resources is made by the cloud manager, while the application manager
controls the runtime management of the application according to the model. A
Resource Acquisition Decision (RAD) involves the selection of n resources
from a set of m providers, and the broker is responsible for solving the RAD
problem. It must also connect to the set of selected providers and acquire
the collection of resources. A Topology Descriptor File (TDF) is used to
identify the application topology to be deployed on the cloud, and each cloud
provider describes details of its environment variables in the TDF. The
chosen nodes are instantiated through a translation layer. The Cloud Manager
and the Broker both make use of monitoring information, the former to make
ongoing elasticity decisions and the latter to assist in the decision
process. The broker selects the set of all possible specifications that
satisfy the objectives
stated in the desired models named in the TDF. Next, as a result of a
multi-criteria optimization process, a set of equivalent specifications is
selected.
Figure 2.12 Cloud Management Frameworks
From this set one specification is selected and the appropriate instance is
acquired from the provider. Where no suitable specification satisfies the
objectives, the broker attempts to relax the objectives by identifying the
closest specification in each direction, and the optimization step is then
performed over the resulting set of relaxed results. The RAD problem can thus
be formulated as a multi-criteria optimization problem.
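A minimal sketch of the RAD selection just described: offerings are filtered against objectives taken from the TDF and the feasible set is ranked with a simple weighted multi-criteria score. The offering fields, objectives and weights are illustrative assumptions, not STRATOS data structures.

```python
def rad_select(offerings, objectives, weights, n=1):
    """Pick n offerings that satisfy the objectives, best weighted score first."""
    feasible = [o for o in offerings
                if o["vcpus"] >= objectives["min_vcpus"]
                and o["cost"] <= objectives["max_cost"]]
    # lower cost and lower latency are both better, so a smaller score wins
    def score(o):
        return weights["cost"] * o["cost"] + weights["latency"] * o["latency_ms"]
    return sorted(feasible, key=score)[:n]

offerings = [
    {"provider": "P1", "vcpus": 4, "cost": 0.20, "latency_ms": 30},
    {"provider": "P2", "vcpus": 2, "cost": 0.10, "latency_ms": 80},
    {"provider": "P3", "vcpus": 8, "cost": 0.35, "latency_ms": 20},
]
objectives = {"min_vcpus": 4, "max_cost": 0.40}   # P2 is filtered out
weights = {"cost": 1.0, "latency": 0.01}
print(rad_select(offerings, objectives, weights, n=1))   # P1 under these weights
```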
Paul Hershey et al. [15] present a System of Systems (SoS) method responsible
for activities such as QoS monitoring, management and response for cloud
providers that deliver computing as a service. Various metrics are considered
to calculate the performance and security of the SoS.
Delay is the sum of the delays in the lower-level domains of the cloud,
including an infrastructure component delay, and is given by
DSoS = p1 DG + p2 DB + p3 DS + p4 DI   (2.4)
where pi is a parameter that depends on the infrastructure components used
and Dj is the delay experienced in each layer.
Throughput at system level is defined as the number of transactions that are
completed per unit time:
T1 = n x Transaction Throughput   (2.5)
TS = m x T1   (2.6)
TB = q x TS   (2.7)
where m, n and q are the numbers of transactions at a lower domain needed to
complete a transaction at the higher domain. The authentication metric is a
logical conjunction of each level in EMMRA.
Table 2.2 Metrics Categories
Performance: Delay, Delay Variation, Throughput, Information Overhead
Security: Authentication, Authorization, Non-repudiation, Integrity, Information Availability, Certificate and Accreditation, Physical Security
ASoS = AG ∧ AB ∧ AS ∧ AI   (2.8)
Authorization is a bottom-up metric applied at each level. Authorization at
the IaaS level can be given as
AuthI = min {Π PI}   (2.9)
where PI is the permission to perform action I at the IaaS level; the min
operator indicates the least privilege level granted to the user.
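A small sketch of Eqs. (2.4), (2.8) and (2.9) with assumed inputs, showing how the SoS delay, the SoS authentication metric and the IaaS authorization metric can be evaluated.

```python
def sos_delay(p, d):
    """D_SoS = p1*D_G + p2*D_B + p3*D_S + p4*D_I  (Eq. 2.4)."""
    return sum(pi * di for pi, di in zip(p, d))

def sos_authentication(a_g, a_b, a_s, a_i):
    """A_SoS = A_G and A_B and A_S and A_I  (Eq. 2.8)."""
    return a_g and a_b and a_s and a_i

def iaas_authorization(permissions):
    """Auth_I = minimum over granted permission levels (Eq. 2.9, least privilege)."""
    return min(permissions)

print(sos_delay([1, 1, 1, 1], [5.0, 12.0, 8.0, 3.0]))   # 28.0 (assumed ms per layer)
print(sos_authentication(True, True, True, False))      # False: one layer fails
print(iaas_authorization([3, 1, 2]))                    # 1: least privilege granted
```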
3. CONCLUSIONS
The development of a cloud brokerage services framework is gaining
momentum since its usage is pervasive across all verticals. The works to date
do not consider the scenario of more than one cloud service provider
satisfying the same level of requirements for the user, which induces
ambiguity when users must choose an appropriate provider. Cloud Broker
Services act on behalf of the user to choose a particular service provider
for delivering the service. If the Cloud Broker Service becomes a standard
middleware framework, many chores of cloud service providers can be taken
over by the CBS.
REFERENCES
[1] Foued Jrad, Jie Tao, Achim Streit, SLA Based Service Brokering in Intercloud
Environments. Proceedings of the 2nd International Conference on Cloud Computing
and Services Science, pp. 76-81, 2012.
[2] Tao Yu and Kwei-Jay Lin, The Design of QoS Broker Algorithms for QoS-Capable
Web Services, Proceedings of IEEE International Conference on e-Technology, e-
Commerce and e-Service, pp. 17-24, 2004.
[3] Josef Spillner, Andrey Brito, Francisco Brasileiro, Alexander Schill, A Highly-
Virtualising Cloud Resource Broker, IEEE Fifth International Conference on Utility
and Cloud Computing, pp.233-234, 2012.
[4] Linlin Wu, Saurabh Kumar Garg, Rajkumar Buyya, Chao Chen, Steve Versteeg,
Automated SLA Negotiation Framework for Cloud Computing, 13th IEEE/ACM
International Symposium on Cluster, Cloud, and Grid Computing, pp.235-244, 2013.
[5] Wei Wang, Di Niu, Baochun Li, Ben Liang, Dynamic Cloud Resource Reservation via
Cloud Brokerage, Proceedings of the 33rd International Conference on Distributed
Computing Systems (ICDCS), Philadelphia, Pennsylvania, July 2013.
Dharmesh Mistry, Cloud Brokers can help ISVs Move to SaaS, Cognizant 20-20
Insights, June 2011.
[7] Lori MacVittie, Integrating the Cloud: Bridges, Brokers, and Gateways, 2012.
[8] Sigma Systems, Cloud Brokerage: Clarity to Cloud Efforts, 2013.
Vordel white papers, Cloud Governance in the 21st Century, 2011.
[10] Apostol T. Vassilev, Bertrand du Castel, Asad M. Ali, Personal Brokerage of Web
Service Access, IEEE Security & Privacy, vol. 5, no. 5, pp. 24-31, Sept.-Oct. 2007.
[11]Muhammad Zakarya & Ayaz Ali Khan, Cloud QoS, High Availability & Service
Security Issues with Solutions, International Journal of Computer Science and Network
Security, vol.12 No.7, July 2012.
[12]Srijith K. Nair, Sakshi Porwal, Theo Dimitrakos, Ana Juan Ferrer, Johan Tordsson,
Tabassum Sharif, Craig Sheridan, Muttukrishnan Rajarajan, Afnan Ullah Khan,
Towards Secure Cloud Bursting, Brokerage and Aggregation, Eighth IEEE European
Conference on Web Services, pp.189-196, 2010.
[13] Shtern, M., Simmons, B., Smit, M., Litoiu, M., An architecture for overlaying private
clouds on public providers, Eighth International Conference and Workshop on Systems
Virtualization Management, pp. 371-377, 22-26 Oct. 2012.
[14]Przemyslaw Pawluk, Bradley Simmons, Michael Smit, Marin Litoiu, Serge
Mankovski, Introducing STRATOS: A Cloud Broker Service, IEEE Fifth International
Conference on Cloud Computing, pp.891-898, 2012.
[15] Hershey, P., Rao, S., Silio, C.B., Narayan, A., System of Systems to provide Quality of
Service monitoring, management and response in cloud computing environments, 7th
International Conference on System of Systems Engineering (SoSE), pp. 314-320,
16-19 July 2012.
This paper may be cited as:
Akilandeswari, J. and Sushanth, C., 2014. A Review of Literature on Cloud
Brokerage Services. International Journal of Computer Science and
Business Informatics, Vol. 10, No. 1, pp. 25-40.
Improving Recommendation Quality
with Enhanced Correlation Similarity
in Modified Weighted Sum
Khin Nila Win
Faculty of Information and Communication Technology,
University of Technology
Yatanarpon Cyber City
Thiri Haymar Kyaw
Faculty of Information and Communication Technology,
University of Technology
Yatanarpon Cyber City
ABSTRACT
Recommender systems aim to help users find items of interest from large data
collections with little effort. Such systems use various recommendation
approaches to provide increasingly accurate recommendations; among them,
collaborative filtering (CF) is the most widely used approach. Of the two
types of CF system, item-based CF systems overtake traditional user-based CF
systems because they overcome the scalability problem of user-based CF. An
item-based CF system computes the prediction of a user's taste for new items
based on item similarities obtained from the users' explicit ratings,
predicting the rating of new items from the users' historical ratings. The
proposed system improves the item-based collaborative filtering approach by
enhancing the rating-based similarity of items with the demographic
similarity of the items, and it modifies one of the prediction methods, the
weighted sum, to be weighted by this enhanced item similarity. The system
intends to offer better prediction quality than other approaches and to
produce better recommendation results by considering item-demographic
similarity alongside the similarity derived from the users' explicit ratings.
Keywords
Recommender systems, collaborative filtering approach, item-based CF system, user-based
CF systems, demographic similarity, weighted sum.
1. INTRODUCTION
With the explosive growth of knowledge available on the World Wide Web,
which lacks an integrated structure or schema, it becomes much more
difficult for users to access relevant information efficiently. Meanwhile,
the substantial increase in the number of websites presents a challenging
task for web masters to organize the contents of websites to cater to the
needs of users. Web usage mining has therefore seen a rapid increase in
interest from both the research and practice communities. The
motivation of web mining is to discover users' access models automatically
and quickly from the vast amount of Web log data, such as frequent access
paths, frequent access page groups and user clusters. More recently, Web
usage mining has been proposed as an underlying approach for Web
personalization. The goal of personalization based on Web usage mining is to
recommend a set of objects to the current (active) user, possibly consisting
of links, ads, text, products, or services, tailored to the user's perceived
preferences as determined by the matching usage patterns [1].
2. MEMORY-BASED TECHNIQUES IN RECOMMENDER
SYSTEMS
Memory-based techniques continuously analyze all user or item data to
calculate recommendations, and can be classified into the following main
groups: collaborative filtering, content-based techniques, and hybrid
techniques [2]. While content-based techniques base their recommendations on
individual information and ignore contributions from other users,
collaborative filtering systems emphasize the preferences of similar users or
items for their recommendations. Since the proposed system uses collaborative
filtering techniques, explanations of the other techniques are omitted in
this paper and the analysis of collaborative filtering techniques is
emphasized.
2.1 Collaborative Filtering Techniques (CF)
This approach recommends items that were used by similar users in the past;
recommendations are based on social, community-driven information (e.g., user
behavior such as ratings or implicit histories).
Table 1. Special types and special characteristics of Memory-based CF Techniques
Special types: Neighborhood-based CF; Item-based/user-based top-N recommendations
Pros: easy to implement; easy to add new data; no need to consider the content of the items in recommendation
Cons: reliant on human ratings; performance may suffer when the rating data are sparse; problems in recommending for new users and items; scalability limitations for large datasets
Memory-based collaborative filtering techniques have special characteristics
and representative techniques; Table 1 describes the pros and cons of
memory-based CF techniques [2].
User-based CF algorithms first find a set of k users similar to the target
user based on correlations or similarities between the user records and the
target user, and then produce a prediction value for the target user on
unrated items based on the similar users' ratings. This approach suffers from
scalability problems in large-scale recommender systems.
In contrast, item-based CF algorithms attempt to find k similar items that
are co-rated similarly by different users, performing the similarity
computations among the items. Item-based CF algorithms thus avoid the
bottleneck of user-based algorithms by first considering the relationships
among items. For a target item, predictions can be generated by taking a
weighted average of the target user's ratings on these similar items [3, 6].
2.1.1 Similarity Computation
Most recommender systems use one of three similarity computation techniques:
cosine-based similarity, correlation-based similarity, and adjusted cosine
similarity. The proposed system uses adjusted cosine similarity for
similarity computation.
2.1.1.1 Adjusted Cosine Similarity Vs. Modified Adjusted Cosine Similarity
1) Adjusted Cosine Similarity
Computing the similarity value with the basic cosine measure in an item-based
recommendation system has one important weakness: the differences in rating
scale between different users are not taken into account. The adjusted cosine
similarity offsets this drawback by subtracting the corresponding user
average from each co-rated pair. However, it still has a drawback in that the
different rating styles of different users are not taken into account.
Adjusted cosine similarity subtracts the user's average rating from the
ratings of user u on items i and j respectively, and then computes the
similarity value as shown in Eq. (1).
sim(i, j) = Σu∈U (Ru,i − R̄u)(Ru,j − R̄u) / [ √(Σu∈U (Ru,i − R̄u)²) · √(Σu∈U (Ru,j − R̄u)²) ]   (1)
In Eq. (1), R̄u is the average value of the u-th user's ratings [4].
2) Modified Adjusted Cosine Similarity
Adjusted cosine similarity still ignores the individual rating styles of
users. For this reason, the proposed system improves the computation by
normalizing the rating values.
Table 2. Enhanced Correlation Similarity Values vs. Simple Modified Adjusted Cosine Similarity Values
Modified Adjusted Cosine Similarity (simi,j) | Demographic or Content Similarity of Items (dem_corij) | Enhanced Correlation Similarity (enh_corij = simi,j + (simi,j * dem_corij))
0.5 | 0.2 | 0.6
0.3 | 0.4 | 0.42
0.6 | 0.2 | 0.72
0.4 | 0.8 | 0.72
0.5 | 0.5 | 0.75
0.8 | 0.1 | 0.88
0.7 | 0.3 | 0.91
For example, if the system's rating range is 1 to 5, user i may give a rating
of 3 to his/her most-liked item t, while user j gives a rating of 5 to
his/her most-liked item t. In such a case the system cannot assume that item
t is user i's most-liked item, while it does assume it is user j's most-liked
one. The system therefore cannot determine each user's highest rating and
cannot recognize a user's favourite when that rating, although the user's
highest, is not the highest rating of the system. The system needs to
normalize the rating styles to determine accurately what each user likes most
and least even when users have different rating styles, and the proposed
system applies normalized ratings to overcome this problem. The proposed
method, modified adjusted cosine similarity, can reduce the system's
misunderstanding of the users' likes and dislikes. Eq. (2) denotes the
computation of the similarity value with modified adjusted cosine similarity.
sim(i, j) = Σu∈U (NRu,i − R̄u)(NRu,j − R̄u) / [ √(Σu∈U (NRu,i − R̄u)²) · √(Σu∈U (NRu,j − R̄u)²) ]   (2)
In Eq. (2), R̄u is the average value of the u-th user's ratings and NRu,i is the normalized rating of user u on item i:
NRu,i = (HS / HRu) × Ru,i   (3)
In Eq. (3), HS means the highest rating scale of the system and HRu means the highest rating scale of the current user.
Considering the topic similarity of items,
enh_corij = simi,j + (simi,j × dem_corij)
where simi,j means the similarity of item i and item j from the adjusted cosine similarity after normalizing the user's rating behaviour, and dem_corij means the similarity of item i and item j according to the topic similarity.
Table 2 describes how the enhanced correlation similarity is computed and
demonstrates how the demographic similarity improves the modified adjusted
cosine similarity value.
2.1.2 Prediction Computation
To produce recommendations, recommender systems first compute prediction
values and then recommend items according to those values. The weighted sum
is one of the most widely used prediction techniques, but it uses only the
rating-based similarity of the items. The proposed system enhances the
weighted sum technique by using the enhanced correlation similarity instead
of the adjusted cosine similarity value; the enhanced correlation similarity
is the modified adjusted cosine similarity value enhanced with the
demographic similarity of the two items.
2.1.2.1 Weighted Sum Vs. Modified Weighted Sum
1) Weighted Sum
The prediction value of the weighted sum technique is computed by summing the
ratings given by the user on the items similar to item i, where each rating
is weighted by the corresponding similarity si,j between items i and j.
Eq. (4) denotes the formula for prediction computation with the weighted sum:
Pu,i = Σall similar items N (si,N × Ru,N) / Σall similar items N (|si,N|)   (4)
2) Modified Weighted Sum
In the modified weighted sum of Eq. (5), each normalized rating NRu,N of Eq. (6) is weighted by the enhanced correlation similarity enh_coriN. The prediction Pu,i is denoted as
Pu,i = Σall similar items N (enh_cori,N × NRu,N) / Σall similar items N (|enh_cori,N|)   (5)
In Eq. (5), the normalized rating is
NRu,N = (HS / HRu) × Ru,N   (6)
Modifying the weighted sum with the enhanced correlation similarity makes the
prediction more accurate than in existing systems: systems that consider item
demographic data produce prediction quality more than 9% higher than systems
which do not consider the item demographic data.
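A compact sketch of the proposed computations, Eqs. (2), (3), (5) and (6) plus the enhanced correlation similarity, applied to a made-up rating matrix and made-up item demographic similarities; it is meant only to show how the pieces fit together, not as the authors' implementation.

```python
import math

ratings = {            # ratings[user][item], on the system scale HS = 5
    "u1": {"i1": 3, "i2": 2, "i3": 3},
    "u2": {"i1": 5, "i2": 4, "i3": 2},
    "u3": {"i1": 4, "i2": 4},
}
HS = 5

def normalized(user, item):
    """NR_u,i = (HS / HR_u) * R_u,i, HR_u being the user's highest rating (Eq. 3/6)."""
    hr = max(ratings[user].values())
    return HS / hr * ratings[user][item]

def modified_adjusted_cosine(i, j):
    """Eq. (2): adjusted cosine over normalized co-ratings of items i and j."""
    users = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    num = den_i = den_j = 0.0
    for u in users:
        avg = sum(ratings[u].values()) / len(ratings[u])
        di, dj = normalized(u, i) - avg, normalized(u, j) - avg
        num += di * dj
        den_i += di * di
        den_j += dj * dj
    return num / (math.sqrt(den_i) * math.sqrt(den_j)) if den_i and den_j else 0.0

def enhanced_similarity(i, j, dem_cor):
    sim = modified_adjusted_cosine(i, j)
    return sim + sim * dem_cor          # enh_cor_ij = sim_ij + sim_ij * dem_cor_ij

def predict(user, item, similar_items, dem_cors):
    """Eq. (5): modified weighted sum over the similar items the user has rated."""
    num = den = 0.0
    for n in similar_items:
        if n in ratings[user]:
            enh = enhanced_similarity(item, n, dem_cors[(item, n)])
            num += enh * normalized(user, n)
            den += abs(enh)
    return num / den if den else 0.0

dem_cors = {("i3", "i1"): 0.4, ("i3", "i2"): 0.2}   # assumed demographic similarities
print(predict("u3", "i3", ["i1", "i2"], dem_cors))  # 5.0 for this toy data
```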
3. RELATED WORKS
Recommendation techniques have been applied in many areas since the
mid-1990s. Some researchers have developed recommender systems for songs;
popular music recommendation systems of the early 2000s include [7], [8],
[9], [10]. In e-learning systems, web mining techniques are used to learn all
available information about learners and build models to apply in
personalization; detailed descriptions of using and applying educational data
mining were given in (Romero et al., 2006) and (Romero et al., 2007) [11].
Many resources and supporting techniques such as [12], [13] have been
developed for recommendation and personalization.
There have been many collaborative systems developed in academia and
industry. The Grundy system [14] was the first recommender system; it
proposed using stereotypes as a mechanism for building models of users based
on a limited amount of information about each individual user. Later on, the
Tapestry system relied on each user to identify like-minded users manually
[15]. GroupLens [16, 17], Video Recommender [18], and Ringo [19] were the
first systems to use collaborative filtering algorithms to automate
prediction.
4. REAL RECOMMENDER SYSTEM
Most of the earlier learning-resource recommender systems have problems in
determining the recommended pages accurately since they
ignore the rating style of the current user. The proposed system, a
Recommender System for Resources and Educational Assistants for Learners,
overcomes this challenge by normalizing the current user's rating style. In
the similarity computation, the system considers the rating similarity
together with the topic similarity of the resource pages. To avoid the
cold-start problem for new users that earlier systems encountered, the
proposed system also uses stereotype or demographic CF; as a result, the
system takes advantage of not only item-based CF but also stereotype or
demographic CF. Moreover, the system avoids the scalability and quality
bottleneck of user-based CF since it uses item-based collaborative filtering
techniques.
Modifying the adjusted cosine similarity with users' normalized ratings and
modifying the weighted sum with the enhanced correlation similarity not only
determine accurately which items a user likes most but also produce higher
prediction quality than systems which do not consider item demographic data
and emphasize only the ratings of the users. The system can reduce the mean
absolute error (MAE) between the predicted ratings and the actual ratings of
the users thanks to the modified adjusted cosine similarity and the modified
weighted sum.
5. CASE STUDY OF RESOURCES AND EDUCATIONAL
ASSISTANTS RECOMMENDATION
The following tables show the case study of resources and educational
assistants recommendation. The first column of Table 3 lists all links the
current user u has rated; the second column lists the links that need to be
predicted for the current user, since they are links the current user has not
rated.
Table 3. The links which the current user has rated and other links which the current user has not rated but other users have rated
Links the current user has rated: IEEE seminar topics on networking 2011-2012; Social Networking; Electronics & Communication Project Topics; LAN Monitoring and Controlling; Network Books of Free Computer Books
Links the current user has not rated but other users have rated: LAN & WAN; IPv6; JavaWorld: Solutions for Java Developers; Mobile Java; Core Java
The data in Table 4 lists the co-rated links for each link to be predicted.
Fig. 1 shows that, among the links co-rated with the predicted link LAN &
WAN, four links have been rated by the current user and three have not. In
Fig. 2 there are three co-rated links the current user has already rated and
four that he/she has not. There are no co-rated links the current user has
rated in Figs. 3, 4 and 5; according to this result, these three links are
unlikely to be of interest to the current user. Finally, the system
recommends the two links LAN & WAN and IPv6 according to the prediction
values.
Table 4. Predicted links with their similar links
LAN & WAN: Social Networking; LAN Monitoring and Controlling; Network Books of Free Computer Books; Unified Communications of Infoworld; Networking of Infoworld; Social Hubs; IPv6
IPv6: Network Books of Free Computer Books; Social Networking; IEEE seminar topics on networking 2011-2012; LAN & WAN; Mobile Java; Java & XML; Java Security
JavaWorld: Solutions for Java Developers: Core Java; Java & XML; Web Services & SOAs; Swing/GUI Programming; Java Security
Mobile Java: Core Java; Java Security; JavaWorld: Solutions for Java Developers; LAN & WAN; Network Books of Free Computer Books
Core Java: Mobile Java; Swing/GUI Programming; Docjar; Program With Java
Fig. 1 - Fig. 5. Co-rated links for the respective predicted links
Fig. 6. Recommended links for the current user
6. EVALUATION OF THE SYSTEM
The recommender system can be evaluated by comparing its recommendations
with a test set of known user ratings. Such systems are measured using
predictive accuracy metrics [5, 6], where the predicted ratings are compared
directly with the actual user ratings. The most commonly used metric is the
Mean Absolute Error (MAE), the average absolute difference between predicted
ratings and actual ratings. Eq. (7) denotes the computation of the MAE value:
MAE = ( Σ{u,i} |Pu,i − ru,i| ) / N   (7)
In Eq. (7), Pu,i is the predicted rating of user u on item i, ru,i is the
actual rating of user u on item i, and N is the number of ratings in the test
set.
The proposed system can reduce MAE by applying both demographic
correlation and rating similarity of items.
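A two-line sketch of Eq. (7) on illustrative predicted and actual ratings from a held-out test set:

```python
def mae(predicted, actual):
    """Mean Absolute Error per Eq. (7)."""
    return sum(abs(p - r) for p, r in zip(predicted, actual)) / len(actual)

print(mae([4.2, 3.1, 2.5], [5, 3, 2]))   # about 0.47 on this toy test set
```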
6.1.1 Comparison of MAE Values
The following table compares the MAE of the existing system, which uses
adjusted cosine similarity for similarity computation and the weighted sum
for prediction computation, with that of the proposed system.
Table 5. Comparison of MAE Values
Existing system (adjusted cosine and weighted sum) | Proposed system
1.48 | 0.68
1.6 | 1.045
2.987 | 2.635
1.96 | 1.93
1.92 | 1.87
 

More from ijcsbi

Vol 17 No 2 - July-December 2017
Vol 17 No 2 - July-December 2017Vol 17 No 2 - July-December 2017
Vol 17 No 2 - July-December 2017ijcsbi
 
Vol 17 No 1 - January June 2017
Vol 17 No 1 - January June 2017Vol 17 No 1 - January June 2017
Vol 17 No 1 - January June 2017ijcsbi
 
Vol 16 No 2 - July-December 2016
Vol 16 No 2 - July-December 2016Vol 16 No 2 - July-December 2016
Vol 16 No 2 - July-December 2016ijcsbi
 
Vol 16 No 1 - January-June 2016
Vol 16 No 1 - January-June 2016Vol 16 No 1 - January-June 2016
Vol 16 No 1 - January-June 2016ijcsbi
 
Vol 15 No 6 - November 2015
Vol 15 No 6 - November 2015Vol 15 No 6 - November 2015
Vol 15 No 6 - November 2015ijcsbi
 
Vol 15 No 5 - September 2015
Vol 15 No 5 - September 2015Vol 15 No 5 - September 2015
Vol 15 No 5 - September 2015ijcsbi
 
Vol 15 No 4 - July 2015
Vol 15 No 4 - July 2015Vol 15 No 4 - July 2015
Vol 15 No 4 - July 2015ijcsbi
 
Vol 15 No 3 - May 2015
Vol 15 No 3 - May 2015Vol 15 No 3 - May 2015
Vol 15 No 3 - May 2015ijcsbi
 
Vol 15 No 2 - March 2015
Vol 15 No 2 - March 2015Vol 15 No 2 - March 2015
Vol 15 No 2 - March 2015ijcsbi
 
Vol 15 No 1 - January 2015
Vol 15 No 1 - January 2015Vol 15 No 1 - January 2015
Vol 15 No 1 - January 2015ijcsbi
 
Vol 14 No 3 - November 2014
Vol 14 No 3 - November 2014Vol 14 No 3 - November 2014
Vol 14 No 3 - November 2014ijcsbi
 
Vol 14 No 2 - September 2014
Vol 14 No 2 - September 2014Vol 14 No 2 - September 2014
Vol 14 No 2 - September 2014ijcsbi
 
Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014ijcsbi
 
Vol 13 No 1 - May 2014
Vol 13 No 1 - May 2014Vol 13 No 1 - May 2014
Vol 13 No 1 - May 2014ijcsbi
 
Vol 12 No 1 - April 2014
Vol 12 No 1 - April 2014Vol 12 No 1 - April 2014
Vol 12 No 1 - April 2014ijcsbi
 
Vol 11 No 1 - March 2014
Vol 11 No 1 - March 2014Vol 11 No 1 - March 2014
Vol 11 No 1 - March 2014ijcsbi
 
Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014ijcsbi
 
Vol 8 No 1 - December 2013
Vol 8 No 1 - December 2013Vol 8 No 1 - December 2013
Vol 8 No 1 - December 2013ijcsbi
 
Vol 7 No 1 - November 2013
Vol 7 No 1 - November 2013Vol 7 No 1 - November 2013
Vol 7 No 1 - November 2013ijcsbi
 
Vol 6 No 1 - October 2013
Vol 6 No 1 - October 2013Vol 6 No 1 - October 2013
Vol 6 No 1 - October 2013ijcsbi
 

More from ijcsbi (20)

Vol 17 No 2 - July-December 2017
Vol 17 No 2 - July-December 2017Vol 17 No 2 - July-December 2017
Vol 17 No 2 - July-December 2017
 
Vol 17 No 1 - January June 2017
Vol 17 No 1 - January June 2017Vol 17 No 1 - January June 2017
Vol 17 No 1 - January June 2017
 
Vol 16 No 2 - July-December 2016
Vol 16 No 2 - July-December 2016Vol 16 No 2 - July-December 2016
Vol 16 No 2 - July-December 2016
 
Vol 16 No 1 - January-June 2016
Vol 16 No 1 - January-June 2016Vol 16 No 1 - January-June 2016
Vol 16 No 1 - January-June 2016
 
Vol 15 No 6 - November 2015
Vol 15 No 6 - November 2015Vol 15 No 6 - November 2015
Vol 15 No 6 - November 2015
 
Vol 15 No 5 - September 2015
Vol 15 No 5 - September 2015Vol 15 No 5 - September 2015
Vol 15 No 5 - September 2015
 
Vol 15 No 4 - July 2015
Vol 15 No 4 - July 2015Vol 15 No 4 - July 2015
Vol 15 No 4 - July 2015
 
Vol 15 No 3 - May 2015
Vol 15 No 3 - May 2015Vol 15 No 3 - May 2015
Vol 15 No 3 - May 2015
 
Vol 15 No 2 - March 2015
Vol 15 No 2 - March 2015Vol 15 No 2 - March 2015
Vol 15 No 2 - March 2015
 
Vol 15 No 1 - January 2015
Vol 15 No 1 - January 2015Vol 15 No 1 - January 2015
Vol 15 No 1 - January 2015
 
Vol 14 No 3 - November 2014
Vol 14 No 3 - November 2014Vol 14 No 3 - November 2014
Vol 14 No 3 - November 2014
 
Vol 14 No 2 - September 2014
Vol 14 No 2 - September 2014Vol 14 No 2 - September 2014
Vol 14 No 2 - September 2014
 
Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014
 
Vol 13 No 1 - May 2014
Vol 13 No 1 - May 2014Vol 13 No 1 - May 2014
Vol 13 No 1 - May 2014
 
Vol 12 No 1 - April 2014
Vol 12 No 1 - April 2014Vol 12 No 1 - April 2014
Vol 12 No 1 - April 2014
 
Vol 11 No 1 - March 2014
Vol 11 No 1 - March 2014Vol 11 No 1 - March 2014
Vol 11 No 1 - March 2014
 
Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014
 
Vol 8 No 1 - December 2013
Vol 8 No 1 - December 2013Vol 8 No 1 - December 2013
Vol 8 No 1 - December 2013
 
Vol 7 No 1 - November 2013
Vol 7 No 1 - November 2013Vol 7 No 1 - November 2013
Vol 7 No 1 - November 2013
 
Vol 6 No 1 - October 2013
Vol 6 No 1 - October 2013Vol 6 No 1 - October 2013
Vol 6 No 1 - October 2013
 

Recently uploaded

80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...Nguyen Thanh Tu Collection
 
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptxHMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptxEsquimalt MFRC
 
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdfFICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdfPondicherry University
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptNishitharanjan Rout
 
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...Pooja Bhuva
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - Englishneillewis46
 
PANDITA RAMABAI- Indian political thought GENDER.pptx
PANDITA RAMABAI- Indian political thought GENDER.pptxPANDITA RAMABAI- Indian political thought GENDER.pptx
PANDITA RAMABAI- Indian political thought GENDER.pptxakanksha16arora
 
dusjagr & nano talk on open tools for agriculture research and learning
dusjagr & nano talk on open tools for agriculture research and learningdusjagr & nano talk on open tools for agriculture research and learning
dusjagr & nano talk on open tools for agriculture research and learningMarc Dusseiller Dusjagr
 
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSSpellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSAnaAcapella
 
FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024Elizabeth Walsh
 
REMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptxREMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptxDr. Ravikiran H M Gowda
 
Introduction to TechSoup’s Digital Marketing Services and Use Cases
Introduction to TechSoup’s Digital Marketing  Services and Use CasesIntroduction to TechSoup’s Digital Marketing  Services and Use Cases
Introduction to TechSoup’s Digital Marketing Services and Use CasesTechSoup
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSCeline George
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxJisc
 
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...Amil baba
 
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...Pooja Bhuva
 
Model Attribute _rec_name in the Odoo 17
Model Attribute _rec_name in the Odoo 17Model Attribute _rec_name in the Odoo 17
Model Attribute _rec_name in the Odoo 17Celine George
 
21st_Century_Skills_Framework_Final_Presentation_2.pptx
21st_Century_Skills_Framework_Final_Presentation_2.pptx21st_Century_Skills_Framework_Final_Presentation_2.pptx
21st_Century_Skills_Framework_Final_Presentation_2.pptxJoelynRubio1
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...EADTU
 

Recently uploaded (20)

80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
 
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptxHMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
 
VAMOS CUIDAR DO NOSSO PLANETA! .
VAMOS CUIDAR DO NOSSO PLANETA!                    .VAMOS CUIDAR DO NOSSO PLANETA!                    .
VAMOS CUIDAR DO NOSSO PLANETA! .
 
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdfFICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.ppt
 
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
 
PANDITA RAMABAI- Indian political thought GENDER.pptx
PANDITA RAMABAI- Indian political thought GENDER.pptxPANDITA RAMABAI- Indian political thought GENDER.pptx
PANDITA RAMABAI- Indian political thought GENDER.pptx
 
dusjagr & nano talk on open tools for agriculture research and learning
dusjagr & nano talk on open tools for agriculture research and learningdusjagr & nano talk on open tools for agriculture research and learning
dusjagr & nano talk on open tools for agriculture research and learning
 
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSSpellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
 
FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024
 
REMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptxREMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptx
 
Introduction to TechSoup’s Digital Marketing Services and Use Cases
Introduction to TechSoup’s Digital Marketing  Services and Use CasesIntroduction to TechSoup’s Digital Marketing  Services and Use Cases
Introduction to TechSoup’s Digital Marketing Services and Use Cases
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptx
 
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
 
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
 
Model Attribute _rec_name in the Odoo 17
Model Attribute _rec_name in the Odoo 17Model Attribute _rec_name in the Odoo 17
Model Attribute _rec_name in the Odoo 17
 
21st_Century_Skills_Framework_Final_Presentation_2.pptx
21st_Century_Skills_Framework_Final_Presentation_2.pptx21st_Century_Skills_Framework_Final_Presentation_2.pptx
21st_Century_Skills_Framework_Final_Presentation_2.pptx
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
 

Vol 10 No 1 - February 2014

Cloud architecture brings several business benefits: business people need not invest in infrastructure up front, infrastructure is available quickly when needed, resources are utilized efficiently, users pay only for what they use, and parallelization reduces the processing time of a job. The main objective of this paper is to develop an efficient, scalable search engine application based on cloud architecture that can respond to many users. The application should be loosely coupled so that it is available to the whole user community and can be accessed concurrently.
2. BACKGROUND STUDY
Cloud computing is a computing model that provides resources, storage and online applications as services to the user. It is dynamic, reliable, scalable, low cost and secure, so it can deliver virtual services to any number of users. Cloud computing provides three types of services: Software as a Service, where application software is used on demand by anyone; Platform as a Service; and Infrastructure as a Service. Internet users are mainly interested in searching data and obtaining the information they need, and quick, efficient results require large computing resources. Cloud infrastructure is used to obtain the resources needed, process the data, and then return the resources. This paper explains the implementation of a search engine cloud application using the Google App Engine. The application uses the Hadoop MapReduce concept to fetch large datasets from the cloud, map the processing request onto that data, and reduce the result set to produce the search result. Because millions of results are mapped in parallel, the application responds quickly and efficiently.

3. RELATED WORKS
Chunzhi Wang and Zhuang Yang [1] of Hubei University of Technology describe a cloud search engine based on user interest. They show that user demand can be captured by introducing a user interest model; a push mechanism is used to deliver search results and servers are shut down on demand. This lets the user receive relevant information on time. They compare the traditional search model with the user-interest-based model and show that the latter delivers relevant information on demand more accurately. Lingying Zeng and Hao Wen Lin [2] of Harbin Institute of Technology discuss the existing MapReduce model and a modified MapReduce used to collect hardware performance information from virtual machines in parallel. The existing MapReduce follows a master-slave process: when a client request is generated, the master node creates a new job and assigns it to a processor. The master node continuously checks the status of the slave processes and, based on that, splits and assigns work to all available processes and then combines the results of all tasks. They apply this concept in a dynamic cloud environment in which the server requests a persistent, independent storage device to collect the information. Jinesh Varia [3], Technology Evangelist at Amazon Web Services, described cloud architectures in June 2008, explaining how to develop an efficient, reliable, scalable, distributed parallel application using Amazon Web Services as a loosely coupled system.
He described the development of GrepTheWeb, a Hadoop-based search engine deployed on Amazon Web Services, and the services it uses: Amazon S3 for input and output, Amazon SQS for message passing, Amazon SimpleDB as a status database, and Amazon EC2 as the controller. Gaizhen Yang [4] in 2011 explained the application of MapReduce in cloud computing. Hadoop is a framework for cloud programmers, and MapReduce is a programming model for large-scale parallel computing; he analysed how the two work together to support MapReduce programming in distributed cloud environments. Kejiang Ye, Xiaohong Jiang, Yanzhang He, Xiang Li, Haiming Yan and Peng Huang [5] in 2012 discussed vHadoop, a scalable Hadoop virtual cluster platform for MapReduce-based parallel machine learning with performance considerations. Big data processing is becoming increasingly important, but how to process large data efficiently on virtual infrastructure is not yet clear; they evaluated the performance of Hadoop and vHadoop using clustering workloads such as k-means. Zhiqiang Liu, Hongyan Liu and Gaoshan Miao [6] in 2010 proposed a MapReduce-based backpropagation neural network for classification over large-scale mobile data. A MapReduce-based framework on a cloud computing platform is used to improve efficiency and scalability over such data. MapReduce is well known as a parallel programming model for cloud computing: it supports parallel data processing on large clusters and is built on top of a distributed file system. However, how to design a neural network on the MapReduce framework, especially over large-scale mobile data, has rarely been addressed. Closed frequent itemset mining [7] plays an important role in many real-world applications, but the cost of handling large datasets is a challenge for such data mining. A parallelized AFOPT-close algorithm based on the MapReduce cloud computing framework was proposed and implemented in 2012 by Su Qi Wang, Yu Bin Yang, Guang Peng Chen, Yang Gao and Yao Zhang.
4. METHODOLOGY
4.1 OVERVIEW OF SYSTEM
The overview of the proposed system is shown in Figure 1 (Overview of Engine for Extraction of Similar Resultant Web Documents). The design of the search engine is divided into modules, and the process of sending a request and obtaining a result has four steps. First, the request is launched: the input query is validated and Hadoop is initiated. Second, data from the cloud database is mapped and reduced according to the matched input. Third, the data used for the Hadoop processing is billed and the Hadoop process is stopped. Fourth, resources are returned to the cloud database by cleaning up all data used by the application.

4.2 SEARCH ENGINE ARCHITECTURE
The system architecture in Figure 2 shows that the Google App Engine (GAE) front end receives the query from the user as a regular expression and passes the request to the MapReduce phase, which splits the dataset into small subsets and sends the request to the different database machines. After the web documents matching the expression have been extracted, they are combined into a single result set and returned to the user. Figure 2 (Search Engine System Architecture using Cloud SQL) shows the flow: input query (regular expression) → search engine application → MapReduce phase over Cloud SQL database storage → output (web documents).
The search engine application is developed to provide Software as a Service (SaaS) and to give the user an efficient web search. The search engine takes a regular expression as the query and runs it against the cloud database; the expression is evaluated over millions of web documents using the Hadoop MapReduce concept, and pattern matching retrieves the documents that best match the user's query. The main design challenges are complex regular expressions, queries that match very many web documents, and patterns that are unknown in advance. The application overcomes these difficulties and returns results to many users, even over a large dataset, with quick response and low usage cost. This is possible because the mapping is performed in parallel on many processors and the results are then reduced and combined into the smaller set of required information.

4.3 HADOOP MAPREDUCE IMPLEMENTATION
The MapReduce implementation is shown in Figure 3 (MapReduce phase implementation in the cloud database). Hadoop splits the dataset into manageable chunks and distributes them to many machines; the job is launched and processed on machines that may be physically distributed, since Hadoop is an open-source, distributed framework that can manage large datasets. The results from all the machines are then aggregated into the final output of the job. The implementation works in three phases. The map phase matches the regular expression against the data in the cloud database. The reduce phase produces an intermediate result set of web documents. Map and reduce run independently of each other on separate processors. The combine phase merges the data extracted on the different machines. In this way, the required data is gathered from across the cloud database and processed in parallel to give an efficient search result.
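The paper does not include a source listing. As an illustration only, the following minimal Python sketch mimics the map, shuffle/sort and reduce/combine phases described above on a small in-memory corpus; the function names, the record layout and the toy documents are assumptions made here and are not part of the deployed application (which runs on Google App Engine with Hadoop and Cloud SQL).

import re
from itertools import groupby

def map_phase(doc_id, text, pattern):
    # Map: emit (doc_id, 1) when the document matches the regular expression.
    if pattern.search(text):
        yield (doc_id, 1)

def reduce_phase(doc_id, counts):
    # Reduce: one output record per matching document.
    return doc_id

def grep_corpus(corpus, regex):
    # Run map, shuffle/sort and reduce over an in-memory corpus.
    pattern = re.compile(regex)
    intermediate = []
    for doc_id, text in corpus.items():            # map phase (parallel on Hadoop)
        intermediate.extend(map_phase(doc_id, text, pattern))
    intermediate.sort(key=lambda kv: kv[0])        # shuffle/sort by key
    return [reduce_phase(key, [v for _, v in group])   # reduce/combine phase
            for key, group in groupby(intermediate, key=lambda kv: kv[0])]

corpus = {"doc1": "cloud computing with hadoop",
          "doc2": "numerical integration methods",
          "doc3": "mapreduce on the cloud"}
print(grep_corpus(corpus, r"cloud"))               # -> ['doc1', 'doc3']

On the deployed system the map calls run in parallel on many virtual servers and the framework performs the shuffle/sort and aggregation; the sketch only shows the data flow of the three phases.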
Hadoop uses a master-slave process model: the master process runs on a separate node and monitors all the slave processes, which run on other nodes. The slave processes are the workers that extract data from the different machines; any worker failure or other problem is handled by the master process.

5. RESULTS
Figure 4 shows the start-up page of the search engine application, which is developed and deployed using the Google App Engine and the Google Web Toolkit. The application asks the user to enter a search string and shows the web documents that match it, based on the MapReduce concept. Because MapReduce uses parallel computation, the search results are mapped and computed quickly, which improves the response time of the application. A Cloud SQL instance is used to access the cloud database and obtain all the resources needed for the result; after the processing is over, the resources are released back to the cloud.

Figure 4. Search engine application start-up page

6. CONCLUSION
In this paper a search engine application is designed, developed and deployed using the Google App Engine and a Cloud SQL instance database. The search engine performs pattern matching across millions of web documents using Apache Hadoop MapReduce, with the regular expression entered by the user as the query. Because of the MapReduce concept, millions of documents are pattern-matched in parallel and the combined result is returned to the user as a set of web documents. The parallel, distributed processing across many datasets gives quick responses and scales to any number of users. The application uses a Cloud SQL database instance created for the application, so billing of the cloud resources used can be easily maintained.

References
[1] Wang C., Yan Z. and Chen H., 2010. Search engine concept based on user interest model and information push mechanism. 8th International Conference on Computer Science and Education, Sri Lanka.
[2] Zeng L. and Lin H. W., 2012. A modified MapReduce for cloud computing. International Conference on Computing, Measurement, Control and Sensor Networks.
[3] Varia, J., 2008. Cloud Architectures. Technology Evangelist, Amazon Web Services, June 2008.
[4] Yang, G., 2011. The Application of MapReduce in the Cloud Computing. International Symposium on Intelligence Information Processing and Trusted Computing.
[5] Ye, K., Jiang, X., He, Y., Li, X., Yan, H. and Huang, P., 2012. vHadoop: A Scalable Hadoop Virtual Cluster Platform for MapReduce-Based Parallel Machine Learning with Performance Consideration. IEEE International Conference on Cluster Computing Workshops.
[6] Liu, Z., Li, H. and Miao, G., 2010. MapReduce-based Backpropagation Neural Network over Large Scale Mobile Data. Sixth International Conference on Natural Computation (ICNC 2010).
[7] Wang, S. Q., Yang, Y. B., Chen, G. P., Gao, Y. and Zhang, Y., 2012. MapReduce-based Closed Frequent Itemset Mining with Efficient Redundancy Filtering. IEEE 12th International Conference on Data Mining Workshops.

This paper may be cited as: Saranya, A. L. and Murugan, B. S., 2014. Cloud Architecture for Search Engine Application. International Journal of Computer Science and Business Informatics, Vol. 10, No. 1, pp. 1-7.
Efficient Numerical Integration and Table Lookup Techniques for Real Time Flight Simulation

P. Lathasree
CSIR-National Aerospace Laboratories, Old Airport Road, PB No 1779, Bangalore-560017

Abhay A. Pashilkar
CSIR-National Aerospace Laboratories, Old Airport Road, PB No 1779, Bangalore-560017

ABSTRACT
A typical flight simulator consists of models of various elements such as the flight dynamic model, filters and actuators, which have fast and slow eigenvalues in the overall system. This results in an electromechanical control system described by stiff ordinary differential equations. Stability, accuracy and speed of computation are the parameters of interest when selecting numerical integration schemes for flight simulators. Similarly, accessing a huge aerodynamic and engine database in table look-up format at high speed is an essential requirement for high-fidelity real-time flight simulation. A study was carried out by implementing well-known numerical integration and table lookup techniques in a real-time flight simulator facility designed and developed in house. Table lookup techniques such as linear search and an index computation methodology using a novel Virtual Equi-Spacing concept were also studied. It is seen that the multi-rate integration technique and the table look-up using the Virtual Equi-Spacing concept have the best performance amongst the techniques studied.

Keywords
Real-Time Flight Simulation, Aerodynamic and Engine database, Virtual Equi-Spacing concept, table look up and interpolation, Runge-Kutta integration, multi-rate integration.

1. INTRODUCTION
Flight simulation has a vital role in the design of aircraft and can benefit all phases of the aircraft development program: the early conceptual and design phase, systems design and testing, and flight test support and envelope expansion [1]. Simulation helps in predicting the flight behavior prior to flight tests, and it helps in certification of the aircraft under demanding scenarios. Flight simulation is widely used for training purposes in both fighter and transport aircraft programs [2]. Modeling and simulation is therefore one of the enabling technologies for aircraft design. The fidelity of the simulation largely depends on the accuracy of the simulation models used and on the quality of the data that goes into the model. A faithful simulation requires an adequate model in the form of
mathematical equations, a means of solving these equations in real time, and finally a means of presenting the output of this solution to the pilot through visual, motion, tactile and aural cues [3]. A real-time flight simulator implies the existence of a man-in-the-loop operating the cockpit controls [4]. Because of the presence of the pilot in the loop, the digital computer executing the flight model in the simulator must solve the aircraft equations of motion in real time [5]. Real time implies that events occur at the same time in the simulation as they would in the physical system; all associated computations must be completed within the cycle update time [6].

The basis of a flight simulator is the mathematical model, including the database package, describing the characteristic features of the aircraft to be simulated. The block schematic of the flight simulator is shown in Figure 1, with constituent modules such as the aerodynamic, engine, atmosphere (static and dynamic) and actuator models. The atmosphere model includes static and dynamic components; the dynamic atmosphere model caters for turbulence, wind shear and cross wind. Dryden and Von Karman models are generally used for the simulation of atmospheric turbulence [7].

Figure 1. Block Schematic of Flight Simulation (pilot commands pass through the flight control and actuator models to the flight model, which combines the aero model and data, the engine model and database, the atmosphere model, and the mass, c.g. and inertia data to produce the aircraft responses: position, velocity, acceleration, flight path, angle of attack and angle of sideslip, which drive the visuals and displays)
Mathematical models used to simulate modern aircraft consist of a set of non-linear differential equations together with large amounts of aerodynamic function data (tables), sometimes depending on four to five independent variables. These aerodynamic data tables yield force and moment coefficients which contribute to the total forces and moments, and the equations of motion depend on these forces and moments. They are solved by the digital computer using a suitable numerical integration algorithm. This allows the designer to create the complete range of static and dynamic aircraft operating conditions, including landing and takeoff [6].

The type of method used for the integration of the ordinary differential equations is critical for real-time simulation; the choice of integration algorithm is a trade-off between simplicity, which affects calculation speed, and accuracy. Real-time simulation also needs high-speed data access. The aerodynamic and engine databases used for real-time simulation are huge and complex, so the table look-up methods used to access them also become critical. This paper discusses efficient table look-up and interpolation schemes and numerical integration techniques that can be used to ensure accurate real-time computations in a flight simulator.

2. REVIEW OF EXISTING TECHNIQUES
The existing numerical integration techniques and table lookup and interpolation methods for real-time implementation are discussed in this section.

2.1 Numerical Integration
Many linear single-step and multi-step numerical integration techniques are available; they can also be classified into implicit and explicit techniques [8]. Each has advantages and disadvantages with respect to stability and accuracy [8], and depending on their performance they are suited to stiff or non-stiff systems. Methods not designed for stiff problems must use time steps small enough to resolve the fastest possible changes, which makes them rather ineffective on intervals where the solution changes slowly. The most popular numerical integration methods are listed below.
• Taylor series methods
• Runge-Kutta methods
• Linear multi-step methods
• Extrapolation methods
The linear multistep methods (LMMs) require past values of the state. They are therefore not self-starting and do not directly solve the initial-value problem [9]. The simplest Runge-Kutta (RK) method is Euler integration, which merely truncates the Taylor series after the first derivative and is therefore not very accurate [9]. An RK method (e.g., Euler) can be used to generate the starting values for LMMs. Higher-order RK algorithms extend the Taylor series expansion to higher orders. An important feature of the RK methods is that the only value of the state vector needed is the value at the beginning of the time step; this makes them well suited to the ordinary differential equation initial-value problem [1].
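The paper does not list code for the integrators; purely as an illustration, one fixed step of the classical fourth-order Runge-Kutta method can be sketched as follows (the oscillator example, the step size and all names are assumptions made here).

def rk4_step(f, t, x, h):
    # One classical RK-4 step for xdot = f(t, x), with the state x as a list.
    k1 = f(t, x)
    k2 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
    k3 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
    k4 = f(t + h, [xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Example: an undamped oscillator x1'' = -omega^2 * x1, integrated for 1 s.
omega = 2.0
f = lambda t, x: [x[1], -omega ** 2 * x[0]]
x, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(100):
    x = rk4_step(f, t, x, h)
    t += h
print(x)    # close to [cos(omega), -omega * sin(omega)] at t = 1 s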
2.1.1 Stability, Accuracy and Speed of Computation
While choosing the numerical integration technique, one frequently has to strike a compromise between three aspects [10-11]:
• speed of the method
• accuracy of the method
• stability of the method
Speed becomes an essential feature especially for real-time simulation. Accuracy is also an important aspect and needs to be considered when choosing a method to integrate the equations of motion [12]; the accuracy of a numerical integration technique can be determined from the step size, the number of steps to be executed and the truncation error terms [10-11]. Generally, two types of error are introduced by numerical integration methods: round-off errors and discretisation errors. Round-off errors are a property of the computer and the program used and occur due to the finite number of digits used in the calculations [13-14]. Discretisation (truncation) errors are a property of the numerical integration method.

Stability can be defined as the property of an integration method that keeps the integration errors bounded at subsequent time steps [12]. An unstable numerical integration method makes the integration errors grow exponentially, resulting in possible arithmetic overflow after just a few time steps. The stability of a numerical integration technique generally depends on the system dynamics, the step size and the order of the chosen technique, and is harder to assess [10-11]. The impact of a numerical integration method on stability can be assessed by applying it to a well-conditioned differential equation and then investigating the limits of the onset of instability [10-11]. In the context of the stability of numerical integration, it is understood that a stable continuous system should result in a stable discrete-time system.
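Returning to the accuracy aspect above, a small, purely illustrative check (not taken from the paper) advances the test equation x' = -x one step with Euler and with classical RK-4 and compares the results with the exact value e^(-h); halving the step size reduces the single-step Euler error roughly four-fold and the RK-4 error roughly thirty-two-fold.

from math import exp, factorial

def euler_one_step(x, h):
    # Euler truncates the Taylor series after the first derivative term.
    return x + h * (-x)

def rk4_one_step(x, h):
    # For the linear test equation, one classical RK-4 step multiplies the
    # state by the first five terms of the Taylor series of exp(-h).
    return x * sum((-h) ** k / factorial(k) for k in range(5))

for h in (0.1, 0.05, 0.025):
    exact = exp(-h)
    print(h, abs(euler_one_step(1.0, h) - exact), abs(rk4_one_step(1.0, h) - exact))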
Numerical stability is important for fixed-step Runge-Kutta integrators because of the limitations imposed on the integration step size. Generally, the integration step size is selected based on an analysis of the stability of the numerical integration technique [15]. Numerical stability becomes an issue when the chosen integration step size produces z-plane poles close to the unit circle. If the poles are located inside the unit circle, the system is stable; increasing the step size eventually causes one of the z-plane poles to reach the unit circle, where the system becomes marginally stable. Depending on the location of the product of the characteristic root and the step size on the stability boundary of the respective integrator, it is possible to estimate the maximum allowable integration step size (Tmax) for which the system solution is at least marginally stable. Beyond Tmax the solution becomes unstable. Hence it is essential to consider the stability boundaries of the different numerical integrators while selecting the integration step size. Figure 2 shows the stability boundaries for the Runge-Kutta methods [15].

Figure 2. Stability Boundaries for Runge-Kutta Methods (RK-2 through RK-4 integrators, plotted in the plane of the product of the characteristic root and the step size)
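The use of the stability boundary to estimate Tmax can be illustrated with a short sketch; this is an assumption for illustration, not material from the paper. For the scalar test equation x' = lambda*x, an s-stage, order-s Runge-Kutta method (s <= 4) multiplies the state by R(z) = 1 + z + z^2/2! + ... + z^s/s! each step, with z = lambda*T, and the step is stable while |R(z)| <= 1.

from math import factorial

def amplification(z, order):
    return sum(z ** k / factorial(k) for k in range(order + 1))

def max_stable_step(lam, order, dt=1e-4):
    # Scan the step size upward until the amplification factor exceeds one
    # (a real, negative characteristic root is assumed).
    T = dt
    while abs(amplification(lam * T, order)) <= 1.0:
        T += dt
    return T - dt

lam = -40.0     # e.g. a fast mode with a 0.025 s time constant (assumed value)
for order in (2, 3, 4):
    print(order, max_stable_step(lam, order))
# RK-2 allows T up to about 2/|lam| = 0.05 s; RK-4 up to about 2.79/|lam| = 0.07 s,
# consistent with the widening stability boundaries shown in Figure 2.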
2.1.2 Numerical Integration Techniques for Stiff Systems
'Stiffness' of the differential equations may be defined as the existence of one or more fast decay processes in time, with a time constant that is small compared to the time span of interest [13]. One has to consider the following two points while choosing the numerical integration technique [10-11]:
• the integration technique should be chosen such that any error it introduces is small in comparison to the errors associated with the main terms of the model equations;
• the numerical integration technique should be able to solve the system of differential equations within the real-time frame rate.
Many integration techniques are available for non-real-time simulation that work well with stiff systems [16-17]. Two approaches for simulating stiff systems, with respect to real-time and non-real-time simulation, are discussed here. The first is to select a numerical integration technique that works well in the presence of stiffness. The second is to use multi-rate integration: the simulation is split into multiple tasks that are executed with different integration step sizes, and the inverse of the integration step size is termed the frame rate, expressed in frames per second. Multi-rate integration is useful for both real-time and non-real-time applications; of the two approaches, only multi-rate integration is applicable to real-time simulation.

Control systems with electrical and mechanical components, referred to as electromechanical control systems, are composed of fast and slow subsystems. Generally, the mechanical systems being controlled are much slower than the components in the electronic controllers and sensors, which results in an electromechanical control system with fast and slow dynamics. The aircraft pitch control system, comprising the aircraft dynamics and the actuators, is an example of a system of stiff ordinary differential equations [15]. Kunovsky et al. established the need for multi-rate integration in real-time flight simulation [18] with an example of an aircraft pitch control system comprising slow aircraft dynamics and fast actuator dynamics, using Runge-Kutta and Adams-Bashforth numerical integration techniques. The airframe module of the pitch control system is modeled as a linear second-order system to account for the short-period longitudinal dynamics. Generally, the integration step size is selected based on an analysis of stability and dynamic accuracy. Ts and Tf denote the integration step sizes of the slow and fast subsystems respectively; the numerical integrator used to update the slow system is termed the 'master' routine, and the integrator used to update the fast system is called the 'slave' routine.
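A minimal sketch of the multi-rate ('master'/'slave') update just described is given below; it is an illustration only, not the authors' implementation. Forward Euler is used purely for brevity (the paper uses Runge-Kutta integrators for both routines), and the example dynamics, step sizes and names are assumptions.

def euler_step(f, t, x, h):
    return [xi + h * ki for xi, ki in zip(x, f(t, x))]

def multi_rate_step(f_slow, f_fast, x_slow, x_fast, t, Ts, frame_ratio):
    Tf = Ts / frame_ratio
    # "master" routine: slow subsystem, fast states held over the frame
    x_slow_new = euler_step(lambda tt, xs: f_slow(tt, xs, x_fast), t, x_slow, Ts)
    # "slave" routine: fast subsystem sub-stepped within the same frame
    for k in range(frame_ratio):
        x_fast = euler_step(lambda tt, xf: f_fast(tt, xf, x_slow), t + k * Tf, x_fast, Tf)
    return x_slow_new, x_fast

# Hypothetical example: a 0.02 s actuator lag driving a slow, airframe-like mode,
# with Ts = 0.025 s and frame ratio 10 (Tf = 0.0025 s).
f_fast = lambda t, xf, xs: [(1.0 - xf[0]) / 0.02]
f_slow = lambda t, xs, xf: [-0.5 * xs[0] + xf[0]]
xs, xf, t = [0.0], [0.0], 0.0
for _ in range(40):                     # simulate 1 s
    xs, xf = multi_rate_step(f_slow, f_fast, xs, xf, t, 0.025, 10)
    t += 0.025
print(xs, xf)                           # xs approaches about 0.8, xf about 1.0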
It is common to use conventional numerical integration schemes such as the Runge-Kutta methods for both the 'master' and 'slave' systems. For the example studied here, the multi-rate integration scheme with RK-4 is chosen for both the master and slave routines, and the implementation is carried out in the Matlab environment. For a pitch command of 2 deg, the simulation is run for the state-space based Simulink model; this result is compared with the analytical solution and with the response obtained using the multi-rate integration scheme. The comparison of the theta and elevator responses for the three methods is shown in Figure 3 and Figure 4 respectively.

Figure 3. Comparison of theta response (theta_resp in deg over 0-5 s; legend: Analytical, Multirate-RK4, RK4 at 0.0025 s sampling)

Figure 4. Comparison of elevator response (dle_resp in deg over 0-5 s; legend: Analytical, Multirate-RK4, RK4 at 0.0025 s sampling)
The responses obtained from the analytical solution are taken as the reference. From the figures it can be seen that the response of the Simulink model at a step size of 0.0025 s matches the reference well, whereas the response obtained using multi-rate integration exhibits some loss of accuracy. The multi-rate integration scheme is nevertheless recommended for real-time simulation, since running everything at the smaller step size may degrade real-time performance.

2.2 Table Look-Up and Interpolation
Generally, an index search or look-up process is performed first to locate the data, followed by linear interpolation. The following steps are performed in the table look-up process [3]:
1. Decide between which pair of values in the table the current input value of the independent variable (X) lies.
2. Calculate the local slope.
3. Apply the linear interpolation formula.
For real-time simulation it is always important to save processing time. One technique is to remember the index of the lower end of the interpolation interval used in the previous iteration: the value of the independent variable (X) is unlikely to have changed substantially from one time step to the next, so it is a good first try to use the same interval as before, which avoids searching from one end of the table each time (a short code sketch of this look-up is given at the end of this subsection).

The huge and complex aerodynamic and engine databases have to be handled in such a way that they can be easily read and interpolated for a given set of input conditions. One way of ensuring the speed required for real-time simulation is to have a uniformly spaced database. For this, the normal practice is to convert the supplied database, which has non-uniform break points for the independent variables, to an equi-spaced format; an appropriate step size must be chosen for independent variables such as angle of attack, Mach number, elevator, angle of sideslip and power lever angle (PLA). This is normally termed the conventional equi-spacing concept. We propose a new concept called Virtual Equi-Spacing, in which the original database with non-uniform break points is retained. With the assumption of virtual equi-spacing, the search process can be eliminated [19] because the index is directly computed. The computation of the index in the Virtual Equi-Spacing concept is explained in the following section.
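As a sketch of the conventional look-up described above, with the 'remember the previous interval' optimisation (the class name and the example breakpoints and values are assumptions made here for illustration; they are not the facility's data):

class Table1D:
    def __init__(self, breakpoints, values):
        self.x, self.y = breakpoints, values
        self.last = 0                          # lower breakpoint index used last time

    def lookup(self, x):
        i = self.last
        while i > 0 and x < self.x[i]:                         # step 1: find the interval,
            i -= 1                                             # starting from the last one
        while i < len(self.x) - 2 and x >= self.x[i + 1]:
            i += 1
        self.last = i                                          # remember for the next frame
        slope = (self.y[i + 1] - self.y[i]) / (self.x[i + 1] - self.x[i])   # step 2
        return self.y[i] + slope * (x - self.x[i])                          # step 3

alpha_table = Table1D([-4.0, 0.0, 4.0, 8.0, 12.0, 16.0],
                      [-0.2, 0.1, 0.4, 0.7, 0.95, 1.05])
print(alpha_table.lookup(5.0))     # 0.475
print(alpha_table.lookup(5.2))     # re-uses the remembered interval, no full search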
2.2.1 Virtual Equi-Spacing Concept
A novel method is proposed which retains the original data with unevenly spaced break points and satisfies the real-time constraint without loss of accuracy. In this method, an evenly spaced breakpoint array that is a superset of the unevenly spaced break points is created for each independent variable and is referred to as the 'Address Map'. The index into this evenly spaced array can be computed directly (refer Figure 5). This index is then used in an equivalent breakpoint index array that provides pointers to the appropriate interpolation equation.

Figure 5. Indexing scheme in the Virtual Equi-Spacing Concept (in the illustrated example, the unevenly spaced breakpoints 0.0, 0.3, 0.4, 0.5, 0.6 and 0.8 map onto the evenly spaced superset 0.0, 0.1, ..., 0.8 through the breakpoint index array 0, 0, 0, 1, 2, 3, 4, 4, 5)

The Virtual Equi-Spacing concept satisfies the real-time speed constraint without loss of accuracy for real-time flight simulators: it eliminates the search process and directly computes the index into the data tables. It works as follows. Dividing the desired input value by the step size chosen for the address map gives the location 'K'. The value of address_map[K], say 'i', is then used as a pointer into the data table to obtain the data component value, i.e. Table[i], for the desired input. This is now demonstrated with a typical example. The aircraft engine database is a three-dimensional dataset in which thrust is a function of three independent variables: Mach number, PLA and altitude. The technique of computing the index values in the address maps, and thereby the index values in the data arrays, is explained for the PLA dimension. Let pla_val = 54.0 deg, for a Mach number of 0.4 and an altitude of 4500.0 m; the PLA and thrust relationship at these conditions is given in Table 1. The computation of the index values, and thereby the data values, is presented in the Appendix along with the pseudo code. The engine database of a high-performance fighter aircraft is used to demonstrate the table look-up and interpolation schemes; it consists of engine parameters such as thrust, specific fuel consumption, N1 rpm and N2 rpm, supplied as functions of Mach number, PLA and altitude.
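A minimal sketch of this index computation for the PLA dimension is given below, using the PLA and thrust values of Table 1 below; the 1-deg address-map step size, the clamping of the input to the table range and all names are assumptions made here for illustration (the paper's own pseudo code is given in its Appendix).

PLA_BREAKPOINTS = [28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0]
THRUST_KN = [-0.63, 3.21, 8.7, 13.81, 20.24, 26.32, 28.09, 30.26, 44.84]
STEP = 1.0                              # assumed step of the evenly spaced address map
ORIGIN = PLA_BREAKPOINTS[0]

def build_address_map(breakpoints, step, origin):
    # Evenly spaced superset: entry k holds the lower-breakpoint index for the
    # value origin + k*step. Built once, offline.
    n = int(round((breakpoints[-1] - origin) / step)) + 1
    address_map, i = [], 0
    for k in range(n):
        x = origin + k * step
        while i < len(breakpoints) - 2 and x >= breakpoints[i + 1]:
            i += 1
        address_map.append(i)
    return address_map

ADDRESS_MAP = build_address_map(PLA_BREAKPOINTS, STEP, ORIGIN)

def thrust_at(pla_val):
    # The index is computed directly (no search), then linear interpolation is applied.
    k = int((pla_val - ORIGIN) / STEP)
    k = min(max(k, 0), len(ADDRESS_MAP) - 1)            # clamp to the table range
    i = ADDRESS_MAP[k]
    x0, x1 = PLA_BREAKPOINTS[i], PLA_BREAKPOINTS[i + 1]
    y0, y1 = THRUST_KN[i], THRUST_KN[i + 1]
    return y0 + (y1 - y0) * (pla_val - x0) / (x1 - x0)

print(thrust_at(54.0))     # 8.7 kN, the tabulated value at pla_val = 54 deg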
This index computation methodology using the Virtual Equi-Spacing concept is extended to the multi-dimensional tables of the wind tunnel database.

Table 1. PLA and Thrust relationship
  PLA (deg)   Thrust (kN)
  28          -0.63
  42           3.21
  54           8.7
  66          13.81
  78          20.24
  90          26.32
  104         28.09
  107         30.26
  130         44.84

The next section presents a study of efficient table look-up algorithms and numerical integration algorithms suitable for real-time implementation in flight simulators.

3. RESULTS
From the survey of existing techniques for numerical integration and table look-up, the multi-rate integration concept and the Virtual Equi-Spacing concept are implemented for real-time flight simulation and studied. This implementation is carried out in the real-time flight simulation facility designed and developed at CSIR-NAL. Figure 6 shows the conceptual flowchart of real-time flight simulation. The simulation is typically started from an equilibrium (trim) condition. For the given set of pilot inputs, the flight dynamics module solves the equations of motion using the chosen numerical integration method. All the associated computations must be completed within the cycle update time for real-time simulation; the computations finish ahead of the cycle update time, and the beginning of the next cycle is delayed until the internal clock signals the next cycle update, as shown in Figure 6.
Figure 6. Conceptual flowchart of real-time flight simulation (initial condition/trim with simtime = 0 and cycletime = 0; each cycle gets the external inputs, pilot inputs, disturbances and control laws, obtains the surface positions from the hardware models, computes the forces and moments from the aerodynamic, engine and landing-gear models, integrates the rigid-body equations of motion, advances simtime by the integration step size deltat and, if cycletime < deltat, waits until cycletime = deltat before starting the next cycle or stopping the simulation)
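The frame loop of Figure 6 can be sketched as follows; this is an illustrative assumption, not the facility's code, and the callback functions merely stand in for the pilot-input, control-law, actuator, aerodynamic, engine and landing-gear modules.

import time

DELTAT = 0.025                    # integration step size = cycle update time (s)

def run_simulation(get_inputs, update_surfaces, forces_and_moments,
                   integrate_eom, stop_requested, initial_state):
    # Start from the equilibrium / trim condition.
    state, simtime = initial_state, 0.0
    while not stop_requested(simtime):
        cycle_start = time.monotonic()
        pilot = get_inputs()                         # pilot inputs and disturbances
        surfaces = update_surfaces(pilot, state)     # control laws and hardware models
        fm = forces_and_moments(surfaces, state)     # aerodynamic, engine, landing gear
        state = integrate_eom(state, fm, DELTAT)     # rigid-body equations of motion
        simtime += DELTAT
        # Computations finish ahead of the cycle update time; wait for the clock.
        remaining = DELTAT - (time.monotonic() - cycle_start)
        if remaining > 0.0:
            time.sleep(remaining)
    return state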
3.1 Timing Analysis
The timing analysis is carried out for the numerical integration and table look-up techniques, and the results are presented below.

3.1.1 Numerical Integration
The concept of multi-rate integration is adopted for the real-time flight simulation facility designed and developed at CSIR-NAL. The full nonlinear model of the aircraft dynamics, along with the actuator dynamics, of a light transport aircraft is considered for this real-time flight simulation environment. The aircraft dynamics of the light transport aircraft constitute the slow dynamics, and the fast dynamics is composed of the actuator dynamics. A nominal integration step size of 0.025 s is chosen for the airframe simulation; similarly, 0.0025 s is chosen as the integration step size for the actuator dynamics, based on the analysis of stability and dynamic accuracy. The ratio of the step size of the slow system to that of the fast system (the frame ratio) is 10, indicating a stiff system. The multi-rate integration scheme with frame ratio 10 and a simulation cycle update time of 0.025 s handles the slow and fast subsystems. The Runge-Kutta pair of Bogacki and Shampine [20] is currently used for the numerical integration of the slow and fast dynamics. Table 2 presents the timing analysis for the (off-line) simulation, carried out using a Windows-based timer function with microsecond resolution.

Table 2. Timing analysis for multi-rate and mono-rate integration techniques
  Duration of simulation   Description                                            Time (sec)
  35 sec                   Multi-rate integration with ts = 0.025 & tf = 0.0025   0.4271
                           Mono-rate integration with deltat = 0.0025 sec         1.8518
  50 sec                   Multi-rate integration with ts = 0.025 & tf = 0.0025   1.1058
                           Mono-rate integration with deltat = 0.0025 sec         2.7181
  100 sec                  Multi-rate integration with ts = 0.025 & tf = 0.0025   1.1077
                           Mono-rate integration with deltat = 0.0025 sec         4.9378

Figure 7 shows the aircraft responses obtained with a pitch stick doublet for mono-rate integration with a 0.0025 s sampling time and for multi-rate integration with 0.025 / 0.0025 s sampling times. From the plots it can be seen that the mismatch between the multi-rate integration scheme and the mono-rate solution is negligible.
Figure 7. Comparison plots of aircraft response variables (Alpha, Q, Vtot, Alt, Theta and Dle against time) for the mono-rate (0.0025 sec) and multi-rate (0.025 / 0.0025 sec) integration schemes

For real-time applications, accuracy is the property that may have to be traded off against the other requirements. It is better to obtain a solution with some small error than to be unable to obtain it at all within the allowed time. Moreover, many real-time applications incorporate feedback control, which helps to compensate for errors and disturbances, including integration errors. For real-time flight simulation, the multi-rate integration scheme may therefore be adopted for better computational time.

3.1.2 Table Look-Up and Interpolation
This flight simulator facility uses the aerodynamic and engine databases with unevenly spaced break points. It is proposed to retain the original dataset with unevenly spaced breakpoints and still facilitate faster table look-up and interpolation. As already discussed, one technique to save time is to remember the index of the lower pair of the interpolation range used the last time. From one time step to the next, the value of the independent variable X is unlikely to have changed substantially, so it is a good first try to use the same interval as before and thus avoid the waste of time in searching from one end of the table each time. Hence, linear search with the option of remembering the previously used index is used for the timing analysis.
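A minimal sketch of this look-up strategy is given below; it is illustrative (the class and variable names are not from the paper) and shows only the one-dimensional case with linear interpolation.

```python
class RememberingLookup:
    """1-D table look-up that restarts the interval search from the last used index."""

    def __init__(self, breakpoints, values):
        self.x = breakpoints      # unevenly spaced, ascending
        self.y = values
        self.last = 0             # index of the lower breakpoint used last time

    def interpolate(self, xq):
        i = self.last
        # The query usually stays in (or near) the previous interval, so scan from there.
        while i > 0 and xq < self.x[i]:
            i -= 1
        while i < len(self.x) - 2 and xq >= self.x[i + 1]:
            i += 1
        self.last = i
        frac = (xq - self.x[i]) / (self.x[i + 1] - self.x[i])
        return self.y[i] + frac * (self.y[i + 1] - self.y[i])

# Example with the PLA-thrust data of Table 1:
pla = [28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0]
thrust = [-0.63, 3.21, 8.7, 13.81, 20.24, 26.32, 28.09, 30.26, 44.84]
lut = RememberingLookup(pla, thrust)
print(lut.interpolate(70.5))   # ~16.22 kN, matching the Appendix example
```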
Timing analysis is carried out for the linear search with the option of remembering the previously used index and for the novel Virtual Equi-Spacing concept proposed in the previous section. A Windows-based timer function with microsecond resolution is used to obtain the time taken for table look-up and interpolation. Generally, this process includes computing the location of the data component value in the corresponding data table, followed by interpolation.

Table 3. Timing studies for different search and interpolation techniques (Mach number 0.4, Altitude 4500 m)

Condition   Linear search (remembering previous index), microsec   Virtual Equi-Spacing concept, microsec
PLA / 50    18.08                                                  12.65
PLA / 90    18.0                                                   12.5
PLA / 107   17.2                                                   12.75
PLA / 110   19.1                                                   12.44

The recommended Virtual Equi-Spacing technique has been used for the table look-up and interpolation of the aerodynamic and engine database consisting of around two lakh (200,000) data points, representing a high-performance fighter aircraft. The engine database of 20,000 data points is taken as an example to carry out the study. Table 3 gives the timing of the two techniques studied at different PLA conditions while the Mach number and altitude are kept the same. From the table, it is found that the Virtual Equi-Spacing technique takes less time. Accuracy is maintained, as the actual data tables are not affected.

4. CONCLUSIONS
A study was carried out to recommend efficient numerical integration and table look-up techniques suitable for real-time flight simulation comprising a system of stiff ordinary differential equations. Numerical integration and table look-up techniques available in the literature were implemented in a real-time flight simulator facility designed and developed in house. An aircraft pitch control system representing the slow and fast subsystems was considered for the study on numerical integration techniques. Table look-up techniques such as linear search and the index computation methodology using the Virtual Equi-Spacing concept were studied for the example of the engine database of a high-performance fighter aircraft. The Virtual Equi-Spacing is a new
concept developed for the interpolation of the large multi-dimensional tables frequently used in flight simulation. With an excessively small step size it is possible to solve the stiff differential equations, but this incurs a performance penalty, and performance is an important aspect of real-time simulation. Hence, it is recommended to opt for multi-rate simulation, using a step size for the actuator simulation that is sufficiently small to ensure an accurate and stable actuator solution, and a larger step size for simulating the slower dynamics of the airframe. The Virtual Equi-Spacing concept for table look-up and interpolation leads to faster and accurate data access, an essential feature of real-time simulation when handling larger databases. From the results, it is found that the recommended multi-rate integration technique and the table look-up using the Virtual Equi-Spacing concept perform better.

5. ACKNOWLEDGMENTS
The authors would like to thank Mr Shyam Chetty, Director, CSIR-NAL and Dr (Mrs) Girija Gopalratnam, Head, Flight Mechanics and Control Division, CSIR-NAL for their guidance and support.

REFERENCES
[1] Ken A Norlin, Flight Simulation Software at NASA Dryden Flight Research Center, NASA TM 104315, October 1995.
[2] David Allerton, Flight Simulation - past, present and future, The Aeronautical Journal, Vol. 104, No. 1042, pp. 651-663, December 2000.
[3] J M Rolfe and K J Staples, Flight Simulation, Cambridge University Press, 1991.
[4] Flight Mechanics & Control Division, CSIR-National Aerospace Laboratories, NAL-ASTE Lecture Series, May 2003.
[5] Max Baarspul, A review of Flight Simulation Techniques, Progress in Aerospace Sciences, Vol. 27, No. 1, pp. 1-120, March 1990.
[6] Joseph S. Rosko, Digital Simulation of Physical Systems, Addison-Wesley Publishing Company, 1972.
[7] Beal, T.R., Digital simulation of atmospheric turbulence for Dryden and Von Karman models, Journal of Guidance, Control and Dynamics, Vol. 16, No. 1, pp. 132-138, February 1993.
[8] http://qucs.sourceforge.net/tech/node24.html Accessed on 8.1.2014.
[9] Brian L Stevens and Frank L Lewis, Aircraft Control and Simulation, John Wiley & Sons Inc., 1992.
[10] David Allerton, Principles of Flight Simulation, John Wiley & Sons Ltd., 2009.
[11] http://www.scribd.com/doc/121445651/PRINICIPLES-OF-FLIGHT-SIMULATION Accessed on 8.1.2014.
[12] http://mat21.etsii.upm.es/mbs/bookPDFs/Chapter07.pdf, Numerical Integration of Equations of Motion. Accessed on 26.6.2012.
[13] Marc Rauw, FDC 1.4 - A SIMULINK Toolbox for Flight Dynamics and Control Analysis, Draft Version 7, May 25, 2005.
[14] John W Wilson and George Steinmetz, Analysis of numerical integration techniques for real-time digital flight simulation, NASA-TN-D-4900, November 1968, Langley Research Center, Langley Station, NASA, Hampton, VA.
[15] Harold Klee and Randal Allen, Simulation of Dynamic Systems with MATLAB and Simulink, Second Edition, CRC Press, Taylor and Francis Group, 2011.
[16] Jim Ledin, Simulation Engineering: Build Better Embedded Systems Faster, CMP Books, 2001.
[17] http://www.embedded.com/design/real-world-applications/4023325/Dynamic-System-Simulation
[18] Jiří Kunovský et al, Multi-rate integration and Modern Taylor Series Method, Tenth International Conference on Computer Modeling and Simulation, 2008, IEEE Computer Society.
[19] Donald E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching, Addison-Wesley Publishing Company, 1973.
[20] http://en.wikipedia.org/wiki/Bogacki%E2%80%93Shampine_method Accessed on 12/10/2008.

This paper may be cited as:
Lathasree, P. and Pashilkar, A. A., 2014. Efficient Numerical Integration and Table Lookup Techniques for Real Time Flight Simulation. International Journal of Computer Science and Business Informatics, Vol. 10, No. 1, pp. 8-24.
Appendix
Computing index values and data values:

      data pladata /
     *  28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0 /

The Address Map assumes virtual equi-spaced data with a 1.0 deg step. For PLA values 28.0 to 41.0 the index number is 1; for PLA values 42.0 to 53.0 the index number is 2; similarly, for PLA values 54.0 to 65.0 the index number is 3, and so on.

      data (plamap(it), it=1,103) /
     *  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     *  2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
     *  3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
     *  4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
     *  5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
     *  6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
     *  7, 7, 7,
     *  8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
     *  8, 8, 8, 8, 8, 8, 8, 8, 8,
     *  9 /

The index into this address map can be computed directly from the step size. For example, for pla_val = 54.0:

      iplav = int((pla_val - 28.0)/1.0) + 1   ! = 27
      iplax = plamap(iplav)                   ! = 3

Based on this index number corresponding to the independent variable PLA, it is possible to obtain the thrust value from the table:

      thrust_val11 = thrust_tab(iplax)        ! = 8.7
      thrust_val12 = thrust_tab(iplax+1)      ! = 13.81
      thrust_val   = thrust_val11 + ((thrust_val12 - thrust_val11) /
     *               (pladata(iplax+1) - pladata(iplax))) *
     *               (pla_val - pladata(iplax))            ! = 8.7

If the PLA value lies between two break points, e.g. pla_val = 70.5:

      iplav = int((70.5 - 28.0)/1.0) + 1      ! = 43
      iplax = plamap(iplav)                   ! = 4
      thrust_val11 = thrust_tab(iplax)        ! = 13.81
      thrust_val12 = thrust_tab(iplax+1)      ! = 20.24
      thrust_val   = thrust_val11 + ((thrust_val12 - thrust_val11) /
     *               (pladata(iplax+1) - pladata(iplax))) *
     *               (pla_val - pladata(iplax))            ! = 16.2213
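For readers who want to experiment with the address-map idea outside the simulator, the same computation can be written compactly in Python. This is an illustrative re-implementation of the Appendix, not code from the flight simulation facility.

```python
PLA_BREAKPOINTS = [28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0]
THRUST_TABLE    = [-0.63, 3.21, 8.7, 13.81, 20.24, 26.32, 28.09, 30.26, 44.84]
STEP = 1.0  # virtual equi-spacing step (deg)

# Address map: for each virtual 1-deg slot, the index of the lower breakpoint.
PLA_MAP = []
idx = 0
v = PLA_BREAKPOINTS[0]
while v < PLA_BREAKPOINTS[-1]:
    if idx < len(PLA_BREAKPOINTS) - 2 and v >= PLA_BREAKPOINTS[idx + 1]:
        idx += 1
    PLA_MAP.append(idx)
    v += STEP

def thrust_lookup(pla_val):
    """Index computation via the virtual equi-spaced address map, then linear interpolation."""
    slot = int((pla_val - PLA_BREAKPOINTS[0]) / STEP)
    slot = max(0, min(slot, len(PLA_MAP) - 1))
    i = PLA_MAP[slot]
    x0, x1 = PLA_BREAKPOINTS[i], PLA_BREAKPOINTS[i + 1]
    y0, y1 = THRUST_TABLE[i], THRUST_TABLE[i + 1]
    return y0 + (y1 - y0) * (pla_val - x0) / (x1 - x0)

print(thrust_lookup(54.0))   # 8.7
print(thrust_lookup(70.5))   # ~16.2213, as in the Appendix
```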
A Review of Literature on Cloud Brokerage Services

Dr. J. Akilandeswari
Professor and Head, Department of Information Technology
Sona College of Technology, Salem, India.

C. Sushanth
PG Scholar, Department of Information Technology
Sona College of Technology, Salem, India.

ABSTRACT
Cloud computing is a rapidly evolving area which offers large potential for agencies of all sizes to increase efficiency. A Cloud Broker acts as a mediator between cloud users and cloud service providers. The main functionality of the cloud broker lies in selecting the best Cloud Service Provider (CSP) for the requirement set defined by the cloud user. Requests from cloud users are processed by the cloud broker, and suitable providers are allocated to them. This paper gives a detailed review of cloud brokerage services and their methods of negotiating with the service providers. Once the SLA is specified by the cloud service provider, the cloud broker negotiates the terms according to the user's specification. The negotiation can be modeled as a middleware, and its services can be provided as an application programming interface.

Keywords
Cloud computing, broker, mediator, service provider, middleware.

1. INTRODUCTION
A cloud refers to the interconnection of a huge number of computer systems in a network. The cloud provider extends services to the cloud user through virtualization technologies. Client credentials are stored on the company's servers at a remote location. Every action initiated by the client is executed in a distributed environment and, as a result, the complexity of maintaining the software or infrastructure is minimized. The services provided by cloud providers are classified into three types: Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), and Platform-as-a-Service (PaaS). Cloud computing lets the client store information on a remote site, so there is no need for storage infrastructure on the client side. A web browser acts as the interface between the client and the remote machine to access data by logging into his/her account. The intent of every customer is to use cloud resources at a low cost with high efficiency in terms of time and space. If several cloud
service providers offer almost the same type of services, customers or users will have difficulty in choosing the right service provider. To handle this situation of negotiating with multiple service providers, Cloud Broker Services (CBS) play a major role as a middleware. The cloud broker acts as a negotiator between the cloud user and the cloud service provider. Initially, the cloud provider registers its offerings and their specifications with the cloud broker, and the user submits a request to the broker. Based on the type of service and the requirements, the best provider is suggested to the cloud user. Upon confirmation from the user, the broker establishes the connection to the provider.

2. CLOUD BROKERAGE SERVICES (CBS)
Foued Jrad et al [1] introduced the Intercloud Gateway and the Open Cloud Computing Interface (OCCI) cloud API to overcome the lack of interoperability and the heterogeneity of providers. Cloud users cannot identify appropriate cloud providers with the assistance of existing Cloud Service Brokers (CSB). By implementing OCCI in the Intercloud Gateway, the gateway acts as a server for the service providers, while OCCI acts as a client in the abstract cloud API. The Cloud Broker satisfies both the functional and non-functional requirements of users through Service Level Agreements (SLA). The Intercloud Gateway acts as a front end for the cloud providers and interacts with the cloud broker. Figure 2.1 shows a generic architecture of the service broker.

Figure 2.1. A generic architecture for the Cloud Service Broker
[The figure shows the user interacting with the Cloud Service Broker (GUI/UI, workflow engine, identity manager, persistence, SLA manager, match maker, monitoring and discovery manager, deployment manager and abstract cloud API), which connects through Intercloud Gateways to the vendor cloud platforms of Cloud Provider A and Cloud Provider B; direct access without the broker is also indicated.]
The Identity Manager handles user authentication through a unique ID. The SLA Manager is responsible for SLA negotiation, creation and storage. The Match Maker takes care of selecting suitable resources for cloud users. The Monitoring and Discovery Manager monitors SLA metrics across the various resource allocations. The Deployment Manager is in charge of deploying services to the cloud user. The abstract cloud API provides interoperability. The user submits a request to the SLA Manager, which parses the request into SLA parameters and passes them to the Match Maker. By applying a matching algorithm, the Match Maker finds the best suited solution and the response is passed to the user. Upon user acceptance, a connection is provided by the service provider.

Table 2.1. Sample SLA parameters for IaaS

Functional     Non-functional
CPU speed      Response time
OS type        Completion time
Storage size   Availability
Image URL      Budget
Memory size    Data transfer time

Through this architecture interoperability is achieved, but it cannot assure the best-matching cloud service provider for the client.

Tao Yu and Kwei-Jay Lin [2] introduce a Quality of Service (QoS) broker module between the cloud service providers and the cloud users. The role of the QoS broker is to collect information about active servers, suggest an appropriate server for clients, and negotiate with servers to get QoS agreements. The QoS information manager collects the information required for QoS negotiation and analysis. It checks the Universal Description Discovery and Integration (UDDI) registry to get the server information and contacts the servers for QoS information such as their QoS load and offered service levels. After receiving the client's functional and QoS requirements, the QoS negotiation manager searches through the broker's database to look for qualified services. If more than one candidate is found, a decision algorithm is used to select the most suitable one. The QoS information from both the server and the QoS analyzer is used to make the decision. With this architecture the load balancing factor of the servers is maintained for a large number of users, but it is not efficient in delivering the best suited provider to the client.
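In both brokers the core step is the same: filter candidate providers on the functional requirements and rank the remaining candidates on the non-functional ones. The sketch below is purely illustrative; the attribute names loosely follow Table 2.1, but the scoring rule and data structures are assumptions, not details from [1] or [2].

```python
# Hypothetical provider offers, keyed by functional/non-functional SLA parameters.
providers = {
    "ProviderA": {"cpu_ghz": 2.4, "os": "linux", "storage_gb": 500,
                  "availability": 0.999, "response_ms": 120, "price": 0.10},
    "ProviderB": {"cpu_ghz": 3.0, "os": "linux", "storage_gb": 250,
                  "availability": 0.995, "response_ms": 80,  "price": 0.14},
}

request = {"cpu_ghz": 2.0, "os": "linux", "storage_gb": 200, "budget": 0.15}

def matches_functional(offer, req):
    """Hard constraints: the offer must meet every functional requirement."""
    return (offer["cpu_ghz"] >= req["cpu_ghz"]
            and offer["os"] == req["os"]
            and offer["storage_gb"] >= req["storage_gb"]
            and offer["price"] <= req["budget"])

def score_non_functional(offer):
    """Soft ranking: higher availability and lower response time are better."""
    return offer["availability"] - 0.001 * offer["response_ms"]

candidates = {name: o for name, o in providers.items() if matches_functional(o, request)}
best = max(candidates, key=lambda name: score_non_functional(candidates[name]))
print("Suggested provider:", best)
```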
Figure 2.2. QoS-based architecture
[The figure shows the client submitting a QoS request to the QoS broker, which consults its QoS information manager and database, the UDDI server, the QoS negotiation manager and the QoS analyzer before returning a QoS result; the selected web services enforce QoS admission and feed QoS information back to the broker.]

The HQ and RQ allocation algorithms are proposed to maximize server resource utilization while minimizing QoS instability for each client. The HQ allocation algorithm evenly divides the available resource among the requesting clients based on the number of active clients. RQ assigns a different service level to each client based on its requirements.

Josef Spillner et al [3] propose subdividing resource reservations into either serial or parallel segments.

Figure 2.3. Nested cloud with virtual machines
[The figure shows recursive virtualization across three levels: level L0 (provider hardware), level L1 (broker/market VM running KVM with a KVM monitor, broker configurator, EC2 tools and minicom) and level L2 (user-provided VM). The broker configurator policies cover authentication as a user, loading images, when to switch VMs on and off, which resources to allocate and how much, and port forwarding.]
Nested virtualization provides the services to the cloud user. The outcome is a highly virtualizing cloud resource broker. The system supports hierarchically nested virtualization with dynamically reallocatable resources. A base virtual machine is dedicated to enabling the nested cloud; the other virtual machines, referred to as sub-virtual machines, run at a higher virtualization level. The nested cloud virtual machine is deployed by the broker and offers control facilities through the broker configurator, which turns it into a lightweight infrastructure manager. The proposed solution yields higher reselling power for unused resources, but the hardware cost of running the virtual machines will be high to obtain the desired performance.

Chao Chen et al [4] state the objectives of negotiation as: minimize price with guaranteed QoS within the expected timeline; maximize profit from the margin between the customer's financial plan and the provider's negotiated price; and maximize profit by accepting as many requests as possible to enlarge market share. The proposed automated negotiation framework uses a Software-as-a-Service (SaaS) broker, which is utilized as the storage unit for customers. This helps the user to save time when selecting among multiple providers. The negotiation framework assists the user in establishing a mutual agreement between provider and client through the SaaS broker. The main objective of the broker is to maintain the SLA parameters of the cloud providers and to suggest the best provider to the customer.

Figure 2.4. Negotiation framework
[The figure shows the customer agent interacting with the SaaS broker, whose coordinator agent uses a negotiation policy translator, negotiation engine and decision making system backed by policy, strategy and knowledge bases and an SLA template/generator to create and send SLAs to the SaaS/IaaS provider agents listed in a directory.]
The negotiation policy translator maps the customer's QoS parameters to the provider's specification parameters. The negotiation engine includes the workflows which use the negotiation policy during the negotiation process. The decision making system uses decision-making criteria to update the negotiation status. A minimum cost is incurred for resource utilization; however, renegotiation for dynamic customer needs is not addressed.

Wei Wang et al [5] proposed a new cloud brokerage service that reserves a large pool of instances from cloud providers and serves users with price discounts. A practical problem facing cloud users is how to minimize their costs by choosing among different pricing options based on their own demands. The broker optimally exploits both the pricing benefits of long-term instance reservations and multiplexing gains. A dynamic approach for the broker to make instant reservations with the objective of minimizing its service cost is achieved; this strategy relies on dynamic programming and related algorithms to quickly handle large demands.

Figure 2.5. Cloud broker model
[The figure shows multiple users served by the broker, which launches reserved and on-demand instances from IaaS cloud providers; the broker cost and user cost are contrasted with direct "on-demand" instance usage.]

A smart cloud brokerage service serves cloud user demands with a large pool of computing instances that are dynamically launched on-demand from IaaS clouds. Partial usage of a billing cycle incurs a full-cycle charge, which makes users pay for more than they actually use. The broker uses a single instance to serve many users by time-multiplexing usage, reducing the cost for cloud users.

Dharmesh Mistry [6] proposed a cloud-based analytics solution as a service from a cloud broker, which could considerably minimize costs for the client,
while assisting the Independent Software Vendor (ISV) to maximize profit. As data arrive, they are divided, an index is created, and finally they are mapped to the original values through analysis. Large organizations are purchasing such software as SaaS instead of acquiring and hosting the software internally, which is a challenge for ISVs that built their business on the traditional model. The cloud broker acts as middleware between the ISV and the cloud providers. The ISV composes solutions to meet customer demands from the existing services. The broker provides services such as entitlement, analytics, billing and payment, security and context provisioning. ISVs usually rely on per-module licensing models and software audits to confirm that the appropriate number of users access the modules and functions for which the customer has paid.

Figure 2.6. Mapping in the cloud broker
[The figure shows the broker's service pipeline: on-boarding, provisioning, metering, billing, payments and collections, analytics, and demand generation.]

An ISV can drive faster profit growth while maintaining margins, and respond to market demand more quickly.

Lori MacVittie [7] introduces the broker as a solution to integrate hybrid policy without losing control over services. The integration between the cloud and the datacenter is done with cloud broker integration at the process layer. Brokers deploy vast numbers of applications for customers through infrastructure defined by corporate-enforced policies. The identity broker module communicates with the datacenter through authorization and authentication mechanisms. The real-time implementation of a cloud broker is achieved with two types of architectures: the full-proxy broker and the half-proxy broker. In the full-proxy broker, requests are processed through tunneling, implemented in many ways such as a VPN. In the half-proxy broker, only validation of the request is done by the broker, and subsequent communication is established directly. This model defines how the request can be handled in
late binding. A cloud delivery broker can make decisions such as where to direct the user upon a request. A hybrid cloud must be able to describe capabilities such as bandwidth, location, cost and type of environment.

Sigma Systems [8] introduces a cloud service broker which is responsible for order management, provisioning, billing integration and Single Sign-On (SSO). In the proposed architecture, the Cloud Service Broker allows service providers to offer their own SLA, which provides a single source for all applications to customers. Providers can establish and grow a single, combined collection of services that matches their portfolio, and allow for unique groupings to meet their customers' needs. Cloud brokerage from Sigma Systems is available either as a managed service or deployed on premises.

Figure 2.7. Sigma Systems model
[The figure shows the Sigma Systems broker handling ordering, the enterprise product catalog, billing and single sign-on, and brokering off-net SaaS services (backup, office productivity, security, collaboration, CRM/SFA, financial) together with on-net services (video, VPN, managed voice, unified messaging, mobile, high-speed data) and on-net SaaS, PaaS and IaaS services under common service management.]

The Sigma model allows service providers to create single, highly attractive packages by combining high-speed data and other complex network services with business- and productivity-enhancing SaaS-based services.

Vordel [9] developed a cloud service broker to allow organizations to apply a layer of confidence to their cloud computing applications. It brokers the connection to the cloud infrastructure, applying governance controls for service usage and service uptime.
  • 35. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 10, No. 1. FEBRUARY 2014 33 Figure 2.8 Services provided by Vordel It records service type, time of day, and the identity of the user. All information sent to cloud services must be examined for disclosing data, in order to allow Data Loss Prevention (DLP). Caching protects the enterprise from inactivity linked with connecting to the cloud service. Service Level Agreement (SLA) monitoring observes the whole transaction throughput time. The Cloud Service Broker contains a pluggable structure which allows for modules to be added, such as modules to provide additional encryption algorithm. Apostol T. Vassilev [10] introduced personal brokerage of Web service access which becomes part of the Web authentication structure, by network smart cards. This allows new Web services based on their characteristic properties of essential resistance, tough cryptography, connectivity and computing power. To enhance network, smart-card capabilities, particularly in the serious area of human-to-card interaction evidence, to bring further accessibility and personalization to Web security and privacy. In Single- Sign-On (SSO) systems users attempt to access services offered by a connected service provider using a web browser on the client system. The provider redirects the service request by directing the user’s browser to the Identity Provider’s (IDP) authentication page. To facilitate the redirection,
  • 36. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 10, No. 1. FEBRUARY 2014 34 the service provider issues a ticket that unites the user’s digital identity once the authentication is complete. Figure 2.9 Personal Brokerage extensions of Federated service Users use an IDP enforced method for authentication to prove their identity. If the authentication is well, the IDP declares the user’s identity in the ticket sent back to the browser, which in turn sends it to the service provider. Users can then access the requested services. The existing IDP-enforced authentication method is by means of a user name and key. Because the entire united system of Web services only requires one username and password license, SSO systems are convenient for the user. At the equal time, such credentials become a major target for hackers because it gives them access to many private user resources at once. Presently network traffic between users’ browsers and remote servers is secured by ubiquitous standard security protocols for information exchange, based on Secure Socket Layer (SSL) and Transport Layer Security (TLS). Muhammad Zakarya and Ayaz Ali Khan [11] found that Distributed Denial of Service (DDoS) attack is identified as a major threat in present time, which we overcome by new cloud environment architecture and Anomaly Detection System (ADS). These ADS improve computation time, QoS and high availability. Each cloud is separated as regional areas known as GS. Each GS is protected by AS/GL. Developed ADS are installed in cloud node or AS and router. A tree is maintained at every router by making every
packet with a path modification strategy, so the attacking node is easily found. The ADS has two phases: detection of malicious flows, and a confirmation algorithm to drop the attack traffic or pass it. Randomness, or entropy, is given by

H(X) = - Σ_{x∈X} p(x) log p(x)    (2.1)

where 0 ≤ H(X) ≤ log(n) and p(x) is the probability of x,

p(x) = m_i / m    (2.2)

where m_i is the number of packets with value x and m is the total number of packets. The normalized entropy is calculated to get the overall probability of the captured packets in a specific time window:

Normalized entropy = H / log(n0)    (2.3)

For detection of a DDoS attack, a threshold value is decided. An edge router collects the traffic flow for a specific time window w. The probability p(x) is found for each packet node, and the link entropy of all active nodes is calculated separately. H(X) is then calculated for the router; if the normalized entropy is less than the threshold, a malicious attack flow is identified and the system is considered compromised. For confirmation of attack flows, a threshold value is decided and compared with the entropy rate.

Srijith K. Nair et al [12] describe the concepts of cloud bursting and cloud brokerage, and a brokerage framework based on the OPTIMIS service. When a private cloud needs to access an external cloud for a certain time for computation, the process is called cloud bursting. The internal cloud of the company needs to verify the SLA requirements to measure performance. For the cloud bursting environment, the architecture being developed by OPTIMIS requires the following capabilities: a common management interface, a set of monitoring tools, a global load balancer, and categorized providers. A cloud brokerage model was created by cloud service providers for the cloud management platform. The cloud management platform is responsible for activities such as policy enforcement, usage monitoring, network security and platform security. A cloud API mediates consumer interaction with the cloud broker. The SLA monitoring unit is responsible for monitoring all SLAs and violations. The identity and access module keeps records of serviced customers and generates one-time tokens. The audit unit inspects the broker platform and its capabilities. Risk management prioritizes risks based on events. Network/platform security provides overall security through an IDS. The user sends a storage request to the cloud portal. The portal then forwards the id and password to Identity and Access Management (IAM), which verifies them and grants access along with the access criteria. The cloud portal converts the identity and access rights into an external token, containing the criteria and the request, which is encrypted and sent
to the Broker IAM. The Broker IAM decrypts it using the portal's public key, verifies its integrity and in turn generates a one-time access token.

Figure 2.10. Functional requirements for a Cloud Service Broker
[The figure lists requirements such as APIs, deployment services, staging/pooling, service scaling, IAM, SLA monitoring, capability management and matching, audit, gateway/application firewalls, risk management, network/platform security, multi-cloud support, VM/service placement, security, compliance, performance, usage monitoring, SLA management, cost and IT policy enforcement.]

This token contains a Uniform Resource Identifier (URI) and is forwarded back to the portal, and the old token is discarded. The cloud portal decrypts it using the private key of the broker and forwards it to the respective user. The user sends data to the Application Programming Interface (API), which checks the validity of the token and grants access to upload the data. The service provider then returns the position of the uploaded data through a secret key. This ensures confidentiality and integrity.

Mark Shtern et al [13] described the AERIE architecture. When organizations move to public cloud infrastructure they have problems with control and security and need a suitable deployment model. This project suggests a reference architecture for a virtual private cloud built on cross-provider, on-demand compute instances, which reduces the level of trust placed in the infrastructure providers. An inner instance is started from an outer instance; together the inner and outer instances form a nested instance. An outer instance runs an agent which ensures that it has not been modified. These agents establish a connection with the controller using a novel key exchange algorithm. A standard security application is implemented to preserve the integrity of the outer instance. Traffic from the public internet is made to pass through a security bulwark. A load-balancing DNS service is capable of detecting and excluding inaccessible hosts from the available solution. Each instance has an image which contains an encrypted image to launch the inner instance. The Trusted
Instance Agent (TIA) conducts a key exchange with the controller to establish an HTTPS connection using the novel algorithm. The controller checks the validity of the certificate in the image. To maintain integrity it employs an Intrusion Detection System; if any violations are detected, the virtual channel is terminated.

Figure 2.11. AERIE architecture

Przemyslaw Pawluk et al [14] introduce a cloud broker service which enables the deployment and runtime management of a cloud application using multiple providers. The Service Measurement Index (SMI) is a possible approach to facilitate the comparison of cloud offerings. An attribute is expressed as a set of Key Performance Indicators (KPI) which specify the data requested from every metric. After the initial deployment, the decision to add or remove resources is made by the cloud manager. The application manager controls the runtime management of the application according to the model. A Resource Acquisition Decision (RAD) involves the selection of n resources from a set of m providers, which we will use as a running example. The Broker is responsible for solving the RAD problem. It must also connect to the set of selected providers and acquire the collection of resources. A Topology Descriptor File (TDF) is used to identify the application topology to be deployed on the cloud. The environment variables of each cloud provider are described in detail in the TDF. The chosen nodes are instantiated through a translation layer. The Cloud Manager and the Broker make use of monitoring information, the former to make ongoing elasticity decisions and the latter to assist in the decision process. The broker selects the set of all possible specifications that satisfy the objectives
  • 40. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 10, No. 1. FEBRUARY 2014 38 stated in the desired models named in the TDF. Next, as a result of multi- criteria optimization process, a set of equivalent specifications is selected. Figure 2.12 Cloud Management Frameworks From this set, one is selected and the appropriate instance is acquired from the provider. In the situation where there are no suitable specifications suits the objectives, the broker makes an attempt to relax objectives by identifying the closest specification in each direction. Next, the optimization step is performed over the resultant set of relaxed results. The RAD problem can be formulated as a multi-criteria optimization problem. Paul Hershey et al [15] presented System of Systems (SoS) method which is responsible for activities such as QoS monitoring, management and response for cloud providers that delivers computing as a service. Various metrics are considered to calculate performance and security of SoS. Delay is the sum of delays in lower level domain of cloud. There is an infrastructure component delay. Hence delay is given by Dsos = p1Dg+p2Db+p3DS+p4Di (2.4) Pi- parameter that is dependent on the infrastructure components used. Dj – delay experienced in each layer. Throughput at system level is defined as the number of transactions that are completed per unit time T1 = n x Transaction Throughput (2.5) TS = m x T1 (2.6) TB = q x TS (2.7)
where m, n and q are the numbers of transactions at the lower domain needed to complete a transaction at the higher domain. The authentication metric is a logical conjunction over each level of EMMRA:

A_SoS = A_G ∧ A_B ∧ A_S ∧ A_I    (2.8)

Table 2.2. Metrics categories

Category      Metric
Performance   Delay
              Delay Variation
              Throughput
              Information Overhead
Security      Authentication
              Authorization
              Non-repudiation
              Integrity
              Information Availability
              Certificate and Accreditation
              Physical Security

Authorization is a bottom-up metric and it is applied at each level. Authorization at the IaaS level can be given as

Auth_I = min {Π P_I}    (2.9)

where P_I is the permission to perform action I at the IaaS level. The min operator is used to indicate the least privilege level that is granted to the user.

3. CONCLUSIONS
The development of a cloud brokerage services framework is gaining momentum since its usage is pervasive in all verticals. The works so far do not consider the scenario of more than one cloud service provider offering the same level of requirements to the user. This scenario induces an ambiguity for the users in choosing an appropriate provider. The Cloud Broker Service acts on behalf of the user to choose a particular service provider for providing the service to the user. If the Cloud Broker Service becomes a standard middleware framework, many chores of the cloud service providers can be taken over by the CBS.
  • 42. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 10, No. 1. FEBRUARY 2014 40 REFERENCES [1] Foued Jrad, Jie Tao, Achim Streit, SLA Based Service Brokering in Intercloud Environments. Proceedings of the 2nd International Conference on Cloud Computing and Services Science, pp. 76-81, 2012. [2] Tao Yu and Kwei-Jay Lin, The Design of QoS Broker Algorithms for QoS-Capable Web Services, Proceedings of IEEE International Conference on e-Technology, e- Commerce and e-Service, pp. 17-24, 2004. [3] Josef Spillner, Andrey Brito, Francisco Brasileiro, Alexander Schill, A Highly- Virtualising Cloud Resource Broker, IEEE Fifth International Conference on Utility and Cloud Computing, pp.233-234, 2012. [4] Linlin Wu, Saurabh Kumar Garg, Rajkumar Buyya, Chao Chen, Steve Versteeg, Automated SLA Negotiation Framework for Cloud Computing, 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, pp.235-244, 2013. [5] Wei Wang, Di Niu, Baochun Li, Ben Liang, Dynamic Cloud Resource Reservation via Cloud Brokerage, Proceedings of the 33rd International Conference on Distributed Computing Systems (ICDCS), Philadelphia, Pennsylvania, July 2013. [6] Dharmesh Mistry, Cloud Brokers can help ISVs Move to SaaS, Cognizant 20-20 Insight, and June 2011. [7] Lori MacVittie, Integrating the Cloud: Bridges, Brokers, and Gateways, 2012. [8] Sigma Systems, Cloud Brokerage: Clarity to Cloud Efforts, 2013. [9] Vordel white papers, Cloud Governance in the 21st century, 2011. [10]Apostol T. Vassilev, Bertrand du Castel, Asad M. Ali, Personal Brokerage of Web Service Access IEEE Security & Privacy, vol. 5, no. 5, pp. 24-31, Sept.-Oct. 2007. [11]Muhammad Zakarya & Ayaz Ali Khan, Cloud QoS, High Availability & Service Security Issues with Solutions, International Journal of Computer Science and Network Security, vol.12 No.7, July 2012. [12]Srijith K. Nair, Sakshi Porwal, Theo Dimitrakos, Ana Juan Ferrer, Johan Tordsson, Tabassum Sharif, Craig Sheridan, Muttukrishnan Rajarajan, Afnan Ullah Khan, Towards Secure Cloud Bursting, Brokerage and Aggregation, Eighth IEEE European Conference on Web Services, pp.189-196, 2010. [13]Shtern. M, Simmons. B, Smit. M, Litoiu. M, An architecture for overlaying private clouds on public providers, Eighth International Conference and Workshop on Systems Virtualization Management, pp.371, 377, 22-26 Oct. 2012. [14]Przemyslaw Pawluk, Bradley Simmons, Michael Smit, Marin Litoiu, Serge Mankovski, Introducing STRATOS: A Cloud Broker Service, IEEE Fifth International Conference on Cloud Computing, pp.891-898, 2012. [15]Hershey. P, Rao. S,Silio. C.B., Narayan. A, System of Systems to provide Quality of Service monitoring, management and response in cloud computing environments, 7th International Conference on System of Systems Engineering (SoSE), vol., no., pp.314, 320, 16-19 July 2012. This paper may be cited as: Akilandeswari, J. and Sushanth, C., 2014. A Review of Literature on Cloud Brokerage Services. International Journal of Computer Science and Business Informatics, Vol. 10, No. 1, pp. 25-40.
Improving Recommendation Quality with Enhanced Correlation Similarity in Modified Weighted Sum

Khin Nila Win
Faculty of Information and Communication Technology,
University of Technology Yatanarpon Cyber City

Thiri Haymar Kyaw
Faculty of Information and Communication Technology,
University of Technology Yatanarpon Cyber City

ABSTRACT
Recommender systems aim to help users find the items of interest to them from large data collections with little effort. Such systems use various recommendation approaches to provide increasingly accurate recommendations. Among them, the collaborative filtering (CF) approach is the most widely used in recommender systems. Of the two types of CF system, item-based CF systems overtake traditional user-based CF systems since they can overcome the scalability problem of user-based CF. An item-based CF system computes the prediction of the user's taste on new items based on the item similarity derived from the explicit ratings of the users; it predicts the rating on new items based on the historical ratings of the users. The proposed system improves the item-based collaborative filtering approach by enhancing the rating-based similarity of items with the demographic similarity of the items. It modifies one of the prediction methods, the weighted sum, by weighting with the enhanced similarity of the items. This system intends to offer better prediction quality than other approaches and to produce better recommendation results as a result of considering item-demographic similarity together with the similarity derived from the explicit ratings of the users.

Keywords
Recommender systems, collaborative filtering approach, item-based CF system, user-based CF systems, demographic similarity, weighted sum.

1. INTRODUCTION
With the explosive growth of knowledge available on the World Wide Web, which lacks an integrated structure or schema, it becomes much more difficult for users to access relevant information efficiently. Meanwhile, the substantial increase in the number of websites presents a challenging task for webmasters: organizing the contents of websites to cater to users' needs. Web usage mining has seen a rapid increase in interest, from both the research and practice communities. The
motivation of web usage mining is to discover users' access models automatically and quickly from the vast amount of Web log data, such as frequent access paths, frequent access page groups and user clusters. More recently, Web usage mining has been proposed as an underlying approach for Web personalization. The goal of personalization based on Web usage mining is to recommend a set of objects to the current (active) user, possibly consisting of links, ads, text, products, or services, tailored to the user's perceived preferences as determined by the matching usage patterns [1].

2. MEMORY-BASED TECHNIQUES IN RECOMMENDER SYSTEMS
Memory-based techniques continuously analyze all user or item data to calculate recommendations, and can be classified into the following main groups: collaborative filtering, content-based techniques, and hybrid techniques [2]. While content-based techniques base their recommendations on individual information and ignore contributions from other users, collaborative filtering systems emphasize the preferences of similar users or items for their recommendations. Since the proposed system uses collaborative filtering techniques, explanations of the other techniques are omitted in this paper and the analysis of collaborative filtering techniques is emphasized.

2.1 Collaborative Filtering Techniques (CF)
This approach recommends items that were used by similar users in the past; it bases its recommendations on social, community-driven information (e.g., user behavior such as ratings or implicit histories).

Table 1. Special types and special characteristics of memory-based CF techniques

Special types: neighborhood-based CF; item-based / user-based top-N recommendations.
Pros: easy to implement; easy to add new data; no need to consider the content of the items in the recommendation.
Cons: reliant on human ratings; performance may suffer when the rating data are sparse; problems in recommending for new users and items; scalability limitations for large datasets.
Memory-based collaborative filtering techniques have special characteristics and representative techniques. Table 1 describes the pros and cons of memory-based CF techniques [2].

In user-based CF algorithms, a set of k users similar to the target user is first found, based on correlations or similarities between the user records and the target user. Then, a prediction value is produced for the target user on unrated items based on the similar users' ratings. This approach suffers from a scalability problem in large-scale recommender systems. In contrast, item-based CF algorithms attempt to find k similar items that are co-rated similarly by different users; the similarity computations are performed among the items. Thus, item-based CF algorithms avoid the bottleneck of user-based algorithms by first considering the relationships among items. For a target item, predictions can be generated by taking a weighted average of the target user's ratings on these similar items [3, 6].

2.1.1 Similarity Computation
Most recommender systems use one of three similarity computation techniques: cosine-based similarity, correlation-based similarity, and adjusted cosine similarity. The proposed system uses adjusted cosine similarity for the similarity computation.

2.1.1.1 Adjusted Cosine Similarity Vs. Modified Adjusted Cosine Similarity
1) Adjusted Cosine Similarity
Computing the similarity value using the basic cosine measure in an item-based recommendation system has one important weakness: the differences in rating scale between different users are not taken into account. The adjusted cosine similarity subtracts the corresponding user average from each co-rated pair to offset this drawback. However, it still has one drawback: the different rating styles of the different users are not taken into account. The adjusted cosine similarity subtracts user u's average rating from u's ratings of items i and j respectively, and then computes the similarity value as shown in Eq. (1):

sim(i,j) = Σ_{u∈U} (R_u,i - R̄_u)(R_u,j - R̄_u) / ( sqrt(Σ_{u∈U} (R_u,i - R̄_u)²) · sqrt(Σ_{u∈U} (R_u,j - R̄_u)²) )    (1)

In Eq. (1), R̄_u is the average value of the u-th user's ratings [4].

2) Modified Adjusted Cosine Similarity
Adjusted cosine similarity still ignores the casual rating styles of the users. For this reason, the proposed system improves the computation by normalizing the rating values.

Table 2. Enhanced correlation similarity values vs. simple modified adjusted cosine similarity values

Modified Adjusted Cosine    Demographic (Content) Similarity    Enhanced Correlation Similarity
Similarity (sim_i,j)        of Items (dem_cor_ij)               (enh_cor_ij = sim_i,j + sim_i,j * dem_cor_ij)
0.5                         0.2                                 0.6
0.3                         0.4                                 0.42
0.6                         0.2                                 0.72
0.4                         0.8                                 0.72
0.5                         0.5                                 0.75
0.8                         0.1                                 0.88
0.7                         0.3                                 0.91

For example, suppose the system's rating range is 1 to 5, user i gives a rating of 3 to his/her most-liked item t, while another user j gives a rating of 5 to his/her most-liked item t. In such a case, the system cannot recognize that item t is user i's most-liked item, while it does recognize that the item is user j's most-liked item. In other words, the system cannot determine a user's highest rating, and cannot identify the user's favorites when the user's highest rating is not the highest rating of the system. Therefore, the system needs to normalize the rating style to accurately determine what the user likes most and least, even when the users have different rating styles. The proposed system applies the normalized rating to overcome this problem. The proposed method, modified adjusted cosine similarity, can reduce the system's misunderstanding of the users' likes and dislikes. Eq. (2) denotes the computation of the similarity value by the modified adjusted cosine similarity:

sim(i,j) = Σ_{u∈U} (NR_u,i - R̄_u)(NR_u,j - R̄_u) / ( sqrt(Σ_{u∈U} (NR_u,i - R̄_u)²) · sqrt(Σ_{u∈U} (NR_u,j - R̄_u)²) )    (2)
In Eq. (2), R̄_u is the average value of the u-th user's ratings and NR_u,i is the normalized rating, computed as

NR_u,i = (HS / HR_u) · R_u,i    (3)

In Eq. (3), HS means the highest rating scale of the system and HR_u means the highest rating scale of the current user. Considering the topic similarity of the items, the enhanced correlation similarity is

enh_cor_ij = sim_i,j + (sim_i,j · dem_cor_ij)

where sim_i,j means the similarity of item i and item j from the adjusted cosine similarity after normalizing the users' rating behaviour, and dem_cor_ij means the similarity of item i and item j according to topic similarity. Table 2 describes the way the enhanced correlation similarity is computed and also demonstrates how the demographic similarity improves the modified adjusted similarity value.

2.1.2 Prediction Computation
To produce a recommendation, recommender systems always compute the prediction value first and then recommend the items according to the prediction values. Among the prediction techniques, the weighted sum is one of the most widely used. However, it uses only the rating-based similarity of the items. The proposed system enhances the weighted sum technique by using the enhanced correlation similarity instead of the adjusted cosine similarity value. The enhanced correlation similarity is the similarity value in which the modified adjusted cosine similarity is enhanced with the demographic similarity of the two items.

2.1.2.1 Weighted Sum Vs. Modified Weighted Sum
1) Weighted Sum
The prediction value of the weighted sum technique is computed as the sum of the ratings given by the user on the items similar to i. Each rating is weighted by the corresponding similarity s_i,j between items i and j. Eq. (4) denotes the formula for prediction computation with the weighted sum:

P_u,i = Σ_{all similar items, N} (sim_i,N · R_u,N) / Σ_{all similar items, N} (sim_i,N)    (4)
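Before turning to the modified weighted sum, a small illustrative sketch of Eqs. (2)-(3) and the enhanced correlation similarity is given below. The rating matrix, the demographic similarity value and the helper names are hypothetical; only the formulas follow the paper.

```python
import math

HS = 5  # highest rating scale of the system

def normalize(ratings_by_user):
    """Eq. (3): NR_u,i = (HS / HR_u) * R_u,i, with HR_u the user's own highest rating."""
    normalized = {}
    for user, ratings in ratings_by_user.items():
        hr_u = max(ratings.values())
        normalized[user] = {item: (HS / hr_u) * r for item, r in ratings.items()}
    return normalized

def modified_adjusted_cosine(nr, item_i, item_j):
    """Eq. (2): adjusted cosine over normalized ratings of users who rated both items."""
    co_raters = [u for u in nr if item_i in nr[u] and item_j in nr[u]]
    num = den_i = den_j = 0.0
    for u in co_raters:
        avg_u = sum(nr[u].values()) / len(nr[u])
        di, dj = nr[u][item_i] - avg_u, nr[u][item_j] - avg_u
        num += di * dj
        den_i += di * di
        den_j += dj * dj
    return num / (math.sqrt(den_i) * math.sqrt(den_j)) if den_i and den_j else 0.0

def enhanced_correlation(sim_ij, dem_cor_ij):
    """enh_cor_ij = sim_ij + sim_ij * dem_cor_ij (cf. Table 2)."""
    return sim_ij + sim_ij * dem_cor_ij

# Hypothetical data: two users with different rating styles, three items.
ratings = {"u1": {"i": 3, "j": 2, "k": 1}, "u2": {"i": 5, "j": 4, "k": 2}}
nr = normalize(ratings)
sim = modified_adjusted_cosine(nr, "i", "j")
print(enhanced_correlation(sim, 0.2))
```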
2) Modified Weighted Sum
In the modified weighted sum of Eq. (5), each normalized rating NR_u,N of Eq. (6) is weighted by the enhanced correlation similarity enh_cor_i,N. The prediction P_u,i is denoted as

P_u,i = Σ_{all similar items, N} (enh_cor_i,N · NR_u,N) / Σ_{all similar items, N} (enh_cor_i,N)    (5)

In Eq. (5),

NR_u,N = (HS / HR_u) · R_u,N    (6)

Modifying the weighted sum with the enhanced correlation similarity makes the prediction more accurate than in the existing systems. Systems that consider the item demographic data produce a prediction quality more than 9% higher than systems which do not consider the item demographic data.

3. RELATED WORKS
Recommendation techniques have been applied in many areas since the mid-1990s. Some researchers developed recommender systems for songs; popular music recommendation systems of the early 2000s are [7], [8], [9], [10]. In e-learning systems, web mining techniques are used to learn all available information about learners and build models to apply in personalization. A detailed description of using and applying educational data mining was given in (Romero et al., 2006) and (Romero et al., 2007) [11]. Many resources and supporting techniques such as [12], [13] have been developed for recommendation and personalization. There have been many collaborative systems developed in academia and industry. The Grundy system [14] was the first recommender system; it proposed to use stereotypes as a mechanism for building models of users based on a limited amount of information about each individual user. Later on, the Tapestry system relied on each user to identify like-minded users manually [15]. GroupLens [16, 17], Video Recommender [18], and Ringo [19] were the first systems to use collaborative filtering algorithms to automate prediction.

4. REAL RECOMMENDER SYSTEM
Most of the earlier learning-resource recommender systems have problems in determining the recommended pages accurately since they
ignore the rating style of the current user. The proposed system, the Recommender System for Resources and Educational Assistants for Learners, overcomes this challenge by normalizing the current user's rating style. In the similarity computation, the system considers the rating similarity together with the topic similarity of the resource pages. To avoid the cold-start problem for users that earlier systems encountered, the proposed system uses stereotypes, or demographic CF. As a result, the system takes advantage not only of item-based CF but also of stereotype (demographic) CF. Moreover, the system avoids the scalability and quality bottleneck of user-based CF since it uses item-based collaborative filtering techniques. Modifying the adjusted cosine similarity with the normalized ratings of the users and modifying the weighted sum with the enhanced correlation similarity not only make it possible to determine accurately what the user likes most, but also produce higher prediction quality than systems which do not consider the item demographic data and only emphasize the ratings of the users. The system can reduce the mean absolute error (MAE) between the predicted ratings and the actual ratings of the users due to the advantages of the modified adjusted cosine similarity and the modified weighted sum.

5. CASE STUDY OF RESOURCES AND EDUCATIONAL ASSISTANTS RECOMMENDATION
The following tables show the case study of resources and educational assistants recommendation. Table 3 shows, in its first column, all the links the current user u has rated; the links in the second column are the links to be predicted for the current user, since they are the links the current user has not rated.

Table 3. The links which the current user has rated, and other links which the current user has not rated but other users have rated
(Columns: the links the current user has rated / the links the current user has not rated but other users have rated)
IEEE seminar topics on networking 2011-2012 Social Networking Electronics & Communication Project Topics LAN Monitoring and Controlling Network Books of Free Computer Books LAN & WAN IPv6 JavaWorld: Solutions for Java Developers Mobile Java Core Java
The data in Table 4 describe the co-rated links for each of the links to be predicted. Figure 1 shows that, among the links co-rated with the predicted link LAN & WAN, four are links the current user has already rated while the other three are not. In Figure 2, there are three co-rated links the current user has already rated and four that he or she has not. Unfortunately, there are no co-rated links that the current user has rated in Figures 3, 4 and 5. According to this result, these three links are unlikely to be of interest to the current user. Finally, the system recommends the two links LAN & WAN and IPv6 according to the prediction values.

Table 4. Predicted links with their similar links
(Columns: the links to predict for the current user / the links similar to the link to be predicted)
LAN & WAN Social Networking LAN Monitoring and Controlling Network Books of Free Computer Books Unified Communications of Infoworld Networking of Infoworld Social Hubs, IPv6 IPv6 Network Books of Free Computer Books Social Networking IEEE seminar topics on networking 2011-2012 LAN & WAN Mobile Java Java & XML Java Security JavaWorld: Solutions for Java Developers Core Java Java & XML Web Services & SOAs Swing/GUI Programming Java Security Mobile Java Core Java Java Security JavaWorld: Solutions for Java Developers LAN & WAN Network Books of Free Computer Books Core Java Mobile Java Swing/GUI Programming Docjar Program With Java
Fig. 1 - Fig. 5. Co-rated links for the respective predicted links
Fig. 6. Recommended links for the current user
6. EVALUATION OF THE SYSTEM
The recommender system can be evaluated by comparing its recommendations with a test set of known user ratings. Such systems are measured using predictive accuracy metrics [5, 6], where the predicted ratings are directly compared with the actual user ratings. The most commonly used metric is the Mean Absolute Error (MAE), which is the average absolute difference between the predicted and actual ratings. Eq. (7) denotes the computation of the MAE value:

MAE = Σ_{u,i} |P_u,i - r_u,i| / N    (7)

In Eq. (7), P_u,i is the predicted rating of user u on item i, r_u,i is the actual rating of user u on item i, and N is the number of ratings in the test set. The proposed system can reduce the MAE by applying both the demographic correlation and the rating similarity of items.

6.1.1 Comparison of MAE Values
The following table compares the MAE of the existing system, which uses the adjusted cosine for similarity computation and the weighted sum for prediction, with that of the proposed system.

Table 5. Comparison of MAE values

MAE for existing system (adjusted cosine + weighted sum)   MAE for proposed system
1.48                                                        0.68
1.6                                                         1.045
2.987                                                       2.635
1.96                                                        1.93
1.92                                                        1.87
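To tie the pieces together, the sketch below evaluates the modified weighted sum of Eqs. (5)-(6) with the MAE of Eq. (7) on a small hypothetical test set; the item neighbourhoods, similarity values and ratings are invented for illustration and are not from the paper's dataset.

```python
HS = 5  # highest rating scale of the system

def predict_modified_weighted_sum(user_ratings, neighbours):
    """Eqs. (5)-(6): normalized ratings on similar items, weighted by enh_cor."""
    hr_u = max(user_ratings.values())            # HR_u: the user's own highest rating
    num = den = 0.0
    for item_n, enh_cor in neighbours.items():
        if item_n in user_ratings:
            nr_un = (HS / hr_u) * user_ratings[item_n]   # Eq. (6)
            num += enh_cor * nr_un
            den += enh_cor
    return num / den if den else 0.0

def mean_absolute_error(predicted, actual):
    """Eq. (7): average absolute difference between predicted and actual ratings."""
    return sum(abs(p - r) for p, r in zip(predicted, actual)) / len(predicted)

# Hypothetical user profile and neighbourhoods of two target items.
user_ratings = {"n1": 3, "n2": 2, "n3": 3}
neighbourhoods = {"t1": {"n1": 0.75, "n2": 0.6},
                  "t2": {"n2": 0.42, "n3": 0.88}}
actual = {"t1": 4, "t2": 5}

predictions = {t: predict_modified_weighted_sum(user_ratings, nb)
               for t, nb in neighbourhoods.items()}
print(predictions)
print(mean_absolute_error(list(predictions.values()), list(actual.values())))
```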