PaaS Billing Model for Cloud Computing Management
For more details, feel free to contact us at any time.
Ph: 9841103123, 044-42607879, Website: http://www.tsys.co.in/
Mail Id: tsysglobalsolutions2014@gmail.com.
IEEE TRANSACTIONS ON CLOUD COMPUTING 2016 TOPICS
A Platform as a Service Billing Model for Cloud Computing Management Approaches
Abstract - Platform as a Service (PaaS) billing requires an effective strategy. In this paper
we conduct a literature review and propose a new billing model for a PaaS provider. Our
billing model allows a PaaS provider to charge clients under several policies, ranging from
fixed plans to fully pay-per-use. We automated the billing model in a monitoring and
management software tool. The model and the tool were validated through a case study in a
software development company. The results indicate that our model is useful and preferable
to current billing policies and can be used in PaaS management.
IEEE Latin America Transactions (Jan. 2016)
Flexible and Fine-Grained Attribute-Based Data Storage in Cloud Computing
Abstract - With the development of cloud computing, outsourcing data to cloud servers has
attracted a lot of attention. To guarantee security and achieve flexible, fine-grained file
access control, attribute-based encryption (ABE) was proposed and used in cloud storage
systems. However, user revocation is the primary issue in ABE schemes. In this article, we
provide a ciphertext-policy attribute-based encryption (CP-ABE) scheme with efficient user
revocation for cloud storage systems. The issue of user revocation can be solved efficiently
by introducing the concept of a user group. When any user leaves, the group manager updates
the private keys of all users except those who have been revoked. Additionally, the CP-ABE
scheme has a heavy computation cost, which grows linearly with the complexity of the access
structure. To reduce the computation cost, we outsource the high computation load to cloud
service providers without leaking file content or secret keys. Notably, our scheme can
withstand collusion attacks performed by revoked users cooperating with existing users. We
prove the security of our scheme under the divisible computation Diffie-Hellman (DCDH)
assumption. The results of our experiments show that the computation cost for local devices
is relatively low and can be constant. Our scheme is suitable for resource-constrained devices.
IEEE Transactions on Services Computing (January 2016)
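The group re-keying idea in the abstract can be illustrated with a toy sketch. This is not the paper's pairing-based CP-ABE; the class, its symmetric random keys, and the re-key-on-revoke logic are all illustrative assumptions showing only the bookkeeping: remaining members get fresh keys while the revoked user keeps stale material.

```python
import secrets

class GroupManager:
    """Bookkeeping sketch of revocation by re-keying: remaining members
    receive fresh keys while the revoked user keeps only stale material.
    (Real CP-ABE key update is pairing-based and is not shown here.)"""
    def __init__(self, users):
        self.member_keys = {u: secrets.token_hex(16) for u in users}

    def revoke(self, user):
        self.member_keys.pop(user, None)
        # Re-key every remaining member so old keys stop working.
        for u in self.member_keys:
            self.member_keys[u] = secrets.token_hex(16)

    def can_decrypt(self, user):
        return user in self.member_keys

gm = GroupManager(["alice", "bob", "carol"])
gm.revoke("bob")
```

After revocation, only the remaining members hold usable keys, which is the property the scheme's collusion resistance builds on.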
Knowledge-Based Resource Allocation for Collaborative Simulation Development in a
Multi-tenant Cloud Computing Environment
Abstract - Cloud computing technologies have enabled a new paradigm for advanced product
development powered by the provision and subscription of computational services in a multi-
tenant distributed simulation environment. The description of computational resources and their
optimal allocation among tenants with different requirements holds the key to implementing
effective software systems for such a paradigm. To address this issue, a systematic framework
for monitoring, analyzing and improving system performance is proposed in this research.
Specifically, a radial basis function neural network is established to transform simulation tasks
with abstract descriptions into specific resource requirements in terms of their quantities and
qualities. Additionally, a novel mathematical model is constructed to represent the complex
resource allocation process in a multi-tenant computing environment by considering priority-
based tenant satisfaction, total computational cost and multi-level load balance. To achieve
optimal resource allocation, an improved multi-objective genetic algorithm is proposed based on
the elitist archive and the K-means approaches. As demonstrated in a case study, the proposed
framework and methods can effectively support the cloud simulation paradigm and efficiently
meet tenants' computational requirements in a distributed environment.
IEEE Transactions on Services Computing (January 2016)
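The radial basis function mapping from abstract task descriptions to concrete resource quantities can be sketched minimally. The centers, weights, and single-feature input below are hypothetical; the paper trains a full RBF neural network on real task data.

```python
import math

def rbf_predict(x, centers, weights, gamma=1.0):
    """Minimal radial basis function regressor: the output is a weighted
    sum of Gaussian bumps placed at the given centers."""
    return sum(w * math.exp(-gamma * (x - c) ** 2)
               for c, w in zip(centers, weights))

# Hypothetical mapping: abstract task-complexity score -> CPU cores needed.
centers = [1.0, 2.0, 3.0]   # representative task profiles
weights = [2.0, 4.0, 8.0]   # resource quantity tied to each profile
cores = rbf_predict(2.0, centers, weights)
```

An input exactly at a center recovers roughly that center's weight, with nearby profiles contributing smoothly, which is what makes RBF networks suitable for interpolating between known task profiles.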
Wireless Resource Scheduling Based on Backoff for Multi-user Multi-service Mobile Cloud
Computing
Abstract - Mobile cloud computing (MCC) can significantly improve the processing/storage
capacity and standby time of mobile terminals (MTs) by migrating data processing and storage to
remote cloud. However, due to the wireless resource limitations of access points/base stations,
data streaming of MCC suffers poor quality-of-service (QoS) in multi-user multi-service
scenarios, such as long buffering time and intermittent disruptions. In this paper, we propose a
Backoff based Wireless Resource Scheduling (BWRS) scheme, in which real-time services have
higher priority than non-real-time services. BWRS can improve the QoS of real-time streams and
overall performance of mobile cloud computing networks. We formulate an M/M/1 queueing
model and propose a Queueing-Delay-Optimal Control (QDOC) algorithm to minimize the
average queueing delay of non-real-time services. Furthermore, a Delay-Constrained Control
(DCC) algorithm is developed not only to minimize the queueing delay of non-real-time services
of multi-service users, but also to support users' non-real-time services under delay constraints.
The simulation results show that the proposed scheme can minimize the average queueing delay
while still meeting the delay requirement, and can significantly improve blocking probability and
channel utilization.
IEEE Transactions on Vehicular Technology (February 2016)
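The average queueing delay that a QDOC-style algorithm minimizes has a standard closed form for an M/M/1 queue, Wq = ρ/(μ − λ) with ρ = λ/μ. A minimal sketch of that formula (the paper's control algorithm, which tunes the rates, is not reproduced here):

```python
def mm1_avg_queueing_delay(arrival_rate, service_rate):
    """Mean waiting time in queue for an M/M/1 system:
    Wq = rho / (mu - lambda), with rho = lambda / mu.
    Only valid for a stable queue (lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: need arrival rate < service rate")
    rho = arrival_rate / service_rate
    return rho / (service_rate - arrival_rate)

# e.g. 8 requests/s served at 10 requests/s -> Wq = 0.8 / 2 = 0.4 s
delay = mm1_avg_queueing_delay(8.0, 10.0)
```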
Publicly Verifiable Inner Product Evaluation over Outsourced Data Streams under
Multiple Keys
Abstract - Uploading data streams to a resource-rich cloud server for inner product evaluation,
an essential building block in many popular stream applications (e.g., statistical monitoring), is
appealing to many companies and individuals. On the other hand, verifying the result of the
remote computation plays a crucial role in addressing the issue of trust. Since the outsourced data
collection likely comes from multiple data sources, it is desired for the system to be able to
pinpoint the originator of errors by allotting each data source a unique secret key, which requires
the inner product verification to be performed under any two parties' different keys. However,
the present solutions either depend on a single-key assumption or on powerful yet practically
inefficient fully homomorphic cryptosystems. In this paper, we focus on the more
challenging multi-key scenario where data streams are uploaded by multiple data sources with
distinct keys. We first present a novel homomorphic verifiable tag technique to publicly verify
the outsourced inner product computation on the dynamic data streams, and then extend it to
support the verification of matrix product computation. We prove the security of our scheme in
the random oracle model. Moreover, the experimental results also show the practicality of our
design.
IEEE Transactions on Services Computing (February 2016)
Energy Efficient Resource Allocation and User Scheduling for Collaborative Mobile
Clouds with Hybrid Receivers
Abstract- In this paper, we study the resource allocation and user scheduling algorithm for
minimizing the energy cost of data transmission in the context of OFDMA collaborative mobile
cloud (CMC) with simultaneous wireless information and power transfer (SWIPT) receivers. The
CMC, which consists of several collaborating mobile terminals (MTs), offers one potential
solution for downlink content distribution and energy consumption reduction. Previous work on
the design of CMC systems has mainly focused on cloud formulation or energy efficiency (EE)
investigation, while how to allocate radio resources and schedule user transmissions has
received little attention. With the objective of minimizing the system energy consumption, an
optimization problem that jointly considers subchannel assignment, power allocation and user
scheduling is presented. We
propose different algorithms to address the formulated problem based on the convex
optimization technique. Simulation results demonstrate that the proposed user scheduling and
resource allocation algorithms can achieve significant EE performance.
IEEE Transactions on Vehicular Technology (February 2016)
Multicore-aware virtual machine placement in cloud data centers
Abstract - Finding the best way to map virtual machines (VMs) to physical machines (PMs) in a
cloud data center is an important optimization problem, with significant impact on costs,
performance, and energy consumption. In most situations, the computational capacity of PMs
and the computational load of VMs are a vital aspect to consider in the VM-to-PM mapping.
Previous work modeled computational capacity and load as one-dimensional quantities.
However, today's PMs have multiple processor cores, all of which can be shared by cores of
multiple multicore VMs, leading to complex scheduling issues within a single PM, which the
one-dimensional problem formulation cannot capture. In this paper, we argue that at least a
simplified model of these scheduling issues should be taken into account during VM placement.
We show how constraint programming techniques can be used to solve this problem, leading to
significant improvement over non-multicore-aware VM placement. Several ways are presented
to hybridize an exact constraint solver with common packing heuristics to derive an effective and
scalable algorithm.
IEEE Transactions on Computers (February 2016)
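The gap between a one-dimensional capacity model and multicore-aware placement can be illustrated with a toy greedy first-fit. The paper uses constraint programming hybridized with packing heuristics; this sketch, its per-core capacity model, and the assumption that every single vCPU load fits on one core are simplifications for illustration only.

```python
def fits(cores, vm, cap):
    """Try to pack each vCPU load of vm onto the least-loaded core;
    return the new core loads, or None if some vCPU does not fit."""
    trial = list(cores)
    for load in sorted(vm, reverse=True):          # biggest demand first
        k = min(range(len(trial)), key=trial.__getitem__)
        if trial[k] + load > cap:
            return None
        trial[k] += load
    return trial

def place_vms(vms, pm_cores=4, cap=1.0):
    """Greedy first-fit placement; assumes every single vCPU load <= cap."""
    pms, placement = [], []
    for vm in vms:
        for i, cores in enumerate(pms):
            trial = fits(cores, vm, cap)
            if trial is not None:
                pms[i] = trial
                placement.append(i)
                break
        else:
            pms.append(fits([0.0] * pm_cores, vm, cap))
            placement.append(len(pms) - 1)
    return placement, pms

placement, pms = place_vms([[0.6, 0.6], [0.5], [0.6, 0.6]],
                           pm_cores=2, cap=1.0)
# The third VM's total load (1.2) would fit PM 1's free capacity (1.5) under
# a one-dimensional model, yet no core-by-core assignment exists, so a third
# PM is opened: placement == [0, 1, 2].
```

The final comment is exactly the paper's point: a one-dimensional formulation would accept a mapping that the per-core model correctly rejects.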
VINEA: An Architecture for Virtual Network Embedding Policy Programmability
Abstract - Network virtualization has enabled new business models by allowing infrastructure
providers to lease or share their physical network. A fundamental management problem that
cloud providers face to support customized virtual network (VN) services is the virtual network
embedding. This requires solving the (NP-hard) problem of matching constrained virtual
networks onto the physical network. In this paper we present VINEA, a policy-based virtual
network embedding architecture, and its system implementation. VINEA leverages our previous
results on VN embedding optimality and convergence guarantees, and it is based on a network
utility maximization approach that separates policies (i.e., high-level goals) from underlying
embedding mechanisms: resource discovery, virtual network mapping, and allocation on the
physical infrastructure. We show how VINEA can subsume existing embedding approaches, and
how it can be used to design novel solutions that adapt to different scenarios, by merely
instantiating different policies. We describe the VINEA architecture, as well as our object model:
our VINO protocol and the API to program the embedding policies; we then analyze key
representative tradeoffs among novel and existing VN embedding policy configurations, via
event-driven simulations, and with our prototype implementation. Among our findings, our
evaluation shows how, in contrast to existing solutions, simultaneously embedding nodes and
links may lead to lower providers' revenue. We release our implementation on a testbed that uses
a Linux system architecture to reserve virtual node and link capacities. Our prototype can be also
used to augment existing open-source “Networking as a Service” architectures such as
OpenStack Neutron, which currently lacks a VN embedding protocol, and as a policy-
programmable solution to the “slice stitching” problem within wide-area virtual network
testbeds.
IEEE Transactions on Parallel and Distributed Systems (February 2016)
A Scalable Data Chunk Similarity based Compression Approach for Efficient Big Sensing
Data Processing on Cloud
Abstract - Big sensing data is prevalent in both industry and scientific research applications
where the data is generated with high volume and velocity. Cloud computing provides a
promising platform for big sensing data processing and storage as it provides a flexible stack of
massive computing, storage, and software services in a scalable manner. Current big sensing data
processing on the Cloud has adopted some data compression techniques. However, due to the high
volume and velocity of big sensing data, traditional data compression techniques lack sufficient
efficiency and scalability for data processing. Based on specific on-Cloud data compression
requirements, we propose a novel scalable data compression approach based on calculating
similarity among the partitioned data chunks. Instead of compressing basic data units, the
compression will be conducted over partitioned data chunks. To restore original data sets, some
restoration functions and predictions are designed. MapReduce is used for algorithm
implementation to achieve extra scalability on the Cloud. With real-world meteorological big
sensing data experiments on the U-Cloud platform, we demonstrate that the proposed scalable
compression approach based on data chunk similarity can significantly improve data
compression efficiency with affordable data accuracy loss.
IEEE Transactions on Knowledge and Data Engineering (February 2016)
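The chunk-similarity idea can be sketched as follows. The similarity metric (max absolute elementwise difference), the tolerance, and the fixed chunking below are illustrative assumptions; the paper's approach runs on MapReduce over partitioned sensing data and designs dedicated restoration functions.

```python
def compress(stream, chunk_size=4, tol=0.5):
    """Chunk-level similarity compression: a chunk whose max absolute
    elementwise difference from an already-stored reference chunk is
    within tol is replaced by a back-reference, trading a bounded
    accuracy loss for space."""
    refs, out = [], []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        match = next((j for j, r in enumerate(refs)
                      if len(r) == len(chunk)
                      and max(abs(a - b) for a, b in zip(r, chunk)) <= tol),
                     None)
        if match is None:
            refs.append(chunk)
            out.append(("raw", len(refs) - 1))
        else:
            out.append(("ref", match))
    return refs, out

def restore(refs, out):
    """Restoration reuses reference chunks for back-referenced entries."""
    data = []
    for _, idx in out:
        data.extend(refs[idx])
    return data

readings = [1.0] * 4 + [1.2, 1.1, 1.0, 0.9] + [5.0] * 4
refs, out = compress(readings)
```

Here the second chunk is close enough to the first to be stored as a reference, so only two of three chunks are kept, and restoration reproduces the stream within the tolerance.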
Middleware-oriented Deployment Automation for Cloud Applications
Abstract - Fully automated provisioning and deployment of applications is one of the most
essential prerequisites to make use of the benefits of Cloud computing in order to reduce the
costs for managing applications. A huge variety of approaches, tools, and providers are available
to automate the involved processes. The DevOps community, for instance, provides tooling and
reusable artifacts to implement deployment automation in an application-oriented manner.
Platform-as-a-Service frameworks are available for the same purpose. In this work we
systematically classify and characterize available deployment approaches independently from the
underlying technology used. For motivation and evaluation purposes, we choose Web
applications with different technology stacks and analyze their specific deployment
requirements. Afterwards, we provision these applications using each of the identified types of
deployment approaches in the Cloud to perform qualitative and quantitative measurements.
Finally, we discuss the evaluation results and derive recommendations to decide which
deployment approach to use based on the deployment requirements of an application. Our results
show that deployment approaches can also be efficiently combined if there is no 'best fit' for a
particular application.
IEEE Transactions on Cloud Computing (February 2016)
Exploiting Spatio-Temporal Diversity for Water Saving in Geo-Distributed Data Centers
Abstract - As the critical infrastructure for supporting Internet and cloud computing services,
massive geo-distributed data centers are notorious for their huge electricity appetites and carbon
footprints. Nonetheless, a lesser-known fact is that data centers are also “thirsty”: to operate data
centers, millions of gallons of water are required for cooling and electricity production. The
existing water-saving techniques primarily focus on improved “engineering” (e.g., upgrading to
air economizer cooling, diverting recycled/sea water instead of potable water) and do not apply
to all data centers due to high upfront capital costs and/or location restrictions. In this paper, we
propose a software-based approach towards water conservation by exploiting the inherent spatio-
temporal diversity of water efficiency across geo-distributed data centers. Specifically, we
propose a batch job scheduling algorithm, called WACE (minimization of WAter, Carbon and
Electricity cost), which dynamically adjusts geographic load balancing and resource provisioning
to minimize the water consumption along with carbon emission and electricity cost while
satisfying the average delay performance requirement. WACE can be implemented online without
foreseeing far-future information and yields a total cost (incorporating electricity cost, water
consumption and carbon emission) that is provably close to the optimal algorithm with
lookahead information. Finally, we validate WACE through a trace-based simulation study and
show that WACE outperforms state-of-the-art benchmarks: 25% water saving while incurring an
acceptable delay increase. We also extend WACE to joint scheduling of batch workloads and
delay-sensitive interactive workloads for further water footprint reduction in geo-distributed data
centers.
IEEE Transactions on Cloud Computing (February 2016)
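A highly simplified sketch of the geographic load-balancing decision a WACE-style scheduler makes: pick the data center minimizing a weighted sum of electricity, water, and carbon cost. The linear cost model, field names, and weights are assumptions for illustration, not the paper's online formulation.

```python
def cheapest_datacenter(dcs, load, w_water=1.0, w_carbon=1.0):
    """Pick the data center minimizing electricity cost plus weighted
    water and carbon footprints for the given batch load (toy model)."""
    def cost(dc):
        return load * (dc["price_kwh"] * dc["kwh_per_job"]
                       + w_water * dc["water_l_per_job"]
                       + w_carbon * dc["carbon_kg_per_job"])
    return min(dcs, key=cost)

dcs = [
    {"name": "dc-east", "price_kwh": 0.10, "kwh_per_job": 1.0,
     "water_l_per_job": 2.0, "carbon_kg_per_job": 0.5},
    {"name": "dc-west", "price_kwh": 0.08, "kwh_per_job": 1.0,
     "water_l_per_job": 0.5, "carbon_kg_per_job": 0.7},
]
best = cheapest_datacenter(dcs, load=100)
```

The weights let an operator trade water saving against carbon and electricity, which is the spatio-temporal diversity the abstract exploits.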
Cost Effective, Reliable and Secure Workflow Deployment over Federated Clouds
Abstract - The significant growth in cloud computing has led to an increasing number of cloud
providers, each offering their service under different conditions – one might be more secure
whilst another might be less expensive or more reliable. At the same time user applications have
become more and more complex. Often, they consist of a diverse collection of software
components, and need to handle variable workloads, which poses different requirements on the
infrastructure. Therefore, many organisations are considering using a combination of different
clouds to satisfy these needs. This raises, however, the non-trivial issue of how to select the best
combination of clouds to meet the application requirements. This paper presents a novel
algorithm to deploy workflow applications on federated clouds. Firstly, we introduce an entropy-
based method to quantify the most reliable workflow deployments. Secondly, we apply an
extension of the Bell-LaPadula Multi-Level security model to address application security
requirements. Finally, we optimise deployment in terms of its entropy and also its monetary cost,
taking into account the cost of computing power, data storage and inter-cloud communication.
We implemented our new approach and compared it against two existing scheduling algorithms:
Extended Dynamic Constraint Algorithm (EDCA) and Extended Biobjective dynamic level
scheduling (EBDLS). We show that our algorithm can find deployments that are of equivalent
reliability but are less expensive and meet security requirements. We have validated our solution
through a set of realistic scientific workflows, using well-known cloud simulation tools
(WorkflowSim and DynamicCloudSim) and a realistic cloud based data analysis system (e-
Science Central).
IEEE Transactions on Services Computing (March 2016)
A Dynamical and Load-Balanced Flow Scheduling Approach for Big Data Centers in
Clouds
Abstract - Load-balanced flow scheduling for big data centers in clouds, in which a large
amount of data needs to be transferred frequently among thousands of interconnected servers, is
a key and challenging issue. OpenFlow is a promising solution to balance data flows in a
data center network through its programmatic traffic controller. Existing OpenFlow based
scheduling schemes, however, statically set up routes only at the initialization stage of data
transmissions; they suffer from dynamic flow distribution and changing network states in
data centers and often result in poor system performance. In this paper, we propose a novel
dynamical load-balanced scheduling (DLBS) approach for maximizing the network throughput
while balancing workload dynamically. We first formulate the DLBS problem, and then
develop a set of efficient heuristic scheduling algorithms for the two typical OpenFlow network
models, which balance data flows time slot by time slot. Experimental results demonstrate that
our DLBS approach significantly outperforms other representative load-balanced scheduling
algorithms, Round Robin and LOBUS; and the higher the degree of imbalance the data flows in
data centers exhibit, the more improvement our DLBS approach brings to the data centers.
IEEE Transactions on Cloud Computing (March 2016)
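The slot-by-slot rebalancing idea can be sketched as a greedy min-max path selection. The data structures and the min-max rule are illustrative; the paper develops dedicated heuristics for two specific OpenFlow network models.

```python
def schedule_slot(flows, paths, link_load):
    """One time slot of a dynamic load-balanced scheduler: route each flow
    over the candidate path whose most-loaded link is currently lightest
    (min-max), updating link loads immediately rather than fixing routes
    once at flow setup."""
    assignment = {}
    for flow, demand in flows:
        best = min(paths[flow], key=lambda p: max(link_load[l] for l in p))
        for link in best:
            link_load[link] += demand
        assignment[flow] = best
    return assignment

link_load = {"l1": 0, "l2": 0, "l3": 0}
paths = {"f1": [["l1", "l2"], ["l3"]], "f2": [["l1"], ["l3"]]}
assignment = schedule_slot([("f1", 5), ("f2", 5)], paths, link_load)
# f2 avoids l1, which f1 loaded earlier in the same slot.
```

Because loads are updated within the slot, later flows react to earlier placements, which is exactly what a route fixed at initialization cannot do.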
Secure Data Sharing in Cloud Computing Using Revocable-Storage Identity-Based
Encryption
Abstract - Cloud computing provides a flexible and convenient way for data sharing, which
brings various benefits for both the society and individuals. But there exists a natural resistance
for users to directly outsource the shared data to the cloud server since the data often contain
valuable information. Thus, it is necessary to place cryptographically enhanced access control on
the shared data. Identity-based encryption is a promising cryptographical primitive to build a
practical data sharing system. However, access control is not static. That is, when some user's
authorization has expired, there should be a mechanism that can remove him/her from the system.
Consequently, the revoked user can access neither the previously nor the subsequently shared data.
To this end, we propose a notion called revocable-storage identity-based encryption (RS-IBE),
which can provide the forward/backward security of ciphertext by introducing the functionalities
of user revocation and ciphertext update simultaneously. Furthermore, we present a concrete
construction of RS-IBE, and prove its security in the defined security model. The performance
comparisons indicate that the proposed RS-IBE scheme has advantages in terms of functionality
and efficiency, and thus is feasible for a practical and cost-effective data-sharing system. Finally,
we provide implementation results of the proposed scheme to demonstrate its practicability.
IEEE Transactions on Cloud Computing (March 2016)
Energy Aware Offloading for Competing Users on a Shared Communication Channel
Abstract - This paper considers a set of mobile users that employ cloud-based computation
offloading. In order to execute jobs in the cloud however, the user uploads must occur over a
base station channel that is shared by all of the uploading users. Since the job completion times
are subject to hard deadline constraints, this restricts the feasible set of jobs that can be
processed. The system is modelled as a competitive game in which each user is interested in
minimizing its own energy consumption. The game is subject to the real-time constraints
imposed by the job execution deadlines, user specific channel bit rates, and the competition over
the shared communication channel. The paper shows that for a wide range of parameters, a game
where each user independently sets its offloading decisions always has a pure Nash equilibrium,
and a Gauss-Seidel-like method for determining this equilibrium is introduced. Results are
presented that illustrate that the system always converges to a Nash equilibrium using the Gauss-
Seidel method. Data is also presented that shows the number of iterations required, and the quality
of the solutions. We find that the solutions perform well compared to a lower bound on total
energy performance.
IEEE Transactions on Mobile Computing (March 2016)
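The Gauss-Seidel-like best-response iteration can be illustrated on a toy offloading game. The energy model and congestion factor below are invented for illustration; the paper's game additionally includes hard job deadlines and user-specific channel bit rates.

```python
def best_response_equilibrium(e_local, e_up, c=0.5, max_iter=100):
    """Gauss-Seidel-style best response for a toy offloading game:
    user i offloads if its upload energy, inflated by congestion from
    the other offloaders, stays below its local execution energy.
    Users update one at a time until no one wants to switch."""
    n = len(e_local)
    x = [0] * n                      # 0 = compute locally, 1 = offload
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            others = sum(x) - x[i]
            offload_cost = e_up[i] * (1 + c * others)
            best = 1 if offload_cost < e_local[i] else 0
            if best != x[i]:
                x[i] = best
                changed = True
        if not changed:
            return x                 # pure Nash equilibrium of the toy game
    return x

x = best_response_equilibrium([10.0, 10.0, 1.0], [2.0, 2.0, 5.0])
```

In this instance the first two users offload while the third, whose local cost is already tiny, stays local; once no user can lower its own energy by switching, the profile is a pure Nash equilibrium.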
Prius: Hybrid Edge Cloud and Client Adaptation for HTTP Adaptive Streaming in
Cellular Networks
Abstract - In this paper, we present Prius, a hybrid edge cloud and client adaptation framework
for HTTP adaptive streaming (HAS) by taking advantage of the new capabilities empowered by
recent advances in edge cloud computing. In particular, emerging edge clouds are capable of
accessing application-layer and radio access networks (RAN) information in real time. Coupled
with powerful computation support, an edge cloud assisted strategy is expected to significantly
enrich mobile services. Meanwhile, although HAS has established itself as the dominant
technology for video streaming, one key challenge for adapting HAS to mobile cellular networks
is in overcoming the inaccurate bandwidth estimation and unfair bitrate adaptation under the
highly dynamic cellular links. Edge cloud assisted HAS presents a new opportunity to resolve
these issues and achieve systematic enhancement of quality of experience (QoE) and QoE
fairness in cellular networks. To explore this new opportunity, Prius overlays a layer of
adaptation intelligence at the edge cloud to finalize the adaptation decisions while considering
the initial bandwidth-irrelevant bitrate selection at the clients. Prius is able to exploit RAN
channel status, client device characteristics as well as application-layer information in order to
jointly adapt the bitrate of multiple clients. Prius also adopts a QoE continuum model to track the
cumulative viewing experience and an exponential smoothing estimation to accurately estimate
future channel conditions under different moving patterns. Extensive trace-driven simulation results show
that Prius with hybrid edge cloud and client adaptation is promising under both slow- and fast-
moving environments. Furthermore, the Prius adaptation algorithm achieves near-optimal
performance that outperforms existing strategies.
IEEE Transactions on Circuits and Systems for Video Technology (March 2016)
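The exponential smoothing estimator mentioned in the abstract follows the standard recurrence est ← α·sample + (1 − α)·est. A minimal sketch (the α value and the single-parameter form are assumptions; the paper tunes its estimator per moving pattern):

```python
def smooth_bandwidth(samples, alpha=0.3):
    """Exponential smoothing for channel bandwidth prediction:
    est <- alpha * sample + (1 - alpha) * est.
    Larger alpha reacts faster to new samples; smaller alpha is steadier."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est
```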
Design, Implementation and Evaluation of a Point Cloud Codec for Tele-Immersive Video
Abstract - We present a generic and real-time time-varying point cloud codec for 3D immersive
video. This codec is suitable for mixed reality applications where 3D point clouds are acquired at
a fast rate. In this codec, intra frames are coded progressively in an octree subdivision. To further
exploit inter-frame dependencies, we present an inter-prediction algorithm that partitions the
octree voxel space into N × N × N macroblocks (N = 8, 16, 32). The algorithm codes points
in these blocks in the predictive frame as a rigid transform applied to the points in the intra coded
frame. The rigid transform is computed using the iterative closest point algorithm and compactly
represented in a quaternion quantization scheme. To encode the color attributes, we define a
mapping of per-vertex color attributes in the traversed octree to an image grid and use a
legacy image coding method based on JPEG. As a result, a generic compression framework suitable for
real-time 3D tele-immersion is developed. This framework has been optimized to run in real-
time on commodity hardware for both encoder and decoder. Objective evaluation shows that a
higher rate-distortion (R-D) performance is achieved compared to available point cloud codecs.
A subjective study in a state-of-the-art mixed reality system shows that the introduced prediction
distortions are negligible compared to the original reconstructed point clouds. In addition, it
shows the benefit of reconstructed point cloud video as a representation in the 3D Virtual world.
The codec is available as open source for integration in immersive and augmented
communication applications and serves as a base reference software platform in
JTC1/SC29/WG11 (MPEG) for the further development of standardized point cloud
compression solutions.
IEEE Transactions on Circuits and Systems for Video Technology (March 2016)
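The progressive octree coding of intra frames can be sketched as one occupancy byte per subdivided node. This is a simplified sketch of the general technique; the paper's codec additionally performs inter-prediction with quaternion-quantized rigid transforms and JPEG-based color coding, none of which appear here.

```python
def octree_encode(points, origin=(0.0, 0.0, 0.0), size=1.0, depth=3):
    """Progressive octree occupancy coding: each node emits one byte whose
    bits mark which of its 8 child cells contain points; only occupied
    children are recursed into. Decoding to cell centers yields a coarse,
    progressively refinable point cloud."""
    if depth == 0 or not points:
        return []
    half = size / 2.0
    children = [[] for _ in range(8)]
    for p in points:
        idx = ((p[0] >= origin[0] + half)
               | ((p[1] >= origin[1] + half) << 1)
               | ((p[2] >= origin[2] + half) << 2))
        children[idx].append(p)
    byte = sum(1 << i for i, c in enumerate(children) if c)
    stream = [byte]
    for i, c in enumerate(children):
        if c:
            child_origin = (origin[0] + half * (i & 1),
                            origin[1] + half * ((i >> 1) & 1),
                            origin[2] + half * ((i >> 2) & 1))
            stream += octree_encode(c, child_origin, half, depth - 1)
    return stream
```

Truncating the stream at any subdivision level still decodes to a valid, coarser cloud, which is what makes the representation progressive.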
Robust Sparse Coding for Mobile Image Labeling on the Cloud
Abstract - With the rapid development of mobile services and online social networking services,
a large number of mobile images are generated and shared on social networks every day. The
visual content of these images contains rich knowledge for many uses, such as social
categorization and recommendation. Mobile image labeling has therefore been proposed to
understand the visual content and received intensive attention in recent years. In this paper, we
present a novel mobile image labeling scheme on the cloud, in which mobile images are first
efficiently transmitted to the cloud by Hamming compressed sensing (HCS), such that the heavy
computation for image understanding is transferred to the cloud for fast response to the queries
of users. On the cloud, we design a sparse correntropy framework for robustly learning the
semantic content of mobile images, based on which the relevant tags are assigned to the query
images. The proposed framework (called McMil) is very insensitive to noise and outliers, and is
optimized by a half-quadratic optimization technique. We theoretically show that our image
labeling approach is more robust than the squared loss, absolute loss, Cauchy loss and many
other robust loss function based sparse coding methods. To further understand the proposed
algorithm, we also derive its robustness and generalization error bounds. Finally, we conduct
experiments on the PASCAL VOC'07 dataset and empirically demonstrate the effectiveness of
the proposed robust sparse coding method for mobile image labeling.
IEEE Transactions on Circuits and Systems for Video Technology (March 2016)
Optimal Resource Sharing in 5G-enabled Vehicular Networks: A Matrix Game Approach
Abstract - Vehicular networks are expected to accommodate a large number of data-heavy
mobile devices and multi-application services. However, they face a significant challenge in
dealing with the ever-increasing demand of mobile traffic. In this paper, we present a new
paradigm of 5G-enabled vehicular networks to improve the network capacity and the system
computing capability. We extend the original cloud radio access network (C-RAN) to integrate
local cloud services to provide a low-cost, scalable, self-organizing and effective solution. The
new C-RAN is named Enhanced C-RAN (EC-RAN). Cloudlets in EC-RAN are
geographically distributed for local services. Furthermore, Device-to-Device (D2D) and
Heterogeneous Networks (HetNet) are essential technologies in 5G systems. They can greatly
improve the spectrum efficiency and support the large-scale live video streaming in short-
distance communications. We exploit the matrix game theoretical approach to operate the
cloudlet resource management and allocation. The Nash equilibrium solution can be obtained via
the Karush-Kuhn-Tucker nonlinear complementarity approach. Illustrative results indicate that the
proposed resource sharing scheme with the geo-distributed cloudlets can improve the resource
utilization and reduce the system power consumption. Moreover, with the integration of the
Software-Defined Networking (SDN) architecture, a vehicular network can easily reach a globally
optimal solution.
IEEE Transactions on Vehicular Technology (March 2016)
Efficient and Privacy-Preserving Outsourced Calculation of Rational Numbers
Abstract - In this paper, we propose a framework for efficient and privacy-preserving
outsourced calculation of rational numbers, which we refer to as POCR. Using POCR, a user can
securely outsource the storing and processing of rational numbers to a cloud server without
compromising the security of the (original) data and the computed results. More specifically, we
present a Paillier cryptosystem with threshold decryption (PCTD), the core cryptographic
primitive, to reduce the private key exposure risk in POCR. We also present the toolkits required
in the privacy preserving calculation of integers and rational numbers to ensure that commonly
used outsourced operations can be handled on-the-fly. We then prove that the proposed POCR
achieves the goal of secure integer and rational number calculation without resulting in privacy
leakage to unauthorized parties, as well as demonstrating the utility and the efficiency of POCR
using simulations.
IEEE Transactions on Dependable and Secure Computing (March 2016)
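The Paillier cryptosystem at the core of POCR is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A minimal textbook sketch of plain Paillier (toy key sizes for illustration only; this is not the paper's threshold-decryption variant PCTD):

```python
import math, random

def keygen(p=1117, q=1123):
    # toy primes for illustration; real deployments use a modulus of 2048+ bits
    n = p * q
    lam = math.lcm(p - 1, q - 1)              # private exponent
    g = n + 1                                 # standard choice of generator
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n * n)), -1, n)    # decryption scaling factor
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:                # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    L = lambda x: (x - 1) // n
    return (L(pow(c, lam, n * n)) * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 12), encrypt(pk, 30)
c_sum = (c1 * c2) % (pk[0] ** 2)              # homomorphic addition of plaintexts
print(decrypt(sk, c_sum))                     # 42
```

Multiplying the two ciphertexts modulo n² is all the cloud needs to do to add the hidden values, which is why outsourced arithmetic on encrypted rational-number encodings is feasible at all.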
Bandwidth Provisioning for Virtual Machine Migration in Cloud: Strategy and
Application
Abstract - Physical resources are highly virtualized in today's datacenter-based cloud-
computing networks. Servers, for example, are virtualized as Virtual Machines (VMs). Through
abstraction of physical resources, server virtualization enables migration of VMs over the
interconnecting network. VM migration can be used for load balancing, energy conservation,
disaster protection, etc. Migration of a VM involves iterative memory copy and network
reconfiguration. Memory states are transferred in multiple phases to keep the VM alive during the
migration process, with a small downtime for switchover. Significant network resources are
consumed during this process. Migration also results in undesirable performance impacts.
Suboptimal network bandwidth assignment, inaccurate pre-copy iterations, and high end-to-end
network delay in wide-area networks (WAN) can exacerbate the performance degradation. In
this study, we devise strategies to find suitable bandwidth and pre-copy iteration count to
optimize different performance metrics of VM migration over a WAN. First, we formulate
models to measure network resource consumption, migration duration, and migration downtime.
Then, we propose a strategy to determine appropriate migration bandwidth and number of pre-
copy iterations, and perform numerical experiments in multiple cloud environments with large
number of migration requests. Results show that our approach consumes less network resources
when compared with maximum- and minimum-bandwidth provisioning strategies, while using an
order of magnitude less bandwidth than the maximum-bandwidth strategy. It also achieves
significantly lower migration duration than minimum-bandwidth scheme.
IEEE Transactions on Cloud Computing (March 2016)
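The iterative memory-copy behaviour described above is commonly captured by a simple geometric model: each pre-copy round retransmits only the pages dirtied during the previous round, so with dirty rate d and bandwidth b (d < b) the transferred volume shrinks by a factor d/b per round. A sketch under those standard assumptions (the parameters are illustrative, not the paper's):

```python
def precopy_migration(mem_gb, bandwidth, dirty_rate, rounds):
    """Return (total data moved, total duration, final downtime).
    bandwidth and dirty_rate in GB/s; simple geometric pre-copy model."""
    assert dirty_rate < bandwidth, "pre-copy only converges if d < b"
    volume, total, duration = mem_gb, 0.0, 0.0
    for _ in range(rounds):
        t = volume / bandwidth        # time to push this round's pages
        total += volume
        duration += t
        volume = dirty_rate * t       # pages dirtied meanwhile -> next round
    downtime = volume / bandwidth     # final stop-and-copy switchover
    return total, duration + downtime, downtime

total, dur, down = precopy_migration(mem_gb=8, bandwidth=1.0, dirty_rate=0.25, rounds=5)
print(round(down, 6))                 # residual downtime after 5 rounds
```

The model makes the paper's trade-off visible: more pre-copy rounds shrink downtime but lengthen duration and consume more network resources, so the iteration count and bandwidth must be chosen jointly.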
Reliable Computing Service in Massive-scale Systems Through Rapid Low-cost Failover
Abstract - Large-scale distributed systems in Cloud datacenters are capable of provisioning
service to consumers with diverse business requirements. Providers face pressure to provision
uninterrupted reliable services while reducing operational costs due to significant software and
hardware failures. A widely used means to achieve such a goal is using redundant system
components to implement user-transparent failover, yet its effectiveness must be balanced
carefully against the heavy overhead it can incur when deployed, an important practical
consideration for complex large-scale systems. Failover techniques developed for Cloud systems
often suffer serious limitations, including mandatory restart leading to poor cost-effectiveness, as
well as solely focusing on crash failures, omitting other important types, e.g. timing failures and
simultaneous failures. This paper addresses these limitations by presenting a new approach to
user-transparent failover for massive-scale systems. The approach uses soft-state inference to
achieve rapid failure recovery and avoid unnecessary restart, with minimal system resource
overhead. It also copes with different failures, including correlated and simultaneous events. The
proposed approach was implemented, deployed and evaluated within Fuxi system, the
underlying resource management system used within Alibaba Cloud. Results demonstrate that
our approach tolerates complex failure scenarios while incurring at worst a 228.5-microsecond
instance overhead with 1.71% additional CPU usage.
IEEE Transactions on Services Computing (March 2016)
Stochastic Modeling and Optimization of Stragglers
Abstract - The MapReduce framework is widely used to parallelize batch jobs, since it exploits a
high degree of multi-tasking to process them. However, it has been observed that when the
number of servers increases, the map phase can take much longer than expected. This paper
analytically shows that the stochastic behavior of the servers has a negative effect on the
completion time of a MapReduce job, and continuously increasing the number of servers without
accurate scheduling can degrade the overall performance. We analytically model the map phase
in terms of hardware, system, and application parameters to capture the effects of stragglers on
the performance. Mean sojourn time (MST), the time needed to sync the completed tasks at a
reducer, is introduced as a performance metric and mathematically formulated. Following that,
we stochastically investigate the optimal task scheduling which leads to an equilibrium property
in a datacenter with different types of servers. Our experimental results show the performance of
the different types of schedulers targeting MapReduce applications. We also show that, in the
case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that can
always achieve the lowest MST.
IEEE Transactions on Cloud Computing (April 2016)
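The straggler effect the paper models has a well-known back-of-the-envelope form: if n parallel map tasks have i.i.d. exponential service times with mean 1, the expected completion time of the slowest task is the harmonic number H_n, which keeps growing (logarithmically) as tasks are added. A quick numeric illustration of that intuition (ours, not the paper's MST formulation):

```python
def harmonic(n):
    # E[max of n i.i.d. Exp(1) task times] = H_n = sum_{k=1}^{n} 1/k
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, round(harmonic(n), 3))
# each 10x increase in parallel tasks still lengthens the expected slowest task
```

This is why "continuously increasing the number of servers without accurate scheduling can degrade the overall performance": the map phase ends only when the last straggler does.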
Novel Scheduling Algorithms for Efficient Deployment of MapReduce Applications in
Heterogeneous Computing Environments
Abstract - Cloud computing has become an increasingly popular model for delivering applications
hosted in large data centers as subscription oriented services. Hadoop is a popular system
supporting the MapReduce function, which plays a crucial role in cloud computing. The
resources required for executing jobs in a large data center vary according to the job type. In
Hadoop, jobs are scheduled by default on a first-come-first-served basis, which may unbalance
resource utilization. This paper proposes a job scheduler called the job allocation scheduler
(JAS), designed to balance resource utilization. For various job workloads, the JAS categorizes
jobs and then assigns tasks to a CPU-bound queue or an I/O-bound queue. However, the JAS
exhibited a locality problem, which was addressed by developing a modified JAS called the job
allocation scheduler with locality (JASL). The JASL improved the use of nodes and the
performance of Hadoop in heterogeneous computing environments. Finally, two parameters were
added to the JASL to detect inaccurate slot settings and create a dynamic job allocation scheduler
with locality (DJASL). The DJASL exhibited superior performance compared with the JAS, and data
locality similar to that of the JASL.
IEEE Transactions on Cloud Computing (April 2016)
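The JAS idea of categorizing jobs into a CPU-bound queue and an I/O-bound queue can be sketched as a simple dispatcher. The classification heuristic, threshold, and job fields below are hypothetical, purely to illustrate the two-queue structure:

```python
from collections import deque

def classify(job, cpu_io_threshold=2.0):
    """Hypothetical heuristic: ratio of CPU time to bytes of I/O picks the queue."""
    ratio = job["cpu_ms"] / max(job["io_mb"], 1e-9)
    return "cpu" if ratio >= cpu_io_threshold else "io"

queues = {"cpu": deque(), "io": deque()}
jobs = [
    {"name": "sort",   "cpu_ms": 500, "io_mb": 400},   # I/O heavy
    {"name": "kmeans", "cpu_ms": 900, "io_mb": 10},    # CPU heavy
    {"name": "grep",   "cpu_ms": 50,  "io_mb": 200},   # I/O heavy
]
for j in jobs:
    queues[classify(j)].append(j["name"])
print(sorted(queues["cpu"]), sorted(queues["io"]))
```

Separating the queues lets CPU-heavy and I/O-heavy tasks run on the same node without contending for the same bottleneck resource, which is the imbalance the JAS targets.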
Hybrid Tree-rule Firewall for High Speed Data Transmission
Abstract - Traditional firewalls employ listed rules in both configuration and process phases to
regulate network traffic. However, configuring a firewall with listed rules may create rule
conflicts, and slows down the firewall. To overcome this problem, we have proposed a Tree-rule
firewall in our previous study. Although the Tree-rule firewall guarantees no conflicts within its
rule set and operates faster than traditional firewalls, keeping track of the state of network
connections using hashing functions incurs extra computational overhead. In order to reduce this
overhead, we propose a hybrid Tree-rule firewall in this paper. This hybrid scheme takes
advantages of both Tree-rule firewalls and traditional listed-rule firewalls. The GUIs of our Tree-
rule firewalls are utilized to provide a means for users to create conflict-free firewall rules, which
are organized in a tree structure and called 'tree rules'. These tree rules are later converted into
listed rules that share the merit of being conflict-free. Finally, in decision making, the listed rules
are used to verify against packet header information. The rules which have matched with most
packets are moved up to the top positions by the core firewall. The mechanism applied in this
hybrid scheme can significantly improve the functional speed of a firewall.
IEEE Transactions on Cloud Computing (April 2016)
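The final step described, promoting the rules that match the most packets toward the top of the listed-rule table, amounts to reordering by hit count. A minimal sketch (single-field rules for brevity; real rules match full 5-tuples), which is safe only because tree-derived rules are conflict-free and can be reordered without changing decisions:

```python
def match(rule, packet):
    return rule["dport"] == packet["dport"]   # simplified single-field match

def filter_and_reorder(rules, packets):
    hits = {r["id"]: 0 for r in rules}
    for p in packets:
        for r in rules:                       # first-match semantics
            if match(r, p):
                hits[r["id"]] += 1
                break
    # conflict-free rules may be sorted by popularity without altering verdicts
    rules.sort(key=lambda r: hits[r["id"]], reverse=True)
    return [r["id"] for r in rules]

rules = [{"id": "ssh", "dport": 22}, {"id": "web", "dport": 80}, {"id": "dns", "dport": 53}]
traffic = [{"dport": 80}] * 5 + [{"dport": 53}] * 2 + [{"dport": 22}]
print(filter_and_reorder(rules, traffic))     # ['web', 'dns', 'ssh']
```

With popular rules near the top, the expected number of comparisons per packet drops, which is the speed-up the hybrid scheme claims.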
Feedback Autonomic Provisioning for Guaranteeing Performance in MapReduce Systems
Abstract - Companies have fast-growing amounts of data to process and store; a data explosion
is happening around us. Currently, one of the most common approaches to treating these vast data
quantities is based on the MapReduce parallel programming paradigm. While its use is
widespread in the industry, ensuring performance constraints while at the same time minimizing
costs still poses considerable challenges. We propose a coarse-grained control-theoretical
approach, based on techniques that have already proved their usefulness in the control
community. We introduce the first algorithm to create dynamic models for Big Data MapReduce
systems, running a concurrent workload. Furthermore, we identify two important control use
cases: relaxed performance with minimal resources, and strict performance. For the first case we
develop two feedback control mechanisms: a classical feedback controller and an event-based
feedback controller that also minimises the number of cluster reconfigurations. Moreover, to address
strict performance requirements a feedforward predictive controller that efficiently suppresses
the effects of large workload size variations is developed. All the controllers are validated online
in a benchmark running in a real 60-node MapReduce cluster, using a data-intensive Business
Intelligence workload. Our experiments demonstrate the success of the control strategies
employed in assuring service time constraints.
IEEE Transactions on Cloud Computing (April 2016)
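The relaxed-performance use case, a feedback loop that resizes the cluster to hold service time at a reference, can be illustrated with a plain proportional controller against a toy plant model. The inverse-proportional plant and the gain are our own illustrative choices, not the paper's identified dynamics:

```python
def service_time(nodes):
    # toy plant: job service time inversely proportional to cluster size
    return 100.0 / nodes

def control_loop(target=1.0, nodes=50.0, kp=20.0, steps=200):
    for _ in range(steps):
        error = service_time(nodes) - target   # positive -> too slow
        nodes += kp * error                    # add nodes when behind target
        nodes = max(nodes, 1.0)                # actuator saturation
    return nodes

n = control_loop()
print(round(n), round(service_time(n), 3))     # settles near 100 nodes, 1.0 s
```

The loop adds nodes while the job runs slower than the reference and releases them when it runs faster, exactly the "performance versus cost" balance the controllers in the paper automate.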
Synchronizing Files from a Large Number of Insertions and Deletions
Abstract - Developing efficient algorithms to synchronize between different versions of files is
an important problem with numerous applications. We consider the interactive synchronization
protocol introduced by Yazdi and Dolecek, itself based on an earlier synchronization algorithm
by Venkataramanan et al. Unlike preceding synchronization algorithms, Yazdi and Dolecek's
algorithm is specifically designed to handle a number of deletions linear in the length of the file.
We extend this algorithm in three ways: first, we handle non-binary files. Second, these files
contain symbols chosen according to non-uniform distributions. Lastly, the files are modified by
both insertions and deletions. We take into consideration the collision entropy of the source and
refine the matching graph developed by Yazdi and Dolecek by appropriately placing weights on
the matching graph edges. We compare our protocol with the widely used synchronization
software rsync, and with the synchronization protocol by Venkataramanan et al. Additionally, we
provide tradeoffs between the number of rounds of communication and the total amount of
bandwidth required to synchronize the two files under various implementation choices of the
baseline algorithm. Finally, we show the robustness of the protocol under imperfect knowledge
of the properties of the edit channel, which is the expected scenario in practice.
IEEE Transactions on Communications (April 2016)
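The bandwidth-versus-rounds tradeoff the authors measure against rsync is easiest to appreciate next to the simplest possible baseline: hash fixed-size blocks, exchange the digests in one round, and transfer only the blocks that differ. A sketch of that naive fixed-block scheme, which is exactly the approach that breaks down under insertions and deletions, the case the protocols above are built for:

```python
import hashlib

def block_digests(data, block=4):
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    return [hashlib.sha256(b).hexdigest() for b in blocks], blocks

def sync(old, new, block=4):
    """Return the block indices the receiver must fetch to turn old into new."""
    old_digests, _ = block_digests(old, block)
    new_digests, new_blocks = block_digests(new, block)
    needed = [i for i, d in enumerate(new_digests)
              if i >= len(old_digests) or d != old_digests[i]]
    # a single inserted byte shifts every later block boundary, so all
    # subsequent blocks are re-sent -- the weakness edit-aware protocols fix
    return needed, sum(len(new_blocks[i]) for i in needed)

# one byte inserted after position 4: every later block mismatches
needed, transferred = sync(b"aaaabbbbccccdddd", b"aaaaXbbbbccccdddd")
print(needed, transferred)
```

A one-byte insertion forces this baseline to re-send nearly the whole file, while edit-channel protocols like the one above pay a cost proportional to the number of edits.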
Frame Interpolation for Cloud-Based Mobile Video Streaming
Abstract - Cloud-based High Definition (HD) video streaming is becoming popular day by day.
On one hand, it is important for both end users and large storage servers to store their huge
amounts of data at different locations and servers. On the other hand, it is becoming a big
challenge for network service providers to provide reliable connectivity to the network users.
There have been many studies over cloud-based video streaming for Quality of Experience
(QoE) for services like YouTube. Packet losses and bit errors are very common in transmission
networks, which affect user feedback over cloud-based media services. To conceal packet
losses and bit errors, Error Concealment (EC) techniques are usually applied at the
decoder/receiver side to estimate the lost information. This paper proposes a time-efficient and
quality-oriented EC method. The proposed method considers H.265/HEVC based intra-encoded
videos for the estimation of whole intra-frame loss. The main emphasis in the proposed approach
is the recovery of Motion Vectors (MVs) of a lost frame in real time. To speed up the search
process for the lost MVs, a bigger block size and searching in parallel are both considered. The
simulation results clearly show that our proposed method outperforms the traditional Block
Matching Algorithm (BMA) by approximately 2.5 dB and Frame Copy (FC) by up to 12 dB at
packet loss rates of 1%, 3%, and 5% with different Quantization Parameters (QPs). The
proposed approach is also approximately 1788 seconds faster than the BMA in computational
time.
IEEE Transactions on Multimedia (May 2016)
Toward Cost-Efficient Content Placement in Media Cloud: Modeling and Analysis
Abstract - Cloud-centric media network (CCMN) was previously proposed to provide cost-
effective content distribution services for user-generated contents (UGCs) based on media cloud.
CCMN service providers orchestrate cloud resources to deliver UGCs in a pay-per-use style,
with an objective to minimize the operational monetary cost. The monetary cost depends on the
actual usage of cloud resources (e.g., computing, storage, and bandwidth), which in turn, is
affected by the content placement strategy. In this paper, we investigate this cost-optimal content
placement problem. Specifically, it is formulated into a constrained optimization problem, in
which the objective is to minimize the total monetary cost, with respect to the resource capacity.
We tackle this problem via a two-step strategy. The first step focuses on the placement for a
single content, which is mapped into a k-center problem. Using a graph-theoretic approach, we
derive and verify a logarithmic model between the optimal mean hop distance from viewers to
contents, and the optimal number of content replicas. The second step leverages this analytical
result to solve the cost optimization problem, via a feasible direction method. The analysis is
substantiated via numerical simulations, using a set of data traces from a top content website.
This investigation suggests that the optimal number of content replicas for each title follows a
power-law distribution with respect to its popularity rank. Moreover, it reveals a fundamental
tradeoff between the storage and bandwidth cost. Finally, compared to existing heuristics, our
proposed algorithm is able to obtain the optimal placement strategy, with lower computational
complexity.
IEEE Transactions on Multimedia (May 2016)
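The single-content placement step above is mapped to a k-center problem. The classic greedy farthest-first heuristic, a 2-approximation for metric k-center, conveys the flavour of such placement (this is the generic textbook algorithm, not the paper's graph-theoretic derivation):

```python
def k_center_greedy(points, k, dist):
    """Farthest-first traversal: 2-approximation for metric k-center."""
    centers = [points[0]]                  # arbitrary first replica location
    while len(centers) < k:
        # place the next replica at the point farthest from all current ones
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

dist = lambda a, b: abs(a - b)             # 1-D metric for illustration
nodes = [0, 1, 2, 10, 11, 12, 20, 21, 22]  # three natural clusters of viewers
print(sorted(k_center_greedy(nodes, 3, dist)))
```

With k = 3 the heuristic drops one replica into each viewer cluster, minimizing the worst-case hop distance from viewers to content, which is the quantity the paper's logarithmic model relates to the replica count.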
DAC-Mobi: Data-Assisted Communications of Mobile Images with Cloud Computing
Support
Abstract - This research proposes a novel data-assisted image transmission scheme, which
utilizes a large number of correlated images stored in the cloud to improve spectrum
efficiency and visual quality. First, a two-layer Coset coding is proposed for the transmission
of DCT coefficients. The most significant bits (MSB) of the coefficients are generated by
the first layer Coset and together with a few low frequency coefficients are transmitted through
the most reliable channel coding and digital modulation. The middle bits generated by the second
layer Coset are discarded by the sender and the residual bits are transmitted through amplitude
modulation. Based on the MSB and the residual bits, an approximation of the original image is
reconstructed. With this approximation, many correlated images can be retrieved from the
cloud, which are used to recover the discarded middle bits. The two layer Coset coding can
significantly decrease the data energy so as to improve the transmission power efficiency. Hence,
the end to end distortion of amplitude modulation can be reduced. Second, the image quality can
be further improved by joint internal and external denoising with the retrieved images.
Simulations show that the proposed scheme outperforms conventional digital schemes by about 4
dB in peak signal to noise power ratio (PSNR) and achieves 2 dB gain over the state-of-the-art
uncoded transmission. At low signal to noise power ratio (SNR), an additional 2-3 dB gain is
achieved. The visual quality comparison also validates the objective image assessment result.
IEEE Transactions on Multimedia (May 2016)
RepCloud: Attesting to Cloud Service Dependency
Abstract - Security enhancements to the emerging IaaS (Infrastructure as a Service) cloud
computing systems have become the focus of much research, but little of this targets the
underlying infrastructure. Trusted Cloud systems are proposed to integrate Trusted Computing
infrastructure with cloud systems. With remote attestations, cloud customers are able to
determine the genuine behaviors of their applications' hosts, and thereby establish trust in
the cloud. However, current Trusted Clouds have difficulties in effectively attesting to the
cloud service dependency for customers' applications, due to the cloud's complexity,
heterogeneity and dynamism. In this paper, we present RepCloud, a decentralized cloud trust
management framework, inspired by reputation systems from research on peer-to-peer
systems. With RepCloud, cloud customers are able to determine the properties of the exact nodes
that may affect the genuine functionalities of their applications, without obtaining much internal
information of the cloud. Experiments showed that besides achieving fine-grained cloud service
dependency attestation, RepCloud incurred lower trust management overhead than the existing
trusted cloud systems.
IEEE Transactions on Services Computing (May 2016)
Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over
Encrypted Image Data
Abstract - Advances in cloud computing have greatly motivated data owners to outsource their
huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud
by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous
benefits, the outsourced multimedia data and its originated applications may reveal the data
owner's private information, such as personal identity, location, or even financial profiles.
This observation has recently aroused new research interest on privacy-preserving computations
over outsourced multimedia data. In this paper, we propose an effective and practical privacy-
preserving computation outsourcing protocol for the prevailing scale-invariant feature transform
(SIFT) over massive encrypted image data. We first show that previous solutions to this problem
have either efficiency/security or practicality issues, and none can well preserve the important
characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a
new scheme design that achieves efficiency and security requirements simultaneously with the
preservation of its key characteristics, by randomly splitting the original image data, designing
two novel efficient protocols for secure multiplication and comparison, and carefully distributing
the feature extraction computations onto two independent cloud servers. We both carefully
analyze and extensively evaluate the security and effectiveness of our design. The results show
that our solution is practically secure, outperforms the state-of-the-art, and performs comparably
to the original SIFT in terms of various characteristics, including rotation invariance, image scale
invariance, robust matching across affine distortion, addition of noise and change in 3D
viewpoint and illumination.
IEEE Transactions on Image Processing (May 2016)
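The two-server idea above, splitting the image data randomly so that neither cloud sees the plaintext while they jointly compute products, is commonly realized with additive secret sharing plus Beaver triples. A minimal sketch of that standard technique (not the paper's specific SIFT protocols; the dealer-generated triple here is a toy constant):

```python
import random

P = 2_147_483_647                      # prime modulus (2^31 - 1)

def share(x):
    """Split x into two additive shares; each share alone reveals nothing."""
    r = random.randrange(P)
    return r, (x - r) % P

def beaver_multiply(x_sh, y_sh):
    """Multiply secret-shared values using a dealer-provided triple c = a*b."""
    a_sh, b_sh, c_sh = share(5), share(7), share(35)   # toy pre-computed triple
    # each server opens only the masked differences d = x-a, e = y-b
    d = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % P
    e = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % P
    # shares of x*y = c + d*b + e*a + d*e  (d*e added by one server only)
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % P
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % P
    return (z0 + z1) % P               # reconstruction from the two shares

x_sh, y_sh = share(123), share(456)
print(beaver_multiply(x_sh, y_sh))     # 123 * 456 = 56088
```

Only the masked values d and e are ever exchanged, so each server learns nothing about x or y individually, yet the reconstructed shares equal the product, the building block secure feature extraction needs for the many multiplications inside SIFT.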
A Load-Aware Pluggable Cloud Framework for Real-time Video Processing
Abstract - A large number of video applications require real-time response. High-speed
video processing therefore requires a distributed and parallelized framework that utilizes all
available computing resources, i.e., both CPU and GPU, at their best. The CPU-GPU collaboration may
cause resource imbalance where GPU-based jobs consume less computing resources while
occupying more memory compared to CPU-based jobs. In this paper, we propose a load-aware
pluggable cloud framework for real-time video processing where CPU-GPU switching based on
workload status can be performed at run time. Furthermore, we design aspect-oriented monitors
to collect framework metrics, and propose a distance coverage algorithm to detect performance
degradation, in order to make sure that the framework runs optimally to achieve good
performance when a load-aware task switching is made. We have comprehensively evaluated the
framework and the evaluation results show that the proposed framework has good performance,
reusability, pluggability and scalability.
IEEE Transactions on Industrial Informatics (May 2016)
Mobile Cloud Support for Semantic-enriched Speech Recognition in Social Care
Abstract - Nowadays, most users carry mobile devices with high computing power, and speech
recognition is certainly one of the main technologies available in every modern smartphone,
although battery drain and application performance (resource shortage) have a big impact on
the experienced quality. Shifting applications and services to the cloud may help to improve
mobile user satisfaction as demonstrated by several ongoing efforts in the mobile cloud area.
However, the quality of speech recognition is still not sufficient in many complex cases to
replace common handwritten text, especially when prompt reaction to short-term
provisioning requests is required. To address the new scenario, this paper proposes a mobile
cloud infrastructure to support the extraction of semantic information from speech recognition
in the Social Care domain, where carers have to speak about their patients' conditions in order to
have reliable notes used afterward to plan the best support. We present not only an architecture
proposal, but also a real prototype that we have deployed and thoroughly assessed with different
queries, accents, and in the presence of load peaks, in our experimental mobile cloud Platform as a
Service (PaaS) test bed based on Cloud Foundry.
IEEE Transactions on Cloud Computing (May 2016)
Trajectory Pattern Mining for Urban Computing in the Cloud
Abstract - The increasing pervasiveness of mobile devices along with the use of technologies
like GPS, Wi-Fi networks, RFID, and sensors, allows for the collection of large amounts of
movement data. These data can be analyzed to extract descriptive and predictive
models that can be properly exploited to improve urban life. From a technological viewpoint,
Cloud computing can play an essential role by helping city administrators to quickly acquire new
capabilities and reducing initial capital costs by means of a comprehensive pay-as-you-go
solution. This paper presents a workflow-based parallel approach for discovering patterns and
rules from trajectory data, in a Cloud-based framework. Experimental evaluation has been
carried out on both real-world and synthetic trajectory data, with up to one million trajectories. The
results show that, due to the high complexity and large volumes of data involved in the
application scenario, the trajectory pattern mining process takes advantage of the scalable
execution environment offered by a Cloud architecture in terms of execution time, speed-up,
and scale-up.
IEEE Transactions on Parallel and Distributed Systems (May 2016)
Resource Dependency Processing in Web Scaling Frameworks
Abstract - The upsurge of mobile devices paired with highly interactive social web applications
generates enormous numbers of requests that web services have to deal with. Consequently, in our
previous work a novel request-flow scheme with scalable components was proposed for storing
interdependent, permanently updated resources in a database. The major challenge is to process
dependencies in an optimal fashion while maintaining dependency constraints. In this work,
three research objectives are evaluated by examining resource dependencies and their key graph
measurements. An all-sources longest-path algorithm is presented for efficient processing and
dependencies are analysed to find correlations between performance and graph measures. Two
algorithms, whose parameters are based on six real-world web service structures (e.g., the Facebook
Graph API), are developed to generate dependency graphs, and a model is developed to estimate
performance based on resource parameters. An evaluation of four graph series discusses
performance effects of different graph structures. The results of an evaluation of 2000 web
services with over 850 thousand resources and 6 million requests indicate that resource
dependency processing can be up to a factor of two faster compared to a traditional processing
approach while an average model fit of 97% allows an accurate prediction.
IEEE Transactions on Services Computing (May 2016)
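The all-sources longest-path computation on a resource-dependency DAG can be sketched with a topological-sort dynamic program; the resource graph and its edges below are illustrative, not from the paper's datasets:

```python
from collections import defaultdict

def longest_paths(edges, nodes):
    """Longest path (in edge count) from every node of a DAG, via reverse
    topological order: dist[u] = 1 + max over successors of u."""
    succ = defaultdict(list)
    indeg = {n: 0 for n in nodes}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:                          # Kahn's algorithm for topo order
        u = queue.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = {n: 0 for n in nodes}
    for u in reversed(order):             # process sinks before sources
        for v in succ[u]:
            dist[u] = max(dist[u], 1 + dist[v])
    return dist

# user -> posts -> comments; user -> profile (hypothetical resource graph)
deps = [("user", "posts"), ("posts", "comments"), ("user", "profile")]
print(longest_paths(deps, ["user", "posts", "comments", "profile"]))
```

The longest path from each resource bounds how many dependent-update waves an incoming request can trigger, which is why this quantity correlates with processing performance.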
Game User-Oriented Multimedia Transmission over Cognitive Radio Networks
Abstract - Cognitive radio (CR) is an emerging technique to improve the efficiency of spectrum
resource utilization. In CR networks, the selfish behavior of secondary users (SU) can
considerably affect the performance of primary users (PU). Accordingly, game theory, which
takes the players' selfish behavior into consideration, has been applied to the design of
CR networks. Most of the existing studies focus on the network design only from the network
perspective to improve system performance such as utility and throughput. However, the users'
experience of the service, which cannot simply be reflected by quality of service (QoS), has been
largely ignored. The user-perceived multimedia quality and service can differ from the
actual received multimedia quality, and is thus very important to take into consideration in the
network design. To better serve the network users, quality of experience (QoE) is adopted to
measure the network service from the users' perspective and help improve the users' satisfaction
with the CR network service. As CR networks require large amounts of data storage and computation for
spectrum sensing, spectrum sharing, and algorithm design, cloud computing comes as a handy
solution because it can provide massive storage and fast computation. In this paper, we propose
to design a user-oriented CR cloud network for multimedia applications, where the user's
satisfaction is reflected in the CR cloud network design. In the proposed framework, the primary
and secondary user game is formulated as a Stackelberg game. Specifically, a refunding term is
defined in the users' utility function to effectively consider and reflect the network users' QoE
requirement. Our contributions are twofold: (1) a game-based CR cloud network design
for multimedia transmission is proposed, and the network user's QoE requirement is satisfied in
the design; (2) the existence and uniqueness of the Stackelberg Nash equilibrium is proved, and
the design is optimal. Our simulation results demonstrate the effectiveness of the game user-
oriented CR cloud network design.
IEEE Transactions on Circuits and Systems for Video Technology (May 2016)
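A Stackelberg game is solved by backward induction: the leader chooses its strategy anticipating the follower's best response. The classic quantity-competition example below shows the mechanics numerically; it is a generic textbook game, not the paper's PU/SU utilities with the refunding term:

```python
def follower_best_response(q_leader, a=10.0, c=2.0):
    # follower maximizes q2 * (a - q1 - q2 - c); first-order condition
    return max((a - c - q_leader) / 2.0, 0.0)

def leader_profit(q_leader, a=10.0, c=2.0):
    # leader's payoff, already substituting the follower's reaction
    q2 = follower_best_response(q_leader, a, c)
    return q_leader * (a - q_leader - q2 - c)

# leader searches its quantity grid, anticipating the follower's reaction
grid = [i / 1000.0 for i in range(0, 8001)]
q1 = max(grid, key=leader_profit)
q2 = follower_best_response(q1)
print(q1, q2)                          # Stackelberg equilibrium: 4.0, 2.0
```

Because the leader moves first with full knowledge of the follower's reaction curve, the equilibrium differs from a simultaneous-move Nash outcome, the asymmetry the primary/secondary user formulation exploits.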
An Optimized Virtual Load Balanced Call Admission Controller for IMS Cloud
Computing
Abstract - Network functions virtualization provides opportunities to design, deploy, and
manage networking services. It utilizes Cloud computing virtualization services that run on high-
volume servers, switches and storage hardware to virtualize network functions. Virtualization
techniques can be used in IP Multimedia Subsystem (IMS) cloud computing to develop different
networking functions (e.g. load balancing and call admission control). IMS network signaling
is carried out through the Session Initiation Protocol (SIP). An open issue is controlling the overload
that occurs when a SIP server lacks sufficient CPU and memory resources to process all messages.
This paper proposes a virtual load balanced call admission controller (VLB-CAC) for the cloud-
hosted SIP servers. VLB-CAC determines the optimal “call admission rates” and “signaling
paths” for admitted calls along with the optimal allocation of CPU and memory resources of the
SIP servers. This optimal solution is derived through a new linear programming model. This
model requires certain critical information about the SIP servers as input. Further, VLB-CAC is equipped
with an autoscaler to overcome resource limitations. The proposed scheme is implemented in
SAVI (Smart Applications on Virtual Infrastructure) which serves as a virtual testbed. An
assessment of the numerical and experimental results demonstrates the efficiency of the proposed
work.
IEEE Transactions on Network and Service Management (May 2016)
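As a hedged illustration of the resource-constrained admission step, the sketch below admits calls greedily while CPU and memory caps hold. The capacities and per-call costs are invented, and VLB-CAC itself derives the optimal rates from a linear program rather than this greedy stand-in.

```python
def admit_calls(calls, cpu_cap, mem_cap, cpu_per_call, mem_per_call):
    """Return how many of the requested calls one SIP server can admit."""
    admitted = 0
    cpu_used = mem_used = 0.0
    for _ in range(calls):
        # Admit the next call only if both resource caps still hold.
        if cpu_used + cpu_per_call <= cpu_cap and mem_used + mem_per_call <= mem_cap:
            admitted += 1
            cpu_used += cpu_per_call
            mem_used += mem_per_call
        else:
            break
    return admitted

# A server with 100 CPU units and 80 memory units, each call costing 2.0 / 1.5:
capacity = admit_calls(100, 100, 80, 2.0, 1.5)
```

Here CPU is the binding resource (100 / 2.0 = 50 calls), so the server saturates before memory does; an autoscaler, as in the abstract, would add capacity at that point.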
Cryptoleq: A Heterogeneous Abstract Machine for Encrypted and Unencrypted
Computation
Abstract - The rapid expansion and increased popularity of cloud computing comes with no
shortage of privacy concerns about outsourcing computation to semi-trusted parties. Leveraging
the power of encryption, in this paper we introduce Cryptoleq: an abstract machine based on the
concept of One Instruction Set Computer, capable of performing general-purpose computation
on encrypted programs. The program operands are protected using the Paillier partially
homomorphic cryptosystem, which supports addition on the encrypted domain. Full
homomorphism over addition and multiplication, which is necessary for enabling general-
purpose computation, is achieved by inventing a heuristically obfuscated software re-encryption
module written using Cryptoleq instructions and blended into the executing program. Cryptoleq
is heterogeneous, allowing mixing encrypted and unencrypted instruction operands in the same
program memory space. Programming with Cryptoleq is facilitated using an enhanced assembly
language that allows development of any advanced algorithm on encrypted datasets. In our
evaluation, we compare Cryptoleq's performance against a popular fully homomorphic
encryption library, and demonstrate correctness using a typical Private Information Retrieval
problem.
IEEE Transactions on Information Forensics and Security (May 2016)
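The Paillier additive property that Cryptoleq builds on can be demonstrated directly: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts. The tiny fixed primes below are for illustration only; real deployments use large random primes.

```python
import math
import random

p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)                 # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 123, 456
c_sum = (encrypt(a) * encrypt(b)) % n2              # homomorphic addition
```

Decrypting `c_sum` recovers `(a + b) mod n` without ever exposing `a` or `b` to the party doing the multiplication, which is exactly the property Cryptoleq's re-encryption module extends toward full homomorphism.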
Incident-Supporting Visual Cloud Computing Utilizing Software-Defined Networking
Abstract - In the event of natural or man-made disasters, providing rapid situational awareness
through video/image data collected at salient incident scenes is often critical to first responders.
However, computer vision techniques that can process the media-rich and data-intensive content
obtained from civilian smartphones or surveillance cameras require large amounts of
computational resources or ancillary data sources that may not be available at the geographical
location of the incident. In this paper, we propose an incident-supporting visual cloud computing
solution by defining a collection, computation, and consumption (3C) architecture supporting fog
computing at the network edge, close to the collection/consumption sites, coupled with
offloading to core cloud computation, utilizing software-defined networking (SDN). We
evaluate our 3C architecture and algorithms using realistic virtual environment testbeds. We also
describe our insights in preparing the cloud provisioning and thin-client desktop fogs to handle
the elasticity and user mobility demands in a theater-scale application. In addition, we
demonstrate the use of SDN for on-demand compute offload with congestion-avoiding traffic
steering to enhance remote user Quality of Experience (QoE) in a regional-scale application. The
optimization between fog computing at the network edge and core cloud computing for
managing visual analytics reduces latency and congestion, and increases throughput.
IEEE Transactions on Circuits and Systems for Video Technology (May 2016)
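A minimal sketch of the edge-versus-core placement trade-off discussed above, assuming invented throughput and uplink parameters: a job is processed in the fog when local compute beats the cost of shipping data to the core cloud.

```python
def place_job(data_mb, fog_mbps_per_core, cloud_mbps_per_core,
              fog_cores, cloud_cores, uplink_mbps):
    """Return ("fog" | "cloud") and the estimated completion time in seconds."""
    fog_time = data_mb / (fog_mbps_per_core * fog_cores)
    transfer = data_mb / uplink_mbps                      # shipping data to the core
    cloud_time = transfer + data_mb / (cloud_mbps_per_core * cloud_cores)
    return ("fog", fog_time) if fog_time <= cloud_time else ("cloud", cloud_time)

# A small job over a slow uplink stays at the edge; a large batch with a fast
# uplink is worth offloading to the bigger core cluster.
site_small, _ = place_job(10, 5, 20, 2, 32, 2)
site_large, _ = place_job(1000, 5, 20, 2, 32, 200)
```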
On the Serviceability of Mobile Vehicular Cloudlets in a Large-Scale Urban Environment
Abstract - Recently, cloud computing technology has been utilized to make vehicles on roads
smarter and to offer a better driving experience. Consequently, the concept of the mobile vehicular
cloudlet (MVC) was born, where nearby smart vehicles were connected to provide cloud
computing services locally. Existing research focuses on MVC system models and architectures,
and no work to date addresses the critical question of what potential, i.e., what level of local
cloud computing service, is achievable by MVCs in real-world large-scale urban environments.
This issue is fundamental to the practical implementation of MVC technology. Answering this
question is also challenging because MVCs operate in highly complicated and dynamic
environments. In this paper, we directly address this challenging issue and we introduce the
concept of serviceability to measure the ability of an MVC to provide cloud computing service.
In particular, we evaluate this measure in practical environments through a real-world vehicular
mobility trace of Beijing. Using the time-varying graph model for mobile cloud computing under
different scenarios, we find that serviceability depends on the delay tolerance of
the undertaken computational task and can be described by two characteristic parameters. The
evolution of serviceability through a day and the influence of network congestion are also
analyzed. We also portray the spatial distribution of the serviceability and analyze the influence
of connectivity and mobility at both the MVC and vehicle levels. Our observations are valuable
for designing vehicular cloud computing systems and applications, as well as for making
offloading decisions.
IEEE Transactions on Intelligent Transportation Systems (June 2016)
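The serviceability notion can be caricatured as follows, under the simplifying assumption that one unit of work is served per second of vehicle contact; the contact windows are invented for the example.

```python
def serviceable(contacts, demand, tolerance, start=0.0):
    """contacts: (begin, end) windows during which neighbor vehicles are reachable."""
    deadline = start + tolerance
    work = 0.0
    for begin, end in contacts:
        # Clip each contact window to the task's delay-tolerance interval.
        lo, hi = max(begin, start), min(end, deadline)
        if hi > lo:
            work += hi - lo          # one work unit served per contact second
    return work >= demand

windows = [(0, 5), (7, 12)]          # two contact windows with nearby vehicles
```

This mirrors the abstract's finding that serviceability depends on the task's delay tolerance: the same contact pattern serves a demand of 8 within a tolerance of 10 seconds, but not a demand of 9.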
Traffic Load Balancing Schemes for Devolved Controllers in Mega Data Centers
Abstract - In most existing cloud services, a centralized controller is used for resource
management and coordination. However, such an infrastructure is increasingly insufficient to meet
the rapid growth of mega data centers. In recent literature, a new approach named the devolved
controller was proposed to address scalability concerns. This approach splits the whole network into
several regions, each with one controller to monitor and reroute a portion of the flows. This
technique alleviates the problem of an overloaded single controller, but brings other problems,
such as unbalanced workload among controllers and reconfiguration complexities. In this paper,
we explore the use of devolved controllers for mega data centers and design
new schemes to overcome these shortcomings and improve system performance.
We first formulate the Load Balancing problem for Devolved Controllers (LBDC) in data centers
and prove that it is NP-complete. We then design an f-approximation algorithm for LBDC, where f is the
largest number of potential controllers for a switch in the network. Furthermore, we propose both
centralized and distributed greedy approaches to solve the LBDC problem effectively. The
numerical results validate the efficiency of our schemes, which can become a solution to
monitoring, managing, and coordinating mega data centers with multiple controllers working
together.
IEEE Transactions on Parallel and Distributed Systems (June 2016)
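The centralized greedy approach mentioned above can be sketched as assigning each switch to the least-loaded of its candidate controllers; the switch loads and candidate sets below are invented for the example.

```python
def greedy_assign(switch_loads, candidates):
    """switch_loads: {switch: load}; candidates: {switch: [controllers]}."""
    controller_load = {}
    assignment = {}
    # Assign heavy switches first so large flows spread out early.
    for sw in sorted(switch_loads, key=switch_loads.get, reverse=True):
        best = min(candidates[sw], key=lambda c: controller_load.get(c, 0.0))
        assignment[sw] = best
        controller_load[best] = controller_load.get(best, 0.0) + switch_loads[sw]
    return assignment, controller_load

loads = {"s1": 5, "s2": 3, "s3": 4, "s4": 2}
cands = {"s1": ["c1", "c2"], "s2": ["c1"], "s3": ["c2"], "s4": ["c1", "c2"]}
assignment, per_controller = greedy_assign(loads, cands)
```

The candidate set per switch is what bounds the approximation factor f in the abstract: a switch with f potential controllers can be placed at most f ways, limiting how far greedy can stray from optimal.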
CavSimBase: A Database for Large Scale Comparison of Protein Binding Sites
Abstract - CavBase is a database containing information about the three-dimensional geometry
and the physicochemical properties of putative protein binding sites. Analyzing CavBase data
typically involves computing the similarity of pairs of binding sites. In contrast to sequence
alignment, however, a structural comparison of protein binding sites is a computationally
challenging problem, making large-scale studies difficult or even infeasible. One possibility to
overcome this obstacle is to precompute pairwise similarities in an all-against-all comparison,
and to make these similarities subsequently accessible to data analysis methods. Pairwise
similarities, once computed, can also be used to equip CavBase with a neighborhood
structure. Taking advantage of this structure, methods for problems such as similarity retrieval
can be implemented efficiently. In this paper, we tackle the problem of performing an all-
against-all comparison using CavBase, consisting of more than 200,000 protein cavities, by
means of parallel computation and cloud computing techniques. We present the conceptual
design and technical realization of a large-scale study to create a similarity database called
CavSimBase. We illustrate how CavSimBase is constructed, accessed, and used to answer
biological questions by data analysis and similarity retrieval.
IEEE Transactions on Knowledge and Data Engineering (June 2016)
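A toy version of the all-against-all precomputation, with Jaccard similarity over invented property sets standing in for the real structural comparison and a thread pool standing in for the cloud-scale parallelism:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def jaccard(pair):
    """Similarity of two indexed cavities modeled as property sets."""
    (i, a), (j, b) = pair
    return (i, j, len(a & b) / len(a | b))

def all_against_all(cavities, workers=4):
    """Precompute every pairwise similarity and store it for later retrieval."""
    items = list(enumerate(cavities))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(jaccard, combinations(items, 2))
    return {(i, j): s for i, j, s in results}

cavities = [{"donor", "acceptor", "metal"}, {"donor", "acceptor"}, {"hydrophobic"}]
sims = all_against_all(cavities)
```

Once the table exists, similarity retrieval is a dictionary lookup rather than a fresh structural comparison, which is the point of precomputing CavSimBase.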
Failure Diagnosis for Distributed Systems using Targeted Fault Injection
Abstract - This paper introduces a novel approach to automating failure diagnostics in
distributed systems by combining fault injection and data analytics. We use fault injection to
populate a database of failures for a target distributed system. When a failure is reported from
a production environment, the database is queried to find “matched” failures generated by fault
injections. Relying on the assumption that similar faults generate similar failures, we use
information from the matched failures as hints to locate the actual root cause of the reported
failures. In order to implement this approach, we introduce techniques for (i) reconstructing end-
to-end execution flows of distributed software components, (ii) computing the similarity of the
reconstructed flows, and (iii) performing precise fault injection at pre-specified execution points
in distributed systems. We have evaluated our approach using an OpenStack cloud platform, a
popular cloud infrastructure management system. Our experimental results showed that this
approach is effective in determining the root causes, e.g., fault types and affected components,
for 71-100% of tested failures. Furthermore, it can provide fault locations close to actual ones
and can easily be used to find and fix actual root causes. We have also validated this technique
by localizing real bugs that occurred in OpenStack.
IEEE Transactions on Parallel and Distributed Systems (June 2016)
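The matching step can be sketched as follows, with difflib's `SequenceMatcher` standing in for the paper's flow-similarity measure and an invented two-entry fault database:

```python
from difflib import SequenceMatcher

def match_failure(reported_flow, injected_db, top_k=1):
    """injected_db: {fault_label: flow}; return the best-matching fault labels."""
    scored = sorted(
        injected_db.items(),
        key=lambda kv: SequenceMatcher(None, reported_flow, kv[1]).ratio(),
        reverse=True,
    )
    return [label for label, _ in scored[:top_k]]

# Flows recorded under targeted fault injection (labels and events invented).
db = {
    "nova-api crash": ["auth", "schedule", "boot", "ERROR"],
    "network timeout": ["auth", "schedule", "net_alloc", "TIMEOUT"],
}
report = ["auth", "schedule", "net_alloc", "TIMEOUT"]
```

This rests on the paper's assumption that similar faults generate similar failures: the closest injected flow is only a hint at the root cause, not a proof.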
Visual Analysis of Cloud Computing Performance Using Behavioral Lines
Abstract - Cloud computing is an essential technology to Big Data analytics and services. A
cloud computing system is often comprised of a large number of parallel computing and storage
devices. Monitoring the usage and performance of such a system is important for efficient
operations, maintenance, and security. Tracing every application on a large cloud system is
untenable due to scale and privacy issues. But profile data can be collected relatively efficiently
by regularly sampling the state of the system, including properties such as CPU load, memory
usage, network usage, and others, creating a set of multivariate time series for each system.
Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper,
we present a visual-based analysis approach to understanding and analyzing the performance and
behavior of cloud computing systems. Our design is based on similarity measures and a layout
method to portray the behavior of each compute node over time. When visualizing a large
number of behavioral lines together, distinct patterns often appear, suggesting particular types of
performance bottlenecks. The resulting system provides multiple linked views, which allow the
user to interactively explore the data by examining the data or a selected subset at different levels
of detail. Our case studies, which use datasets collected from two different cloud systems, show
that this visual-based approach is effective in identifying trends and anomalies in the systems.
IEEE Transactions on Visualization and Computer Graphics (June 2016)
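A minimal sketch of the behavioral-line similarity idea: each node contributes a sampled metric series, and the node farthest on average from its peers is flagged. The Euclidean distance and the CPU series below are illustrative choices, not the paper's actual measures.

```python
import math

def distance(a, b):
    """Euclidean distance between two equally sampled time series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_anomalous(series_by_node):
    """Return the node whose series is farthest, on average, from the others."""
    def avg_dist(node):
        others = [s for n, s in series_by_node.items() if n != node]
        return sum(distance(series_by_node[node], s) for s in others) / len(others)
    return max(series_by_node, key=avg_dist)

cpu = {
    "node1": [0.2, 0.3, 0.25, 0.3],
    "node2": [0.25, 0.28, 0.3, 0.27],
    "node3": [0.9, 0.95, 0.92, 0.99],   # behaves unlike its peers
}
```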
Cloud-based Video Actor Identification with Batch-Orthogonal Local-Sensitive Hashing
and Sparse Representation
Abstract - Recognizing and retrieving multimedia content with movie/TV-series actors,
especially querying actor-specific videos in large scale video dataset has attracted much attention
in both the video processing and computer vision research fields. However, many existing methods
have low efficiency in both training and testing and deliver less than satisfactory performance.
Considering these challenges, in this paper, we propose an efficient cloud-based actor
identification approach with Batch- Orthogonal Local-Sensitive Hashing (BOLSH) and Multi-
Task Joint Sparse Representation Classification (MTJSRC). Our approach is characterized by: (i)
videos from movies/TV series are segmented into shots with cloud-based shot boundary
detection; (ii) as faces in each shot are detected and tracked, the cloud-based BOLSH is
applied to these faces for feature description; (iii) sparse representation is then adopted
for actor identification in each shot; (iv) finally, a simple application, actor-specific shot
retrieval, is realized to verify our approach. We conduct extensive experiments and empirical
evaluations on a large-scale dataset to demonstrate the satisfactory performance of our approach
considering both accuracy and efficiency.
IEEE Transactions on Multimedia (June 2016)
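The locality-sensitive hashing family that BOLSH refines can be sketched with plain random-hyperplane signatures (without the batch-orthogonality constraint); the dimensions and face-descriptor vectors below are invented.

```python
import random

def make_hasher(dim, bits, seed=7):
    """Build an LSH signature function from `bits` random hyperplanes."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
    def signature(vec):
        # One sign bit per hyperplane: which side of the plane the vector lies on.
        return tuple(
            1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
            for plane in planes
        )
    return signature

sig = make_hasher(dim=4, bits=8)
a = [0.9, 0.1, 0.4, 0.3]
s_a = sig(a)
```

Because each bit depends only on the sign of a projection, the signature is invariant to positive scaling, and vectors with a small angle between them agree on most bits, letting similar faces land in the same bucket.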
Efficient R-Tree Based Indexing Scheme for Server-Centric Cloud Storage System
Abstract - Cloud storage systems pose new challenges to the community in supporting efficient
concurrent querying tasks for various data-intensive applications, where indices always hold
important positions. In this paper, we explore a practical method to construct a two-layer
indexing scheme for multi-dimensional data in diverse server-centric cloud storage systems. We
first propose RT-HCN, an indexing scheme integrating R-tree based indexing structure and
HCN-based routing protocol. RT-HCN organizes storage and compute nodes into an HCN
overlay, one of the newly proposed server-centric data center topologies. Based on the properties
of HCN, we design a specific index mapping technique to maintain layered global indices and
corresponding query processing algorithms to support efficient query tasks. Then, we expand the
idea of RT-HCN to another server-centric data center topology, DCell, discovering a potential
generalized and feasible way of deploying two-layer indexing schemes on other server-centric
networks. Furthermore, we prove theoretically that RT-HCN is both space-efficient and query-
efficient: each node maintains a tolerable number of global indices, while highly
concurrent queries can be processed within acceptable overhead. We finally conduct targeted
experiments on Amazon's EC2 platform, comparing our design with RT-CAN, a similar
indexing scheme for traditional P2P networks. The results validate the query efficiency, especially
the speedup of point query of RT-HCN, depicting its potential applicability in future data
centers.
IEEE Transactions on Knowledge and Data Engineering (June 2016)
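The global-index lookup idea can be caricatured as routing a point query only to the nodes whose published minimum bounding rectangle (MBR) contains the point; node names and rectangles are invented for the example.

```python
def contains(mbr, point):
    """True if the (xlo, ylo, xhi, yhi) rectangle contains the (x, y) point."""
    xlo, ylo, xhi, yhi = mbr
    x, y = point
    return xlo <= x <= xhi and ylo <= y <= yhi

def route_point_query(global_index, point):
    """global_index: {node: mbr}; return the nodes that must be queried."""
    return sorted(node for node, mbr in global_index.items() if contains(mbr, point))

# Each node publishes the MBR of its local R-tree into the global index layer.
index = {"n1": (0, 0, 5, 5), "n2": (3, 3, 9, 9), "n3": (10, 0, 12, 2)}
```

Pruning non-overlapping nodes before touching any local R-tree is what makes point queries cheap in a two-layer scheme: most nodes never see the query at all.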
Faster Learning and Adaptation in Security Games by Exploiting Information Asymmetry
Abstract - With the advancement of modern technologies, the security battle between a
legitimate system (LS) and an adversary is becoming increasingly sophisticated, involving
complex interactions in unknown dynamic environments. Stochastic game (SG), together with
multi-agent reinforcement learning (MARL), offers a systematic framework for the study of
information warfare in current and emerging cyber-physical systems. In practical security games,
each player usually has only incomplete information about the opponent, which induces
information asymmetry. This paper exploits information asymmetry from a new angle,
considering how to exploit information unknown to the opponent to the player's advantage. Two
new MARL algorithms, termed minimax post-decision state (minimax-PDS) and Win-or-Learn
Fast post-decision state (WoLF-PDS), are proposed, which enable the LS to learn and adapt
faster in dynamic environments by exploiting its information advantage. The proposed
algorithms are provably convergent and rational. Numerical results are also presented to
show their effectiveness in three important applications.
IEEE Transactions on Signal Processing (July 2016)
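The minimax operator at the core of minimax-style MARL can be illustrated on a zero-sum matrix game: the legitimate system picks the action maximizing its worst-case payoff over the adversary's responses. The payoff matrix is invented, and this omits the learning and post-decision-state machinery of the proposed algorithms.

```python
def maximin_action(payoff):
    """payoff[i][j]: LS payoff when LS plays action i and the adversary plays j."""
    worst_case = [min(row) for row in payoff]            # adversary's best reply per row
    best = max(range(len(payoff)), key=lambda i: worst_case[i])
    return best, worst_case[best]

# Rows: LS defenses; columns: adversary attacks (values invented).
payoff = [
    [3, -1, 2],
    [1,  1, 1],
    [4, -2, 0],
]
```

Here the safe middle action wins: it guarantees a payoff of 1 regardless of the attack, while the higher-reward rows can be punished by a well-chosen adversary response.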
SUPPORT OFFERED TO REGISTERED STUDENTS:
1. IEEE Base paper.
2. Review material as per the individual's university guidelines
3. Future Enhancement
4. Assistance in answering all critical questions
5. Training on programming language
6. Complete Source Code.
7. Final Report / Document
8. International Conference / International Journal Publication on your Project.
FOLLOW US ON FACEBOOK @ TSYS Academic Projects