S3 Infotech CSE IEEE Projects 2016-2017
S3 INFOTECH +91 988 48 48 198
#10/1, Jones Road, Saidapet, Chennai – 15. Mob: 9884848198.
www.s3computers.com E-Mail: info@s3computers.com
CSE IEEE PROJECTS -
2016-2017
S3I_DN16_001 - Dynamic and Public Auditing with Fair Arbitration for Cloud
Data
Cloud users no longer physically possess their data, so how to ensure the
integrity of their outsourced data becomes a challenging task. Recently proposed
schemes such as “provable data possession” and “proofs of retrievability” are
designed to address this problem, but they target static archive data and
therefore lack support for data dynamics. Moreover, threat models in these schemes
usually assume an honest data owner and focus on detecting a dishonest cloud service
provider despite the fact that clients may also misbehave. This paper proposes a public
auditing scheme with data dynamics support and fairness arbitration of potential
disputes. In particular, we design an index switcher to eliminate the limitation of
index usage in tag computation in current schemes and achieve efficient handling of
data dynamics. To address the fairness problem so that no party can misbehave
without being detected, we further extend existing threat models and adopt the
signature-exchange idea to design fair arbitration protocols, so that any possible dispute can be
fairly settled. The security analysis shows our scheme is provably secure, and the
performance evaluation demonstrates that the overheads of data dynamics and dispute
arbitration are reasonable.
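As an illustration of the index-switcher idea described above, the minimal Python sketch below (our own simplification, not the paper's construction; the class name `IndexSwitcher` is hypothetical) keeps a mapping from a block's logical position to the index used in its tag, so inserting or deleting a block never forces other blocks' tags to be recomputed.

```python
# Toy index switcher: tags are bound to a stable "tag index" while logical
# block positions may shift under insert/delete.
class IndexSwitcher:
    def __init__(self, num_blocks):
        # logical position i -> tag index; initially the identity mapping
        self.table = list(range(num_blocks))
        self.next_tag_index = num_blocks

    def tag_index(self, position):
        return self.table[position]

    def insert(self, position):
        # a fresh tag index is allotted; existing tags stay valid
        self.table.insert(position, self.next_tag_index)
        self.next_tag_index += 1

    def delete(self, position):
        # only the mapping shrinks; no other block's tag is touched
        del self.table[position]

sw = IndexSwitcher(4)          # blocks 0..3 carry tag indices 0..3
sw.insert(1)                   # new block at logical position 1
assert sw.tag_index(1) == 4    # fresh tag index for the new block
assert sw.tag_index(2) == 1    # shifted block keeps its old tag index
sw.delete(0)
assert sw.tag_index(0) == 4
```

The point of the indirection is that data dynamics only edits this small table; the (expensive) homomorphic tags over the blocks themselves are untouched.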
S3I_DN16_002 - Enabling Cloud Storage Auditing with Verifiable Outsourcing
of Key Updates
Key-exposure resistance has always been an important issue for in-depth cyber
defence in many security applications. Recently, how to deal with the key-exposure
problem in the setting of cloud storage auditing has been proposed and studied. To
address the challenge, existing solutions all require the client to update his secret keys
in every time period, which inevitably introduces new local burdens for the client,
especially one with limited computation resources, such as a mobile phone. In this
paper, we focus on how to make the key updates as transparent as possible for the
client and propose a new paradigm called cloud storage auditing with verifiable
outsourcing of key updates. In this paradigm, key updates can be safely outsourced to
some authorized party, and thus the key-update burden on the client will be kept
minimal. In particular, we leverage the third party auditor (TPA) in many existing
public auditing designs, let it play the role of authorized party in our case, and make it
in charge of both the storage auditing and the secure key updates for key-exposure
resistance. In our design, TPA only needs to hold an encrypted version of the client's
secret key while doing all these burdensome tasks on behalf of the client. The client
only needs to download the encrypted secret key from the TPA when uploading new
files to the cloud. Besides, our design also equips the client with the capability to further
verify the validity of the encrypted secret keys provided by the TPA. All these salient
features are carefully designed to make the whole auditing procedure with key
exposure resistance as transparent as possible for the client. We formalize the
definition and the security model of this paradigm. The security proof and the
performance simulation show that our detailed design instantiations are secure and
efficient.
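The paper's verifiable key-update scheme is more involved, but the core idea of letting a client check a period key against one short stored value can be illustrated with a classic S/KEY-style backward hash chain (our own stand-in, not the paper's construction):

```python
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def make_chain(seed, periods):
    # Backward hash chain: the key for period t is chain[t]; the public
    # verifier is chain[0] = H applied `periods` times to the seed.
    chain = [None] * (periods + 1)
    chain[periods] = H(seed)
    for t in range(periods - 1, -1, -1):
        chain[t] = H(chain[t + 1])
    return chain

periods = 10
chain = make_chain(b"client-seed", periods)
verifier = chain[0]            # the only value the client must remember

# The authorized party hands over the (notionally encrypted) period-5 key;
# the client verifies it by hashing forward 5 times.
claimed = chain[5]
check = claimed
for _ in range(5):
    check = H(check)
assert check == verifier       # valid key for period 5

bogus = H(b"wrong")
check = bogus
for _ in range(5):
    check = H(check)
assert check != verifier       # an invalid key is detected
```

This captures only the verifiability aspect: the client outsources the heavy per-period work yet can cheaply confirm each delivered key.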
S3I_DN16_003 - Providing User Security Guarantees in Public Infrastructure
Clouds
The infrastructure cloud (IaaS) service model offers improved resource
flexibility and availability, where tenants – insulated from the minutiae of hardware
maintenance – rent computing resources to deploy and operate complex systems.
Large-scale services running on IaaS platforms demonstrate the viability of this
model; nevertheless, many organizations operating on sensitive data avoid migrating
operations to IaaS platforms due to security concerns. In this paper, we describe a
framework for data and operation security in IaaS, consisting of protocols for a trusted
launch of virtual machines and domain-based storage protection. We continue with an
extensive theoretical analysis with proofs about protocol resistance against attacks in
the defined threat model. The protocols allow trust to be established by remotely
attesting host platform configuration prior to launching guest virtual machines and
ensure confidentiality of data in remote storage, with encryption keys maintained
outside of the IaaS domain. Presented experimental results demonstrate the validity
and efficiency of the proposed protocols. The framework prototype was implemented
on a test bed operating a public electronic health record system, showing that the
proposed protocols can be integrated into existing cloud environments.
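The trusted-launch protocol above rests on remotely attesting the host's measured boot state before a guest VM is placed on it. A minimal TPM-style sketch (our own simplification; `measure` and the whitelist are illustrative, not the paper's protocol) looks like this:

```python
import hashlib

def measure(log):
    # TPM-style extend: fold each boot component's hash into a running digest,
    # so the final value commits to the whole ordered measurement log.
    pcr = b"\x00" * 32
    for component in log:
        pcr = hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()
    return pcr

good_log = [b"bios-1.2", b"hypervisor-3.0", b"kernel-4.4"]
whitelist = {measure(good_log)}      # known-good platform configurations

def launch_allowed(host_log):
    # only hosts whose attested configuration matches the whitelist may
    # receive the tenant's VM (and, by extension, its storage keys)
    return measure(host_log) in whitelist

assert launch_allowed(good_log)
assert not launch_allowed([b"bios-1.2", b"hypervisor-EVIL", b"kernel-4.4"])
```

Because the digest chains over the ordered log, any modified or reordered component yields a different measurement and the launch is refused.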
S3I_DN16_004 - Service Usage Classification with Encrypted Internet Traffic in
Mobile Messaging Apps
The rapid adoption of mobile messaging Apps has enabled us to collect massive
amounts of encrypted Internet traffic of mobile messaging. Classifying this
traffic into different types of in-App service usage can help with intelligent network
management, such as managing network bandwidth budget and providing quality of
services. Traditional approaches for classification of Internet traffic rely on packet
inspection, such as parsing HTTP headers. However, messaging Apps are increasingly
using secure protocols, such as HTTPS and SSL, to transmit data. This imposes
significant challenges on the performances of service usage classification by packet
inspection. To this end, in this paper, we investigate how to exploit encrypted Internet
traffic for classifying in-App usages. Specifically, we develop a system, named
CUMMA, for classifying service usages of mobile messaging Apps by jointly
modeling user behavioral patterns, network traffic characteristics and temporal
dependencies. Along this line, we first segment Internet traffic from traffic-flows into
sessions with a number of dialogs in a hierarchical way. Also, we extract the
discriminative features of traffic data from two perspectives: (i) packet length and (ii)
time delay. Next, we learn a service usage predictor to classify these segmented
dialogs into single-type usages or outliers. In addition, we design a clustering Hidden
Markov Model (HMM) based method to detect mixed dialogs from outliers and
decompose mixed dialogs into sub-dialogs of single-type usage. Indeed, CUMMA
enables mobile analysts to identify service usages and analyze end-user in-App
behaviors even for encrypted Internet traffic. Finally, the extensive experiments on
real-world messaging data demonstrate the effectiveness and efficiency of the
proposed method for service usage classification.
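The first step of the pipeline above, cutting a traffic flow into dialogs, can be sketched with a simple inter-packet-gap rule (a simplification of CUMMA's hierarchical segmentation; the threshold value is an assumption for illustration):

```python
# Split a packet timestamp trace into dialogs wherever the gap between
# consecutive packets exceeds a silence threshold (seconds).
def segment(timestamps, gap=2.0):
    dialogs, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] > gap:
            dialogs.append(current)   # silence observed: close the dialog
            current = [t]
        else:
            current.append(t)
    dialogs.append(current)
    return dialogs

trace = [0.0, 0.3, 0.9, 5.0, 5.2, 11.0]
assert segment(trace) == [[0.0, 0.3, 0.9], [5.0, 5.2], [11.0]]
```

Each resulting dialog would then be summarized by features such as packet lengths and time delays before classification.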
S3I_DN16_005 - Text Mining the Contributors to Rail Accidents
Rail accidents represent an important safety concern for the transportation
industry in many countries. In the 11 years from 2001 to 2012, the U.S. had more than
40,000 rail accidents that cost more than $45 million. While most of the accidents
during this period had very little cost, about 5,200 had damages in excess of $141,500.
To better understand the contributors to these extreme accidents, the Federal Railroad
Administration has required the railroads involved in accidents to submit reports that
contain both fixed field entries and narratives that describe the characteristics of the
accident. While a number of studies have looked at the fixed fields, none have done
an extensive analysis of the narratives. This paper describes the use of text mining
with a combination of techniques to automatically discover accident characteristics
that can inform a better understanding of the contributors to the accidents. The study
evaluates the efficacy of text mining of accident narratives by assessing predictive
performance for the costs of extreme accidents. The results show that predictive
accuracy for accident costs significantly improves through the use of features found
by text mining and predictive accuracy further improves through the use of modern
ensemble methods. Importantly, this study also shows through case examples how the
findings from text mining of the narratives can improve understanding of the
contributors to rail accidents in ways not possible through only fixed field analysis of
the accident reports.
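Text mining accident narratives typically starts by turning free text into weighted term features. A self-contained TF-IDF sketch (a generic illustration, not the paper's exact feature set; the toy narratives are invented):

```python
import math
from collections import Counter

def tfidf(docs):
    # docs: list of token lists; returns one {term: tf*idf} dict per document
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))             # document frequency per term
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: tf[t] / len(d) * math.log(n / df[t]) for t in tf})
    return out

narratives = [
    "train struck hi-rail vehicle at crossing".split(),
    "derailment caused by broken rail at switch".split(),
    "train struck maintenance vehicle at yard".split(),
]
w = tfidf(narratives)
# a term occurring in every narrative carries zero weight
assert w[0]["at"] == 0.0
# a rare, distinctive term outweighs a common one
assert w[1]["derailment"] > w[0]["struck"]
```

Features like these, concatenated with the fixed-field entries, are what the ensemble predictors for accident cost would consume.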
S3I_DN16_006 - MMBcloud-tree: Authenticated Index for Verifiable Cloud
Service Selection
Cloud brokers have been recently introduced as an additional computational
layer to facilitate cloud selection and service management tasks for cloud consumers.
However, existing brokerage schemes on cloud service selection typically assume that
brokers are completely trusted, and do not provide any guarantee over the correctness
of the service recommendations. It is then possible for a compromised or dishonest
broker to easily take advantage of the limited capabilities of the clients and provide
incorrect or incomplete responses. To address this problem, we propose an innovative
Cloud Service Selection Verification (CSSV) scheme and index structures
(MMBcloud-tree) to enable cloud clients to detect misbehavior of the cloud brokers
during the service selection process. We demonstrate correctness and efficiency of our
approaches both theoretically and empirically.
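The verification idea behind an authenticated index can be illustrated with a plain Merkle tree over the broker's service records (a simplification: MMBcloud-tree is a richer B-tree-style structure, but the authentication-path check is the same flavor; the service records are invented):

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, idx):
    # authentication path (sibling hash, is-right-child flag) for leaf idx
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[idx ^ 1], idx % 2))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sib, is_right in path:
        node = h(sib + node) if is_right else h(node + sib)
    return node == root

recs = [b"svc-A:99.9", b"svc-B:98.0", b"svc-C:97.5", b"svc-D:95.0"]
root = merkle_root(recs)                # published by the cloud providers
path = prove(recs, 2)                   # returned by the broker with its answer
assert verify(b"svc-C:97.5", path, root)
assert not verify(b"svc-C:11.1", path, root)   # tampered recommendation fails
```

The client needs only the signed root: any broker response that omits or alters a record fails the path check.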
S3I_DN16_007 - Identity-Based Proxy-Oriented Data Uploading and Remote
Data Integrity Checking in Public Cloud
With the rapid development of cloud computing, more and more clients would
like to store their data on public cloud servers (PCSs). New security problems
have to be solved in order to help more clients process their data in the public cloud.
When the client is restricted from accessing the PCS, he will delegate a proxy to
process his data and upload it. On the other hand, remote data integrity checking is
also an important security problem in public cloud storage. It enables clients to check
whether their outsourced data are kept intact without downloading the whole data.
Motivated by these security problems, we propose a novel proxy-oriented data
uploading and remote data
integrity checking model in identity-based public key cryptography: identity-based
proxy-oriented data uploading and remote data integrity checking in public cloud (ID-
PUIC). We give the formal definition, system model, and security model. Then, a
concrete ID-PUIC protocol is designed using the bilinear pairings. The proposed ID-
PUIC protocol is provably secure based on the hardness of computational Diffie-
Hellman problem. Our ID-PUIC protocol is also efficient and flexible. Based on the
original client's authorization, the proposed ID-PUIC protocol can realize private
remote data integrity checking, delegated remote data integrity checking, and public
remote data integrity checking.
S3I_DN16_008 - Fine-grained Two-factor Access Control for Web-based Cloud
Computing Services
In this paper, we introduce a new fine-grained two-factor authentication (2FA)
access control system for web-based cloud computing services. Specifically, in our
proposed 2FA access control system, an attribute-based access control mechanism is
implemented with the necessity of both a user secret key and a lightweight security
device. As a user cannot access the system if they do not hold both, the mechanism
can enhance the security of the system, especially in those scenarios where many
users share the same computer for web-based cloud services. In addition, attribute-
based control in the system also enables the cloud server to restrict the access to those
users with the same set of attributes while preserving user privacy, i.e., the cloud
server only knows that the user fulfills the required predicate, but has no idea on the
exact identity of the user. Finally, we also carry out a simulation to demonstrate the
practicability of our proposed 2FA system.
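The two-factor gate itself can be sketched in a few lines: access is granted only when both the user secret key and a challenge response from the security device check out (a minimal stand-in using HMAC, not the paper's attribute-based construction; all names here are illustrative):

```python
import hmac
import hashlib

def device_response(device_secret, challenge):
    # the lightweight security device answers a fresh challenge with an HMAC
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def grant_access(user_key, response, challenge, registered):
    dev_secret, key_hash = registered
    key_ok = hashlib.sha256(user_key).digest() == key_hash
    dev_ok = hmac.compare_digest(response, device_response(dev_secret, challenge))
    return key_ok and dev_ok            # both factors are required

dev = b"device-secret"
key = b"user-secret-key"
reg = (dev, hashlib.sha256(key).digest())
ch = b"nonce-123"

assert grant_access(key, device_response(dev, ch), ch, reg)
assert not grant_access(key, b"\x00" * 32, ch, reg)            # device missing
assert not grant_access(b"wrong", device_response(dev, ch), ch, reg)
```

On a shared computer, stealing the stored key alone is useless without the physical device, which is exactly the scenario the abstract targets.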
S3I_DN16_009 - Cloud workflow scheduling with deadlines and time slot
availability
Allocating service capacities in cloud computing is based on the assumption
that they are unlimited and can be used at any time. However, from the cloud
provider's perspective, available service capacities change with workload and cannot
satisfy users' requests at all times, because cloud services can be shared by multiple
tasks. Cloud service providers offer available time slots for new users' requests
based on available capacities. In this paper, we consider workflow scheduling with
deadline and time slot availability in cloud computing. An iterated heuristic
framework is presented for the problem under study which mainly consists of initial
solution construction, improvement, and perturbation. Three initial solution
construction strategies, two greedy- and fair-based improvement strategies and a
perturbation strategy are proposed. Different strategies in the three phases result in
several heuristics. Experimental results show that different initial solution and
improvement strategies have different effects on solution qualities.
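The construct/improve/perturb loop described above can be written down as a compact skeleton. The sketch below (our own toy single-machine model with total tardiness as the objective; the real problem adds time-slot availability) shows the three phases:

```python
import random

def tardiness(order, dur, due):
    t, total = 0, 0
    for j in order:
        t += dur[j]
        total += max(0, t - due[j])
    return total

def construct(dur, due):
    # initial solution: earliest-due-date ordering
    return sorted(range(len(dur)), key=lambda j: due[j])

def improve(order, dur, due):
    # local search: adjacent swaps while they reduce tardiness
    best = order[:]
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            cand = best[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if tardiness(cand, dur, due) < tardiness(best, dur, due):
                best, improved = cand, True
    return best

def perturb(order, rng):
    # random exchange to escape the local optimum
    cand = order[:]
    i, j = rng.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]
    return cand

def iterated_heuristic(dur, due, iters=50, seed=1):
    rng = random.Random(seed)
    best = improve(construct(dur, due), dur, due)
    for _ in range(iters):
        cand = improve(perturb(best, rng), dur, due)
        if tardiness(cand, dur, due) < tardiness(best, dur, due):
            best = cand
    return best

dur = [3, 2, 4, 1]
due = [4, 9, 6, 3]
order = iterated_heuristic(dur, due)
assert tardiness(order, dur, due) <= tardiness(construct(dur, due), dur, due)
```

Swapping in different construction or improvement strategies at the marked phases yields the family of heuristics the abstract compares.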
S3I_DN16_010 - Publicly Verifiable Inner Product Evaluation over Outsourced
Data Streams under Multiple Keys
Uploading data streams to a resource-rich cloud server for inner product
evaluation, an essential building block in many popular stream applications (e.g.,
statistical monitoring), is appealing to many companies and individuals. On the other
hand, verifying the result of the remote computation plays a crucial role in addressing
the issue of trust. Since the outsourced data collection likely comes from multiple data
sources, it is desired for the system to be able to pinpoint the originator of errors by
allotting each data source a unique secret key, which requires the inner product
verification to be performed under any two parties’ different keys. However, the
present solutions either depend on a single-key assumption or on powerful yet
practically inefficient fully homomorphic cryptosystems. In this paper, we focus on the
more challenging multi-key scenario where data streams are uploaded by multiple
data sources with distinct keys. We first present a novel homomorphic verifiable tag
technique to publicly verify the outsourced inner product computation on the dynamic
data streams, and then extend it to support the verification of matrix product
computation. We prove the security of our scheme in the random oracle model.
Moreover, the experimental result also shows the practicability of our design.
S3I_DN16_011 - Inverted Linear Quadtree: Efficient Top K Spatial Keyword
Search
With advances in geo-positioning technologies and geo-location services, there is a
rapidly growing amount of spatio-textual objects collected in many applications such
as location based services and social networks, in which an object is described by its
spatial location and a set of keywords (terms). Consequently, the study of spatial
keyword search which explores both location and textual description of the objects
has attracted great attention from the commercial organizations and research
communities. In this paper, we study two fundamental problems in spatial keyword
queries: top k spatial keyword search (TOPK-SK), and batch top k spatial keyword
search (BTOPK-SK). Given a set of spatio-textual objects, a query location and a set
of query keywords, the TOPK-SK retrieves the closest k objects each of which
contains all keywords in the query. BTOPK-SK is the batch processing of sets of
TOPK-SK queries. Based on the inverted index and the linear quadtree, we propose a
novel index structure, called inverted linear quadtree (IL-Quadtree), which is carefully
designed to exploit both spatial and keyword based pruning techniques to effectively
reduce the search space. An efficient algorithm is then developed to tackle top k
spatial keyword search. To further enhance the filtering capability of the signature of
linear quadtree, we propose a partition-based method. In addition, to deal with
BTOPK-SK, we design a new computing paradigm which partitions the queries into
groups based on both spatial proximity and textual relevance between queries. We
show that the IL-Quadtree technique can also efficiently support BTOPK-SK.
Comprehensive experiments on real and synthetic data clearly demonstrate the
efficiency of our methods.
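The "linear" in linear quadtree refers to encoding each quadtree cell as a single integer by bit-interleaving the cell's x and y coordinates (a Morton/Z-order code), so spatial cells can live in an ordinary inverted index. A minimal sketch of the encoding (generic, not the IL-Quadtree implementation):

```python
def interleave(x, y, bits=16):
    # Morton code: interleave the bits of x and y; nearby cells tend to get
    # nearby codes, which is what makes range pruning on a 1-D index work.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

assert interleave(0, 0) == 0
assert interleave(1, 0) == 1          # x contributes the even bit positions
assert interleave(0, 1) == 2          # y contributes the odd bit positions
assert interleave(3, 5) == 0b100111
```

An inverted list per keyword over these codes then supports combined spatial-and-keyword pruning in one structure.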
S3I_DN16_012 - Securing SIFT: Privacy-preserving Outsourcing Computation
of Feature Extractions over Encrypted Image Data
Advances in cloud computing have greatly motivated data owners to outsource their
huge amount of personal multimedia data and/or computationally expensive tasks
onto the cloud by leveraging its abundant resources for cost saving and flexibility.
Despite the tremendous benefits, the outsourced multimedia data and its originated
applications may reveal the data owner’s private information, such as the personal
identity, locations or even financial profiles. This observation has recently aroused
new research interest on privacy-preserving computations over outsourced multimedia
data. In this paper, we propose an effective and practical privacy-preserving
computation outsourcing protocol for the prevailing scale-invariant feature transform
(SIFT) over massive encrypted image data. We first show that previous solutions to
this problem have either efficiency/security or practicality issues, and none can well
preserve the important characteristics of the original SIFT in terms of distinctiveness
and robustness. We then present a new scheme design that achieves efficiency and
security requirements simultaneously with the preservation of its key characteristics,
by randomly splitting the original image data, designing two novel efficient protocols
for secure multiplication and comparison, and carefully distributing the feature
extraction computations onto two independent cloud servers. We both carefully
analyze and extensively evaluate the security and effectiveness of our design. The
results show that our solution is practically secure, outperforms the state-of-the-art, and
performs comparably to the original SIFT in terms of various characteristics,
including rotation invariance, image scale invariance, robust matching across affine
distortion, addition of noise and change in 3D viewpoint and illumination.
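The "randomly splitting the original image data" step can be sketched with additive secret sharing: each pixel is split into two random shares, one per cloud server, and linear operations (which dominate SIFT's Gaussian-filtering stages) commute with the split. This is a simplified illustration of the splitting idea only; the paper's secure multiplication and comparison protocols are not shown:

```python
import random

MOD = 256     # 8-bit pixel values, arithmetic modulo 256

def split(pixels, rng):
    # each server alone sees uniformly random values
    s1 = [rng.randrange(MOD) for _ in pixels]
    s2 = [(p - a) % MOD for p, a in zip(pixels, s1)]
    return s1, s2

def combine(s1, s2):
    return [(a + b) % MOD for a, b in zip(s1, s2)]

rng = random.Random(7)
img = [10, 200, 55, 128]
a, b = split(img, rng)
assert combine(a, b) == img

# linearity: each server scales its own share; the combined result equals
# the scaled original, so no server ever saw the plaintext pixels
scaled = combine([x * 3 % MOD for x in a], [x * 3 % MOD for x in b])
assert scaled == [x * 3 % MOD for x in img]
```

Non-linear steps (multiplications, comparisons for keypoint detection) are what require the dedicated two-server protocols mentioned in the abstract.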
S3I_DN16_013 - A Secure and Dynamic Multi-keyword Ranked Search Scheme
over Encrypted Cloud Data
Due to the increasing popularity of cloud computing, more and more data owners are
motivated to outsource their data to cloud servers for great convenience and reduced
cost in data management. However, sensitive data should be encrypted before
outsourcing for privacy requirements, which obsoletes data utilization like keyword-
based document retrieval. In this paper, we present a secure multi-keyword ranked
search scheme over encrypted cloud data, which simultaneously supports dynamic
update operations like deletion and insertion of documents. Specifically, the vector
space model and the widely used TF-IDF model are combined in the index
construction and query generation. We construct a special tree-based index structure
and propose a “Greedy Depth-first Search” algorithm to provide efficient multi-
keyword ranked search. The secure kNN algorithm is utilized to encrypt the index and
query vectors, and meanwhile ensure accurate relevance score calculation between
encrypted index and query vectors. In order to resist statistical attacks, phantom terms
are added to the index vector to blind the search results. Due to the use of our special
tree-based index structure, the proposed scheme can achieve sub-linear search time
and deal with the deletion and insertion of documents flexibly. Extensive experiments
are conducted to demonstrate the efficiency of the proposed scheme.
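Underneath the encryption, ranking reduces to inner products between a query vector and per-document index vectors, with the top-k highest scores returned. A plaintext sketch of that core (the secure-kNN encryption layer and the tree index are omitted; documents and weights are invented):

```python
import heapq

def score(doc_vec, query_vec):
    # relevance = inner product of sparse TF-IDF-style vectors
    return sum(doc_vec.get(t, 0.0) * w for t, w in query_vec.items())

def topk(index, query_vec, k):
    return heapq.nlargest(k, index, key=lambda d: score(index[d], query_vec))

index = {
    "doc1": {"cloud": 0.8, "search": 0.1},
    "doc2": {"cloud": 0.3, "encrypt": 0.9},
    "doc3": {"search": 0.7, "encrypt": 0.4},
}
q = {"cloud": 1.0, "encrypt": 1.0}
assert topk(index, q, 2) == ["doc2", "doc1"]
```

The scheme's contribution is that the same inner products can be computed between *encrypted* index and query vectors, and that the tree index prunes most documents so the search is sub-linear.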
S3I_DN16_014 - Protecting Your Right: Verifiable Attribute-based Keyword
Search with Fine-grained Owner-enforced Search Authorization in the Cloud
Search over encrypted data is a critically important enabling technique in cloud
computing, where encryption-before-outsourcing is a fundamental solution to
protecting user data privacy in the untrusted cloud server environment. Many secure
search schemes have been focusing on the single-contributor scenario, where the
outsourced dataset or the secure searchable index of the dataset are encrypted and
managed by a single owner, typically based on symmetric cryptography. In this paper,
we focus on a different yet more challenging scenario, where the outsourced dataset
can be contributed by multiple owners and is searchable by multiple users, i.e., the
multi-user, multi-contributor case. Inspired by attribute-based encryption (ABE), we
present the first attribute-based keyword search scheme with efficient user revocation
(ABKS-UR) that enables scalable fine-grained (i.e. file-level) search authorization.
Our scheme allows multiple owners to encrypt and outsource their data to the cloud
server independently. Users can generate their own search capabilities without relying
on an always online trusted authority. Fine-grained search authorization is also
implemented by the owner-enforced access policy on the index of each file. Further,
by incorporating proxy re-encryption and lazy re-encryption techniques, we are able
to delegate heavy system update workload during user revocation to the resourceful
semi-trusted cloud server. We formalize the security definition and prove the
proposed ABKS-UR scheme selectively secure against chosen-keyword attack. To
build confidence of data user in the proposed secure search system, we also design a
search result verification scheme. Finally, the performance evaluation demonstrates
the efficiency of our scheme.
S3I_DN16_015 - Secure Data Analytics for Cloud-Integrated Internet of Things
Applications
Cloud-integrated Internet of Things (IoT) is emerging as the next-generation
service platform that enables smart functionality worldwide. IoT applications such as
smart grid and power systems, e-health, and body monitoring applications along with
large-scale environmental and industrial monitoring are increasingly generating large
amounts of data that can conveniently be analyzed through cloud service provisioning.
However, the nature of these applications mandates the use of secure and privacy-
preserving implementation of services that ensures the integrity of data without any
unwarranted exposure. This article explores the unique challenges and issues within
this context of enabling secure cloud-based data analytics for the IoT. Three main
applications are discussed in detail, with solutions outlined based on the use of fully
homomorphic encryption systems to achieve data security and privacy over cloud-
based analytical phases. The limitations of existing technologies are discussed and
models proposed with regard to achieving high efficiency and accuracy in the
provisioning of analytic services for encrypted data over a cloud platform.
S3I_DN16_016 - A Low-Cost Low-Power Ring Oscillator-based Truly Random
Number Generator for Encryption on Smart Cards
The design of a low-cost low-power ring oscillator-based truly random number
generator (TRNG) macro-cell, suitable to be integrated in smart cards, is presented.
The oscillator sampling technique is exploited and a tetrahedral oscillator with large
jitter has been employed to realize the TRNG. Techniques to improve the statistical
quality of the ring oscillator-based TRNGs' bit sequences have been presented and
verified by simulation and measurement. A post digital processor is added to further
enhance the randomness of the output bits. Fabricated in an HHNEC 0.13-μm
standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2.
Powered by a single 1.8 V supply voltage, the TRNG has a power consumption of
40 μW. The bit rate of the TRNG after post-processing is 100 kb/s. The proposed
TRNG has been made into an IP and successfully applied in an SD card for
encryption applications. The proposed TRNG has passed the NIST tests and
Diehard tests.
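Post-processing of raw oscillator bits is commonly illustrated with the classic von Neumann corrector, which removes bias at the cost of throughput (the paper's post digital processor is its own design; this is a generic stand-in):

```python
def von_neumann(bits):
    # debias raw TRNG output: read non-overlapping pairs;
    # 01 -> emit 0, 10 -> emit 1, 00/11 -> discard
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

raw = [0, 1, 1, 1, 1, 0, 0, 0, 0, 1]
assert von_neumann(raw) == [0, 1, 0]
```

If the raw source is biased but each pair is independent, 01 and 10 are equally likely, so the emitted bits are unbiased; the discarded pairs explain why the post-processed bit rate (100 kb/s here) is lower than the raw sampling rate.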
S3I_DN16_017 - Encrypted Data Management with Deduplication in Cloud
Computing
Cloud computing offers a new way to deliver services by rearranging resources over
the Internet and providing them to users on demand. It plays an important role in
supporting data storage, processing, and management in the Internet of Things (IoT).
Various cloud service providers (CSPs) offer huge volumes of storage to maintain and
manage IoT data, which can include videos, photos, and personal health records.
These CSPs provide desirable service properties, such as scalability, elasticity, fault
tolerance, and pay per use. Thus, cloud computing has become a promising service
paradigm to support IoT applications and IoT system deployment. To ensure data
privacy, existing research proposes to outsource only encrypted data to CSPs.
However, the same or different users could save duplicated data under different
encryption schemes at the cloud. Although cloud storage space is huge, this kind of
duplication wastes networking resources, consumes excess power, and complicates
data management. Thus, saving storage is becoming a crucial task for CSPs.
Deduplication can achieve high space and cost savings, reducing storage needs by up
to 90 to 95 percent for backup applications (http://opendedup.org) and by up to 68
percent in standard file systems. Obviously, the savings, which can be passed back
directly or indirectly to cloud users, are significant to the economics of cloud
business. At the same time, data owners want CSPs to protect their personal data from
unauthorized access. CSPs should therefore perform access control based on the data
owner’s expectations. In addition, data owners want to control not only data access
but also its storage and usage. From a flexibility viewpoint, data deduplication should
cooperate with data access control mechanisms. That is, the same data, although in an
encrypted form, is only saved once at the cloud but can be accessed by different users
based on the data owners’ policies.
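The standard way to reconcile encryption with deduplication is convergent encryption: derive the key from the data itself, so identical plaintexts yield identical ciphertexts and collapse to one stored copy. A toy sketch (illustrative only; the XOR keystream cipher stands in for a real deterministic scheme, and access control is omitted):

```python
import hashlib

def keystream(key, n):
    # deterministic byte stream derived from the key (toy construction)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(data):
    # key = hash of the plaintext: same data -> same key -> same ciphertext
    key = hashlib.sha256(data).digest()
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    return key, ct

store = {}

def upload(data):
    key, ct = convergent_encrypt(data)
    tag = hashlib.sha256(ct).hexdigest()
    store[tag] = ct                 # duplicate uploads collapse to one copy
    return key, tag

k1, t1 = upload(b"same photo bytes")
k2, t2 = upload(b"same photo bytes")
assert t1 == t2 and len(store) == 1     # deduplicated across uploaders
k3, t3 = upload(b"different bytes!!")
assert len(store) == 2
```

Each owner keeps only the short key; the CSP stores one ciphertext per distinct plaintext, which is the flexibility-versus-dedup trade-off the abstract discusses.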
S3I_DN16_018 - Dual-Server Public-Key Encryption with Keyword Search for
Secure Cloud Storage
Searchable encryption is of increasing interest for protecting the data privacy in
secure searchable cloud storage. In this work, we investigate the security of a well-
known cryptographic primitive, namely Public Key Encryption with Keyword Search
(PEKS) which is very useful in many applications of cloud storage. Unfortunately, it
has been shown that the traditional PEKS framework suffers from an inherent
insecurity called inside Keyword Guessing Attack (KGA) launched by the malicious
server. To address this security vulnerability, we propose a new PEKS framework
named Dual-Server Public Key Encryption with Keyword Search (DS-PEKS). As
another main contribution, we define a new variant of the Smooth Projective Hash
Functions (SPHFs) referred to as linear and homomorphic SPHF (LH-SPHF). We
then show a generic construction of secure DS-PEKS from LH-SPHF. To illustrate
the feasibility of our new framework, we provide an efficient instantiation of the
general framework from a DDH-based LH-SPHF and show that it can achieve the
strong security against inside KGA.
S3I_DN16_019 - A recommendation system based on hierarchical clustering of
an article-level citation network
The scholarly literature is expanding at a rate that necessitates intelligent algorithms
for search and navigation. For the most part, the problem of delivering scholarly
articles has been solved. If one knows the title of an article, locating it requires little
effort and, paywalls permitting, acquiring a digital copy has become trivial. However,
the navigational aspect of scientific search – finding relevant, influential articles that
one does not know exist – is in its early development. In this paper, we introduce
Eigenfactor Recommends – a citation-based method for improving scholarly
navigation. The algorithm uses the hierarchical structure of scientific knowledge,
making possible multiple scales of relevance for different users. We implement the
method and generate more than 300 million recommendations from more than 35
million articles from various bibliographic databases including the AMiner dataset.
We find little overlap with co-citation, another well-known citation recommender,
which indicates potential complementarity. In an online A/B comparison using SSRN,
we find that our approach performs as well as co-citation, but this new approach
offers much larger recommendation coverage. We make the code and
recommendations freely available at babel.eigenfactor.org and provide an API for
others to use for implementing and comparing the recommendations on their own
platforms.
S3I_DN16_020 - Efficient Group Key Transfer Protocol for WSNs
Special designs are needed for cryptographic schemes in wireless sensor
networks (WSNs). This is because sensor nodes are limited in memory storage and
computational power. The existing group key transfer protocols for WSNs using
classical secret sharing require that a t-degree interpolating polynomial be computed
in order to encrypt and decrypt the secret group key. This approach is too
computationally intensive. In this paper, we propose a new group key transfer
protocol using a linear secret sharing scheme (LSSS) and factoring assumption. The
proposed protocol can resist potential attacks and also significantly reduce the
computation complexity of the system while maintaining low communication cost.
Such a scheme is desirable for secure group communications in wireless sensor
networks (WSNs), where portable devices or sensors need to reduce their computation
as much as possible due to battery power limitations.
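The t-degree polynomial interpolation that the abstract calls too expensive for sensor nodes is the classical Shamir-style reconstruction. A minimal sketch of that baseline (for context; the paper's LSSS-based protocol is designed precisely to avoid this per-node cost):

```python
P = 2**31 - 1     # a prime modulus for field arithmetic

def f(x, poly):
    # poly[i] is the coefficient of x**i; poly[0] is the group key (secret)
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

def reconstruct(points):
    # Lagrange interpolation at x = 0: O(t^2) modular operations per node,
    # including modular inversions -- the cost the paper wants to remove
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

poly = [123456, 42, 7]                       # degree-2 polynomial, secret 123456
shares = [(x, f(x, poly)) for x in (1, 2, 3)]
assert reconstruct(shares) == 123456
# fewer than t+1 shares interpolate to the wrong value
assert reconstruct(shares[:2]) != 123456
```

Each group key transfer forces every node to run this interpolation, which motivates replacing it with the cheaper linear secret sharing scheme proposed in the paper.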
S3I_SE16_001 - A Tool-Supported Methodology for Validation and Refinement
of Early-Stage Domain Models
Model-driven engineering (MDE) promotes automated model transformations
along the entire development process. Guaranteeing the quality of early models is
essential for a successful application of MDE techniques and related tool-supported
model refinements. Do these models properly reflect the requirements elicited from
the owners of the problem domain? Ultimately, this question needs to be asked to the
domain experts. The problem is that a gap exists between the respective backgrounds
of modeling experts and domain experts. MDE developers cannot show a model to the
domain experts and simply ask them whether it is correct with respect to the
requirements they had in mind. To facilitate their interaction and make such validation
more systematic, we propose a methodology and a tool that derive a set of
customizable questionnaires expressed in natural language from each model to be
validated. Unexpected answers by domain experts help to identify those portions of
the models requiring deeper attention. We illustrate the methodology and the current
status of the developed tool MOTHIA, which can handle UML Use Case, Class, and
Activity diagrams. We assess MOTHIA effectiveness in reducing the gap between
domain and modeling experts, and in detecting modeling faults on the European
Project CHOReOS.
S3I_SE16_002 - Trust Agent-Based Behavior Induction in Social Networks
The essence of social networks is that they can influence people's public
opinions, so group behaviors form quickly. Negative group behavior significantly
undermines societal stability, but existing behavior-induction approaches are too
simple and inefficient. To automatically and efficiently induct behavior in social
networks, this article introduces trust agents and designs their features according to
group behavior features. In addition, a dynamics control mechanism can be generated
to coordinate participant behaviors in social networks to avoid a specific restricted
negative group behavior.
S3I_DE16_001 - Incremental and Decremental Max-flow for Online Semi-
supervised Learning
In classification, if a small number of instances is added or removed,
incremental and decremental techniques can be applied to quickly update the model.
However, the design of incremental and decremental algorithms involves many
considerations. In this paper, we focus on linear classifiers including logistic
regression and linear SVM because of their simplicity over kernel or other methods.
By applying a warm start strategy, we investigate issues such as using primal or dual
formulation, choosing optimization methods, and creating practical implementations.
Through theoretical analysis and practical experiments, we conclude that a warm start
setting on a high-order optimization method for primal formulations is more suitable
than others for incremental and decremental learning of linear classification.
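The paper's solver details are not given in the abstract; as an illustration of the warm-start idea it studies, here is a minimal NumPy sketch of batch gradient descent for logistic regression. When a few instances arrive, the updated model starts from the previous weights and needs only a few extra epochs (all data, learning rates, and epoch counts are illustrative assumptions).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, lr=0.1, epochs=200):
    """Batch gradient descent on the logistic loss, starting from weights w."""
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = train(X, y, np.zeros(3))                   # initial model, cold start
X2 = np.vstack([X, rng.normal(size=(5, 3))])   # five new instances arrive
y2 = np.append(y, (X2[-5:, 0] + X2[-5:, 1] > 0).astype(float))
w_warm = train(X2, y2, w, epochs=20)           # warm start: few extra epochs
acc = float(np.mean((sigmoid(X2 @ w_warm) > 0.5) == y2))
```

Retraining from scratch on `X2` would need the full epoch budget again; the warm-started run converges in a fraction of the iterations because the previous optimum is already close to the new one.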
S3I_DE16_002 - Personalized Influential Topic Search via Social Network
Summarization
Social networks are a vital mechanism to disseminate information to friends and
colleagues. In this work, we investigate an important problem - the personalized
influential topic search, or PIT-Search in a social network: Given a keyword query q
issued by a user u in a social network, PIT-Search finds the top-k q-related
topics that are most influential for the query user u. The influence of a topic to a query
user depends on the social connection between the query user and the social users
containing the topic in the social network. To measure topics' influence at a
similar granularity, we need to extract the social summarization of the social
network regarding topics. To make effective topic-aware social summarization, we
propose two random-walk based approaches: random clustering and an L-length
random walk. Based on the proposed approaches, we can find a small set of
representative users with assigned influential scores to simulate the influence of the
large number of topic users in the social network with regards to the topic. The
selected representative users are denoted as the social summarization of topic-aware
influence spread over the social network. We then verify the usefulness of the
social summarization by applying it to the problem of personalized influential topic
search. Finally, we evaluate the performance of our algorithms using real-world
datasets, and show the approach is efficient and effective in practice.
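The paper's exact L-length random-walk procedure is not spelled out in the abstract; the following toy sketch (hypothetical graph and parameters) shows the general idea of scoring nodes by visit frequency over many fixed-length walks and keeping the top scorers as a summary.

```python
import random

def l_length_walks(graph, start, L, walks=2000, seed=1):
    """Estimate influence scores as visit frequencies of L-step random walks."""
    random.seed(seed)
    visits = {v: 0 for v in graph}
    for _ in range(walks):
        node = start
        for _ in range(L):
            node = random.choice(graph[node])  # step to a random neighbor
            visits[node] += 1
    total = walks * L
    return {v: count / total for v, count in visits.items()}

# toy social graph: adjacency lists of "follows/knows" links
g = {"u": ["a", "b"], "a": ["u", "b", "c"], "b": ["u", "a"], "c": ["a"]}
scores = l_length_walks(g, "u", L=3)
summary = sorted(scores, key=scores.get, reverse=True)[:2]  # top-2 representatives
```

The selected representatives, with their scores, stand in for the full set of topic users, which is the summarization step the abstract describes.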
S3I_DE16_003 - Survey on Aspect-Level Sentiment Analysis
The field of sentiment analysis, in which sentiment is gathered, analyzed, and
aggregated from text, has seen a lot of attention in the last few years. The
corresponding growth of the field has resulted in the emergence of various subareas,
each addressing a different level of analysis or research question. This survey focuses
on aspect-level sentiment analysis, where the goal is to find and aggregate sentiment
on entities mentioned within documents or aspects of them. An in-depth overview of
the current state-of-the-art is given, showing the tremendous progress that has already
been made in finding both the target, which can be an entity as such, or some aspect
of it, and the corresponding sentiment. Aspect-level sentiment analysis yields very
fine-grained sentiment information which can be useful for applications in various
domains. Current solutions are categorized based on whether they provide a method
for aspect detection, sentiment analysis, or both. Furthermore, a breakdown based on
the type of algorithm used is provided. For each discussed study, the reported
performance is included. To facilitate the quantitative evaluation of the various
proposed methods, a call is made for the standardization of the evaluation
methodology that includes the use of shared data sets. Semantically rich, concept-
centric aspect-level sentiment analysis is discussed and identified as one of the most
promising future research directions.
S3I_DE16_004 - Multilabel Classification via Co-evolutionary Multilabel
Hypernetwork
Multilabel classification is prevalent in many real-world applications where
data instances may be associated with multiple labels simultaneously. In multilabel
classification, exploiting label correlations is an essential but nontrivial task. Most of
the existing multilabel learning algorithms are either ineffective or computationally
demanding and less scalable in exploiting label correlations. In this paper, we propose
a co-evolutionary multilabel hypernetwork (Co-MLHN) as an attempt to exploit label
correlations in an effective and efficient way. To this end, we firstly convert the
traditional hypernetwork into a multilabel hypernetwork (MLHN) where label
correlations are explicitly represented. We then propose a co-evolutionary learning
algorithm to learn an integrated classification model for all labels. The proposed Co-
MLHN exploits arbitrary order label correlations and has linear computational
complexity with respect to the number of labels. Empirical studies on a broad range of
multilabel data sets demonstrate that Co-MLHN achieves competitive results against
state-of-the-art multilabel learning algorithms, in terms of both classification
performance and scalability with respect to the number of labels.
S3I_DE16_005 - Answering Pattern Queries Using Views
Answering queries using views has proven effective for querying relational and
semistructured data. This paper investigates this issue for graph pattern queries based
on graph simulation. We propose a notion of pattern containment to characterize
graph pattern matching using graph pattern views. We show that a pattern query can
be answered using a set of views if and only if it is contained in the views. Based on
this characterization, we develop efficient algorithms to answer graph pattern queries.
We also study problems for determining (minimal, minimum) containment of pattern
queries. We establish their complexity (from cubic-time to NP-complete) and provide
efficient checking algorithms (approximation when the problem is intractable). In
addition, when a pattern query is not contained in the views, we study maximally
contained rewriting to find approximate answers; we show that it is in cubic-time to
compute such rewriting, and present a rewriting algorithm. We experimentally verify
that these methods are able to efficiently answer pattern queries on large real-world
graphs.
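Graph simulation, the matching semantics this paper builds on, can be computed by a simple fixpoint refinement. The sketch below (toy labels and graphs are illustrative) keeps, for each pattern node, the data nodes with the same label, then repeatedly discards candidates that cannot match some outgoing pattern edge.

```python
def simulation(pattern, data, plabel, dlabel):
    """Maximal graph simulation: sim[u] = data nodes that can simulate u."""
    sim = {u: {v for v in data if dlabel[v] == plabel[u]} for u in pattern}
    changed = True
    while changed:
        changed = False
        for u in pattern:
            for u2 in pattern[u]:              # each pattern edge u -> u2
                ok = {v for v in sim[u]
                      if any(w in sim[u2] for w in data[v])}
                if ok != sim[u]:
                    sim[u], changed = ok, True
    return sim

# hypothetical pattern: a PM node with an edge to a DBA node
pat = {"pm": ["dba"], "dba": []}
plab = {"pm": "PM", "dba": "DBA"}
dat = {"v1": ["v2"], "v2": [], "v3": ["v1"]}
dlab = {"v1": "PM", "v2": "DBA", "v3": "PM"}
sim = simulation(pat, dat, plab, dlab)
```

Here v3 is labeled PM but has no DBA successor, so the fixpoint removes it from sim["pm"]; the surviving relation is exactly what view-based answering would cache and reuse.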
S3I_DE16_006 - Similarity Measure Selection for Clustering Time Series
Databases
In the past few years, clustering has become a popular task associated with time
series. The choice of a suitable distance measure is crucial to the clustering process
and, given the vast number of distance measures for time series available in the
literature and their diverse characteristics, this selection is not straightforward. With
the objective of simplifying this task, we propose a multi-label classification
framework that provides the means to automatically select the most suitable distance
measures for clustering a time series database. This classifier is based on a novel
collection of characteristics that describe the main features of the time series databases
and provide the predictive information necessary to discriminate between a set of
distance measures. In order to test the validity of this classifier, we conduct a
complete set of experiments using both synthetic and real time series databases and a
set of five common distance measures. The positive results obtained by the designed
classification framework for various performance measures indicate that the proposed
methodology is useful to simplify the process of distance selection in time series
clustering tasks.
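The abstract does not name its five distance measures, but the selection problem it addresses can be seen with two classic ones. The sketch below contrasts Euclidean distance with dynamic time warping (DTW) on a pair of illustrative series: DTW absorbs a one-step shift that Euclidean distance penalizes at every point.

```python
def euclidean(a, b):
    """Lock-step distance: compares the series point by point."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def dtw(a, b):
    """Dynamic time warping: tolerant to shifts and stretching in time."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

s1 = [0, 0, 1, 2, 1, 0]
s2 = [0, 1, 2, 1, 0, 0]   # same shape, shifted by one step
```

For this pair dtw(s1, s2) is 0 while euclidean(s1, s2) is 2, so a clustering run would group the series together under DTW but not necessarily under Euclidean distance; choosing between such behaviors is exactly what the proposed classifier automates.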
S3I_DE16_007 - Incremental Consolidation of Data-Intensive Multi-Flows
Business intelligence (BI) systems depend on efficient integration of disparate
and often heterogeneous data. The integration of data is governed by data-intensive
flows and is driven by a set of information requirements. Designing such flows is in
general a complex process, which due to the complexity of business environments is
hard to be done manually. In this paper, we deal with the challenge of efficient design
and maintenance of data-intensive flows and propose an incremental approach,
namely CoAl , for semi-automatically consolidating data-intensive flows satisfying a
given set of information requirements. CoAl works at the logical level and
consolidates data flows from either high-level information requirements or platform-
specific programs. As CoAl integrates a new data flow, it opts for maximal reuse of
existing flows and applies a customizable cost model tuned for minimizing the overall
cost of a unified solution. We demonstrate the efficiency and effectiveness of our
approach through an experimental evaluation using our implemented prototype.
S3I_WSN16_001 - Analysis of PKF: A Communication Cost Reduction Scheme
for Wireless Sensor Networks
Energy efficiency is a primary concern for wireless sensor networks (WSNs).
One of its most energy-intensive processes is the radio communication. This work
uses a predictor combined with a Kalman filter (KF) to reduce the communication
energy cost for cluster-based WSNs. The technique, called PKF, is suitable for typical
WSN applications, with adjustable data quality and a computation cost of tens of picojoules.
However, it is challenging to precisely quantify its underlying process from a
mathematical point of view. Through an in-depth mathematical analysis, we formulate
the tradeoff between energy efficiency and reconstruction quality of PKF. One
prominent result is an explicit expression for the covariance of the doubly
truncated multivariate normal distribution, which improves on previous methods and
is more general. The validity and accuracy of the analysis are verified with both artificial
and real signals. The simulation results, using real temperature values, demonstrate
the efficiency of PKF: without additional data degradation, it reduces the
communication cost by more than 88%. Compared to previous works based on KF,
PKF requires less computational effort while improving the reconstruction quality;
compared with the techniques without KF, the advantages of PKF are even more
significant: PKF reduces their transmission rate by at least 29%. Moreover, it can be
integrated into network level techniques to further extend the whole network lifetime.
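Full PKF pairs its predictor with a Kalman filter; as a simplified stand-in, the sketch below uses a shared zero-order-hold predictor and a send-on-delta rule (threshold and readings are illustrative) to show the core trade: bounded reconstruction error in exchange for far fewer radio transmissions.

```python
def predictive_transmit(readings, eps):
    """Send a reading only when it deviates from the shared prediction by > eps.
    The sink reconstructs the signal from the last transmitted value."""
    sent, reconstruction = [], []
    predicted = None
    for x in readings:
        if predicted is None or abs(x - predicted) > eps:
            sent.append(x)
            predicted = x          # both sides update the shared predictor
        reconstruction.append(predicted)
    return sent, reconstruction

temps = [20.0, 20.1, 20.1, 20.2, 22.0, 22.1, 22.0, 22.1]
sent, rec = predictive_transmit(temps, eps=0.5)
savings = 1 - len(sent) / len(temps)   # fraction of transmissions avoided
```

On this toy trace only 2 of 8 readings are transmitted (75% savings) while every reconstructed value stays within the eps band of the true reading, which is the quality/energy tradeoff the paper formalizes.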
S3I_WSN16_002 - Online Packet Dispatching for Delay Optimal Concurrent
Transmissions in Heterogeneous Multi-RAT Networks
In this paper, we consider the problem of concurrent transmissions in a wireless
network consisting of multiple radio access technologies (multi-RATs). That is, a
single flow of packets is dispatched over multiple RATs so that the complementary
advantages of different RATs can be exploited. One of the challenging issues arising
in concurrent transmissions is the packet out-of-order problem due to diverse wireless
channel states and scheduling policies of different RATs, leading to substantial
performance degradation to delay sensitive applications. To address this problem, we
first propose a state-independent packet dispatching (SIPD) policy, which attempts
to find the traffic dispatching ratios over multiple RATs to minimize the maximum
average delay across different RATs in the long run. We further propose a state-
dependent packet dispatching (SDPD) policy, which achieves fine-grained packet
dispatching in the short-term. We use the value function as a measure of the
admittance cost for packet dispatching given the current queueing states, and
formulate the SDPD problem as a convex programming problem. We derive the
closed-form solutions to both problems for the special case of two RATs, and adopt
the dual decomposition technique as the solution for the general cases. Simulation
results are presented to compare the performance of the proposed schemes with
existing solutions.
S3I_WSN16_003 - Toward Optimal Adaptive Wireless Communications in
Unknown Environments
Designing efficient channel access schemes for wireless communications
without any prior knowledge about the nature of environments has been a very
challenging issue, in which the channel state distribution of all spectrum resources
could be entirely or partially stochastic or adversarial at different times and locations.
In this paper, we propose an online learning algorithm for adaptive channel access of
wireless communications in unknown environments based on the theory of
multiarmed bandits (MAB) problems. By automatically tuning two control
parameters, i.e., the learning rate and the exploration probability, our algorithm can find
the optimal channel access strategies and achieve almost optimal learning
performance over time in different scenarios. The quantitative performance studies
indicate the superior throughput gain when compared with previous solutions and the
flexibility of our algorithm in practice, which is resilient to both oblivious and
adaptive jamming attacks with different intelligence and attacking strength that ranges
from no-attack to the full-attack of all spectrum resources. We conduct extensive
simulations to validate our theoretical analysis.
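The paper's MAB algorithm tunes its learning rate and exploration probability adaptively; as a much simpler illustration of bandit-based channel access, here is an epsilon-greedy sketch with fixed exploration and hypothetical per-channel success probabilities.

```python
import random

def eps_greedy_channel_access(rewards, n_channels, eps=0.1, seed=0):
    """Each slot: explore a random channel w.p. eps, else exploit the best
    empirical mean. `rewards(c)` returns the observed payoff of channel c."""
    random.seed(seed)
    counts = [0] * n_channels
    means = [0.0] * n_channels
    total = 0.0
    for _ in range(5000):
        if random.random() < eps:
            c = random.randrange(n_channels)
        else:
            c = max(range(n_channels), key=lambda i: means[i])
        r = rewards(c)
        counts[c] += 1
        means[c] += (r - means[c]) / counts[c]    # running average
        total += r
    return means, total

# hypothetical environment: channel 2 has the best (unknown) success rate
probs = [0.2, 0.5, 0.8]
means, total = eps_greedy_channel_access(
    lambda c: float(random.random() < probs[c]), 3)
```

After a few thousand slots the empirical means rank the channels correctly and the sender spends most slots on the best one; the paper's algorithm improves on this by remaining robust when the channel payoffs are adversarial rather than stochastic.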
S3I_WSN16_004 - Adaptive Pilot Clustering in Heterogeneous Massive MIMO
Networks
We consider the uplink of a cellular massive MIMO network. Acquiring
channel state information at the base stations (BSs) requires uplink pilot signaling.
Since the number of orthogonal pilot sequences is limited by the channel coherence,
pilot reuse across cells is necessary to achieve high spectral efficiency. However,
finding efficient pilot reuse patterns is nontrivial especially in practical asymmetric
BS deployments. We approach this problem using coalitional game theory. Each BS
has a few unique pilots and can form coalitions with other BSs to gain access to more
pilots. The BSs in a coalition thus benefit from serving more users in their cells, at the
expense of higher pilot contamination and interference. Given that a cell’s average
spectral efficiency depends on the overall pilot reuse pattern, the suitable coalitional
game model is in partition form. We develop a low-complexity distributed coalition
formation based on individual stability. By incorporating a base station
intercommunication budget constraint, we are able to control the overhead in message
exchange between the base stations and ensure the algorithm’s convergence to a
solution of the game called individually stable coalition structure. Simulation results
reveal fast algorithmic convergence and substantial performance gains over the
baseline schemes with no pilot reuse, full pilot reuse, or random pilot reuse pattern.
S3I_WSN16_005 - Data Aggregation and Principal Component Analysis in
WSNs
Data aggregation plays an important role in Wireless Sensor Networks (WSNs)
insofar as it reduces power consumption and boosts the scalability of the network,
especially in topologies that are prone to bottlenecks (e.g., cluster-trees). Existing works
in the literature use clustering approaches, Principal Component Analysis (PCA)
and/or Compressed Sensing (CS) strategies. Our contribution is aligned with PCA and
explores whether a projection basis other than the eigenvector basis may be valid to
sustain a Normalized Mean Squared Error (NMSE) threshold in signal reconstruction
and reduce the energy consumption. We first derive the NMSE achieved with the
new basis and then elaborate on Jacobi eigenvalue decomposition ideas to propose
a new subspace-based data aggregation method. The proposed solution reduces
transmissions between the sink and one or more Data Aggregation Nodes (DANs) in
the network. In our simulations we consider without loss of generality a single cluster
network and results show that the new technique succeeds in satisfying the NMSE
requirement and gets close in terms of energy consumption to the best possible
solution employing subspace representations. Additionally, the proposed method
alleviates the computational load with respect to an eigenvector-based strategy (by a
factor of six in our simulations).
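The paper studies bases other than the eigenvector basis; the sketch below (synthetic correlated sensor data, illustrative dimensions) shows the eigenvector baseline it compares against: project each snapshot onto a low-dimensional subspace, transmit only the coefficients, and measure the NMSE of the reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 snapshots of an 8-node cluster, strongly correlated across nodes
latent = rng.normal(size=(50, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(50, 8))

# eigenvector basis of the sample covariance (classical PCA via SVD)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
B = Vt[:2].T                       # keep a 2-D subspace: 8 readings -> 2 coefficients

coeffs = Xc @ B                    # what a DAN would transmit per snapshot
Xhat = coeffs @ B.T + X.mean(axis=0)
nmse = np.sum((X - Xhat) ** 2) / np.sum(X ** 2)
```

Transmitting 2 coefficients instead of 8 readings cuts the payload by 75% while the NMSE stays far below a typical reconstruction threshold; the paper asks how close a cheaper, non-eigenvector basis can get to this best case.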
S3I_WSN16_006 - A New Cost-Effective Approach for Battlefield Surveillance in
Wireless Sensor Networks
Assuring security (in attacking mode as well as in safeguard mode)
while keeping a close eye on the opposition's status (position, quantity,
availability) is the key responsibility of a commander on the battlefield. Battlefield
surveillance is one of the strongest applications of Wireless Sensor Networks (WSNs). A
commander must not only meet these responsibilities, but also manage his
duties efficiently. For this reason, ensuring maximum destruction with
minimum resources is a major concern of a commander on the battlefield. This paper
focuses on the maximum destruction problem in military affairs. In the work of
Jaigirdar and Islam (2012), the authors proposed two novel algorithms (Maximum
degree analysis and Maximum clique analysis) that ensure the efficiency and cost-
effectiveness of the above problem. That work provided a comparative study of the
number of resources required to inflict a required level of destruction on the
opponents. In this paper, the authors propose another algorithm for the same
problem. Through simulation studies and a comparative analysis of the same example
set, the authors demonstrate that the new method is the most effective (in both
quality and quantity) of the three.
S3I_NS16_001 - Contributory Broadcast Encryption with Efficient Encryption
and Short Ciphertexts
Broadcast encryption (BE) schemes allow a sender to securely broadcast to any
subset of members but require a trusted party to distribute decryption keys. Group key
agreement (GKA) protocols enable a group of members to negotiate a common
encryption key via open networks so that only the group members can decrypt the
ciphertexts encrypted under the shared encryption key, but a sender cannot exclude
any particular member from decrypting the ciphertexts. In this paper, we bridge these
two notions with a hybrid primitive referred to as contributory broadcast encryption
(ConBE). In this new primitive, a group of members negotiate a common public
encryption key while each member holds a decryption key. A sender seeing the public
group encryption key can limit the decryption to a subset of members of his choice.
Following this model, we propose a ConBE scheme with short ciphertexts. The
scheme is proven to be fully collusion-resistant under the decision n-Bilinear Diffie-
Hellman Exponentiation (BDHE) assumption in the standard model. Of independent
interest, we present a new BE scheme that is aggregatable. The aggregatability
property is shown to be useful to construct advanced protocols.
S3I_NS16_002 - Building an intrusion detection system using a filter-based
feature selection algorithm
Redundant and irrelevant features in data have caused a long-term problem in
network traffic classification. These features not only slow down the process of
classification but also prevent a classifier from making accurate decisions, especially
when coping with big data. In this paper, we propose a mutual information based
algorithm that analytically selects the optimal feature for classification. This mutual
information based feature selection algorithm can handle linearly and nonlinearly
dependent data features. Its effectiveness is evaluated in the cases of network
intrusion detection. An Intrusion Detection System (IDS), named Least Square
Support Vector Machine based IDS (LSSVM-IDS), is built using the features selected
by our proposed feature selection algorithm. The performance of LSSVM-IDS is
evaluated using three intrusion detection evaluation datasets, namely KDD Cup 99,
NSL-KDD and Kyoto 2006+ dataset. The evaluation results show that our feature
selection algorithm contributes more critical features for LSSVM-IDS to achieve
better accuracy and lower computational cost compared with the state-of-the-art
methods.
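The LSSVM-IDS pipeline itself is not reproducible from the abstract, but the mutual-information criterion at its core is standard. The sketch below (toy discretized flow features and labels, all hypothetical) computes I(X;Y) in bits and shows that a feature mirroring the attack label scores high while an independent feature scores zero.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for a discrete feature xs and class labels ys."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# toy flow records: discretized feature values vs. attack/benign labels
labels        = [0, 0, 0, 0, 1, 1, 1, 1]
f_informative = [0, 0, 0, 0, 1, 1, 1, 1]   # mirrors the label exactly
f_noisy       = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label
mi_good = mutual_information(f_informative, labels)
mi_bad = mutual_information(f_noisy, labels)
```

A filter-based selector ranks features by such scores and keeps only the top ones before training the classifier, which is how redundant and irrelevant features get discarded.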
S3I_SEC_001: Cloud Workflow Scheduling with Deadlines and Time Slot
Availability
Allocating service capacities in cloud computing is based on the assumption that
they are unlimited and can be used at any time. However, available service capacities
change with workload and cannot satisfy users’ requests at any time from the cloud
provider’s perspective because cloud services can be shared by multiple tasks. Cloud
service providers provide available time slots for new users' requests based on
available capacities. In this paper, we consider workflow scheduling with deadline
and time slot availability in cloud computing. An iterated heuristic framework is
presented for the problem under study which mainly consists of initial solution
construction, improvement, and perturbation. Three initial solution construction
strategies, two greedy- and fair-based improvement strategies and a perturbation
strategy are proposed. Different strategies in the three phases result in several
heuristics. Experimental results show that different initial solution and improvement
strategies have different effects on solution qualities.
S3I_SEC_002: Publicly Verifiable Inner Product Evaluation over Outsourced
Data Streams under Multiple Keys
Uploading data streams to a resource-rich cloud server for inner product
evaluation, an essential building block in many popular stream applications (e.g.,
statistical monitoring), is appealing to many companies and individuals. On the other
hand, verifying the result of the remote computation plays a crucial role in addressing
the issue of trust. Since the outsourced data collection likely comes from multiple data
sources, it is desired for the system to be able to pinpoint the originator of errors by
allotting each data source a unique secret key, which requires the inner product
verification to be performed under any two parties’ different keys. However, the
present solutions depend either on a single-key assumption or on powerful yet
practically inefficient fully homomorphic cryptosystems. In this paper, we focus on the more
challenging multi-key scenario where data streams are uploaded by multiple data
sources with distinct keys. We first present a novel homomorphic verifiable tag
technique to publicly verify the outsourced inner product computation on the dynamic
data streams, and then extend it to support the verification of matrix product
computation. We prove the security of our scheme in the random oracle model.
Moreover, the experimental result also shows the practicability of our design.
S3I_DPS16_001 - Cost Minimization for Rule Caching in Software Defined
Networking
Software-defined networking (SDN) is an emerging network paradigm that
simplifies network management by decoupling the control plane and data plane, such
that switches become simple data forwarding devices and network management is
controlled by logically centralized servers. In SDN-enabled networks, network flow is
managed by a set of associated rules that are maintained by switches in their local
Ternary Content Addressable Memories (TCAMs) which support high-speed parallel
lookup on wildcard patterns. Since TCAM is expensive and extremely
power-hungry, each switch has only limited TCAM space and it is inefficient and
even infeasible to maintain all rules at local switches. On the other hand, if we
eliminate TCAM occupation by forwarding all packets to the centralized controller for
processing, it results in a long delay and heavy processing burden on the controller. In
this paper, we strive for the fine balance between rule caching and remote packet
processing by formulating a minimum weighted flow provisioning (MWFP) problem
with an objective of minimizing the total cost of TCAM occupation and remote packet
processing. We propose an efficient offline algorithm if the network traffic is given,
otherwise, we propose two online algorithms with guaranteed competitive ratios.
Finally, we conduct extensive experiments by simulations using real network traffic
traces. The simulation results demonstrate that our proposed algorithms can
significantly reduce the total cost of remote controller processing and TCAM
occupation, and the solutions obtained are nearly optimal.
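The MWFP formulation and online algorithms are the paper's contribution; the per-rule cost comparison underlying the tradeoff can be sketched as follows (all rates and unit costs are hypothetical): cache a rule in TCAM only when the remote-processing cost of its flow exceeds the TCAM occupation cost.

```python
def cache_decision(flow_rate, tcam_cost, remote_cost_per_packet):
    """Cache a rule in TCAM iff remote processing would cost more.
    flow_rate: packets/s; tcam_cost: cost of one TCAM slot per second;
    remote_cost_per_packet: controller processing cost per packet."""
    remote_cost = flow_rate * remote_cost_per_packet
    return remote_cost > tcam_cost

# elephant flow: 1000 pkt/s -> remote cost 10 > TCAM cost 5: cache it
assert cache_decision(1000, 5.0, 0.01) is True
# mouse flow: 100 pkt/s -> remote cost 1 < 5: leave it to the controller
assert cache_decision(100, 5.0, 0.01) is False
```

The hard part, which the paper's online algorithms handle, is making this decision without knowing future flow rates while still guaranteeing a competitive ratio against the offline optimum.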
S3I_DPS16_002 - On Binary Decomposition based Privacy-preserving
Aggregation Schemes in Real-time Monitoring Systems
In real-time monitoring systems, fine-grained measurements would pose great
privacy threats to the participants as real-time measurements could disclose accurate
people-centric activities. Differential privacy has been proposed to formalize and
guide the design of privacy-preserving schemes. Nonetheless, due to the correlations
and high fluctuations in time-series data, it is hard to achieve an effective privacy and
utility tradeoff by differential privacy mechanisms. To address this issue, in this
paper, we first propose novel multi-dimensional decomposition based schemes to
compress the noise and enhance the utility in differential privacy. The key idea is to
decompose the measurements into multi-dimensional records and to achieve
differential privacy in bounded dimensions so that the error caused by unbounded
measurements can be significantly reduced. We then extend this scheme and
develop a binary decomposition scheme for privacy-preserving time-series
aggregation in real-time monitoring systems. Through a combination of extensive
theoretical analysis and experiments, we show that our proposed schemes can
effectively improve utility while achieving the same level of differential privacy
as existing schemes.
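The binary decomposition itself is the paper's contribution and is not reproduced here; the building block such schemes refine is the basic Laplace mechanism, sketched below on a bounded aggregate (all data and the epsilon value are illustrative). The noise scale is sensitivity/epsilon, so bounding the contribution of each decomposed dimension is what keeps the noise small.

```python
import numpy as np

def dp_release(value, sensitivity, eps, rng):
    """Laplace mechanism: eps-differentially-private release of one aggregate."""
    return value + rng.laplace(scale=sensitivity / eps)

rng = np.random.default_rng(42)
readings = rng.uniform(0.0, 1.0, size=1000)    # per-user measurements in [0, 1]
true_sum = readings.sum()

# one user changes the sum by at most 1.0, so the sensitivity is 1.0
noisy_sum = dp_release(true_sum, sensitivity=1.0, eps=0.5, rng=rng)
rel_error = abs(noisy_sum - true_sum) / true_sum
```

Because the noise scale depends on the worst-case per-user contribution rather than on the data, decomposing unbounded time-series measurements into bounded components directly shrinks the noise, which is the intuition behind the paper's approach.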
S3I_DPS16_003 - An OpenMP Extension that Supports Thread-Level
Speculation
OpenMP directives are the de-facto standard for shared-memory parallel
programming. However, OpenMP does not guarantee the correctness of the parallel
execution of a given loop if runtime data dependences arise. Consequently, many
highly-parallel regions cannot be safely parallelized with OpenMP due to the
possibility of a dependence violation. In this paper, we propose to augment OpenMP
capabilities, by adding thread-level speculation (TLS) support. Our contribution is
threefold. First, we have defined a new speculative clause for variables inside parallel
loops. This clause ensures that all accesses to these variables will be carried out
according to sequential semantics. Second, we have created a new, software-based
TLS runtime library to ensure correctness in the parallel execution of OpenMP loops
that include speculative variables. Third, we have developed a new GCC plugin,
which seamlessly translates our OpenMP speculative clause into calls to our TLS
runtime engine. The result is the ATLaS C Compiler framework, which takes
advantage of TLS techniques to expand OpenMP functionalities, and guarantees the
sequential semantics of any parallelized loop.
S3I_DPS16_004 - Real-Time Semantic Search Using Approximate Methodology
for Large-Scale Storage Systems
The challenges of handling the explosive growth in data volume and
complexity drive an increasing need for semantic queries. Semantic queries can
be interpreted as correlation-aware retrieval that may return approximate results.
Existing cloud storage systems largely fail to offer an adequate capability for
semantic queries. Since the true value or worth of data heavily depends on how
efficiently semantic search can be carried out on the data in (near-) real-time, large
fractions of data end up with their values being lost or significantly reduced due to the
data staleness. To address this problem, we propose a near-real-time and cost-
effective semantic queries based methodology, called FAST. The idea behind FAST is
to explore and exploit the semantic correlation within and among datasets via
correlation-aware hashing and manageable flat-structured addressing to significantly
reduce the processing latency, while incurring acceptably small loss of data-search
accuracy. The near-real-time property of FAST enables rapid identification of
correlated files and the significant narrowing of the scope of data to be processed.
FAST supports several types of data analytics, which can be implemented in existing
searchable storage systems. We conduct a real-world use case in which children
reported missing in an extremely crowded environment (e.g., a highly popular scenic
spot on a peak tourist day) are identified in a timely fashion by analyzing 60 million
images using FAST. FAST is further improved by using semantic-aware namespace
to provide dynamic and adaptive namespace management for ultra-large storage
systems. Extensive experimental results demonstrate the efficiency and efficacy of
FAST in the performance improvements.
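FAST's correlation-aware hashing is not detailed in the abstract; a generic stand-in is random-hyperplane simhash (locality-sensitive hashing for cosine similarity), sketched below with hypothetical feature vectors. Correlated files land in identical or nearby buckets, which is how such a scheme narrows the scope of data to be processed.

```python
import numpy as np

def simhash(vec, planes):
    """LSH for cosine similarity: the sign pattern of random projections.
    Correlated vectors produce identical or nearly identical bit patterns."""
    return (planes @ vec > 0).astype(int)

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(7)
planes = rng.normal(size=(16, 32))         # 16-bit hash over 32-dim features

doc = rng.normal(size=32)
near = doc + 0.001 * rng.normal(size=32)   # near-duplicate (correlated) file
far = rng.normal(size=32)                  # unrelated file

d_near = hamming(simhash(doc, planes), simhash(near, planes))
d_far = hamming(simhash(doc, planes), simhash(far, planes))
```

A query then inspects only the buckets within a small Hamming radius of the query's hash instead of scanning the whole store, trading a small loss of search accuracy for near-real-time latency, as the abstract describes.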
S3 INFOTECH +91 988 48 48 198
# 10/1, Jones Road, Saidapet, Chennai – 15. Mob: 9884848198.
www.s3computers.com E-Mail: info@s3computers.com
S3I_MC16_001: Service Usage Classification with Encrypted Internet Traffic in
Mobile Messaging Apps
The rapid adoption of mobile messaging Apps has enabled us to collect
massive amounts of encrypted Internet traffic from mobile messaging. Classifying
this traffic into different types of in-App service usages can aid intelligent
network management, such as managing network bandwidth budgets and providing
quality of service. Traditional approaches to Internet traffic classification rely on
packet inspection, such as parsing HTTP headers. However, messaging Apps
increasingly use secure protocols, such as HTTPS and SSL, to transmit data, which
imposes significant challenges on the performance of service usage classification by
packet inspection. To this end, in this paper, we investigate how to exploit encrypted
Internet traffic for classifying in-App usages. Specifically, we develop a system,
named CUMMA, for classifying service usages of mobile messaging Apps by jointly
modeling user behavioral patterns, network traffic characteristics, and temporal
dependencies. Along this line, we first segment Internet traffic from traffic flows into
sessions with a number of dialogs in a hierarchical way. We also extract
discriminative features of traffic data from two perspectives: (i) packet length and (ii)
time delay. Next, we learn a service usage predictor to classify these segmented
dialogs into single-type usages or outliers. In addition, we design a clustering Hidden
Markov Model (HMM)-based method to detect mixed dialogs among the outliers and
decompose them into sub-dialogs of single-type usage. Indeed, CUMMA
enables mobile analysts to identify service usages and analyze end-user in-App
behaviors even for encrypted Internet traffic. Finally, extensive experiments on
real-world messaging data demonstrate the effectiveness and efficiency of the
proposed method for service usage classification.
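The two feature perspectives (packet length and time delay) and the dialog classification step can be sketched as follows. The feature set, the centroids, and the nearest-centroid rule are simplifications assumed for illustration, not CUMMA's learned predictor.

```python
import math

def dialog_features(packets):
    """packets: list of (timestamp, byte_length) tuples for one dialog.
    Returns (mean packet length, mean inter-packet delay)."""
    lengths = [size for _, size in packets]
    gaps = [b[0] - a[0] for a, b in zip(packets, packets[1:])]
    return (sum(lengths) / len(lengths),
            sum(gaps) / len(gaps) if gaps else 0.0)

def classify(packets, centroids):
    """Assign the dialog to the usage type with the nearest feature centroid."""
    feat = dialog_features(packets)
    return min(centroids, key=lambda usage: math.dist(feat, centroids[usage]))
```

With hypothetical centroids such as `{"text": (120.0, 2.0), "photo": (1300.0, 0.05)}`, a burst of large, tightly spaced packets is labeled "photo" while small, widely spaced packets are labeled "text".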
S3I_CC16_001: Dynamic and Public Auditing with Fair Arbitration for Cloud Data
Cloud users no longer physically possess their data, so ensuring the integrity of their
outsourced data becomes a challenging task. Recently proposed schemes such as "provable data
possession" and "proofs of retrievability" are designed to address this problem, but they are
designed to audit static archive data and therefore lack data dynamics support. Moreover,
threat models in these schemes usually assume an honest data owner and focus on detecting a
dishonest cloud service provider, despite the fact that clients may also misbehave. This paper
proposes a public auditing scheme with data dynamics support and fair arbitration of
potential disputes. In particular, we design an index switcher to eliminate the limitation of index
usage in tag computation in current schemes and achieve efficient handling of data dynamics. To
address the fairness problem so that no party can misbehave without being detected, we further
extend existing threat models and adopt the signature-exchange idea to design fair arbitration
protocols, so that any possible dispute can be fairly settled. The security analysis shows our
scheme is provably secure, and the performance evaluation demonstrates that the overheads of
data dynamics and dispute arbitration are reasonable.
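The role of the index switcher can be illustrated with a toy translation table between logical block positions and the immutable indices used at tag-computation time, so inserts and deletes need no tag recomputation. The class below is an assumed simplification, not the paper's construction.

```python
class IndexSwitcher:
    """Maps logical block positions to the fixed tag indices assigned when
    each block's tag was computed; dynamic operations only edit the table."""
    def __init__(self, n):
        self.table = list(range(n))   # logical position -> tag index
        self.next_tag = n

    def insert(self, pos):
        """A newly inserted block gets a fresh tag index; old tags survive."""
        tag = self.next_tag
        self.next_tag += 1
        self.table.insert(pos, tag)
        return tag

    def delete(self, pos):
        return self.table.pop(pos)

    def tag_index(self, pos):
        return self.table[pos]
```

Because a block's tag index never changes once assigned, inserting or deleting a block shifts only the bookkeeping table, not the tags of its neighbors.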
S3I_CC16_002: Enabling Cloud Storage Auditing with Verifiable Outsourcing of Key
Updates
Key-exposure resistance has always been an important issue for in-depth cyber defence in
many security applications. Recently, how to deal with the key exposure problem in the settings
of cloud storage auditing has been proposed and studied. To address the challenge, existing
solutions all require the client to update his secret keys in every time period, which inevitably
imposes new local burdens on the client, especially clients with limited computation
resources, such as mobile phones. In this paper, we focus on how to make the key updates as
transparent as possible for the client and propose a new paradigm called cloud storage auditing
with verifiable outsourcing of key updates. In this paradigm, key updates can be safely
outsourced to some authorized party, and thus the key-update burden on the client is kept
minimal. In particular, we leverage the third party auditor (TPA) in many existing public
auditing designs, let it play the role of the authorized party in our case, and make it in charge of
both the storage auditing and the secure key updates for key-exposure resistance. In our design,
the TPA only needs to hold an encrypted version of the client's secret key while doing all these
burdensome tasks on behalf of the client. The client only needs to download the encrypted secret
key from the TPA when uploading new files to the cloud. Besides, our design also equips the
client with the capability to further verify the validity of the encrypted secret keys provided by
the TPA. All these salient features are carefully designed to make the whole auditing procedure
with key-exposure resistance as transparent as possible for the client. We formalize the
definition and the security model of this paradigm. The security proof and the performance
simulation show that our detailed design instantiations are secure and efficient.
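The shape of the client-side check — download the encrypted key, decrypt, and verify its validity before use — can be sketched with a hash commitment standing in for the paper's cryptographic verification. The XOR pad and SHA-256 commitment here are illustrative stand-ins only, not the scheme's actual primitives.

```python
import hashlib
import secrets

def xor_encrypt(data, pad):
    """One-time-pad stand-in for the paper's encryption of the secret key."""
    return bytes(a ^ b for a, b in zip(data, pad))

# Client setup: commit to the secret key before handing updates to the TPA.
secret_key = secrets.token_bytes(32)
commitment = hashlib.sha256(secret_key).digest()   # public commitment
pad = secrets.token_bytes(32)                      # decryption secret, client-held
encrypted_key = xor_encrypt(secret_key, pad)       # what the TPA stores

def verify_downloaded(enc, pad, commitment):
    """At upload time the client decrypts the key fetched from the TPA and
    checks its validity against the commitment before using it."""
    return hashlib.sha256(xor_encrypt(enc, pad)).digest() == commitment
```

The point of the check is that a misbehaving TPA cannot hand back a corrupted or stale key without being detected by the client.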
S3I_CC16_003: Providing User Security Guarantees in Public Infrastructure Clouds
The infrastructure cloud (IaaS) service model offers improved resource flexibility and
availability, where tenants – insulated from the minutiae of hardware maintenance – rent
computing resources to deploy and operate complex systems. Large-scale services running on
IaaS platforms demonstrate the viability of this model; nevertheless, many organizations
operating on sensitive data avoid migrating operations to IaaS platforms due to security
concerns. In this paper, we describe a framework for data and operation security in IaaS,
consisting of protocols for trusted launch of virtual machines and domain-based storage
protection. We continue with an extensive theoretical analysis with proofs of protocol
resistance against attacks in the defined threat model. The protocols allow trust to be established
by remotely attesting the host platform configuration prior to launching guest virtual machines,
and they ensure the confidentiality of data in remote storage, with encryption keys maintained
outside of the IaaS domain. The presented experimental results demonstrate the validity and
efficiency of the proposed protocols. The framework prototype was implemented on a test bed
operating a public electronic health record system, showing that the proposed protocols can be
integrated into existing cloud environments.
S3I_CC16_004: Attribute-Based Data Sharing Scheme Revisited in Cloud Computing
Ciphertext-policy attribute-based encryption (CP-ABE) is a very promising encryption
technique for secure data sharing in the context of cloud computing. The data owner is allowed
to fully control the access policy associated with the data to be shared. However, CP-ABE is
limited by a potential security risk known as the key escrow problem, whereby the secret keys
of users have to be issued by a trusted key authority. Besides, most existing CP-ABE
schemes cannot support attributes with arbitrary states. In this paper, we revisit attribute-based
data sharing not only to solve the key escrow issue but also to improve the expressiveness of
attributes, so that the resulting scheme is friendlier to cloud computing applications. We
propose an improved two-party key issuing protocol that guarantees that neither the key
authority nor the cloud service provider can compromise the whole secret key of a user
individually. Moreover, we introduce the concept of weighted attributes to enhance attribute
expressiveness, which not only extends expression from binary to arbitrary states but also
reduces the complexity of the access policy. Therefore, both the storage cost and the encryption
complexity for a ciphertext are reduced. The performance analysis and the security proof show
that the proposed scheme achieves efficient and secure data sharing in cloud computing.
S3I_CC16_005: An Efficient File Hierarchy Attribute-Based Encryption Scheme in Cloud
Computing
Ciphertext-policy attribute-based encryption (CP-ABE) has been a preferred encryption
technology for solving the challenging problem of secure data sharing in cloud computing.
Shared data files generally have the characteristic of multilevel hierarchy, particularly in the
areas of healthcare and the military. However, the hierarchical structure of shared files has not
been explored in CP-ABE. In this paper, an efficient file hierarchy attribute-based encryption
scheme in cloud computing is proposed. The layered access structures are integrated into a
single access structure, and the hierarchical files are then encrypted with the integrated access
structure. The ciphertext components related to attributes can be shared by the files, so both
ciphertext storage and the time cost of encryption are saved. Moreover, the proposed scheme is
proved secure under the standard assumption. Experimental simulation shows that the
proposed scheme is highly efficient in terms of encryption and decryption, and as the number
of files increases, the advantages of our scheme become more conspicuous.
S3I_CC16_006: Identity-Based Proxy-Oriented Data Uploading and Remote Data Integrity
Checking in Public Cloud
With the rapid development of cloud computing, more and more clients would
like to store their data on public cloud servers (PCSs). New security problems
have to be solved in order to help more clients process their data in the public cloud.
When a client is restricted from accessing the PCS, he delegates his proxy to process
his data and upload it. On the other hand, remote data integrity checking is also an
important security problem in public cloud storage, as it lets clients check whether
their outsourced data are kept intact without downloading the whole data. Motivated
by these security problems, we propose a novel proxy-oriented data uploading and
remote data integrity checking model in identity-based public key cryptography:
identity-based proxy-oriented data uploading and remote data integrity checking in
public cloud (ID-PUIC). We give the formal definition, system model, and security
model. Then, a concrete ID-PUIC protocol is designed using bilinear pairings. The
proposed ID-PUIC protocol is provably secure based on the hardness of the
computational Diffie-Hellman problem. Our ID-PUIC protocol is also efficient and
flexible. Based on the original client's authorization, the proposed ID-PUIC protocol
can realize private remote data integrity checking, delegated remote data integrity
checking, and public remote data integrity checking.
S3I_CC16_008: Outsourcing Eigen-Decomposition and Singular Value
Decomposition of Large Matrix to a Public Cloud
Cloud computing enables customers with limited computational resources to
outsource their huge computation workloads to the cloud with its massive
computational power. However, utilizing this computing paradigm presents various
challenges that need to be addressed, especially security. As eigen-decomposition
(ED) and singular value decomposition (SVD) of a matrix are widely applied in
engineering tasks, in this paper we design secure, correct, and efficient protocols
for outsourcing the ED and SVD of a matrix to a malicious cloud. To achieve
security, we employ efficient privacy-preserving transformations to protect both
input and output privacy. To check the correctness of the result returned from the
cloud, an efficient verification algorithm is employed. A computational complexity
analysis shows that our protocols are highly efficient. We also introduce outsourced
principal component analysis as an application of the two proposed protocols.
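The outsource-then-unmask flow can be sketched with permutation masking. Real protocols use stronger privacy-preserving transformations, so the permutations (and variable names) below are illustrative assumptions; the useful property is that permutation matrices are orthogonal, so the masked matrix has the same singular values as the original, and the client recovers the true factors by undoing the permutations.

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 4))              # client's private matrix

# Mask with random row/column permutations before sending to the cloud.
p, q = rng.permutation(5), rng.permutation(4)
A_masked = A[p][:, q]                        # what the cloud sees

# Cloud side: the heavy decomposition runs on the masked matrix.
U_m, s, Vt_m = np.linalg.svd(A_masked)

# Client side: undo the permutations to recover the true factors cheaply.
U = np.empty_like(U_m)
U[p] = U_m
Vt = np.empty_like(Vt_m)
Vt[:, q] = Vt_m
```

Permutation masking alone hides only the entry layout and is far weaker than the paper's transformations; it is shown here purely for the outsource/recover shape of the protocol.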
S3I_CC16_009: Performance limitations of a text search application running in
cloud instances
This article analyzes the performance of MySQL in clouds based on commodity
hardware in order to identify bottlenecks in the execution of a series of scripts
developed on the SQL standard. The scripts were designed to perform text search
over a considerable number of records. Two types of platform were employed: a
physical machine serving as host and an instance within a cloud infrastructure. The
results show that intensive use of a relational database suffers a greater loss of
performance in a cloud instance due to limitations in the primary storage system
employed in the cloud infrastructure.
S3I_CC16_0010: A dynamic load balancing method of cloud-center based on
SDN
In order to achieve dynamic load balancing at the data-flow level, in this
paper we apply SDN technology to the cloud data center and propose a dynamic load
balancing method for the cloud center based on SDN. The approach exploits the
flexibility that SDN brings to task scheduling, accomplishing real-time monitoring
of service-node flows and load conditions via the OpenFlow protocol. When the
system load is imbalanced, the controller can allocate network resources globally.
Moreover, by applying dynamic correction, the system load does not tilt noticeably
in the long run. Simulation results show that this approach ensures that the load will
not tilt over a long period of time and improves system throughput.
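The flow-level balancing decision reduces to a greedy choice that a controller with global visibility can make. The function below is a minimal sketch under that assumption, with hypothetical node names; the real controller would act on OpenFlow statistics rather than an in-memory dictionary.

```python
def assign_flow(loads, demand):
    """Send a new flow to the least-loaded service node, as a controller
    with OpenFlow-style global visibility could; loads maps node -> load."""
    node = min(loads, key=loads.get)
    loads[node] += demand           # dynamic correction: track the new load
    return node
```

Repeated calls keep correcting the load estimate, so no single node drifts far above the others in the long run.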
S3I_CC16_0011: Encryption-Based Solution for Data Sovereignty in Federated
Clouds
The rapidly growing demand for cloud services in current business practice
has favored the success of hybrid clouds and the advent of cloud federation. The
available literature on this topic has focused on middleware abstractions to
interoperate heterogeneous cloud platforms and orchestrate different management
and business models. However, cloud federation implies serious security and privacy
issues with respect to data sovereignty when data is outsourced across different
judicial and legal systems. This column describes a solution that applies encryption
to protect data sovereignty in federated clouds rather than restricting the elasticity
and migration of data across them.
S3I_CC16_0012: Attribute-based access control for multi-authority systems with
constant size ciphertext in cloud computing
In most existing CP-ABE schemes there is only one authority in the system, and
all public and private keys are issued by this authority, which incurs ciphertext sizes
and computation costs in the encryption and decryption operations that depend at
least linearly on the number of attributes involved in the access policy. We propose
an efficient multi-authority CP-ABE scheme in which the authorities need not
interact to generate public information during the system initialization phase. Our
scheme has constant ciphertext length and a constant number of pairing
computations, and it can be proven CPA-secure in the random oracle model under the
decisional q-BDHE assumption. When a user's attribute revocation occurs, the
scheme transfers most of the re-encryption work to the cloud service provider,
reducing the data owner's computational cost without sacrificing security. Finally,
analysis and simulation results show that the proposed scheme ensures the privacy
and secure access of sensitive data stored in the cloud server and copes with the
dynamic changes of users' access privileges in large-scale systems. Besides, the
multi-authority ABE eliminates the key escrow problem, optimizes the ciphertext
length, and enhances the efficiency of the encryption and decryption operations.
S3I_CC16_0013: AMTS: Adaptive multi-objective task scheduling strategy in
cloud computing
Task scheduling in cloud computing environments is a multi-objective
optimization problem that is NP-hard. It is also challenging to find an appropriate
trade-off among resource utilization, energy consumption, and Quality of Service
(QoS) requirements under changing environments and diverse tasks. Considering
both processing time and transmission time, a PSO-based Adaptive Multi-objective
Task Scheduling (AMTS) strategy is proposed in this paper. First, the task
scheduling problem is formulated. Then, a task scheduling policy is advanced to
obtain optimal resource utilization, task completion time, average cost, and average
energy consumption. To maintain particle diversity, an adaptive acceleration
coefficient is adopted. Experimental results show that the improved PSO algorithm
can obtain quasi-optimal solutions for the cloud task scheduling problem.
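A minimal particle swarm illustrates the search loop behind such a scheduler. The fixed inertia and acceleration coefficients below replace AMTS's adaptive coefficient and multi-objective cost, so this is a sketch of the mechanism only, not the paper's formulation.

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=50, seed=1):
    """Minimal PSO: each particle moves toward its personal best and the
    global best; fixed w, c1, c2 stand in for the adaptive coefficient."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=cost)[:]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(pbest[i]):
                pbest[i] = X[i][:]
                if cost(X[i]) < cost(gbest):
                    gbest = X[i][:]
    return gbest
```

In a scheduling setting, `cost` would encode the weighted mix of completion time, cost, and energy; here any scalar objective over a real vector works.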
S3I_CC16_0014: Efficient R-Tree Based Indexing Scheme for Server-Centric
Cloud Storage System
Cloud storage systems pose new challenges to the community in supporting
efficient concurrent querying for various data-intensive applications, in which
indices play an important role. In this paper, we explore a practical method
to construct a two-layer indexing scheme for multi-dimensional data in diverse
server-centric cloud storage systems. We first propose RT-HCN, an indexing scheme
integrating an R-tree based indexing structure with an HCN-based routing protocol.
RT-HCN organizes storage and compute nodes into an HCN overlay, one of the
newly proposed server-centric data center topologies. Based on the properties of
HCN, we design a specific index mapping technique to maintain layered global
indices and corresponding query processing algorithms to support efficient query
tasks. Then, we extend the idea of RT-HCN to another server-centric data center
topology, DCell, revealing a potentially generalized and feasible way of deploying
two-layer indexing schemes on other server-centric networks. Furthermore, we prove
theoretically that RT-HCN is both space-efficient and query-efficient: each node
maintains a tolerable number of global indices while highly concurrent queries are
processed within acceptable overhead. We finally conduct targeted experiments on
Amazon's EC2 platform, comparing our design with RT-CAN, a similar indexing
scheme for traditional P2P networks. The results validate the query efficiency,
especially the point-query speedup, of RT-HCN, indicating its potential applicability
in future data centers.
S3I_CC16_0015: Dynamic Certification of Cloud Services: Trust, but Verify!
Although intended to ensure cloud service providers' security, reliability, and
legal compliance, current cloud service certifications are quickly outdated. Dynamic
certification, on the other hand, provides automated monitoring and auditing to verify
cloud service providers' ongoing adherence to certification requirements.
S3I_CC16_0016: Privacy preserving and delegated access control for cloud
applications
In cloud computing applications, users' data and applications are hosted by
cloud providers. This paper proposes an access control scheme that uses a
combination of discretionary access control and cryptographic techniques to secure
users' data and applications hosted by cloud providers. Many cloud applications
require users to share their data and applications hosted by cloud providers; to
facilitate resource sharing, the proposed scheme allows cloud users to delegate their
access permissions to other users easily. Using the access control policies that guard
access to resources and the credentials submitted by users, a third party could infer
information about the cloud users, so the proposed scheme uses cryptographic
techniques to obscure the access control policies and users' credentials to ensure the
privacy of the cloud users. Data encryption is used to guarantee the confidentiality of
data. Compared with existing schemes, the proposed scheme is more flexible and
easier to use, and experiments show that it is also efficient.
S3I_CC16_0017: Auditing a Cloud Provider’s Compliance With Data Backup
Requirements: A Game Theoretical Analysis
Recent developments in cloud computing have introduced significant security
challenges in guaranteeing the confidentiality, integrity, and availability of
outsourced data. A service level agreement (SLA) is usually signed between the
cloud provider (CP) and the customer. For redundancy purposes, it is important to
verify the CP's compliance with the data backup requirements in the SLA. A number
of security mechanisms exist to check the integrity and availability of outsourced
data. This task can be performed by the customer or delegated to an independent
entity that we refer to as the verifier. However, checking the availability of data
introduces extra costs, which can discourage the customer from performing data
verification too often. The interaction between the verifier and the CP can be
captured using game theory in order to find an optimal data verification strategy. In
this paper, we formulate this problem as a two-player non-cooperative game. We
consider the case in which each type of data is replicated a number of times that can
depend on a set of parameters including, among others, its size and sensitivity. We
analyze the strategies of the CP and the verifier at the Nash equilibrium and derive
the expected behavior of both players. Finally, we validate our model numerically on
a case study and explain how we evaluate the parameters of the model.
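The equilibrium computation for a 2x2 inspection-style game (verify/trust against comply/shirk) can be sketched with the generic textbook machinery; the payoff matrices and indifference conditions below are standard game theory, not the paper's specific replication model.

```python
def mixed_equilibrium_2x2(A, B):
    """Mixed-strategy Nash equilibrium of a 2x2 bimatrix game.
    A[i][j], B[i][j]: row/column player's payoffs for strategies (i, j).
    Returns (p, q): probability the row player plays row 0 and the column
    player plays column 0, each chosen to make the opponent indifferent.
    Assumes an interior equilibrium exists (denominators nonzero)."""
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    return p, q
```

In the verification game, the row player (verifier) mixes between auditing and trusting while the CP mixes between complying and shirking; at equilibrium each side's mixing probability is pinned down by the other side's payoffs.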
S3I_CC16_0018: An Efficient Privacy-Preserving Ranked Keyword Search
Method
Cloud data owners prefer to outsource documents in encrypted form for the
purpose of privacy preservation, so it is essential to develop efficient and reliable
ciphertext search techniques. One challenge is that the relationships between
documents are normally concealed in the process of encryption, leading to
significant degradation of search accuracy. Moreover, the volume of data in data
centers has experienced dramatic growth, which makes it even more challenging to
design ciphertext search schemes that can provide efficient and reliable online
information retrieval over large volumes of encrypted data. In this paper, a
hierarchical clustering method is proposed to support richer search semantics and to
meet the demand for fast ciphertext search in a big-data environment. The proposed
hierarchical approach clusters documents based on a minimum relevance threshold
and then partitions the resulting clusters into sub-clusters until the constraint on the
maximum cluster size is reached. In the search phase, this approach achieves linear
computational complexity against an exponential increase in the size of the
document collection. To verify the authenticity of search results, a structure called
the minimum hash sub-tree is also designed. Experiments have been conducted
using a collection set built from IEEE Xplore. The results show that with a sharp
increase of documents in the dataset, the search time of the proposed method
increases linearly, whereas the search time of the traditional method increases
exponentially. Furthermore, the proposed method outperforms the traditional method
in the rank privacy and relevance of retrieved documents.
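The recursive split against a maximum cluster size can be sketched as follows, on plaintext scores; the real scheme operates over encrypted relevance vectors, and the median split here is an assumed simplification of its partitioning step.

```python
def build_clusters(docs, relevance, max_size):
    """Recursively split any cluster larger than max_size at the median
    relevance score, yielding bounded-size sub-clusters for fast search."""
    if len(docs) <= max_size:
        return [docs]
    ranked = sorted(docs, key=relevance)
    mid = len(ranked) // 2
    return (build_clusters(ranked[:mid], relevance, max_size)
            + build_clusters(ranked[mid:], relevance, max_size))
```

Because every leaf cluster is bounded by `max_size`, a query only ever scans a bounded number of documents per visited cluster, which is where the linear search-time behavior comes from.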
S3I_CC16_0019: Towards Building Forensics Enabled Cloud Through Secure
Logging-as-a-Service
Collection and analysis of various logs (e.g., process logs, network logs) are
fundamental activities in computer forensics, so ensuring the security of activity logs
is crucial to reliable forensic investigations. However, because of the black-box
nature of clouds and the volatility and co-mingling of cloud data, providing cloud
logs to investigators while preserving users' privacy and the integrity of the logs is
challenging. Current secure logging schemes, which consider the logger as trusted,
cannot be applied in clouds, since cloud providers (the loggers) may collude with
malicious users or investigators to alter the logs. In this paper, we analyze the threats
to cloud users' activity logs, considering collusion between cloud users, providers,
and investigators. Based on the threat model, we propose Secure-Logging-as-a-
Service (SecLaaS), which preserves the various logs generated by the activity of
virtual machines running in clouds and ensures the confidentiality and integrity of
such logs. Investigators or the court authority can access these logs only through the
RESTful APIs provided by SecLaaS, which ensures confidentiality. The integrity of
the logs is ensured by a hash-chain scheme and by proofs of past logs published
periodically by the cloud providers. In prior research, we used two accumulator
schemes, the Bloom filter and the RSA accumulator, to build the proofs of past logs.
In this paper, we propose a new accumulator scheme, the Bloom-Tree, which
performs better than the other two accumulators in terms of time and space
requirements.
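A Bloom filter accumulator of the kind the paper compares against can be sketched directly; the filter size, hash count, and hash derivation below are illustrative choices, not SecLaaS parameters.

```python
import hashlib

class BloomFilter:
    """Accumulator for published log proofs: membership queries have no
    false negatives, so a silently deleted log entry is always detectable."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

The trade-off the paper measures is exactly visible here: the filter is tiny and fast, but it admits false positives, which is what motivates comparing it against RSA accumulators and the proposed Bloom-Tree.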
S3I_CC16_0020: A Genetic Algorithm for Virtual Machine Migration in
Heterogeneous Mobile Cloud Computing
Mobile Cloud Computing (MCC) improves the performance of a mobile
application by executing it on a resourceful cloud server, which can minimize
execution time compared to a resource-constrained mobile device. Virtual Machine
(VM) migration in MCC brings cloud resources closer to a user so as to further
minimize the response time of an offloaded application. Such resource migration is
very effective for interactive and real-time applications. However, the key challenge
is to find an optimal cloud server for migration that offers the maximum reduction in
computation time. In this paper, we propose a Genetic Algorithm (GA) based VM
migration model, namely GAVMM, for heterogeneous MCC systems. In GAVMM,
we take user mobility and the load of the cloud servers into consideration to optimize
the effectiveness of VM migration. The goal of GAVMM is to select the optimal
cloud server for a mobile VM and to minimize the total number of VM migrations,
resulting in reduced task execution time. Additionally, we present a thorough
numerical evaluation investigating the effectiveness of our proposed model
compared to state-of-the-art VM migration policies.
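The GA selection loop can be sketched as a toy search over VM-to-server assignments. The cost matrix, elitist selection, and mutation rate are illustrative assumptions, not GAVMM's mobility- and load-aware model.

```python
import random

def ga_select(costs, pop_size=30, gens=40, seed=7):
    """Toy genetic search for a low-cost cloud server per VM;
    costs[v][s] is the execution-time cost of placing VM v on server s."""
    rng = random.Random(seed)
    n_vms, n_srv = len(costs), len(costs[0])
    fitness = lambda plan: sum(costs[v][plan[v]] for v in range(n_vms))
    pop = [[rng.randrange(n_srv) for _ in range(n_vms)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_vms) if n_vms > 1 else 0
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.2:            # mutation
                child[rng.randrange(n_vms)] = rng.randrange(n_srv)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

A real GAVMM-style fitness would fold in user mobility and server load rather than a static cost matrix; the chromosome/crossover/mutation skeleton stays the same.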
S3I_CC16_0021: Online Resource Scheduling Under Concave Pricing for Cloud
Computing
With the booming growth of the cloud computing industry, computational
resources are readily and elastically available to customers. In order to attract
customers with various demands, most Infrastructure-as-a-Service (IaaS) cloud
service providers offer several pricing strategies, such as pay-as-you-go, pay less per
unit when you use more (so-called volume discount), and pay even less when you
reserve. The diverse pricing schemes among different IaaS providers, or even within
the same provider, form a complex economic landscape that nurtures the market of
cloud brokers. By strategically scheduling multiple customers' resource requests, a
cloud broker can fully take advantage of the discounts offered by cloud service
providers. In this paper, we focus on how a broker may help a group of customers
fully utilize the volume discount pricing strategy offered by cloud service providers
through cost-efficient online resource scheduling. We present a randomized online
stack-centric scheduling algorithm (ROSA) and theoretically prove the lower bound
of its competitive ratio. Our simulation shows that ROSA achieves a competitive
ratio close to the theoretical lower bound under a special-case cost function. Trace-
driven simulation using Google cluster data demonstrates that ROSA is superior to
conventional online scheduling algorithms in terms of cost saving.
S3I_CC16_0022: Dynamic Bin Packing for On-Demand Cloud Resource
Allocation
Dynamic Bin Packing (DBP) is a variant of classical bin packing, which
assumes that items may arrive and depart at arbitrary times. Existing works on DBP
generally aim to minimize the maximum number of bins ever used in the packing. In
this paper, we consider a new version of the DBP problem, namely, the MinTotal
DBP problem which targets at minimizing the total cost of the bins used overtime. It
is motivated by the request dispatching problem arising from cloud gaming systems.
We analyze the competitive ratios of the modified versions of the commonly used
First Fit, Best Fit, and Any Fit packing (the family of packing algorithms that open a
new bin only when no currently open bin can accommodate the item to be packed)
algorithms for the MinTotal DBP problem. We show that the competitive ratio of Any
Fit packing cannot be better than μ + 1, where μ is the ratio of the maximum item
duration to the minimum item duration. The competitive ratio of Best Fit packing is
not bounded for any given μ. For First Fit packing, if all the item sizes are smaller
than 1/β of the bin capacity (β > 1 is a constant), the competitive ratio has an upper
bound of β/(β−1)·μ + 3β/(β−1) + 1. For the general case, the competitive ratio of
First Fit packing has an upper bound of 2μ + 7. We also propose a Hybrid First Fit packing
algorithm that can achieve a competitive ratio no larger than 5/4 μ + 19/4 when μ is
not known and can achieve a competitive ratio no larger than μ + 5 when μ is known.
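The modified First Fit rule can be sketched as a small simulation. This is an illustrative sketch under assumed conventions (unit bin capacity, a bin billed for its open duration and closed once it empties); the function name and the (arrival, duration, size) item encoding are not from the paper.

```python
def first_fit_total_cost(items, capacity=1.0):
    """First Fit for MinTotal DBP: each item (arrival, duration, size) is
    packed on arrival into the first open bin with room; a bin accrues cost
    equal to its open duration and is closed once all its items depart."""
    bins = []        # open bins: [current_load, opened_at, [(depart, size), ...]]
    total = 0.0
    for arrival, duration, size in sorted(items):
        # Expire departed items; close (and bill) any bin that has emptied.
        open_bins = []
        for load, opened, contents in bins:
            last_depart = max(d for d, _ in contents)
            contents = [(d, s) for d, s in contents if d > arrival]
            if contents:
                open_bins.append([sum(s for _, s in contents), opened, contents])
            else:
                total += last_depart - opened   # bin closed when it emptied
        bins = open_bins
        depart = arrival + duration
        for b in bins:                          # First Fit: first bin with room
            if b[0] + size <= capacity:
                b[0] += size
                b[2].append((depart, size))
                break
        else:                                   # no bin fits: open a new one
            bins.append([size, arrival, [(depart, size)]])
    for load, opened, contents in bins:         # bill the bins still open
        total += max(d for d, _ in contents) - opened
    return total
```

On three items (0, 10, 0.6), (1, 5, 0.5), (2, 3, 0.3), the second item does not fit next to the first, so a second bin opens at t = 1; the third fits into the first bin, giving a total bin-time cost of (10 − 0) + (6 − 1) = 15.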
S3I_CC16_0023: A Scalable Data Chunk Similarity based Compression
Approach for Efficient Big Sensing Data Processing on Cloud
Big sensing data is prevalent in both industry and scientific research
applications where the data is generated with high volume and velocity. Cloud
computing provides a promising platform for big sensing data processing and storage
as it provides a flexible stack of massive computing, storage, and software services in
a scalable manner. Current big sensing data processing on the Cloud has adopted some
data compression techniques. However, due to the high volume and velocity of big
sensing data, traditional data compression techniques lack sufficient efficiency and
scalability for data processing. Based on specific on-Cloud data compression
requirements, we propose a novel scalable data compression approach based on
calculating similarity among the partitioned data chunks. Instead of compressing basic
data units, the compression will be conducted over partitioned data chunks. To restore
the original data sets, restoration functions and prediction models are designed.
MapReduce is used for algorithm implementation to achieve extra scalability on
Cloud. With real world meteorological big sensing data experiments on U-Cloud
platform, we demonstrate that the proposed scalable compression approach based on
data chunk similarity can significantly improve data compression efficiency with
affordable data accuracy loss.
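The chunk-similarity idea can be sketched minimally as follows. The fixed-size chunking, the mean-absolute-difference similarity test, the tolerance, and all function names are illustrative assumptions, not the paper's actual similarity model or its MapReduce implementation.

```python
def compress_chunks(stream, chunk_size, tolerance):
    """Split a list of numeric readings into fixed-size chunks; a chunk whose
    mean absolute difference from an already-stored base chunk is within
    `tolerance` is replaced by a reference to that base chunk."""
    bases, out = [], []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        ref = next((j for j, base in enumerate(bases)
                    if len(base) == len(chunk) and
                    sum(abs(a - b) for a, b in zip(base, chunk)) / len(chunk)
                    <= tolerance),
                   None)
        if ref is None:
            bases.append(chunk)                 # store the chunk itself
            out.append(("base", len(bases) - 1))
        else:
            out.append(("ref", ref))            # store only a reference
    return bases, out

def restore(bases, out):
    """Approximate restoration: each reference expands to its base chunk."""
    return [v for tag, idx in out for v in bases[idx]]
```

Restoration is approximate because a referenced chunk is replaced by its base, which is precisely the "affordable data accuracy loss" traded for the reduced storage of similar sensor readings.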
S3I_J16_001 - SecRBAC: Secure data in the Clouds
Most current security solutions are based on perimeter security. However,
Cloud computing breaks the organization's perimeter. When data resides in the Cloud,
it resides outside the organizational bounds. This causes users to lose control
over their data and raises reasonable security concerns that slow down the adoption of
Cloud computing. Is the Cloud service provider accessing the data? Is it legitimately
applying the access control policy defined by the user? This paper presents a data-
centric access control solution with enriched role-based expressiveness in which
security is focused on protecting user data regardless of the Cloud service provider
that holds it. Novel identity-based and proxy re-encryption techniques are used to
protect the authorization model. Data is encrypted and authorization rules are
cryptographically protected to preserve user data against access or misbehavior by
the service provider. The authorization model provides high expressiveness with role
hierarchy and resource hierarchy support. The solution takes advantage of the logic
formalism provided by Semantic Web technologies, which enables advanced rule
management like semantic conflict detection. A proof of concept implementation has
been developed and a working prototypical deployment of the proposal has been
integrated within Google services.
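The role-hierarchy and resource-hierarchy expressiveness can be illustrated in plain, non-cryptographic form. All names below are hypothetical, and the identity-based and proxy re-encryption protection of the rules themselves is deliberately omitted; this only sketches the hierarchical authorization logic.

```python
def chain(node, parent):
    """A node plus its transitive parents in a {child: parent} hierarchy map."""
    out = [node]
    while node in parent:
        node = parent[node]
        out.append(node)
    return out

def authorized(role, action, resource, grants, role_inherits, res_parent):
    """A request is allowed if some role the requesting role inherits from
    holds `action` on the resource or on any resource containing it."""
    return any((r, action, res) in grants
               for r in chain(role, role_inherits)
               for res in chain(resource, res_parent))

# Hypothetical policy: auditors may read /finance; managers inherit auditor;
# /finance/q3.xls sits under /finance in the resource hierarchy.
grants = {("auditor", "read", "/finance")}
role_inherits = {"manager": "auditor"}
res_parent = {"/finance/q3.xls": "/finance"}
```

With this policy a manager may read /finance/q3.xls (via both hierarchies), while an auditor still cannot write it; in the paper these rules would additionally be cryptographically bound to the encrypted data rather than evaluated by a trusted server.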
S3I_J16_002 - Trust Agent-Based Behavior Induction in Social Networks