INTERNATIONAL JOURNAL FOR TRENDS IN ENGINEERING & TECHNOLOGY
VOLUME 4 ISSUE 2 – APRIL 2015 - ISSN: 2349 - 9303
Improving Efficiency of Security in Multi-Cloud
Prashanth R.¹, Panimalar Engineering College, prashanthrajendiran@gmail.com
Sridharan K.², Panimalar Engineering College, sridharank.p@gmail.com
Abstract--Due to the risk of service availability failure and the possibility of malicious insiders in a single cloud, a movement towards
"multi-clouds" has emerged recently. In a typical multi-cloud security system there is a possibility for a third party to access user files,
and ensuring security at this stage is difficult since most activities are carried out over the network. In this paper, an enhanced security
methodology is introduced to make data stored in the cloud more secure. The duple (two-step) authentication process introduced in this
concept defends against malicious insiders and shields private data. Disadvantages of traditional systems, such as unauthorized access and
hacking, are overcome in the proposed system, and a comparison with traditional systems in terms of performance and
computational time shows better results.
Keywords: Cloud Computing, Cloud Security, Cloud Performance, Multi-Cloud.
——————————  ——————————
1 INTRODUCTION:
In recent years, cloud computing has rapidly expanded as an
alternative to the conventional computing model, since it can
provide a flexible, dynamic, resilient, and cost-effective
infrastructure. When multiple internal and/or external cloud
services are incorporated, we get a distributed cloud
environment, i.e., a multi-cloud. Clients can access their
remote resources through interfaces such as a Web browser.
Generally, cloud computing has three deployment models:
public cloud, private cloud, and hybrid cloud. Multi-cloud is an
extension of the hybrid cloud. When a multi-cloud is used to store
clients' data, distributed cloud storage platforms are
indispensable for the clients' data management. Of course, a
multi-cloud storage platform is also more vulnerable to security
attacks. For example, malicious CSPs may modify or delete
clients' data, since these data are outside the clients' control. To
ensure the security of remote data, the CSPs must provide
security techniques for the storage service.
In 2007, Ateniese et al. proposed the PDP model and concrete
PDP schemes. It is a probabilistic proof technique for CSPs to
prove the clients’ data integrity without downloading the whole
data. Later, Ateniese et al. proposed a dynamic PDP
security model and concrete dynamic PDP schemes. To
support the data insert operation, Erway et al. proposed a fully
dynamic PDP scheme based on an authenticated skip list. Since
PDP is an important lightweight remote data integrity checking
model, many researchers have studied it.
In 2012, Zhu et al. proposed the PDP model in
distributed cloud environment from the following aspects: high
security, transparent verification, and high performance. They
proposed a verification framework for multi-cloud storage and
constructed a CPDP scheme which is claimed to be provably
secure in their security model. Their scheme made use of three
techniques: hash index hierarchy (HIH), homomorphic
verifiable response (HVR), and a multi-prover zero-knowledge proof
system. They claimed that their scheme satisfied the security
properties of completeness, knowledge soundness, and zero-knowledge,
which would ensure that CPDP resists the data leakage attack and the tag
forgery attack.
In this paper, we show that Zhu et al.'s CPDP scheme does
not satisfy the property of knowledge soundness: malicious
CSPs or the organizer can cheat the clients. We then discuss the
origin and severity of these security flaws. Our work can help
cryptographers and engineers design and implement more
secure schemes.
The intention of this paper is to prevent unauthorized access
to cloud storage with the help of the authorized person's details.
A trusted third party generates a token by an enhanced
randomized algorithm, and with the help of that
token the key distribution center (KDC) generates a key for
the client. The KDC generates the key using
the SHA hash algorithm. The benefit of this approach is that it
prevents the data leakage attack and the tag forgery attack.
2 PROPOSED SYSTEM:
2.1 Architecture:
Figure 1: Architecture Diagram for authentication
2.2 Working Principle:
In this scheme, the client obtains a token from the
trusted third party (TTP) by supplying his/her personal details, so
only the TTP knows the client's personal details and the token it
generated. The client then presents the token to the key
distribution center (KDC) in order to obtain the key to access the
cloud; the KDC knows only the token given by the client and the
key it generates. With the username and key, the client can access
the cloud in order to store, retrieve, and process data.
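The duple authentication flow above can be sketched as follows. All class and method names here are illustrative assumptions, not the paper's implementation; the point is the separation of knowledge: the TTP alone sees personal details, the KDC alone sees the token-to-key mapping.

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Sketch of the duple (two-step) authentication flow:
// client -> TTP (personal details -> token) -> KDC (token -> key) -> cloud.
class TrustedThirdParty {
    private final Map<String, String> issued = new HashMap<>();
    private final SecureRandom rng = new SecureRandom();

    // Only the TTP ever sees the client's personal details.
    String issueToken(String personalDetails) {
        String token = Long.toHexString(rng.nextLong());
        issued.put(token, personalDetails);
        return token;
    }

    boolean isValid(String token) {
        return issued.containsKey(token);
    }
}

class KeyDistributionCenter {
    // The KDC sees only the token, never the personal details.
    String issueKey(TrustedThirdParty ttp, String token) {
        if (!ttp.isValid(token)) {
            throw new IllegalArgumentException("unknown token");
        }
        // Placeholder for the SHA-based key generation described later.
        return Integer.toHexString(token.hashCode());
    }
}
```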
The token is generated by a randomized algorithm and
the key with the help of the SHA algorithm. This allows the
client to access the cloud with high security, and the stored data
can be further protected by verification techniques such as
Homomorphic Verifiable Response (HVR) and
Hash Index Hierarchy (HIH).
The files are split and stored in the cloud together with
index values generated by the Hash Index
Hierarchy (HIH). The files are later downloaded with the help of
these index values, which identify the file parts so that they can
be merged back into the original uploaded file.
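The split-and-merge step can be sketched as below. The indexed-part record is an illustrative stand-in for the paper's HIH index values; the essential property shown is that the index lets parts returning from different clouds in arbitrary order be reassembled into the original file.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the file splitter/merger: a file's bytes are cut into
// fixed-size parts, each tagged with an index, so the original can be
// reassembled regardless of the order in which parts come back.
class FileSplitter {
    record Part(int index, byte[] data) {}

    static List<Part> split(byte[] file, int partSize) {
        List<Part> parts = new ArrayList<>();
        for (int i = 0, idx = 0; i < file.length; i += partSize, idx++) {
            int end = Math.min(i + partSize, file.length);
            byte[] chunk = new byte[end - i];
            System.arraycopy(file, i, chunk, 0, chunk.length);
            parts.add(new Part(idx, chunk));
        }
        return parts;
    }

    static byte[] merge(List<Part> parts) {
        List<Part> sorted = new ArrayList<>(parts);
        sorted.sort(Comparator.comparingInt(Part::index)); // index restores order
        int total = sorted.stream().mapToInt(p -> p.data().length).sum();
        byte[] out = new byte[total];
        int pos = 0;
        for (Part p : sorted) {
            System.arraycopy(p.data(), 0, out, pos, p.data().length);
            pos += p.data().length;
        }
        return out;
    }
}
```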
2.3 Methodology and Modules:
2.3.1 Multi cloud storage:
Distributed computing refers to any large
collaboration in which many individual personal computer
owners allow some of their computer's processing time to be put
at the service of a large problem. In our system, each cloud
admin holds data blocks, and the cloud user uploads data
into the multi-cloud. Because the cloud computing environment is
built on open architectures and interfaces, it has the capability
to incorporate multiple internal and/or external cloud services
together to provide high interoperability. We call such a
distributed cloud environment a multi-cloud. A multi-cloud
allows clients to easily access their resources remotely
through interfaces.
2.3.2 Data Integrity:
Data integrity is very important in database operations
in particular, and in data warehousing and business intelligence in
general, because it ensures that data is of high
quality: correct, consistent, and accessible.
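A minimal form of the integrity check used throughout this system can be sketched as follows: record a SHA-256 digest when a block is uploaded, then recompute and compare it when the block is read back. The class name and API are illustrative assumptions.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of a data-integrity check: a digest recorded at upload time
// is compared against a digest recomputed at download time.
class IntegrityChecker {
    static byte[] digest(byte[] data) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    // True only if the block read back matches the recorded digest.
    // MessageDigest.isEqual compares in constant time.
    static boolean verify(byte[] data, byte[] expectedDigest)
            throws NoSuchAlgorithmException {
        return MessageDigest.isEqual(digest(data), expectedDigest);
    }
}
```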
2.3.3 Cooperative PDP:
Cooperative PDP (CPDP) schemes adopt the zero-knowledge
property and a three-layered index hierarchy,
respectively. In particular, an efficient method selects the
optimal number of sectors in each block to minimize the
computation costs of clients and storage service providers.
The CPDP scheme achieves this without compromising data
privacy, based on modern cryptographic techniques.
2.3.4 Third Party Auditor:
A Trusted Third Party (TTP) is trusted to store
verification parameters and to offer public query services for these
parameters. In our system, the TTP views the user
data blocks uploaded to the distributed cloud. In the distributed
cloud environment, each cloud holds user data blocks; if any
modification is attempted by a cloud owner, an alert is sent to the
TTP.
2.3.5 Cloud User:
The cloud user has a large amount of data to be
stored in multiple clouds and has permission to access and
manipulate the stored data. The user's data is converted into data
blocks, which are uploaded to the cloud; the TPA views
the data blocks uploaded to the multi-cloud. The user can
update the uploaded data, and when the user wants to download
files, the data in the multi-cloud is integrated and downloaded.
2.3.6 Token Generation:
The token is generated from the user's
personal details, which form the input. The token then serves as
the input for key generation, and the resulting key is used to
access the cloud. Each generated token is unique, even for the
same user.
import java.security.SecureRandom;

// Generates a random token of the given length. SecureRandom and a
// printable alphabet replace the original java.util.Random over the full
// char range, which could emit unprintable and surrogate characters.
static String randomString(final int length) {
    final String alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    SecureRandom r = new SecureRandom();
    StringBuilder sb = new StringBuilder(length);
    for (int i = 0; i < length; i++) {
        sb.append(alphabet.charAt(r.nextInt(alphabet.length())));
    }
    return sb.toString();
}
2.3.7 Key Generation:
The key is generated in order to access the cloud
storage and to download or delete files. The key can be private
or public. The public key is also called the master key; using that
master key, the private key can be generated.
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Derives a key as the hex-encoded SHA-256 digest of the password.
static String generateKey(String password) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    md.update(password.getBytes());
    byte[] byteData = md.digest();
    StringBuilder sb = new StringBuilder();
    for (byte b : byteData) {
        // Convert each byte to two hex characters.
        sb.append(Integer.toString((b & 0xff) + 0x100, 16).substring(1));
    }
    return sb.toString();
}
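The master-key/private-key relationship described above is not spelled out in the paper. One common way to realize it, sketched here purely as an assumption, is to derive a per-client private key from the master key with HMAC-SHA256 keyed by the master key over the client's identity:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;

// Illustrative sketch (not the paper's specified construction): derive a
// per-client private key from a shared master key using HMAC-SHA256.
class KeyDerivation {
    static String derivePrivateKey(String masterKey, String clientId)
            throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(masterKey.getBytes(), "HmacSHA256"));
        byte[] out = mac.doFinal(clientId.getBytes());
        StringBuilder sb = new StringBuilder();
        for (byte b : out) {
            sb.append(String.format("%02x", b)); // hex-encode each byte
        }
        return sb.toString();
    }
}
```

The derivation is deterministic, so the same client always receives the same private key, while different clients receive unrelated keys.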
3 WORK FLOW DIAGRAM:
Figure 2: Architecture Flow of the System
4 EXPERIMENTAL RESULT:
Figure 3: Experimental results for a 150-KB file under varying numbers
of sectors (parameter values 0.01 and P = 0.99).
We evaluated the performance of our scheme in terms
of computational overhead. For the sake of comparison, our
experiments used the following scenario: a fixed-size
file is used to generate the tags and prove data possession
under different numbers of sectors s. For a 150-KB file,
the computational overheads of the verification protocol are
shown in Figure 3(a) as s ranges from 1 to 50
with a sector size of 20 bytes; there exists an
optimal value of s between 15 and 25. The computational overheads
of tag generation are shown in Figure 3(b). The results
indicate that these overheads decrease as s
increases. Hence, it is necessary to select the optimal number of
sectors in each block to minimize the computation costs of
clients and storage service providers.
5 CONCLUSION:
This paper has addressed the security of a returning user
accessing cloud storage. Customers need only deal
with a single service provider, through a simple, unified service
interface, without concern for the internal processes between
heterogeneous clouds. This model can ensure high data
reliability, low computation time, and high security by using
intelligent data security strategies.
