Bulk IEEE Projects 2012 - 2013

SEABIRDS
IEEE 2012 - 2013 SOFTWARE PROJECTS IN VARIOUS DOMAINS | JAVA | J2ME | J2EE | DOTNET | MATLAB | NS2

SBGC, Chennai: 24/83, O Block, MMDA Colony, Arumbakkam, Chennai - 600106. Mobile: 09944361169.
SBGC, Trichy: 4th Floor, Surya Complex, Singarathope Bus Stop, Old Madurai Road, Trichy - 620002. Mobile: 09003012150. Phone: 0431-4012303.

Web: www.ieeeproject.in
E-Mail: ieeeproject@hotmail.com
SBGC provides IEEE 2012-2013 projects for all final-year students. We assist students with technical guidance in two categories.

Category 1: Students with new project ideas / new or old IEEE papers.
Category 2: Students selecting from our project list.

When you register for a project, we ensure that the project is implemented to your fullest satisfaction and that you have a thorough understanding of every aspect of the project.

SBGC PROVIDES THE LATEST IEEE 2012 PROJECTS / IEEE 2013 PROJECTS FOR STUDENTS OF THE FOLLOWING DEPARTMENTS:
B.E, B.TECH, M.TECH, M.E, DIPLOMA, MS, BSC, MSC, BCA, MCA, MBA, BBA, PHD
B.E (ECE, EEE, E&I, ICE, MECH, PROD, CSE, IT, THERMAL, AUTOMOBILE, MECHATRONICS, ROBOTICS)
B.TECH (ECE, MECHATRONICS, E&I, EEE, MECH, CSE, IT, ROBOTICS)
M.TECH (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
M.E (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
DIPLOMA (CE, EEE, E&I, ICE, MECH, PROD, CSE, IT)
MBA (HR, FINANCE, MANAGEMENT, HOTEL MANAGEMENT, SYSTEM MANAGEMENT, PROJECT MANAGEMENT, HOSPITAL MANAGEMENT, SCHOOL MANAGEMENT, MARKETING MANAGEMENT, SAFETY MANAGEMENT)

We also have training and project R&D divisions to serve students and make them job-oriented professionals.
PROJECT SUPPORT AND DELIVERABLES
  • Project Abstract
  • IEEE Paper
  • IEEE Reference Papers, Materials & Books on CD
  • PPT / Review Material
  • Project Report (All Diagrams & Screenshots)
  • Working Procedures
  • Algorithm Explanations
  • Project Installation on Laptops
  • Project Certificate
JAVA DATA MINING

1. A Framework for Personal Mobile Commerce Pattern Mining and Prediction

Abstract
Due to a wide range of potential applications, research on mobile commerce has received a lot of interest from both industry and academia. Among them, one of the active topic areas is the mining and prediction of users' mobile commerce behaviors, such as their movements and purchase transactions. In this paper, we propose a novel framework, called Mobile Commerce Explorer (MCE), for mining and prediction of mobile users' movements and purchase transactions under the context of mobile commerce. The MCE framework consists of three major components: 1) Similarity Inference Model (SIM) for measuring the similarities among stores and items, which are two basic mobile commerce entities considered in this paper; 2) Personal Mobile Commerce Pattern Mine (PMCP-Mine) algorithm for efficient discovery of mobile users' Personal Mobile Commerce Patterns (PMCPs); and 3) Mobile Commerce Behavior Predictor (MCBP) for prediction of possible mobile user behaviors. To the best of our knowledge, this is the first work that facilitates mining and prediction of mobile users' commerce behaviors in order to recommend stores and items previously unknown to a user. We perform an extensive experimental evaluation by simulation and show that our proposals produce excellent results.

2. Efficient Extended Boolean Retrieval

Abstract
Extended Boolean retrieval (EBR) models were proposed nearly three decades ago, but have had little practical impact, despite their significant advantages compared to either ranked keyword or pure Boolean retrieval. In particular, EBR models produce meaningful rankings; their query model allows the representation of complex concepts in an and-or format; and they are scrutable, in that the score assigned to a document depends solely on the content of that document, unaffected by any collection statistics or other external factors. These characteristics make EBR models attractive in domains typified by medical and legal searching, where the emphasis is on iterative development of reproducible complex queries of dozens or even hundreds of terms. However, EBR is much more computationally expensive than the alternatives. We consider the implementation of the p-norm approach to EBR, and demonstrate that
ideas used in the max-score and WAND exact optimization techniques for ranked keyword retrieval can be adapted to allow selective bypass of documents via a low-cost screening process for this and similar retrieval models. We also propose term-independent bounds that are able to further reduce the number of score calculations for short, simple queries under the extended Boolean retrieval model. Together, these methods yield an overall saving of 50 to 80 percent of the evaluation cost on test queries drawn from biomedical search.

3. Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques

Abstract
Recommender systems are becoming increasingly important to individual users and businesses for providing personalized recommendations. However, while the majority of algorithms proposed in the recommender systems literature have focused on improving recommendation accuracy (as exemplified by the recent Netflix Prize competition), other important aspects of recommendation quality, such as the diversity of recommendations, have often been overlooked. In this paper, we introduce and explore a number of item ranking techniques that can generate recommendations that have substantially higher aggregate diversity across all users while maintaining comparable levels of recommendation accuracy. Comprehensive empirical evaluation consistently shows the diversity gains of the proposed techniques using several real-world rating datasets and different rating prediction algorithms.
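The ranking-based idea above lends itself to a compact illustration. The sketch below (plain Java, with hypothetical item names and a made-up rating threshold; it illustrates the general idea, not the paper's exact ranking functions) recommends, among items whose predicted rating clears a threshold, the least popular ones first, which raises aggregate diversity across users while keeping accuracy near the threshold.

```java
import java.util.*;

/** Minimal sketch of a popularity-based re-ranking rule: among items whose
 *  predicted rating clears a threshold, recommend the least popular first.
 *  Item names, ratings, and the threshold are illustrative assumptions. */
public class DiversityReRanker {

    record Candidate(String item, double predictedRating, int popularity) {}

    static List<String> topN(List<Candidate> candidates, double ratingThreshold, int n) {
        return candidates.stream()
                .filter(c -> c.predictedRating() >= ratingThreshold)    // keep only "safe" items
                .sorted(Comparator.comparingInt(Candidate::popularity)  // least popular first
                        .thenComparing(Comparator.comparingDouble(Candidate::predictedRating).reversed()))
                .limit(n)
                .map(Candidate::item)
                .toList();
    }

    public static void main(String[] args) {
        List<Candidate> cands = List.of(
                new Candidate("A", 4.6, 1200),
                new Candidate("B", 4.4, 35),
                new Candidate("C", 4.9, 5000),
                new Candidate("D", 3.9, 10));      // filtered out by the threshold
        System.out.println(topN(cands, 4.0, 2));   // -> [B, A]
    }
}
```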
4. BibPro: A Citation Parser Based on Sequence Alignment

The dramatic increase in the number of academic publications has led to growing demand for efficient organization of the resources to meet researchers' needs. As a result, a number of network services have compiled databases from the public resources scattered over the Internet. However, publications by different conferences and journals adopt different citation styles. It is an interesting problem to accurately extract metadata from a citation string which is formatted in one of thousands of different styles. It has attracted a great deal of attention in research in recent years. In this paper, based on the notion of sequence alignment, we present a citation parser called BibPro that extracts components of a citation string. To demonstrate the efficacy of BibPro, we conducted experiments on three benchmark data sets. The results show that BibPro achieved over 90 percent accuracy on each benchmark. Even with citations and associated metadata retrieved from the web as training data, our experiments show that BibPro still achieves a reasonable performance.

5. Extending Attribute Information for Small Data Set Classification

Data quantity is the main issue in the small data set problem, because usually insufficient data will not lead to a robust classification performance. How to extract more effective information from a small data set is thus of considerable interest. This paper proposes a new attribute construction approach which converts the original data attributes into a higher dimensional feature space to extract more attribute information by a similarity-based algorithm using the classification-oriented fuzzy membership function. Seven data sets with different attribute sizes are employed to examine the performance of the proposed method. The results show that the proposed method has a superior classification performance when compared to principal component analysis (PCA), kernel principal component analysis (KPCA), and kernel independent component analysis (KICA) with a Gaussian kernel in the support vector machine (SVM) classifier.

6. Horizontal Aggregations in SQL to Prepare Data Sets for Data Mining Analysis

Preparing a data set for analysis is generally the most time consuming task in a data mining project, requiring many complex SQL queries, joining tables, and aggregating columns. Existing SQL aggregations have limitations to prepare data sets because they return one column per aggregated group. In general, a significant manual effort is required to build data sets where a horizontal layout is required. We propose simple, yet powerful, methods to generate SQL code to return aggregated columns in a horizontal tabular layout, returning a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations build data sets with a horizontal denormalized layout (e.g., point-dimension, observation-variable, instance-feature), which is the standard layout required by most data mining algorithms. We propose three fundamental methods to evaluate horizontal aggregations: CASE, exploiting the programming CASE construct; SPJ, based on standard relational algebra operators (SPJ queries); and PIVOT, using the PIVOT operator,
which is offered by some DBMSs. Experiments with large tables compare the proposed query evaluation methods. Our CASE method has similar speed to the PIVOT operator, and it is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does not.

7. Enabling Multilevel Trust in Privacy Preserving Data Mining

Privacy Preserving Data Mining (PPDM) addresses the problem of developing accurate models about aggregated data without access to precise information in individual data records. A widely studied perturbation-based PPDM approach introduces random perturbation to individual values to preserve privacy before data are published. Previous solutions of this approach are limited in their tacit assumption of single-level trust on data miners. In this work, we relax this assumption and expand the scope of perturbation-based PPDM to Multilevel Trust (MLT-PPDM). In our setting, the more trusted a data miner is, the less perturbed copy of the data it can access. Under this setting, a malicious data miner may have access to differently perturbed copies of the same data through various means, and may combine these diverse copies to jointly infer additional information about the original data that the data owner does not intend to release. Preventing such diversity attacks is the key challenge of providing MLT-PPDM services. We address this challenge by properly correlating perturbation across copies at different trust levels. We prove that our solution is robust against diversity attacks with respect to our privacy goal. That is, for data miners who have access to an arbitrary collection of the perturbed copies, our solution prevents them from jointly reconstructing the original data more accurately than the best effort using any individual copy in the collection. Our solution allows a data owner to generate perturbed copies of its data for arbitrary trust levels on demand. This feature offers data owners maximum flexibility.
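To make the "less trusted means more perturbed, yet copies stay correlated" idea concrete, here is a minimal sketch. It is not the paper's construction: the attribute values, noise levels, and the nesting of Gaussian noise increments are illustrative assumptions. Because each lower trust level only adds an extra, independent increment on top of the previous level's noise, combining several copies does not average the noise away.

```java
import java.util.Arrays;
import java.util.Random;

/** Minimal sketch: perturbed copies of a numeric attribute for several trust
 *  levels, where noise is correlated across copies by reusing the noise of the
 *  more trusted copy and only adding an increment for less trusted ones. */
public class MultiLevelPerturbation {

    public static void main(String[] args) {
        double[] original = {52_000, 61_500, 47_200, 80_000};   // e.g., salaries (illustrative)
        double[] sigmas   = {1_000, 3_000, 8_000};               // level 0 = most trusted, smallest noise
        Random rng = new Random(42);

        double[] noise = new double[original.length];            // cumulative noise shared by all levels
        for (int level = 0; level < sigmas.length; level++) {
            double extraSigma = level == 0
                    ? sigmas[0]
                    : Math.sqrt(sigmas[level] * sigmas[level] - sigmas[level - 1] * sigmas[level - 1]);
            double[] copy = new double[original.length];
            for (int i = 0; i < original.length; i++) {
                noise[i] += rng.nextGaussian() * extraSigma;      // add only the increment for this level
                copy[i] = original[i] + noise[i];                 // total variance equals sigmas[level]^2
            }
            System.out.println("trust level " + level + ": " + Arrays.toString(copy));
        }
    }
}
```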
8. Using Rule Ontology in Repeated Rule Acquisition from Similar Web Sites

Inferential rules are as essential to Semantic Web applications as ontology. Therefore, rule acquisition is also an important issue, and the Web that implies inferential rules can be a major source of rule acquisition. We expect that it will be easier to acquire rules from a site by using similar rules of other sites in the same domain rather than starting from scratch. We propose an automatic rule acquisition procedure using a rule ontology, RuleToOnto, which represents information about the rule components and their structures. The rule acquisition procedure consists of the rule component identification step and the rule composition step. We developed an A* algorithm for the rule composition, and we performed experiments demonstrating that our ontology-based rule acquisition approach works in a real-world application.

DOTNET DATA MINING

1. Slicing: A New Approach for Privacy Preserving Data Publishing (Dotnet)

Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses a considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply to data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the ℓ-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.

JAVA NETWORKING

1. Latency Equalization as a New Network Service Primitive
Abstract
Multiparty interactive network applications such as teleconferencing, network gaming, and online trading are gaining popularity. In addition to end-to-end latency bounds, these applications require that the delay difference among multiple clients of the service is minimized for a good interactive experience. We propose a Latency EQualization (LEQ) service, which equalizes the perceived latency for all clients participating in an interactive network application. To effectively implement the proposed LEQ service, network support is essential. The LEQ architecture uses a few routers in the network as hubs to redirect packets of interactive applications along paths with similar end-to-end delay. We first formulate the hub selection problem, prove its NP-hardness, and provide a greedy algorithm to solve it. Through extensive simulations, we show that our LEQ architecture significantly reduces delay difference under different optimization criteria that allow or do not allow compromising the per-user end-to-end delay. Our LEQ service is incrementally deployable in today's networks, requiring just software modifications to edge routers.

2. Adaptive Opportunistic Routing for Wireless Ad Hoc Networks

Abstract
A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc networks is proposed. The proposed scheme utilizes a reinforcement learning framework to opportunistically route the packets even in the absence of reliable knowledge about channel statistics and the network model. This scheme is shown to be optimal with respect to an expected average per-packet reward criterion. The proposed routing scheme jointly addresses the issues of learning and routing in an opportunistic context, where the network structure is characterized by the transmission success probabilities. In particular, this learning framework leads to a stochastic routing scheme that optimally "explores" and "exploits" the opportunities in the network.

3. Revisiting Dynamic Query Protocols in Unstructured Peer-to-Peer Networks

In unstructured peer-to-peer networks, the average response latency and traffic cost of a query are two main performance metrics. Controlled-flooding resource query
algorithms are widely used in unstructured networks such as peer-to-peer networks. In this paper, we propose a novel algorithm named Selective Dynamic Query (SDQ). Based on mathematical programming, SDQ calculates the optimal combination of an integer TTL value and a set of neighbors to control the scope of the next query. Our results demonstrate that SDQ provides finer grained control than other algorithms: its response latency is close to the well-known minimum one via Expanding Ring; in the meantime, its traffic cost is also close to the minimum. To the best of our knowledge, this is the first work capable of achieving the best trade-off between response latency and traffic cost.

4. Opportunistic Flow-Level Latency Estimation Using Consistent NetFlow (Networking 2012, Java)

The inherent measurement support in routers (SNMP counters or NetFlow) is not sufficient to diagnose performance problems in IP networks, especially for flow-specific problems where the aggregate behavior within a router appears normal. Tomographic approaches to detect the location of such problems are not feasible in such cases as active probes can only catch aggregate characteristics. To address this problem, in this paper, we propose a Consistent NetFlow (CNF) architecture for measuring per-flow delay measurements within routers. CNF utilizes the existing NetFlow architecture that already reports the first and last timestamps per flow, and it proposes hash-based sampling to ensure that two adjacent routers record the same flows. We devise a novel Multiflow estimator that approximates the intermediate delay samples from other background flows to significantly improve the per-flow latency estimates compared to the naive estimator that only uses actual flow samples. In our experiments using real backbone traces and realistic delay models, we show that the Multiflow estimator is accurate, with a median relative error of less than 20% for flows of size greater than 100 packets. We also show that the Multiflow estimator performs two to three times better than a prior approach based on trajectory sampling at an equivalent packet sampling rate.
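The hash-based sampling step mentioned above is easy to sketch: if every router applies the same deterministic hash to the flow key and records the flow when the hash falls below a threshold, adjacent routers automatically pick the same flows. The flow-key layout, the CRC32 hash, and the 10% rate below are illustrative assumptions, not the CNF specification.

```java
import java.util.zip.CRC32;

/** Minimal sketch of consistent, hash-based flow sampling: every router uses
 *  the same deterministic hash of the flow key, so two adjacent routers
 *  independently select exactly the same subset of flows. */
public class ConsistentFlowSampler {

    private final double samplingRate;   // fraction of flows to record, e.g. 0.1

    ConsistentFlowSampler(double samplingRate) { this.samplingRate = samplingRate; }

    /** True if this flow should be recorded; identical result on every router. */
    boolean sample(String srcIp, String dstIp, int srcPort, int dstPort, int proto) {
        CRC32 crc = new CRC32();
        crc.update((srcIp + "|" + dstIp + "|" + srcPort + "|" + dstPort + "|" + proto).getBytes());
        double u = (double) crc.getValue() / 0xFFFFFFFFL;   // map hash to [0, 1]
        return u < samplingRate;
    }

    public static void main(String[] args) {
        ConsistentFlowSampler routerA = new ConsistentFlowSampler(0.1);
        ConsistentFlowSampler routerB = new ConsistentFlowSampler(0.1);
        boolean a = routerA.sample("10.0.0.1", "10.0.0.9", 443, 52311, 6);
        boolean b = routerB.sample("10.0.0.1", "10.0.0.9", 443, 52311, 6);
        System.out.println("router A records flow: " + a + ", router B records flow: " + b);
    }
}
```

A production implementation would use a hash with better mixing than CRC32, but the consistency argument is the same: the decision depends only on the flow key, never on local state.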
5. Exploiting Excess Capacity to Improve Robustness of WDM Mesh Networks

Excess capacity (EC) is the unused capacity in a network. We propose EC management techniques to improve network performance. Our techniques exploit the EC in two ways. First, a connection preprovisioning algorithm is used to reduce the connection setup time. Second, whenever possible, we use protection schemes that have higher availability and shorter protection switching time. Specifically, depending on the amount of EC available in the network, our proposed EC management techniques dynamically migrate connections between high-availability, high-backup-capacity protection schemes and low-availability, low-backup-capacity protection schemes. Thus, multiple protection schemes can coexist in the network. The four EC management techniques studied in this paper differ in two respects: when the connections are migrated from one protection scheme to another, and which connections are migrated. Specifically, Lazy techniques migrate connections only when necessary, whereas Proactive techniques migrate connections to free up capacity in advance. Partial Backup Reprovisioning (PBR) techniques try to migrate a minimal set of connections, whereas Global Backup Reprovisioning (GBR) techniques migrate all connections. We develop integer linear program (ILP) formulations and heuristic algorithms for the EC management techniques. We then present numerical examples to illustrate how the EC management techniques improve network performance by exploiting the EC in wavelength-division-multiplexing (WDM) mesh networks.

6. Leveraging a Compound Graph-Based DHT for Multi-Attribute Range Queries with Performance Analysis

Resource discovery is critical to the usability and accessibility of grid computing systems. The Distributed Hash Table (DHT) has been applied to grid systems as a distributed mechanism for providing scalable range-query and multi-attribute resource discovery. Multi-DHT-based approaches depend on multiple DHT networks, with each network responsible for a single attribute. Single-DHT-based approaches keep the resource information of all attributes in a single node. Both classes of approaches lead to high overhead. In this paper, we propose a Low-Overhead Range-query Multi-attribute (LORM) DHT-based resource discovery approach. Unlike other DHT-based approaches, LORM relies on a single compound graph-based DHT network and distributes resource information among nodes in balance by taking advantage of the compound graph structure. Moreover, it has high capability to handle the large-scale and dynamic characteristics of resources in grids. Experimental results demonstrate the efficiency of LORM in comparison with other resource discovery approaches. LORM dramatically reduces maintenance and resource discovery overhead. In addition, it yields significant improvements in resource location efficiency. We also analyze the performance of the LORM approach rigorously by comparing it with other multi-DHT-based and single-DHT-based approaches with respect to their overhead and efficiency. The analytical results are consistent with the experimental results, and prove the superiority of the LORM approach in theory.
JAVA IMAGE PROCESSING

1. Abrupt Motion Tracking Via Intensively Adaptive Markov-Chain Monte Carlo Sampling

The robust tracking of abrupt motion is a challenging task in computer vision due to its large motion uncertainty. While various particle filters and conventional Markov-chain Monte Carlo (MCMC) methods have been proposed for visual tracking, these methods often suffer from the well-known local-trap problem or from a poor convergence rate. In this paper, we propose a novel sampling-based tracking scheme for the abrupt motion problem in the Bayesian filtering framework. To effectively handle the local-trap problem, we first introduce the stochastic approximation Monte Carlo (SAMC) sampling method into the Bayesian filter tracking framework, in which the filtering distribution is adaptively estimated as the sampling proceeds, and thus, a good approximation to the target distribution is achieved. In addition, we propose a new MCMC sampler with intensive adaptation to further improve the sampling efficiency, which combines a density-grid-based predictive model with the SAMC sampling to give a proposal adaptation scheme. The proposed method is effective and computationally efficient in addressing the abrupt motion problem. We compare our approach with several alternative tracking algorithms, and extensive experimental results are presented to demonstrate the effectiveness and the efficiency of the proposed method in dealing with various types of abrupt motions.

2. Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks

We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we escape from the stereotype and existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding window based. We design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a
color transform to separate vehicle colors and non-vehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy for detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate flexibility and good generalization abilities of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.

3. A Secret-Sharing-Based Method for Authentication of Grayscale Document Images via the Use of the PNG Image With a Data Repair Capability

A new blind authentication method based on the secret sharing technique with a data repair capability for grayscale document images via the use of the Portable Network Graphics (PNG) image is proposed. An authentication signal is generated for each block of a grayscale document image, which, together with the binarized block content, is transformed into several shares using the Shamir secret sharing scheme. The involved parameters are carefully chosen so that as many shares as possible are generated and embedded into an alpha channel plane. The alpha channel plane is then combined with the original grayscale image to form a PNG image. During the embedding process, the computed share values are mapped into a range of alpha channel values near their maximum value of 255 to yield a transparent stego-image with a disguise effect. In the process of image authentication, an image block is marked as tampered if the authentication signal computed from the current block content does not match that extracted from the shares embedded in the alpha channel plane. Data repairing is then applied to each tampered block by a reverse Shamir scheme after collecting two shares from unmarked blocks. Measures for protecting the security of the data hidden in the alpha channel are also proposed. Good experimental results prove the effectiveness of the proposed method for real applications.
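The building block the method relies on is (k, n) Shamir secret sharing, sketched below for a single byte-sized value (think of one authentication/content value per block). The prime 257 and the 2-out-of-4 parameters are illustrative assumptions; the paper chooses its own parameters to maximize the number of shares embedded in the alpha channel.

```java
import java.util.Random;

/** Minimal (k, n) Shamir sketch over the prime field GF(257): split one small
 *  secret into n shares; any k of them reconstruct it by Lagrange interpolation. */
public class ShamirSketch {
    static final int P = 257;               // prime field large enough for one byte

    /** n shares (x = 1..n) of 'secret' under a random degree k-1 polynomial. */
    static int[][] split(int secret, int k, int n, Random rng) {
        int[] coeff = new int[k];
        coeff[0] = secret;
        for (int j = 1; j < k; j++) coeff[j] = rng.nextInt(P);
        int[][] shares = new int[n][2];
        for (int i = 0; i < n; i++) {
            int x = i + 1, y = 0, xp = 1;
            for (int j = 0; j < k; j++) { y = (y + coeff[j] * xp) % P; xp = (xp * x) % P; }
            shares[i] = new int[]{x, y};
        }
        return shares;
    }

    /** Reconstruct the secret (polynomial value at x = 0) from exactly k shares. */
    static int reconstruct(int[][] shares) {
        long secret = 0;
        for (int i = 0; i < shares.length; i++) {
            long num = 1, den = 1;
            for (int j = 0; j < shares.length; j++) {
                if (i == j) continue;
                num = (num * (P - shares[j][0])) % P;                      // (0 - x_j) mod P
                den = (den * ((shares[i][0] - shares[j][0] + P) % P)) % P; // (x_i - x_j) mod P
            }
            secret = (secret + shares[i][1] * num % P * modInverse(den)) % P;
        }
        return (int) secret;
    }

    static long modInverse(long a) {          // Fermat's little theorem: a^(P-2) mod P
        long result = 1, base = a % P;
        for (int e = P - 2; e > 0; e >>= 1, base = base * base % P)
            if ((e & 1) == 1) result = result * base % P;
        return result;
    }

    public static void main(String[] args) {
        int[][] shares = split(173, 2, 4, new Random(7));                  // 2-out-of-4 sharing
        System.out.println(reconstruct(new int[][]{shares[0], shares[3]})); // prints 173
    }
}
```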
JAVA NETWORK SECURITY

1. Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs

The multihop routing in wireless sensor networks (WSNs) offers little protection against identity deception through replaying routing information. An adversary can exploit this defect to launch various harmful or even devastating attacks against the routing protocols, including sinkhole attacks, wormhole attacks, and Sybil attacks. The situation is further aggravated by mobile and harsh network conditions. Traditional cryptographic techniques or efforts at developing trust-aware routing protocols do not effectively address this severe problem. To secure WSNs against adversaries misdirecting the multihop routing, we have designed and implemented TARF, a robust trust-aware routing framework for dynamic WSNs. Without tight time synchronization or known geographic information, TARF provides trustworthy and energy-efficient routes. Most importantly, TARF proves effective against those harmful attacks developed out of identity deception; the resilience of TARF is verified through extensive evaluation with both simulation and empirical experiments on large-scale WSNs under various scenarios, including mobile and RF-shielding network conditions. Further, we have implemented a low-overhead TARF module in TinyOS; as demonstrated, this implementation can be incorporated into existing routing protocols with the least effort. Based on TARF, we also demonstrated a proof-of-concept mobile target detection application that functions well against an anti-detection mechanism.
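A rough sketch of the kind of per-node decision such a framework makes: pick the neighbor with the lowest energy cost weighted by how trustworthy it has proven to be, so a short but frequently misbehaving route loses to a slightly longer, reliable one. The score below (cost divided by trust) and the field names are illustrative assumptions, not TARF's published formulas.

```java
import java.util.List;

/** Minimal sketch of a trust-aware next-hop choice: each neighbor advertises an
 *  energy cost toward the sink, the node keeps a locally learned trust value for
 *  it, and the neighbor with the lowest trust-weighted cost wins. */
public class TrustAwareRouting {

    record Neighbor(String id, double energyCostToSink, double trust) {}   // trust in (0, 1]

    static Neighbor pickNextHop(List<Neighbor> neighbors) {
        Neighbor best = null;
        double bestScore = Double.MAX_VALUE;
        for (Neighbor n : neighbors) {
            if (n.trust() <= 0) continue;                      // never route through distrusted nodes
            double score = n.energyCostToSink() / n.trust();   // rough expected cost per delivered packet
            if (score < bestScore) { bestScore = score; best = n; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Neighbor> nbrs = List.of(
                new Neighbor("n1", 5.0, 0.95),   // slightly longer route, well behaved
                new Neighbor("n2", 3.0, 0.30),   // short route, but often drops or misdirects
                new Neighbor("n3", 6.0, 0.90));
        System.out.println("next hop: " + pickNextHop(nbrs).id());   // n1, since 5/0.95 < 3/0.30
    }
}
```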
2. Detecting Spam Zombies by Monitoring Outgoing Messages

Compromised machines are one of the key security threats on the Internet; they are often used to launch various security attacks such as spamming and spreading malware, DDoS, and identity theft. Given that spamming provides a key economic incentive for attackers to recruit a large number of compromised machines, we focus on the detection of the compromised machines in a network that are involved in spamming activities, commonly known as spam zombies. We develop an effective spam zombie detection system named SPOT by monitoring outgoing messages of a network. SPOT is designed based on a powerful statistical tool called the Sequential Probability Ratio Test, which has bounded false positive and false negative error rates. In addition, we also evaluate the performance of the developed SPOT system using a two-month e-mail trace collected in a large US campus network. Our evaluation studies show that SPOT is an effective and efficient system in automatically detecting compromised machines in a network. For example, among the 440 internal IP addresses observed in the e-mail trace, SPOT identifies 132 of them as being associated with compromised machines. Out of the 132 IP addresses identified by SPOT, 126 can be either independently confirmed (110) or are highly likely (16) to be compromised. Moreover, only seven internal IP addresses associated with compromised machines in the trace are missed by SPOT. In addition, we also compare the performance of SPOT with two other spam zombie detection algorithms based on the number and percentage of spam messages originated or forwarded by internal machines, respectively, and show that SPOT outperforms these two detection algorithms.

3. A Hybrid Approach to Private Record Matching (Network Security 2012, Java)

Real-world entities are not always represented by the same set of features in different data sets. Therefore, matching records of the same real-world entity distributed across these data sets is a challenging task. If the data sets contain private information, the problem becomes even more difficult. Existing solutions to this problem generally follow two approaches: sanitization techniques and cryptographic techniques. We propose a hybrid technique that combines these two approaches and enables users to trade off between privacy, accuracy, and cost. Our main contribution is the use of a blocking phase that operates over sanitized data to filter out, in a privacy-preserving manner, pairs of records that do not satisfy the matching condition. We also provide a formal definition of privacy and prove that the participants of our protocols learn nothing other than their share of the result and what can be inferred from their share of the result, their input, and sanitized views of the input data sets (which are considered public information). Our method incurs considerably lower costs than cryptographic techniques and yields significantly more accurate matching results compared to sanitization techniques, even when privacy requirements are high.

4. ES-MPICH2: A Message Passing Interface with Enhanced Security (Network Security 2012, Java)

An increasing number of commodity clusters are connected to each other by public networks, which have become a potential threat to security-sensitive parallel applications running on the clusters. To address this security issue, we developed a Message Passing Interface (MPI) implementation to preserve the confidentiality of messages communicated among nodes of clusters in an unsecured network. We focus on MPI rather than other protocols, because MPI is one of the most popular communication protocols for parallel computing on clusters. Our MPI implementation, called ES-MPICH2, was built based on MPICH2 developed by the Argonne National
Laboratory. Like MPICH2, ES-MPICH2 aims at supporting a large variety of computation and communication platforms like commodity clusters and high-speed networks. We integrated encryption and decryption algorithms into the MPICH2 library with the standard MPI interface; thus, data confidentiality of MPI applications can be readily preserved without a need to change the source code of the MPI applications. MPI application programmers can fully configure any confidentiality services in ES-MPICH2, because a secured configuration file in ES-MPICH2 offers the programmers flexibility in choosing any cryptographic schemes and keys seamlessly incorporated in ES-MPICH2. We used the Sandia Micro Benchmark and Intel MPI Benchmark suites to evaluate and compare the performance of ES-MPICH2 with the original MPICH2 version. Our experiments show that the overhead incurred by the confidentiality services in ES-MPICH2 is marginal for small messages. The security overhead in ES-MPICH2 becomes more pronounced with larger messages. Our results also show that the security overhead can be significantly reduced in ES-MPICH2 by high-performance clusters.

5. Detecting Anomalous Insiders in Collaborative Information Systems

Collaborative information systems (CISs) are deployed within a diverse array of environments that manage sensitive information. Current security mechanisms detect insider threats, but they are ill-suited to monitor systems in which users function in dynamic teams. In this paper, we introduce the community anomaly detection system (CADS), an unsupervised learning framework to detect insider threats based on the access logs of collaborative environments. The framework is based on the observation that typical CIS users tend to form community structures based on the subjects accessed (e.g., patients' records viewed by healthcare providers). CADS consists of two components: 1) relational pattern extraction, which derives community structures, and 2) anomaly prediction, which leverages a statistical model to determine when users have sufficiently deviated from communities. We further extend CADS into MetaCADS to account for the semantics of subjects (e.g., patients' diagnoses). To empirically evaluate the framework, we perform an assessment with three months of access logs from a real electronic health record (EHR) system in a large medical center. The results illustrate that our models exhibit significant performance gains over state-of-the-art competitors. When the number of illicit users is low, MetaCADS is the best model, but as the number grows, commonly accessed semantics lead to hiding in a crowd, such that CADS is more prudent.
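To illustrate the kind of signal CADS extracts from access logs, the sketch below represents each user by the set of subjects they accessed and scores a user by how little that set overlaps with their most similar peers. The Jaccard/k-nearest-neighbor score is an illustrative stand-in for the paper's community extraction and statistical model, and the tiny log is made up.

```java
import java.util.*;

/** Minimal sketch: users with access sets that barely overlap with their most
 *  similar peers receive a high anomaly score. */
public class AccessAnomalySketch {

    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 0 : (double) inter.size() / union.size();
    }

    /** Anomaly score in [0, 1]: 1 minus the mean similarity to the k most similar users. */
    static double anomalyScore(String user, Map<String, Set<String>> accessLog, int k) {
        List<Double> sims = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : accessLog.entrySet())
            if (!e.getKey().equals(user))
                sims.add(jaccard(accessLog.get(user), e.getValue()));
        sims.sort(Comparator.reverseOrder());
        double mean = sims.stream().limit(k).mapToDouble(Double::doubleValue).average().orElse(0);
        return 1 - mean;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> log = Map.of(
                "alice",   Set.of("p1", "p2", "p3"),
                "bob",     Set.of("p1", "p2", "p4"),
                "carol",   Set.of("p2", "p3", "p4"),
                "mallory", Set.of("p9", "p10", "p11"));   // accesses unrelated records -> score 1.0
        for (String u : log.keySet())
            System.out.printf("%s -> %.2f%n", u, anomalyScore(u, log, 2));
    }
}
```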
JAVA MOBILE COMPUTING

1. Handling Selfishness in Replica Allocation over a Mobile Ad Hoc Network

In a mobile ad hoc network, the mobility and resource constraints of mobile nodes may lead to network partitioning or performance degradation. Several data replication techniques have been proposed to minimize performance degradation. Most of them assume that all mobile nodes collaborate fully in terms of sharing their memory space. In reality, however, some nodes may selfishly decide only to cooperate partially, or not at all, with other nodes. These selfish nodes could then reduce the overall data accessibility in the network. In this paper, we examine the impact of selfish nodes in a mobile ad hoc network from the perspective of replica allocation. We term this selfish replica allocation. In particular, we develop a selfish node detection algorithm that considers partial selfishness and novel replica allocation techniques to properly cope with selfish replica allocation. The conducted simulations demonstrate that the proposed approach outperforms traditional cooperative replica allocation techniques in terms of data accessibility, communication cost, and average query delay.

2. Acknowledgment-Based Broadcast Protocol for Reliable and Efficient Data Dissemination in Vehicular Ad Hoc Networks

We propose a broadcast algorithm suitable for a wide range of vehicular scenarios, which only employs local information acquired via periodic beacon messages containing acknowledgments of the circulated broadcast messages. Each vehicle decides whether it belongs to a connected dominating set (CDS). Vehicles in the CDS use a shorter waiting period before possible retransmission. At time-out expiration, a vehicle retransmits if it is aware of at least one neighbor in need of the message. To address intermittent connectivity and the appearance of new neighbors, the evaluation timer can be restarted. Our algorithm resolves propagation at road intersections without any need to even recognize intersections. It is inherently adaptable to different mobility regimes, without the need to classify network or vehicle speeds. In a thorough simulation-based performance evaluation, our algorithm is shown to provide higher reliability and message efficiency than existing approaches for non-safety applications.

3. Toward Reliable Data Delivery for Highly Dynamic Mobile Ad Hoc Networks
This paper addresses the problem of delivering data packets for highly dynamic mobile ad hoc networks in a reliable and timely manner. Most existing ad hoc routing protocols are susceptible to node mobility, especially for large-scale networks. Driven by this issue, we propose an efficient Position-based Opportunistic Routing (POR) protocol which takes advantage of the stateless property of geographic routing and the broadcast nature of the wireless medium. When a data packet is sent out, some of the neighbor nodes that have overheard the transmission will serve as forwarding candidates, and take turns to forward the packet if it is not relayed by the specific best forwarder within a certain period of time. By utilizing such in-the-air backup, communication is maintained without being interrupted. The additional latency incurred by local route recovery is greatly reduced and the duplicate relaying caused by packet reroute is also decreased. In the case of a communication hole, a Virtual Destination-based Void Handling (VDVH) scheme is further proposed to work together with POR. Both theoretical analysis and simulation results show that POR achieves excellent performance even under high node mobility with acceptable overhead, and the new void handling scheme also works well.

4. Protecting Location Privacy in Sensor Networks against a Global Eavesdropper

While many protocols for sensor network security provide confidentiality for the content of messages, contextual information usually remains exposed. Such contextual information can be exploited by an adversary to derive sensitive information such as the locations of monitored objects and data sinks in the field. Attacks on these components can significantly undermine any network application. Existing techniques defend against the leakage of location information from a limited adversary who can only observe network traffic in a small region. However, a stronger adversary, the global eavesdropper, is realistic and can defeat these existing techniques. This paper first formalizes the location privacy issues in sensor networks under this strong adversary model and computes a lower bound on the communication overhead needed for achieving a given level of location privacy. The paper then proposes two techniques to provide location privacy to monitored objects (source-location privacy), namely periodic collection and source simulation, and two techniques to provide location privacy to data sinks (sink-location privacy), namely sink simulation and backbone flooding. These techniques provide trade-offs between privacy, communication cost, and latency. Through analysis and simulation, we demonstrate that the proposed techniques are efficient and effective for source and sink-location privacy in sensor networks.
DOTNET NETWORK SECURITY

1. Revisiting Defenses against Large-Scale Online Password Guessing Attacks

Brute force and dictionary attacks on password-only remote login services are now widespread and ever increasing. Enabling convenient login for legitimate users while preventing such attacks is a difficult problem. Automated Turing Tests (ATTs) continue to be an effective, easy-to-deploy approach to identify automated malicious login attempts with a reasonable cost of inconvenience to users. In this paper, we discuss the inadequacy of existing and proposed login protocols designed to address large-scale online dictionary attacks (e.g., from a botnet of hundreds of thousands of nodes). We propose a new Password Guessing Resistant Protocol (PGRP), derived upon revisiting prior proposals designed to restrict such attacks. While PGRP limits the total number of login attempts from unknown remote hosts to as low as a single attempt per username, legitimate users in most cases (e.g., when attempts are made from known, frequently used machines) can make several failed login attempts before being challenged with an ATT. We analyze the performance of PGRP with two real-world data sets and find it more promising than existing proposals.
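The policy PGRP describes, very few free failures for unknown machines and a larger budget for machines with a history of successful logins, can be sketched in a few lines. The thresholds, the user|IP keying, and the in-memory state below are illustrative assumptions, not the protocol's exact bookkeeping.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Minimal sketch of the throttling idea: failed logins from machines with no
 *  history of success trigger an ATT (e.g., a CAPTCHA) almost immediately,
 *  while known machines get a more generous failure budget. */
public class LoginThrottleSketch {

    static final int UNKNOWN_HOST_FREE_FAILURES = 1;
    static final int KNOWN_HOST_FREE_FAILURES = 5;

    private final Set<String> knownPairs = new HashSet<>();        // "user|ip" with a past success
    private final Map<String, Integer> failures = new HashMap<>(); // failed-attempt counters

    /** True if this attempt must first solve an ATT before the password is even checked. */
    boolean requiresAtt(String user, String ip) {
        String key = user + "|" + ip;
        int limit = knownPairs.contains(key) ? KNOWN_HOST_FREE_FAILURES : UNKNOWN_HOST_FREE_FAILURES;
        return failures.getOrDefault(key, 0) >= limit;
    }

    void recordResult(String user, String ip, boolean success) {
        String key = user + "|" + ip;
        if (success) { knownPairs.add(key); failures.remove(key); }
        else failures.merge(key, 1, Integer::sum);
    }

    public static void main(String[] args) {
        LoginThrottleSketch pgrp = new LoginThrottleSketch();
        pgrp.recordResult("alice", "203.0.113.7", false);
        System.out.println(pgrp.requiresAtt("alice", "203.0.113.7"));   // true: unknown host used its free try
        pgrp.recordResult("alice", "198.51.100.2", true);               // a success marks the host as known
        pgrp.recordResult("alice", "198.51.100.2", false);
        System.out.println(pgrp.requiresAtt("alice", "198.51.100.2"));  // false: known host still within budget
    }
}
```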
2. SecuredTrust: A Dynamic Trust Computation Model for Secured Communication in Multiagent Systems

Security and privacy issues have become critically important with the fast expansion of multiagent systems. Most network applications such as pervasive computing, grid computing, and P2P networks can be viewed as multiagent systems which are open, anonymous, and dynamic in nature. Such characteristics of multiagent systems introduce vulnerabilities and threats to providing secured communication. One feasible way to minimize the threats is to evaluate the trust and reputation of the interacting agents. Many trust/reputation models have done so, but they fail to properly evaluate trust when malicious agents start to behave in an unpredictable way. Moreover, these models are ineffective in providing quick response to a malicious agent's oscillating behavior. Another aspect of multiagent systems which is becoming critical for sustaining good service quality is the even distribution of workload among service providing agents. Most trust/reputation models have not yet addressed this issue. So, to cope with the strategically altering behavior of malicious agents and to distribute workload as evenly as possible among service providers, we present in this paper a dynamic trust computation model called "SecuredTrust." In this paper, we first analyze the different factors related to evaluating the trust of an agent and then propose a comprehensive quantitative model for measuring such trust. We also propose a novel load-balancing algorithm based on the different factors defined in our model. Simulation results indicate that our model, compared to other existing models, can effectively cope with strategic behavioral change of malicious agents and at the same time efficiently distribute workload among the service providing agents under stable conditions.

3. Enhanced Privacy ID: A Direct Anonymous Attestation Scheme with Enhanced Revocation Capabilities (2012, Dotnet Network Security)

Direct Anonymous Attestation (DAA) is a scheme that enables the remote authentication of a Trusted Platform Module (TPM) while preserving the user's privacy. A TPM can prove to a remote party that it is a valid TPM without revealing its identity and without linkability. In the DAA scheme, a TPM can be revoked only if the DAA private key in the hardware has been extracted and published widely so that verifiers obtain the corrupted private key. If the unlinkability requirement is relaxed, a TPM suspected of being compromised can be revoked even if the private key is not known. However, with the full unlinkability requirement intact, if a TPM has been compromised but its private key has not been distributed to verifiers, the TPM cannot be revoked. Furthermore, a TPM cannot be revoked by the issuer if the TPM is found to be compromised after the DAA issuing has occurred. In this paper, we present a new DAA scheme called the Enhanced Privacy ID (EPID) scheme that addresses the above limitations. While still providing unlinkability, our scheme provides a method to revoke a TPM even if the TPM private key is unknown. This expanded revocation property makes the scheme useful for other applications such as driver's licenses. Our EPID scheme is efficient and provably secure in the same security model as DAA, i.e., in the random oracle model under the strong RSA assumption and the decisional Diffie-Hellman assumption.

4. Extending Attack Graph-Based Security Metrics and Aggregating Their Application (2012, Dotnet Network Security)
The attack graph is an abstraction that reveals the ways an attacker can leverage vulnerabilities in a network to violate a security policy. When used with attack graph-based security metrics, the attack graph may be used to quantitatively assess security-relevant aspects of a network. The Shortest Path metric, the Number of Paths metric, and the Mean of Path Lengths metric are three attack graph-based security metrics that can extract security-relevant information. However, one's usage of these metrics can lead to misleading results. The Shortest Path metric and the Mean of Path Lengths metric fail to adequately account for the number of ways an attacker may violate a security policy. The Number of Paths metric fails to adequately account for the attack effort associated with the attack paths. To overcome these shortcomings, we propose a complementary suite of attack graph-based security metrics and specify an algorithm for combining the usage of these metrics. We present simulated results that suggest that our approach reaches a conclusion about which of two attack graphs corresponds to a network that is most secure in many instances.

DOTNET NETWORKING

1. Throughput and Energy Efficiency in Wireless Ad Hoc Networks With Gaussian Channels

This paper studies the bottleneck link capacity under the Gaussian channel model in strongly connected random wireless ad hoc networks, with n nodes independently and uniformly distributed in a unit square. We assume that each node is equipped with two transceivers (one for transmission and one for reception) and allow all nodes to transmit simultaneously. We draw lower and upper bounds, in terms of bottleneck link capacity, for homogeneous networks (all nodes have the same transmission power level) and propose an energy-efficient power assignment algorithm (CBPA) for heterogeneous networks (nodes may have different power levels), with a provable bottleneck link capacity guarantee of Ω(B·log(1 + 1/√(n·log²n))), where B is the channel bandwidth. In addition, we develop a distributed implementation of CBPA with O(n²) message complexity and provide extensive simulation results.

2. Cross-Layer Analysis of the End-to-End Delay Distribution in Wireless Sensor Networks
Emerging applications of wireless sensor networks (WSNs) require real-time quality-of-service (QoS) guarantees to be provided by the network. Due to the nondeterministic impacts of the wireless channel and queuing mechanisms, probabilistic analysis of QoS is essential. One important metric of QoS in WSNs is the probability distribution of the end-to-end delay. Compared to other widely used delay performance metrics such as the mean delay, delay variance, and worst-case delay, the delay distribution can be used to obtain the probability of meeting a specific deadline for QoS-based communication in WSNs. To investigate the end-to-end delay distribution, in this paper, a comprehensive cross-layer analysis framework, which employs a stochastic queueing model in realistic channel environments, is developed. This framework is generic and can be parameterized for a wide variety of MAC protocols and routing protocols. Case studies with the CSMA/CA MAC protocol and an anycast protocol are conducted to illustrate how the developed framework can analytically predict the distribution of the end-to-end delay. Extensive test-bed experiments and simulations are performed to validate the accuracy of the framework for both deterministic and random deployments. Moreover, the effects of various network parameters on the distribution of end-to-end delay are investigated through the developed framework. To the best of our knowledge, this is the first work that provides a generic, probabilistic cross-layer analysis of end-to-end delay in WSNs.

3. Routing for Power Minimization in the Speed Scaling Model

We study network optimization that considers power minimization as an objective. Studies have shown that mechanisms such as speed scaling can significantly reduce the power consumption of telecommunication networks by matching the consumption of each network element to the amount of processing required for its carried traffic. Most existing research on speed scaling focuses on a single network element in isolation. We aim for a network-wide optimization. Specifically, we study a routing problem with the objective of provisioning guaranteed speed/bandwidth for a given demand matrix while minimizing power consumption. Optimizing the routes critically relies on the characteristic of the speed-power curve f(s), which is how power is consumed as a function of the processing speed s. If f is superadditive, we show that there is no bounded approximation in general for integral routing, i.e., where each traffic demand follows a single path. This contrasts with the well-known logarithmic approximation for subadditive functions. However, for common speed-power curves such as polynomials f(s) = μ·s^α, we are able to show a constant approximation via a simple scheme of randomized rounding. We also generalize this rounding approach to handle the case in which a nonzero startup cost σ appears in the speed-power curve, i.e., f(s) = σ + μ·s^α if s > 0, and f(s) = 0 if s = 0. We present an O((σ/μ)^(1/α))-approximation, and we
discuss why coming up with an approximation ratio independent of the startup cost may be hard. Finally, we provide simulation results to validate our algorithmic approaches.

JAVA CLOUD COMPUTING

1. A Gossip Protocol for Dynamic Resource Management in Large Cloud Environments

We address the problem of dynamic resource management for a large-scale cloud environment. Our contribution includes outlining a distributed middleware architecture and presenting one of its key elements: a gossip protocol that (1) ensures fair resource allocation among sites/applications, (2) dynamically adapts the allocation to load changes, and (3) scales both in the number of physical machines and sites/applications. We formalize the resource allocation problem as that of dynamically maximizing the cloud utility under CPU and memory constraints. We first present a protocol that computes an optimal solution without considering memory constraints and prove correctness and convergence properties. Then, we extend that protocol to provide an efficient heuristic solution for the complete problem, which includes minimizing the cost of adapting an allocation. The protocol continuously executes on dynamic, local input and does not require global synchronization, as other proposed gossip protocols do. We evaluate the heuristic protocol through simulation and find its performance to be well aligned with our design goals.
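The gossip style of computation the protocol builds on can be shown with plain pairwise averaging: in each round a random pair of machines exchanges state and moves toward the mean load, so the allocation drifts toward fairness using only local interactions. This is generic gossip averaging under assumed load numbers, not the paper's CPU-and-memory-constrained protocol.

```java
import java.util.Arrays;
import java.util.Random;

/** Minimal sketch of gossip-based load balancing: repeated pairwise averaging
 *  converges every machine's load toward the global mean without any
 *  global coordination. */
public class GossipAveragingSketch {

    public static void main(String[] args) {
        double[] load = {0.90, 0.10, 0.40, 0.75, 0.05};   // relative demand per machine (illustrative)
        Random rng = new Random(1);

        for (int round = 0; round < 200; round++) {
            int i = rng.nextInt(load.length);
            int j = rng.nextInt(load.length);
            if (i == j) continue;
            double avg = (load[i] + load[j]) / 2;          // local, pairwise balancing step
            load[i] = avg;
            load[j] = avg;
        }
        System.out.println(Arrays.toString(load));         // every entry ends up close to 0.44
    }
}
```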
2. Optimization of Resource Provisioning Cost in Cloud Computing

In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, the cost of utilizing computing resources provisioned by the reservation plan is cheaper than that provisioned by the on-demand plan, since the cloud consumer has to pay the provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to achieve due to uncertainty of the consumer's future demand and the provider's resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for being used in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarterly plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered, including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed, and the results clearly show that with the OCRP algorithm, the cloud consumer can successfully minimize the total cost of resource provisioning in cloud computing environments.

3. A Secure Erasure Code-Based Cloud Storage System with Secure Data Forwarding

A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because only a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.

4. HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing

Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this
new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns about outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.

5. A Distributed Access Control Architecture for Cloud Computing

The large-scale, dynamic, and heterogeneous nature of cloud computing poses numerous security challenges. But the cloud's main challenge is to provide a robust authorization mechanism that incorporates multitenancy and virtualization aspects of resources. The authors present a distributed architecture that incorporates principles from security management and software engineering and propose key requirements and a design model for the architecture.

6. Cloud Computing Security: From Single to Multi-clouds

The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring the security of cloud computing is a major factor in the cloud computing environment, as users often store sensitive information with cloud storage providers, but these providers may be untrusted. Dealing with "single cloud" providers is predicted to become less popular with customers due to risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards "multi-clouds", or
in other words, "interclouds" or "cloud-of-clouds", has emerged recently. This paper surveys recent research related to single and multi-cloud security and addresses possible solutions. It is found that the research into the use of multi-cloud providers to maintain security has received less attention from the research community than has the use of single clouds. This work aims to promote the use of multi-clouds due to its ability to reduce security risks that affect the cloud computing user.

DOTNET MOBILE COMPUTING

1. Local Greedy Approximation for Scheduling in Multihop Wireless Networks (2012, Dotnet Mobile Computing)

In recent years, there has been a significant amount of work done in developing low-complexity scheduling schemes to achieve high performance in multihop wireless networks. A centralized suboptimal scheduling policy, called Greedy Maximal Scheduling (GMS), is a good candidate because its empirically observed performance is close to optimal in a variety of network settings. However, its distributed realization requires high complexity, which becomes a major obstacle for practical implementation. In this paper, we develop simple distributed greedy algorithms for scheduling in multihop wireless networks. We reduce the complexity by relaxing the global ordering requirement of GMS, up to near zero. Simulation results show that the new algorithms approximate the performance of GMS, and outperform the state-of-the-art distributed scheduling policies.
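Centralized GMS itself, the policy the paper approximates in a distributed way, is simple to sketch: examine links in decreasing order of queue length and schedule a link whenever it does not interfere with one already chosen. The toy conflict map below (assumed symmetric) stands in for a real interference model.

```java
import java.util.*;

/** Minimal sketch of centralized Greedy Maximal Scheduling: pick links in
 *  decreasing queue-length order, skipping any link that conflicts with one
 *  already scheduled. */
public class GreedyMaximalScheduling {

    record Link(String id, int queueLength) {}

    static List<String> schedule(List<Link> links, Map<String, Set<String>> conflicts) {
        List<Link> order = new ArrayList<>(links);
        order.sort(Comparator.comparingInt(Link::queueLength).reversed());
        List<String> chosen = new ArrayList<>();
        for (Link l : order) {
            boolean clash = chosen.stream()
                    .anyMatch(c -> conflicts.getOrDefault(c, Set.of()).contains(l.id()));
            if (!clash) chosen.add(l.id());
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<Link> links = List.of(new Link("a", 9), new Link("b", 7), new Link("c", 4), new Link("d", 2));
        Map<String, Set<String>> conflicts = Map.of(          // symmetric interference relation
                "a", Set.of("b"), "b", Set.of("a", "c"),
                "c", Set.of("b"), "d", Set.of());
        System.out.println(schedule(links, conflicts));       // [a, c, d]: b conflicts with a
    }
}
```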
2. Network Assisted Mobile Computing with Optimal Uplink Query Processing (2012, Dotnet Mobile Computing)

Many mobile applications retrieve content from remote servers via user-generated queries. Processing these queries is often needed before the desired content can be identified. Processing the request on the mobile devices can quickly sap the limited battery resources. Conversely, processing user queries at remote servers can have slow response times due to communication latency incurred during transmission of the potentially large query. We evaluate a network-assisted mobile computing scenario where mid-network nodes with "leasing" capabilities are deployed by a service provider. Leasing computation power can reduce battery usage on the mobile devices and improve response times. However, borrowing processing power from mid-network nodes comes at a leasing cost which must be accounted for when making the decision of where processing should occur. We study the tradeoff between battery usage, processing and transmission latency, and mid-network leasing. We use the dynamic programming framework to solve for the optimal processing policies that suggest the amount of processing to be done at each mid-network node in order to minimize the processing and communication latency and processing costs. Through numerical studies, we examine the properties of the optimal processing policy and the core tradeoffs in such systems.
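A minimal sketch of the dynamic-programming view in the abstract: a query of R units travels along a chain of nodes toward the server, each node can process (shrink) part of it at its own cost, and shipping the remainder over the next hop costs latency per unit. The cost numbers and the simple chain model are illustrative assumptions, not the paper's system model.

```java
/** Minimal DP sketch: dp[i][r] is the best cost from node i with r units of the
 *  query still unprocessed; each node chooses how many units to process locally
 *  before forwarding the rest toward the server. */
public class ProcessingPlacementDP {

    static double minCost(int R, double[] procCost, double[] linkCost) {
        int n = procCost.length;                    // node 0 = mobile device, node n-1 = server
        double[][] dp = new double[n][R + 1];
        for (int r = 0; r <= R; r++) dp[n - 1][r] = r * procCost[n - 1];   // server finishes the rest
        for (int i = n - 2; i >= 0; i--) {
            for (int r = 0; r <= R; r++) {
                double best = Double.MAX_VALUE;
                for (int p = 0; p <= r; p++) {      // process p units here, ship the rest onward
                    double c = p * procCost[i] + (r - p) * linkCost[i] + dp[i + 1][r - p];
                    best = Math.min(best, c);
                }
                dp[i][r] = best;
            }
        }
        return dp[0][R];
    }

    public static void main(String[] args) {
        double[] procCost = {5.0, 1.5, 2.0, 0.5};   // mobile device is expensive, server is cheap
        double[] linkCost = {1.0, 1.0, 2.0};        // per-unit cost of each hop toward the server
        System.out.println(minCost(10, procCost, linkCost));   // 25.0: all 10 units processed at node 1
    }
}
```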