This document lists 40 Java projects available from TMKS InfoTech in Bangalore, India for 2015. The projects focus on topics including cloud computing, network security, data mining, mobile computing, wireless sensor networks, Android projects, ad hoc networks, and more. Contact details are provided for TMKS InfoTech.
SHREE SIDDHI SOFT SOLUTIONS provides total software solutions to its customers. The company works closely with customers to identify their business processes for computerization and helps them implement state-of-the-art solutions. By identifying and enhancing these processes through information technology, SHREE SIDDHI SOFT SOLUTIONS helps its customers use their resources optimally. The latest tools are identified and customized systems are built, providing solutions that not only address customers' current needs but also allow for future expansion. All course structures and practicals are created by experienced industry professionals, keeping current industry requirements and standards in mind, in a way that is easy for students to follow.
Contact Us:
SHREE SIDDHI SOFT SOLUTIONS
#3, 4th FLOOR, LAKSHMI ARCADE,
11TH CROSS, THILLAI NAGAR,
TRICHY-620018.
Phone No: +91 90032-44446, 900323336, 0431-4040672
Email: shreesiddhisoftsolutionstrichy@gmail.com
Website: www.shreesiddhisoftsolutions.com
Sybian Technologies is a leading IT services provider & custom software development company. We offer full-cycle custom software development services, from product idea and offshore software development to outsourcing support & enhancement. Sybian employs a knowledgeable group of software developers from different backgrounds, and we balance product development effort & project duration against your business needs.
Sybian Technologies invests extensively in R&D to invent new solutions for the ever-changing needs of your business, making it future-proof, sustainable and consistent. We work in close collaboration with academic institutions and research labs across the world to design, implement and support the latest IT-based solutions that are futuristic, progressive and affordable. Our services continue to earn trust and loyalty from clients through our commitment to the offerings below:
Final Year Projects & Real-Time Live Projects
JAVA (All Domains)
DOTNET (All Domains)
ANDROID
EMBEDDED
VLSI
MATLAB
Project Support
Abstract, Diagrams, Review Details, Relevant Materials, Presentation,
Supporting Documents, Software E-Books,
Software Development Standards & Procedure
E-Book, Theory Classes, Lab Working Programs, Project Design & Implementation
24/7 lab session
Final Year Projects for BE, ME, B.Sc, M.Sc, B.Tech, BCA, MCA
PROJECT DOMAIN:
Cloud Computing
Networking
Network Security
Parallel and Distributed Systems
Data Mining
Mobile Computing
Service Computing
Software Engineering
Image Processing
Bio Medical / Medical Imaging
Contact Details:
Sybian Technologies Pvt Ltd,
No. 33/10, Meenakshi Sundaram Building,
Sivaji Street,
(Near T.Nagar Bus Terminus)
T.Nagar,
Chennai-600 017
Ph: 044 42070551
Mobile No: 9790877889, 9003254624, 7708845605
Mail Id: sybianprojects@gmail.com, sunbeamvijay@yahoo.com
An Android Application 2018
12 An Empirical Study of the Energy Consumption of Android Applications 2018
13 An Empirical Study of the Performance of Native Apps and Web Apps on Android 2018
14 An Empirical Study on the Performance of Native Apps and Web Apps on Android 2018
15 Characterizing Privacy Risks of Mobile Apps with Sensitivity Analysis 2018
16 DroidFusion: A Novel Multilevel Classifier Fusion Approach for Android Malware Detection 2018
17 Significant Permission Identification for Machine-Learning-Based Android Malware Detection 2018
18 Understanding In-app Ads and Detecting Hidden Attacks through the Assistant: An Android Application 2018
19 AdCapsule: Practical Confinement of
Doclick Solutions offers IEEE projects for final year students in the domains below, handled in our R&D division:
Networking
Data Distribution
Data Mining
Network Security
Image Processing
Parallel & Distributed Systems
Wireless Networks
Hadoop Big Data
Mobile Computing, etc.
We provide projects on the following topics:
IEEE Projects 2014, 2015, 2016
Application Projects (PHP, JAVA/J2EE, ANDROID)
2017 and 2018 academic IEEE projects list, by Manju Nath
For more details contact K. Manjunath - 09535866270
http://www.tmksinfotech.com and http://www.bemtechprojects.com
2017 and 2018 IEEE Projects @ TMKS Infotech, Bangalore
Ontopic's presentation on Storm-Crawler for ApacheCon North America 2015.
Storm-Crawler is a next-generation web crawler that discovers and processes content on the Web, in real-time with low latency. This open source (and Apache Licensed) project is built on the Apache Storm framework, which provides a great foundation for a distributed real-time web crawler.
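As a rough illustration of what "built on the Apache Storm framework" means in practice, here is a minimal, hedged Java sketch of how a StormCrawler-style topology is wired together. The component class names (MemorySpout, URLPartitionerBolt, FetcherBolt, JSoupParserBolt, StdOutIndexer) are assumptions based on the StormCrawler core module and should be checked against the release actually used; this is a sketch, not the project's canonical example.

    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.tuple.Fields;

    import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
    import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
    import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
    import com.digitalpebble.stormcrawler.indexing.StdOutIndexer;
    import com.digitalpebble.stormcrawler.spout.MemorySpout;

    public class CrawlTopologySketch {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // Seed URLs are injected from memory; a production crawl would
            // read its frontier from a status store such as Elasticsearch.
            builder.setSpout("spout", new MemorySpout("http://example.com/"));
            builder.setBolt("partitioner", new URLPartitionerBolt())
                   .shuffleGrouping("spout");
            // Grouping by host key sends all URLs of one host to the same
            // fetcher task, which is where per-host politeness is enforced.
            builder.setBolt("fetch", new FetcherBolt())
                   .fieldsGrouping("partitioner", new Fields("key"));
            builder.setBolt("parse", new JSoupParserBolt())
                   .localOrShuffleGrouping("fetch");
            builder.setBolt("index", new StdOutIndexer())
                   .localOrShuffleGrouping("parse");

            Config conf = new Config();
            conf.put("http.agent.name", "sketch-crawler"); // required by the fetcher
            new LocalCluster().submitTopology("crawl", conf, builder.createTopology());
        }
    }

Because each stage is an independent Storm bolt, the crawl scales by raising the parallelism of individual stages rather than by rewriting the pipeline.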
The document discusses the command-line options used in a web crawler script to control its behavior; all of them are standard GNU Wget flags. The -r option enables recursive retrieval, allowing the crawler to follow links. The --spider option makes the crawler check that web pages are accessible without downloading them. The --domains option limits crawling to the specified domains only. The -l 5 option caps recursion at a depth of 5 to avoid spider traps, and --tries=5 sets the number of retries if a connection fails.
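Assembled, those options correspond to an invocation along these lines, with example.com standing in as a placeholder for the real target:

    wget -r --spider --domains=example.com -l 5 --tries=5 http://example.com/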
Storm-Crawler is a collection of resources for building low-latency, large-scale web crawlers on Apache Storm. We will compare it with similar projects like Apache Nutch and present several use cases where Storm-Crawler is being used. In particular, we will see how Storm-Crawler can be used with Elasticsearch and Kibana for crawling and indexing web pages.
M.Tech project on a Haar wavelet based approach for image compression, by Veerendra B R Revanna
This document presents an image compression technique using Haar wavelet transforms. It begins with background on the need for image compression due to increasing image sizes. It then outlines the encoder and decoder block diagrams and classifications of compression techniques. The key aspects of the proposed technique are presented: using wavelet transforms to exploit spatial and frequency correlations, applying averaging and differencing operations on rows and columns, thresholding the transformed matrix, and reconstructing the original image. Advantages include conceptual simplicity, speed, memory efficiency and reversibility. Results and future work are discussed before concluding on the benefits of wavelet-based compression.
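To make the averaging-and-differencing step concrete, here is a minimal Java sketch (illustrative only, not the project's code): one Haar level replaces each pair of samples with their average and half-difference, applied first to rows and then to columns, after which small detail coefficients are zeroed by a threshold.

    public class HaarSketch {
        // One Haar level in place: first half = pairwise averages,
        // second half = pairwise half-differences (detail coefficients).
        // The step is reversible: a = avg + diff, b = avg - diff.
        static double[] haarStep(double[] v) {
            int h = v.length / 2;
            double[] out = new double[v.length];
            for (int i = 0; i < h; i++) {
                out[i]     = (v[2 * i] + v[2 * i + 1]) / 2.0;
                out[h + i] = (v[2 * i] - v[2 * i + 1]) / 2.0;
            }
            return out;
        }

        // Zero out detail coefficients below the threshold; the many
        // resulting zeros are what makes the data compress well.
        static void threshold(double[][] a, double t) {
            for (double[] row : a)
                for (int j = 0; j < row.length; j++)
                    if (Math.abs(row[j]) < t) row[j] = 0.0;
        }

        public static void main(String[] args) {
            double[][] img = { {9, 7, 3, 5}, {6, 2, 8, 4}, {5, 3, 7, 7}, {1, 9, 2, 6} };
            int n = img.length;
            for (int i = 0; i < n; i++) img[i] = haarStep(img[i]);   // rows
            for (int j = 0; j < n; j++) {                            // columns
                double[] col = new double[n];
                for (int i = 0; i < n; i++) col[i] = img[i][j];
                col = haarStep(col);
                for (int i = 0; i < n; i++) img[i][j] = col[i];
            }
            threshold(img, 1.0);
            for (double[] row : img) System.out.println(java.util.Arrays.toString(row));
        }
    }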
This is my M.Tech project presentation. The project was carried out at R.V College of Engineering and B.M.S College of Engineering, Bangalore. In this project, the axial load carrying capacity of CFST columns was studied and the experimental results were compared with Eurocode-4 and AISC-LRFD-2005. The flexural capacity of CFST frames was also studied.
The document discusses web crawlers, which are programs that systematically browse the World Wide Web to index pages for search engines. It describes how crawlers work by starting with a list of seed URLs and recursively visiting URLs identified by hyperlinks on pages, using strategies like breadth-first or depth-first search. The architecture of crawlers includes components like URL queues, DNS resolution, page fetching and parsing, fingerprinting, and duplicate URL elimination. Crawling policies cover areas such as selection of pages to download, revisiting rates, politeness towards websites, and parallelization across distributed crawlers.
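The crawl loop those components implement can be sketched in a few lines of plain JDK Java. This is a toy sketch under stated assumptions (the seed URL and depth bound are placeholders, and links are extracted with a naive regex); a real crawler would add the politeness, robots.txt, and fingerprinting machinery described above.

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.*;
    import java.util.*;
    import java.util.regex.*;

    public class BfsCrawlerSketch {
        static final Pattern HREF = Pattern.compile("href=\"(http[^\"#]+)\"");

        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();
            Queue<String> frontier = new ArrayDeque<>(List.of("http://example.com/"));
            Set<String> visited = new HashSet<>(frontier);   // duplicate-URL elimination
            for (int depth = 0; depth < 2 && !frontier.isEmpty(); depth++) {
                Queue<String> next = new ArrayDeque<>();
                for (String url : frontier) {
                    try {
                        HttpResponse<String> resp = http.send(
                            HttpRequest.newBuilder(URI.create(url)).GET().build(),
                            HttpResponse.BodyHandlers.ofString());
                        System.out.println(depth + " " + url + " -> " + resp.statusCode());
                        Matcher m = HREF.matcher(resp.body());
                        while (m.find())                     // extract outlinks
                            if (visited.add(m.group(1)))     // skip already-seen URLs
                                next.add(m.group(1));
                    } catch (IOException | InterruptedException e) {
                        System.err.println("fetch failed: " + url);
                    }
                }
                frontier = next;   // breadth-first: advance one level at a time
            }
        }
    }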
This document outlines the key aspects of web crawling including the motivation, behavior, policies, scheduling, architecture, history, and practical issues of web crawlers. It notes that 5 exabytes of new information are added to the web per year, most directories no longer encourage site submission, and the bandwidth required for comprehensive crawling is expensive, making efficient crawling essential.
This document provides an introduction to web crawlers. It defines a web crawler as a computer program that browses the World Wide Web in a methodical, automated manner to gather pages and support functions like search engines and data mining. The document outlines the key features of crawlers, including robustness, politeness, distribution, scalability, and quality. It describes the basic architecture of a crawler, including the URL frontier that stores URLs to fetch, DNS resolution, page fetching, parsing, duplicate URL elimination, and filtering based on robots.txt files. Issues like prioritizing URLs, change rates, quality, and politeness policies are also discussed.
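As a concrete (and deliberately simplified) illustration of the robots.txt filtering step mentioned above, the sketch below fetches /robots.txt and applies Disallow prefix rules for the wildcard user agent. Real parsers also honor Allow rules, wildcards, and per-agent groups; the host name here is a placeholder.

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.*;
    import java.util.*;

    public class RobotsSketch {
        // Collect the Disallow path prefixes listed under "User-agent: *".
        static List<String> disallowedPrefixes(String host) throws IOException, InterruptedException {
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://" + host + "/robots.txt")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            List<String> rules = new ArrayList<>();
            boolean wildcardGroup = false;
            for (String line : resp.body().split("\r?\n")) {
                String l = line.trim().toLowerCase();
                if (l.startsWith("user-agent:"))
                    wildcardGroup = l.substring(11).trim().equals("*");
                else if (wildcardGroup && l.startsWith("disallow:")) {
                    String path = line.substring(line.indexOf(':') + 1).trim();
                    if (!path.isEmpty()) rules.add(path);   // empty Disallow = allow all
                }
            }
            return rules;
        }

        // A URL path is fetchable if it matches no Disallow prefix.
        static boolean allowed(String path, List<String> rules) {
            return rules.stream().noneMatch(path::startsWith);
        }

        public static void main(String[] args) throws Exception {
            List<String> rules = disallowedPrefixes("example.com");
            System.out.println(allowed("/index.html", rules));
        }
    }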
This is an academic work on developing a crawler that classifies web content using SVM and Naive Bayes for machine learning, implemented with Elasticsearch, Crawler4j and Apache Spark.
Web crawlers, also known as robots or bots, are programs that systematically browse the internet and index websites for search engines. Crawlers follow links from seed URLs and download pages to extract new URLs to crawl. They use techniques like breadth-first crawling to efficiently discover as much of the web as possible. Crawlers must have policies to select pages, revisit sites, be polite to not overload websites, and coordinate distributed crawling. Their high-performance architecture is crucial for search engines to comprehensively index the large and constantly changing web.
Web crawling involves automated programs called crawlers or spiders that browse the web methodically to index web pages for search engines. Crawlers start from seed URLs and extract links from visited pages to discover new pages, repeating the process until a desired size or time limit is reached. Crawlers are used by search engines to build indexes of web content and ensure freshness through revisiting URLs. Challenges include the web's large size, fast changes, and dynamic content generation. APIs allow programmatic access to web services and information through REST, HTTP POST, and SOAP.
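For the REST and HTTP POST access the summary mentions, a minimal Java sketch using the JDK 11+ HttpClient looks like this; the endpoint https://api.example.com/items is hypothetical.

    import java.net.URI;
    import java.net.http.*;

    public class RestSketch {
        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();

            // GET: read a resource programmatically
            HttpRequest get = HttpRequest.newBuilder(
                URI.create("https://api.example.com/items/42")).GET().build();
            System.out.println(http.send(get, HttpResponse.BodyHandlers.ofString()).body());

            // POST: create a resource by submitting a JSON document
            HttpRequest post = HttpRequest.newBuilder(URI.create("https://api.example.com/items"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"demo\"}"))
                .build();
            System.out.println(http.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());
        }
    }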
The document discusses web crawlers, which are programs that download web pages to help search engines index websites. It explains that crawlers use strategies like breadth-first search and depth-first search to systematically crawl the web. The architecture of crawlers includes components like the URL frontier, DNS lookup, and parsing pages to extract links. Crawling policies determine which pages to download and when to revisit pages. Distributed crawling improves efficiency by using multiple coordinated crawlers.
Seminar presentation made by me for the topic of 'Resources for Sentiment Analysis' at IIT Bombay. Includes a set of bonus slides for additional information which was not actually presented.
The document is a project report submitted to Visvesvaraya Technological University for the degree of Master of Technology in Digital Electronics and Communication. The project report describes a hybrid robust watermarking technique based on Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT) and Singular Value Decomposition (SVD). The report includes chapters on introduction, literature survey, DWT, DCT, SVD, methodology, project design, algorithms, experimental results and discussion, and conclusion. The project was implemented using MATLAB and evaluated on parameters like throughput and robustness. The results were analyzed and compared to previous techniques to demonstrate the achievements and contributions of the proposed hybrid watermarking technique.
2015 and 2016 IEEE projects for Network Security, by Tmks Infotech
This document lists 50 projects from 2015 related to cloud computing, network security, data mining, mobile computing, wireless sensor networks, Android projects, ad hoc networks and service computing. It provides contact information for TMKS InfoTech in Bangalore, India and lists project titles with brief descriptions.
This document details 27 projects on big data and cloud computing offered for 2014-2015 by Spiro Solutions Pvt Ltd. The projects focus on topics like secure data sharing in clouds, privacy-preserving public auditing, identity-based encryption, and workload management in hybrid cloud environments. Contact information is also provided for Spiro Solutions' training office in Vellore, India.
2015 and 2016 IEEE Projects List @ TMKS Infotech, by maheshmtech123
2015 IEEE Java Projects, TMKS InfoTech, Bangalore
#1510, Sri Gowri Shankara Complex, 2nd Floor, MKK Road, Near Harishchandra Ghat, Mariyappanapalya, Rajajinagar 2nd Stage, Bangalore - 560021. Contact: K. Manjunath - 09535866270
Sl.No Project Title Year
Cloud Computing
1 Identity-based Encryption with Outsourced Revocation in Cloud Computing 2015
2 CloudArmor: Supporting Reputation-based Trust Management for Cloud Services 2015
3 Innovative Schemes for Resource Allocation in the Cloud for Media Streaming Applications 2015
4 Space-efficient Verifiable Secret Sharing Using Polynomial Interpolation 2015
5 Service Operator-aware Trust Scheme for Resource Matchmaking across Multiple Clouds 2015
6 Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybrid Clouds 2015
7 Control Cloud Data Access Privilege and Anonymity With Fully Anonymous Attribute-Based Encryption 2015
8 A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing 2015
9 A Secure and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud Data 2015
10 Audit-Free Cloud Storage via Deniable Attribute-based Encryption 2015
11 CHARM: A Cost-efficient Multi-cloud Data Hosting Scheme with High Availability 2015
12 Demystifying the Clouds: Harnessing Resource Utilization Models for Cost Effective Infrastructure Alternatives 2015
13 Circuit Ciphertext-Policy Attribute-Based Hybrid Encryption with Verifiable Delegation in Cloud Computing 2015
14 Energy-Efficient Fault-Tolerant Data Storage and Processing in Mobile Cloud 2015
15 Combining Efficiency, Fidelity, and Flexibility in Resource Information Services 2015
16 Cost-Effective Authentic and Anonymous Data Sharing with Forward Security 2015
17 Enabling Cloud Storage Auditing with Key-Exposure Resistance 2015
18 Energy-aware Load Balancing and Application Scaling for the Cloud Ecosystem 2015
19 Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage 2015
20 OPoR: Enabling Proof of Retrievability in Cloud Computing with Resource-Constrained Devices 2015
21 Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage 2015
22 Panda: Public Auditing for Shared Data with Efficient User Revocation in the Cloud 2015
23 Provable Multicopy Dynamic Data Possession in Cloud Computing Systems 2015
24 Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revocation 2015
25 Reactive Resource Provisioning Heuristics for Dynamic Dataflows on Cloud Infrastructure 2015
26 Shared Authority Based Privacy-preserving Authentication Protocol in Cloud Computing 2015
27 Reducing Fragmentation for In-line Deduplication Backup Storage via Exploiting Backup History and Cache Knowledge 2015
28 Secure Auditing and Deduplicating Data in Cloud 2015
29 Secure Distributed Deduplication Systems with Improved Reliability 2015
30 Public Integrity Auditing for Dynamic Data Sharing with Multi-User Modification 2015
31 Social Cloud Computing: An Opportunity for Technology-Enhanced Competence-Based Learning 2015
32 Cloud-Trust: A Security Assessment Model for Infrastructure as a Service (IaaS) Clouds 2015
33 CPU Provisioning Algorithms for Service Differentiation in Cloud-Based Environments 2015
34 A Secure Data Self-Destructing Scheme in Cloud Computing 2015
35 Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage 2015
Network Security
1 Key-Recovery Attacks on KIDS, a Keyed Anomaly Detection System 2015
2 Authenticated Key Exchange Protocols for Parallel Network File Systems 2015
3 Generating Searchable Public-Key Ciphertexts with Hidden Structures for Fast Keyword Search 2015
4 Secure Distributed Deduplication Systems with Improved Reliability 2015
5 A Computational Dynamic Trust Model for User Authorization 2015
Data Mining
1 Towards Effective Bug Triage with Software Data Reduction Techniques 2015
2 Active Learning for Ranking through Expected Loss Optimization 2015
3 Discovery of Ranking Fraud for Mobile Apps 2015
4 k-Nearest Neighbor Classification over Semantically Secure Encrypted Relational Data 2015
5 Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions 2015
6 PAGE: A Partition Aware Engine for Parallel Graph Computation 2015
7 Query Aware Determinization of Uncertain Objects 2015
8 RRW: A Robust and Reversible Watermarking Technique for Relational Data 2015
9 Rule-Based Method for Entity Resolution 2015
10 Secure Distributed Deduplication Systems with Improved Reliability 2015
11 SmartCrawler: A Two-stage Crawler for Efficiently Harvesting Deep-Web Interfaces 2015
Mobile Computing
1 A Distributed Three-hop Routing Protocol to Increase the Capacity of Hybrid Wireless Networks 2015
2 Optimum Power Allocation in Sensor Networks for Active Radar Applications 2015
Wireless Sensor Networks
1 Cost-Aware Secure Routing (CASER) Protocol Design for Wireless Sensor Networks 2015
2 Data Collection in Multi-Application Sharing Wireless Sensor Networks 2015
3 A Lightweight Secure Scheme for Detecting Provenance Forgery and Packet Drop Attacks in Wireless Sensor Networks 2015
Android Projects
1 Cooperative Positioning and Tracking in Disruption Tolerant Networks 2015
2 Friendbook: A Semantic-based Friend Recommendation System for Social Networks 2015
3 Innovative Schemes for Resource Allocation in the Cloud for Media Streaming Applications 2015
Ad Hoc Networks & VANET & MANET
1 Efficient Data Query in Intermittently Connected Mobile Ad Hoc Social Networks 2015
2 Defending Against Collaborative Attacks by Malicious Nodes in MANETs: A Cooperative Bait Detection Approach 2015
Networking and Service Computing
1 SmartCrawler: A Two-stage Crawler for Efficiently Harvesting Deep-Web Interfaces 2015
2 Generating Searchable Public-Key Ciphertexts with Hidden Structures for Fast Keyword Search 2015
3 Revealing the Trace of High-Quality JPEG Compression Through Quantization Noise Analysis 2015
4 Detecting Malicious Facebook Applications 2015
Image Processing and Multimedia
1 Learning to Rank Image Tags with Limited Training Examples 2015
2 An Attribute-Assisted Reranking Model for Web Image Search 2015
3 Detection and Rectification of Distorted Fingerprints 2015
4 Steganography Using Reversible Texture Synthesis 2015
5 Automatic Face Naming by Learning Discriminative Affinity Matrices From Weakly Labeled Images 2015
J2ME Mobile Based Projects
1 Adaptation of a virtual campus for mobile learning devices 2015
2 Bluetooth Mobile Based College Campus 2015
3 ECops via Handheld Mobile Devices 2015
4 Privacy-Conscious Location-Based Queries in Mobile Environments 2015
5 Ranking Model Adaptation for Domain-Specific Search 2015
6 SPOC: A Secure and Privacy-preserving Opportunistic Computing Framework for Mobile-Healthcare Emergency 2015
7 The World in a Nutshell: Concise Range Queries 2015
Parallel and Distributed Systems
1 A Computational Dynamic Trust Model for User Authorization 2015
2 A Secure and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud Data 2015
3 Authenticated Key Exchange Protocols for Parallel Network File Systems 2015
4 Cost-Effective Authentic and Anonymous Data Sharing with Forward Security 2015
5 Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybrid Clouds 2015
6 Data Collection in Multi-Application Sharing Wireless Sensor Networks 2015
7 Reducing Fragmentation for In-line Deduplication Backup Storage via Exploiting Backup History 2015
8 Service Operator-aware Trust Scheme for Resource Matchmaking across Multiple Clouds 2015