M.Phil Computer Science Data Mining Projects - Vijay Karan
This document provides summaries for several M.Phil Computer Science Data Mining Projects written in C#. The projects cover topics such as bridging virtual communities, mood recognition during online tests, surveying the size of the World Wide Web, knowledge sharing in virtual organizations, adaptive provisioning of human expertise in service-oriented systems, cost-aware rank joins with random and sorted access, improving data quality with dynamic forms, targeted data delivery algorithms, and sentiment classification using feature relation networks.
This document discusses using trust and provenance for content filtering on the semantic web. It describes how trust can be inferred between individuals in social networks and how that trust information can be used to filter and recommend content. Applications mentioned include using trust ratings to help intelligence analysts sort through information by identifying reliable sources.
WEB SEARCH ENGINE BASED SEMANTIC SIMILARITY MEASURE BETWEEN WORDS USING PATTE... - cscpconf
Semantic similarity measures play an important role in information retrieval, natural language processing, and various web tasks such as relation extraction, community mining, document clustering, and automatic meta-data extraction. This paper proposes a Pattern Retrieval Algorithm (PRA) to compute the semantic similarity between words by combining the page count method and the web snippets method. Four association measures are used to find the semantic similarity between words in the page count method using web search engines. A Sequential Minimal Optimization (SMO) support vector machine (SVM) is used to find the optimal combination of page-count-based similarity scores and top-ranking patterns from the web snippets method; the SVM is trained to classify synonymous and non-synonymous word pairs. The proposed approach aims to improve correlation, precision, recall, and F-measure compared to existing methods, achieving a correlation value of 89.8%.
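The four association measures are not named in this summary; a minimal sketch of the page-count measures commonly used for this task (WebJaccard, WebOverlap, WebDice, WebPMI) might look like the following. Python is used here for brevity, and the hit counts and web size N are made up:

```python
import math

N = 10 ** 10                                        # assumed number of indexed pages (made up)
H_P, H_Q, H_PQ = 8_000_000, 5_000_000, 1_200_000    # hypothetical hit counts for P, Q, "P AND Q"
C = 5                                               # co-occurrence threshold to damp noisy counts

def web_jaccard(p, q, pq):
    return 0.0 if pq <= C else pq / (p + q - pq)

def web_overlap(p, q, pq):
    return 0.0 if pq <= C else pq / min(p, q)

def web_dice(p, q, pq):
    return 0.0 if pq <= C else 2 * pq / (p + q)

def web_pmi(p, q, pq):
    # Pointwise mutual information, normalised by log2(N).
    if pq <= C:
        return 0.0
    return math.log2((pq / N) / ((p / N) * (q / N))) / math.log2(N)

# Four scores per word pair: the kind of feature vector an SVM could
# combine with snippet-pattern features.
scores = [f(H_P, H_Q, H_PQ) for f in (web_jaccard, web_overlap, web_dice, web_pmi)]
print(scores)
```

In a real system the three hit counts would come from search-engine queries for each word and for their conjunction.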
Semantic similarity, and semantic relatedness in particular, is very important at present because of the strong demand for natural-language-processing applications such as chatbots and for information retrieval systems such as knowledge-base-driven FAQ systems. Current approaches generally use similarity measures that ignore the context-sensitive relationships between words. This leads to erroneous similarity predictions and limits their usefulness in real-life applications. This work proposes a novel approach that produces an accurate relatedness measure for any two words in a sentence by taking their context into consideration. This context correction yields more accurate similarity predictions and therefore higher accuracy in information retrieval systems.
The document discusses a study that examines the correlation between actor centrality in social networks and their ability to coordinate projects. It outlines the research framework, which involves extracting coordination-related phrases from emails, calculating coordination scores bounded by project scopes, constructing social network matrices using centrality measures, and testing the association between centrality and coordination. Preliminary results on the Enron email network from 1997-2002 are presented. The methodology involves text mining the Enron dataset to calculate coordination scores and social network centrality metrics like degree, closeness, and betweenness centrality.
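As a concrete illustration of the centrality metrics mentioned, here is a minimal Python sketch computing degree and closeness centrality on a made-up five-actor email graph (betweenness follows the same BFS pattern but additionally counts shortest paths; the actor names and edges are invented):

```python
from collections import deque

# Toy email network: an undirected edge means at least one email was
# exchanged. Actors and edges are invented for illustration.
graph = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave":  {"alice", "erin"},
    "erin":  {"dave"},
}

def degree_centrality(g, v):
    # Fraction of the other actors that v is directly connected to.
    return len(g[v]) / (len(g) - 1)

def closeness_centrality(g, v):
    # BFS shortest-path distances from v, then (n - 1) / sum of distances.
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(g) - 1) / sum(d for d in dist.values() if d > 0)

print(degree_centrality(graph, "alice"))     # directly linked to 3 of 4 others -> 0.75
print(closeness_centrality(graph, "alice"))  # distances 1, 1, 1, 2 -> 4/5 = 0.8
```

The study's hypothesis is then a statistical association between scores like these and the per-actor coordination scores mined from the email text.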
This document contains information about several data mining projects from IEEE 2014, including project titles, languages used, and short abstracts. Some of the projects involve mining social media data to understand student learning experiences, using viral marketing approaches for time-critical campaigns, and developing techniques for image re-ranking and clustering. The document provides details on different data mining techniques and applications.
This document describes several C# projects from IEEE 2014, including summaries of each project. The projects cover topics like localizing jammers in wireless networks, network-coding based cloud storage, privacy-preserving search on encrypted cloud data, compatibility-aware cloud service composition, analyzing social media to understand students' experiences, viral marketing in social networks, opportunistic MAC for underwater sensor networks, WLAN monitoring systems, anonymous vehicle positioning in vehicular networks, and information flow control for cloud security.
Image processing project list for Java and .NET - redpel dot com
1. The document discusses several papers related to image processing, multimedia content tagging, and computer vision techniques.
2. One paper proposes a new type of graphical password system called Captcha as Graphical Passwords (CaRP) that uses hard AI problems to address security issues like online guessing attacks.
3. Another paper describes an adaptive algorithm called AMS that can select the optimal service mode (server mode or helper mode) for cloud-based video downloading based on operating conditions like request rate and video population size.
4. The document contains summaries of several other papers related to topics like personalized image search, recognizing on-premise signs from street view images, an improved visual cryptography scheme for halftone
A Decision Support System for Inbound Marketers: An Empirical Use of Latent D... - Meisam Hejazi Nia
Infographics are a type of information presentation that inbound marketers use. I suggest a method that allows infographic designers to benchmark a design against previously viral infographics, to measure whether a given design decision helps or hurts the probability of the design going viral.
Projection Multi Scale Hashing Keyword Search in Multidimensional Datasets - IRJET Journal
The document discusses a novel method called ProMiSH (Projection and Multi Scale Hashing) for keyword search in multi-dimensional datasets. ProMiSH uses random projection and hash-based index structures to achieve high scalability and speedup of more than four orders over state-of-the-art tree-based techniques. Empirical studies on real and synthetic datasets of sizes up to 10 million objects and 100 dimensions show ProMiSH scales linearly with dataset size, dimension, query size, and result size. The method groups objects embedded in a vector space that are tagged with keywords matching a given query.
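ProMiSH's actual index couples hashing with a multi-scale search over projected subspaces; as a much-simplified sketch of the random-projection idea alone, the following Python snippet buckets vectors by the sign pattern of a few random projections, so a query scans one bucket instead of the whole dataset (dimensions, bit count, and data are arbitrary):

```python
import random

random.seed(42)
DIM, BITS = 8, 4

# Random Gaussian hyperplanes for sign-based projection hashing. This is a
# deliberate simplification of ProMiSH's multi-scale index, not the
# authors' exact structure.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def hash_key(point):
    # One bit per hyperplane: which side of the plane the point lies on.
    return tuple(int(sum(p * x for p, x in zip(plane, point)) >= 0)
                 for plane in planes)

# Build the index: bucket key -> object names; nearby points tend to collide.
data = {f"obj{i}": [random.gauss(0, 1) for _ in range(DIM)] for i in range(100)}
index = {}
for name, point in data.items():
    index.setdefault(hash_key(point), []).append(name)

# Answering a query touches only one bucket, not all 100 objects.
query = data["obj0"]
candidates = index[hash_key(query)]
print(len(candidates), "candidates out of", len(data), "objects")
```

The keyword-matching side of ProMiSH would further restrict these candidates to objects tagged with the query keywords.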
The document describes a deep neural network approach called MISR for recommending services to new mashups during development. MISR incorporates three types of interactions: content interaction based on matching functionality between mashups and services, implicit neighbor interaction based on similar mashups that have used a service, and explicit neighbor interaction based on similar mashups and commonly used services. Experiments on a real-world dataset showed that MISR outperformed other recommendation approaches on evaluation metrics for the cold-start problem of recommending services to new mashups that have no usage history.
Semantically Enriched Knowledge Extraction With Data Mining - Editor IJCATR
This document discusses using data mining techniques to extract knowledge from data and representing that knowledge in both human-understandable and machine-understandable formats. It uses an input dataset, applies data mining methods like regression using WEKA and SAS, extracts natural language triples about the results, and generates RDF triples to represent the knowledge in a machine-readable format.
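The exact vocabulary the authors generate is not given in this summary; a minimal sketch of turning a mining result into machine-readable RDF, serialised as N-Triples under an invented namespace, could look like this:

```python
# A mining result expressed as (subject, predicate, object) triples and
# serialised as RDF N-Triples. The namespace and property names below are
# illustrative, not the paper's actual vocabulary.
BASE = "http://example.org/mining#"

results = [
    ("model1", "usesMethod", "LinearRegression"),
    ("model1", "hasRSquared", "0.87"),
]

def to_ntriple(s, p, o):
    # Numeric objects become quoted literals; everything else becomes a URI.
    obj = f'"{o}"' if o.replace(".", "", 1).isdigit() else f"<{BASE}{o}>"
    return f"<{BASE}{s}> <{BASE}{p}> {obj} ."

lines = [to_ntriple(*t) for t in results]
print("\n".join(lines))
```

The human-understandable side of the pipeline would render the same facts as natural-language sentences, e.g. "model1 uses the method LinearRegression".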
This document discusses how academics can leverage their existing academic publications and research to establish an online presence through search engine optimization. It notes that academics already produce large volumes of well-written, keyword-rich text through their research and publishing activities. This body of work represents a valuable resource that can be used to create web content and populate various online platforms. The document outlines techniques for hosting academic content online, submitting sites to search engines, and monitoring website visibility over time to improve search engine rankings. It argues that with some SEO efforts, academics can promote their research topics and expertise online without incurring significant costs.
Yifan Guo is a PhD student at Case Western Reserve University studying machine learning and big data. He received his B.S. from Beijing University of Posts and Telecommunications and his Master's from Northwestern University. His research projects include developing an image recognition system for identifying pill types, building a movie recommendation system using matrix factorization, and designing an algorithm for a nonlinear integer programming transportation problem.
Cluster Based Web Search Using Support Vector Machine - CSCJournals
Nowadays, searches for the web pages of a person with a given name constitute a notable fraction of queries to web search engines. This method exploits a variety of semantic information extracted from web pages. The rapid growth of the Internet has made the Web a popular place for collecting information; today, Internet users access billions of web pages online using search engines. Information on the Web comes from many sources, including websites of companies, organizations, and communities, personal homepages, and so on. Effective representation of web search results remains an open problem in the information retrieval community. For ambiguous queries, a traditional approach is to organize search results into groups (clusters), one for each meaning of the query. These groups are usually constructed according to the topical similarity of the retrieved documents, but documents can be totally dissimilar and still correspond to the same meaning of the query; to overcome this problem, the paper exploits the fact that relevant web pages are often located close to each other in the web graph of hyperlinks. It presents a graphical approach to entity resolution that complements the traditional methodology with an analysis of the entity-relationship (ER) graph constructed for the dataset being analyzed. It also demonstrates a technique that measures the degree of interconnectedness between pairs of nodes in the graph, which can significantly improve the quality of entity resolution. Support vector machines (SVMs), a set of related supervised learning methods, are used to classify and distribute the load of user queries from the server machine across different client machines so that the system remains stable; web pages are clustered based on the machines' capacities, while the whole database is stored on the server machine. Keywords: SVM, cluster, ER.
The document presents a proposed approach for sentiment analysis on big social data using Spark. It discusses collecting opinions from social media to analyze large events by tracking public behavior in real time. The proposed system provides an adaptable sentiment analysis approach using Spark that analyzes social media posts and classifies them by subject in real time. It also discusses using sentiment data from social media to inform decisions.
Knowledge and Data Engineering IEEE 2015 Projects - Vijay Karan
List of Knowledge and Data Engineering IEEE 2015 Projects. It contains the IEEE projects in the Knowledge and Data Engineering domain for the year 2015.
The document discusses web mining, which applies data mining techniques to discover useful information and patterns from web data. It covers the types of web data, various applications of web mining, challenges, and the different techniques used, including classification, clustering, and association rule mining. It also discusses how web mining can help solve search engine problems and how cloud computing provides a new approach to web mining through software as a service.
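Of the techniques listed, association rule mining is the easiest to show in a few lines. A toy Python example computing the support and confidence of the rule {news} -> {sports} over made-up page-visit transactions (the categories and data are invented):

```python
# Toy page-visit transactions; categories are invented for illustration.
transactions = [
    {"news", "sports"},
    {"news", "finance"},
    {"news", "sports", "finance"},
    {"sports"},
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    # P(rhs | lhs): how often the consequent holds when the antecedent does.
    return support(lhs | rhs) / support(lhs)

print(support({"news"}))                 # appears in 3 of 4 transactions -> 0.75
print(confidence({"news"}, {"sports"}))  # 2 of the 3 "news" visits -> 0.666...
```

Algorithms such as Apriori extend exactly this computation to efficiently enumerate all rules above chosen support and confidence thresholds.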
JPD1436 Web Image Re-Ranking Using Query-Specific Semantic Signatures - chennaijp
The best free 2014 .NET project topics are available along with full documentation; you can easily find documents for the various project titles.
For More Details:
http://jpinfotech.org/final-year-ieee-projects/2014-ieee-projects/dot-net-projects/
This document provides information on several 2015 IEEE Matlab projects related to signal processing and image analysis. It lists the project titles, languages, links, and abstracts for 10 different Matlab projects. The projects cover topics such as target source separation using deep neural networks, hyperspectral image classification using sparse representation, image denoising techniques, and cardiovascular biometrics.
M.Phil Computer Science Wireless Communication Projects - Vijay Karan
List of Wireless Communication IEEE 2006 Projects. It contains the IEEE projects in the Wireless Communication domain for M.Phil Computer Science students.
M.E Computer Science Wireless Communication Projects - Vijay Karan
List of Wireless Communication IEEE 2006 Projects. It contains the IEEE projects in the Wireless Communication domain for M.E Computer Science students.
M.Phil Computer Science Parallel and Distributed System Projects - Vijay Karan
List of Parallel and Distributed System IEEE 2006 Projects. It contains the IEEE projects in the Parallel and Distributed System domain for M.Phil Computer Science students.
M.E Computer Science Parallel and Distributed System Projects - Vijay Karan
List of Parallel and Distributed System IEEE 2006 Projects. It contains the IEEE projects in the Parallel and Distributed System domain for M.E Computer Science students.
IEEE 2014 ASP.NET with C# Projects
Web : www.kasanpro.com Email : sales@kasanpro.com
List Link : http://kasanpro.com/projects-list/ieee-2014-asp-net-with-c-projects
Title : Mining Social Media Data for Understanding Students' Learning Experiences
Language : ASP.NET with C#
Project Link : http://kasanpro.com/p/asp-net-with-c-sharp/mining-social-media-data-understanding-students-learning-experien
Abstract : Students' informal conversations on social media (e.g. Twitter, Facebook) shed light on their educational experiences - opinions, feelings, and concerns about the learning process. Data from such an uninstrumented environment can provide valuable knowledge to inform student learning. Analyzing such data, however, can be challenging. The complexity of students' experiences reflected in social media content requires human interpretation. However, the growing scale of data demands automatic data analysis techniques. In this paper, we developed a workflow to integrate both qualitative analysis and large-scale data mining techniques. We focus on engineering students' Twitter posts to understand issues and problems in their educational experiences. We first conducted a qualitative analysis on samples taken from about 25,000 tweets related to engagement and sleep deprivation. Based on these results, we implemented a multi-label classification algorithm to classify tweets reflecting students' problems. We then used the algorithm to train a detector of student problems from about 35,000 tweets streamed at the geo-location of Purdue University. This work, for the first time, presents a methodology and results that show how informal social media data can provide insights into students' experiences.
Title :Cost-effective Viral Marketing for Time-critical Campaigns in Large-scale Social Networks
Language : ASP.NET with C#
Project Link : http://kasanpro.com/p/asp-net-with-c-sharp/cost-effective-viral-marketing-time-critical-campaigns-large-scale-so
Abstract : Online social networks (OSNs) have become one of the most effective channels for marketing and advertising. Since users are often influenced by their friends, "word-of-mouth" exchanges, so-called viral marketing, in social networks can be used to increase product adoption or widely spread content over the network. The common perception of viral marketing as cheap, easy, and massively effective makes it an ideal replacement for traditional advertising. However, recent studies have revealed that the propagation often fades quickly within only a few hops from the sources, counteracting the assumption of self-perpetuating influence considered in the literature. With only limited influence propagation, is massively reaching customers via viral marketing still affordable? How can we economically spend more resources to increase the spreading speed? We investigate the cost-effective massive viral marketing problem, taking into consideration the limited influence propagation. Both analytical analysis based on power-law network theory and numerical analysis demonstrate that viral marketing might involve costly seeding. To minimize the seeding cost, we provide mathematical programming to find optimal seeding for medium-size networks and propose VirAds, an efficient algorithm, to tackle the problem on large-scale networks. VirAds guarantees a relative error bound of O(1) from the optimal solutions in power-law networks and outperforms greedy heuristics that rely on degree centrality. Moreover, we also show that, in general, approximating the optimal seeding within a ratio better than O(log n) is unlikely to be possible.
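The limited-hop seeding problem above can be illustrated with a simple greedy baseline: repeatedly pick the seed that newly covers the most nodes within d hops. This is a minimal sketch of such a greedy heuristic on a hypothetical toy graph, not the VirAds algorithm itself:

```python
from collections import deque

def within_hops(adj, source, d):
    """Nodes reachable from source within d hops (BFS)."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == d:
            continue
        for nb in adj.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

def greedy_seeds(adj, d, coverage=1.0):
    """Greedily add the seed that newly covers the most nodes
    within d hops until the required fraction is covered."""
    nodes = set(adj)
    covered, seeds = set(), []
    while len(covered) < coverage * len(nodes):
        best = max(nodes - set(seeds),
                   key=lambda v: len(within_hops(adj, v, d) - covered))
        seeds.append(best)
        covered |= within_hops(adj, best, d)
    return seeds

# Toy network: a hub (node 0) plus an isolated pair (5, 6).
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
print(greedy_seeds(adj, d=1))
```

The isolated pair shows why limited-hop propagation makes seeding costly: no matter how influential the hub is, full coverage requires paying for an extra seed in every component the influence cannot reach.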
Title : Web Image Re-Ranking Using Query-Specific Semantic Signatures
Language : ASP.NET with C#
Project Link :
http://kasanpro.com/p/asp-net-with-c-sharp/web-image-re-ranking-usingquery-specific-semantic-signatures
Abstract : Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines. Given a query keyword, a pool of images is first retrieved by the search engine based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities with the query image. A major challenge is that the similarities of visual features do not correlate well with images' semantic meanings, which reflect users' search intentions. On the other hand, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework, which automatically learns offline different visual semantic spaces for different query keywords through keyword expansions. The visual features of images are projected into their related visual semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the visual semantic space specified by the query keyword. The new approach significantly improves both the accuracy and efficiency of image re-ranking. The original visual features of thousands of dimensions can be projected to semantic signatures as short as 25 dimensions. Experimental results show that a 20%-35% relative improvement has been achieved on re-ranking precision compared with the state-of-the-art methods.
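The offline/online split above can be sketched as follows: a semantic signature is computed as similarities to a few per-keyword reference centroids (learned offline in the real system), and candidates are ranked online by signature distance to the query image. The centroids, image names, and 3-dimensional features below are hypothetical placeholders:

```python
import math

def signature(feature, centroids):
    """Project a raw visual feature vector onto a short semantic
    signature: one similarity score per reference category."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    return [cosine(feature, c) for c in centroids]

def rerank(query_feature, pool, centroids):
    """Re-rank candidate images by the distance between their
    semantic signatures and the query image's signature."""
    q = signature(query_feature, centroids)
    def dist(item):
        s = signature(item[1], centroids)
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, s)))
    return [name for name, _ in sorted(pool, key=dist)]

# Hypothetical centroids for two expanded keywords, e.g. "apple fruit"
# vs. "apple laptop"; real centroids would be learned offline per query.
centroids = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
pool = [("laptop.jpg", [0.1, 0.9, 0.0]),
        ("red_fruit.jpg", [0.9, 0.1, 0.1]),
        ("green_fruit.jpg", [0.8, 0.2, 0.0])]
print(rerank([1.0, 0.1, 0.0], pool, centroids))
# → ['red_fruit.jpg', 'green_fruit.jpg', 'laptop.jpg']
```

Note that ranking happens entirely in the short signature space (here 2-dimensional, 25-dimensional in the paper), which is what makes the online stage cheap even when the raw visual features have thousands of dimensions.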