"Data mining и информационный поиск проблемы, алгоритмы, решения"_Краковецкий Александр

"AI&BigData Lab" conference, April 12, 2014

  1. About
     1. CEO of DevRain Solutions – software development (specialization: Windows Phone and Windows 8).
     2. Microsoft Regional Director.
     3. Microsoft Windows Phone Most Valuable Professional.
     4. Telerik Most Valuable Professional.
     5. Best Professional in Software Architecture (Ukrainian IT Award).
     6. Ph.D.
     7. Speaker and IT blogger.
  2. #1: A lot of information
     1. The “no information” problem has turned into the “too much information” problem.
     2. The amount of information grows exponentially every year.
     3. Big data.
  3. #2: Duplicates
     1. Pages differ in chrome, not in content.
     2. Copying and plagiarism.
     3. Partially solved for news.
  4. #3: Information waste
     1. Level 1: noisy information such as advertisements, copyright notices, decoration, etc.
     2. Level 2: useful information, but not very relevant to the topic of the page, such as navigation, directory, etc.
     3. Level 3: information relevant to the theme of the page, but not of prominent importance, such as related topics, topic index, etc.
     4. Level 4: the most prominent part of the page, such as headlines and main content.
  5. #4: Searching time
     Every second user views 5-10 pages to find the needed information.
     My record: 8 hours of uninterrupted search; the answer was found on the 23rd page of MSN results.
  6. #5: Domain
     “Snow Leopard” can be a cat or an operating system from Apple.
  7. Solutions?
     Data Mining – intelligent analysis of large amounts of data:
     •  clustering, association rules, genetic algorithms, ant colony optimization, visualization, decision trees, neural networks.
     R&D – new algorithms and methods:
     •  Microsoft Research, Yahoo! Research, Google Labs, Arc90 Lab and others.
     Let’s mix!
  8. #1: A lot of information
     1. Filtering, not ranking.
     2. Clustering and categorization.
     3. Semantic web.
  9. #2: Duplicates. NLP
     1. Readability score.
     2. NER: DBpedia Spotlight, Reuters OpenCalais.
     3. WordNet.
     4. Shingles.
  10. Shingles
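The shingling idea from the last two slides can be illustrated with a short sketch: split each document into overlapping word n-grams ("shingles"), hash them, and compare documents by the Jaccard similarity of their shingle sets. This is a minimal illustration only; the shingle length, hash function and similarity threshold below are arbitrary choices, not the ones used in the talk or in the referenced implementation.

```python
# Minimal sketch of near-duplicate detection with shingles (w-shingling).
# Parameters (shingle length, hash, threshold) are illustrative choices.
import re
import hashlib

def shingles(text, w=4):
    """Split text into overlapping word w-grams and hash each one."""
    words = re.findall(r"\w+", text.lower())
    grams = [" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))]
    return {hashlib.md5(g.encode("utf-8")).hexdigest() for g in grams}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = "Data mining is the intelligent analysis of large amounts of data."
doc2 = "Data mining is intelligent analysis of very large amounts of data."
sim = jaccard(shingles(doc1), shingles(doc2))
print(f"similarity = {sim:.2f}")  # treat the pair as near-duplicates above some threshold, e.g. 0.75
```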
  11. #3: Information waste
     Readability, an Arc90 Lab project, turns any web page into a clean view for reading now or later on your computer, smartphone, or tablet. https://www.readability.com
  12. Vision-based Page Segmentation Algorithm
     An automatic top-down, tag-tree-independent approach to detecting web content structure. It simulates how a user understands web layout based on visual perception. Built on DOM structure analysis and visual heuristic rules. http://research.microsoft.com/apps/pubs/default.aspx?id=70027
  13. Vision-based Page Segmentation Algorithm
     Different pages have different visual margins, so the quality of segmentation depends on the particular web page. If the comments section is bigger than the main content (as on Habrahabr, for example), the result will not be very precise.
  14. Learning Important Models
     1. Spatial features: {BlockCenterX, BlockCenterY, BlockRectWidth, BlockRectHeight}
     2. Content features: {FontSize, FontWeight, InnerTextLength, InnerHtmlLength, ImgNum, ImgSize, LinkNum, LinkTextLength, InteractionNum, InteractionSize, FormNum, FormSize, OptionNum, OptionTextLength, TableNum, ParaNum}
     http://www.sigkdd.org/sites/default/files/issues/6-2-2004-12/2-song.pdf
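A simplified sketch of how some of the listed content features could be computed for a single DOM block is shown below. BeautifulSoup is an assumed parser, not the tooling from the referenced paper; the spatial features (BlockCenterX/Y, width, height) require a rendering engine and are omitted here.

```python
# Simplified sketch: compute a few of the listed content features for one DOM block.
# BeautifulSoup is an illustrative choice; spatial features are not available
# without a rendering engine and are omitted.
from bs4 import BeautifulSoup

def block_content_features(block):
    """Return a subset of the content features listed above for a parsed block."""
    text = block.get_text(" ", strip=True)
    links = block.find_all("a")
    return {
        "InnerTextLength": len(text),
        "InnerHtmlLength": len(str(block)),
        "ImgNum": len(block.find_all("img")),
        "LinkNum": len(links),
        "LinkTextLength": sum(len(a.get_text(strip=True)) for a in links),
        "FormNum": len(block.find_all("form")),
        "TableNum": len(block.find_all("table")),
        "ParaNum": len(block.find_all("p")),
    }

html = "<div><p>Main article text...</p><a href='/more'>read more</a><img src='x.png'></div>"
block = BeautifulSoup(html, "html.parser").div
print(block_content_features(block))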
  15. Semantics and SEO
     1. Semantic tags (article, aside, footer, header, etc.).
     2. SEO (meta description, keywords).
     3. Microformats (RSS, hCalendar, hCard, etc.).
     4. CMS, common engines and social networks.
  16. SeoRank
     1. Title-to-text.
     2. Meta keywords-to-text.
     3. Headers-to-text.
     4. Meta description-to-text.
     5. WordsIndex, SentencesIndex, WordsInSentencesIndex, LinksIndex, WordsAsLinksIndex, ImgsIndex, ImgsAsLinksIndex, etc.
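The slide names these SeoRank features but not their formulas, so the sketch below uses assumed definitions: a simple word-overlap measure for the "X-to-text" features and straightforward counts or ratios for the page indices.

```python
# Hedged sketch of SeoRank-style features; the overlap measure and ratios are
# illustrative assumptions, not the formulas from the talk.
import re

def words(text):
    return re.findall(r"\w+", text.lower())

def overlap(a, b):
    """Share of words from `a` (e.g. the title) that also occur in `b` (the body text)."""
    wa, wb = set(words(a)), set(words(b))
    return len(wa & wb) / len(wa) if wa else 0.0

def page_indices(title, body, link_texts, img_count):
    body_words = words(body)
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    link_words = sum(len(words(t)) for t in link_texts)
    return {
        "Title2Text": overlap(title, body),
        "WordsIndex": len(body_words),
        "SentencesIndex": len(sentences),
        "WordsInSentencesIndex": len(body_words) / len(sentences) if sentences else 0.0,
        "LinksIndex": len(link_texts),
        "WordsAsLinksIndex": link_words / len(body_words) if body_words else 0.0,
        "ImgsIndex": img_count,
    }

print(page_indices(
    title="Data mining and information retrieval",
    body="Data mining is the intelligent analysis of large data. It helps information retrieval.",
    link_texts=["read more", "related topics"],
    img_count=1,
))
```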
  17. Regression model
     1. Detect valuable properties.
     2. Select the model type (linear).
     3. After regression analysis we obtain the content importance model:
        y = 0.324·x3 − 0.249·x5 − 0.008·x6 + 0.056·x7 − 0.594·x12 − 0.267·x14 + 0.002·x16 + 0.305·x17
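A minimal sketch of how such a model can be fitted with ordinary least squares is shown below. The feature matrix is synthetic; in practice the columns x1..x17 would be the block and page features from the previous slides, and y a labelled importance score.

```python
# Sketch of fitting a linear content-importance model y = w·x with least squares.
# The data here is random and only demonstrates the mechanics of the fit.
import numpy as np

rng = np.random.default_rng(0)
n_blocks, n_features = 200, 17          # 17 features, matching x1..x17 above
X = rng.random((n_blocks, n_features))  # placeholder feature values
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=0.1, size=n_blocks)  # synthetic importance labels

# Least-squares estimate of the coefficients (add a column of ones for an intercept if needed).
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", np.round(w_hat, 3))

# Scoring a new block: its importance is the weighted sum of its features.
new_block = rng.random(n_features)
print("importance score:", float(new_block @ w_hat))
```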
  18. SmartBrowser
     Software for determining the most relevant content of HTML pages. http://smartbrowser.codeplex.com/
  19. Search optimal path
     1. Graph analysis (similar pages, clustering and categorization).
     2. Ant simulations (searching for an optimal path using a complex criterion).
     http://touchgraph.com/TGGoogleBrowser.html
     http://walk2web.com
  20. Ant algorithm
     The ant colony algorithm is an algorithm for finding optimal paths, based on the behavior of ants searching for food. Because an ant colony operates in a highly dynamic environment, the algorithm works very well on graphs with changing topologies. Examples of such systems include computer networks and artificial-intelligence simulations of workers.
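A minimal ant colony optimization sketch for the shortest-path setting described above; the toy graph, parameters (alpha, beta, evaporation rate, number of ants) and stopping rule are illustrative assumptions.

```python
# Minimal ant colony optimization sketch for finding a short path in a weighted graph.
# Graph and parameter values are illustrative only.
import random

graph = {  # toy undirected graph: node -> {neighbour: edge length}
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

def aco_shortest_path(graph, start, goal, n_ants=20, n_iter=50,
                      alpha=1.0, beta=2.0, evaporation=0.5, q=1.0):
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")
    for _ in range(n_iter):
        walks = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:            # dead end: abandon this ant
                    path = None
                    break
                weights = [pheromone[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                walks.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # evaporate, then deposit pheromone proportional to path quality;
        # because pheromone keeps adapting, the same loop copes with changing topologies.
        pheromone = {edge: (1 - evaporation) * p for edge, p in pheromone.items()}
        for path, length in walks:
            for a, b in zip(path, path[1:]):
                pheromone[(a, b)] += q / length
                pheromone[(b, a)] += q / length
    return best_path, best_len

print(aco_shortest_path(graph, "A", "D"))  # typically finds ['A', 'B', 'C', 'D'] with length 4
```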
  21. Search optimal path algorithm
     1. The user performs a search.
     2. Clustering (removing pages from non-relevant clusters).
     3. Main content determination and duplicate removal.
     4. Graph structure optimization.
     5. Analyzing content importance and completeness (sorting from most important to least important).
     6. Showing the shortest path for viewing the search results.
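A toy, runnable sketch of this pipeline is given below; every step uses a deliberately simplified stand-in (keyword filtering instead of clustering, word-set comparison instead of shingles, text length as the importance score), so it only shows the overall flow, not the actual algorithms from the talk.

```python
# Toy sketch of the six steps above; each component is a simplified stand-in.
def optimal_viewing_order(query, results):
    # 1-2. Keep only pages that mention the query terms (stand-in for clustering).
    terms = set(query.lower().split())
    relevant = [r for r in results if terms & set(r["text"].lower().split())]

    # 3. Remove near-duplicates (stand-in: identical word sets instead of shingles).
    seen, unique = set(), []
    for r in relevant:
        key = frozenset(r["text"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(r)

    # 4-5. Sort by a toy importance score (here: amount of main content).
    ranked = sorted(unique, key=lambda r: len(r["text"]), reverse=True)

    # 6. The resulting order is the suggested viewing path.
    return [r["url"] for r in ranked]

results = [
    {"url": "a.com", "text": "data mining methods for large data"},
    {"url": "b.com", "text": "data mining methods for large data"},   # duplicate of a.com
    {"url": "c.com", "text": "cooking recipes"},                      # irrelevant
    {"url": "d.com", "text": "information retrieval and data mining algorithms in detail"},
]
print(optimal_viewing_order("data mining", results))  # ['d.com', 'a.com']
```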
  22. Trends
     1. Social search (Facebook, Twitter) and real-time search.
     2. Visual search (Bing).
     3. Expert systems (Wolfram Alpha, Siri and Cortana).
     4. Solving copyright issues.
  23. References
     1. Data Mining SDK – http://datamining.codeplex.com/
     2. Microsoft Research Asia – http://research.microsoft.com/en-us/labs/asia/
     3. Information search lectures by Yandex – http://company.yandex.ru/public/seminars/schedule
     4. How Google Works videos – http://bit.ly/bRfUav
     5. How Bing Works – http://neotracks.blogspot.com/2009/06/ranknethow-bing-works.html
     6. Data Mining hub – http://habrahabr.ru/hub/data_mining/
     7. http://cstheory.stackexchange.com/ and http://math.stackexchange.com/
     8. Yu. G. Zelenkov, I. V. Segalovich. A comparative analysis of near-duplicate detection methods for Web documents, 2007 – http://rcdl2007.pereslavl.ru/papers/paper_65_v1.pdf
     9. Shingles approach – http://www.codeisart.ru/part-1-shingles-algorithm-for-web-documents/
  24. Q&A
     alex.krakovetskiy@devrain.com
     @msugvnua
