
Make Embeddings Semantic Again!


The original Semantic Web vision is to describe entities in a way whose meaning can be interpreted by both machines and humans. Following that idea, large-scale knowledge graphs capturing a significant portion of knowledge have been developed. In the recent past, vector space embeddings of Semantic Web knowledge graphs, i.e., projections of a knowledge graph into a lower-dimensional, numerical feature space (a.k.a. latent feature space), have been shown to yield superior performance in many tasks, including relation prediction, recommender systems, and the enrichment of predictive data mining tasks. At the same time, those projections describe an entity as a numerical vector, without any semantics attached to the dimensions. Thus, embeddings are as far from the original Semantic Web vision as can be. As a consequence, the results achieved with embeddings, as impressive as they are in terms of quantitative performance, are most often not interpretable, and it is hard to obtain a justification for a prediction, e.g., an explanation of why an item has been suggested by a recommender system. In this paper, we make a claim for semantic embeddings and discuss possible ideas towards their construction.
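To make the lack of attached semantics concrete, here is a minimal sketch in the spirit of TransE (not the paper's method; the toy triples are invented, and negative sampling and norm constraints are omitted for brevity). After training, an entity is just a vector of numbers with no meaning assigned to any dimension.

```python
import numpy as np

# A toy knowledge graph as (head, relation, tail) triples (invented data).
triples = [
    ("Batman", "genre", "Superhero"),
    ("Superman", "genre", "Superhero"),
    ("Batman", "format", "Cartoon"),
]

entities = {e for h, _, t in triples for e in (h, t)}
relations = {r for _, r, _ in triples}

dim, lr = 8, 0.05
rng = np.random.default_rng(0)
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

# TransE-style training: nudge h + r towards t for each observed triple.
for _ in range(200):
    for h, r, t in triples:
        grad = 2 * (E[h] + R[r] - E[t])  # gradient of ||h + r - t||^2
        E[h] -= lr * grad
        R[r] -= lr * grad
        E[t] += lr * grad

# The result: a plain numerical vector; no dimension "means" anything.
print(E["Batman"].round(2))
```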


Make Embeddings Semantic Again!

  1. Make Embeddings Semantic Again! (Heiko Paulheim, 10/15/18)
  2. Are we Driving on the Wrong Side of the Road?
  3. Are we Driving on the Wrong Side of the Road?
     • Original ideas:
       – Assign meaning to data
       – Allow for machine inference
       – Explain inference results to the user
  4. Running Example: Recommender Systems
     • Content-based recommender systems backed by Semantic Web data (today: knowledge graphs)
     • Advantages:
       – use rich background information about recommended items (for free)
       – justifications can be generated, e.g., you like movies by that director (a toy sketch follows the slide list)
  5. Semantic Web and Embeddings
     • Google Scholar search on Semantic Web Vector Space Embedding: TransE, RDF2Vec, HolE, DistMult, RESCAL, NTN, TransR, TransH, TransD, KG2E, ComplEx
  6. The 2009 Semantic Web Layer Cake
  7. The 2018 Semantic Web Layer Cake (figure: the cake with an added Embeddings layer)
  8. Towards Semantic Vector Space Embeddings (figure: an embedding space with interpretable axes, cartoon and superhero)
  9. How are We Going to Get There?
     • Idea 1: A posteriori learning (a sketch follows the slide list)
       – Each dimension of the embedding model is a target for a separate learning problem
       – Learn a function to explain the dimension
       – Just an approximation used for explanations and justifications
  10. How are We Going to Get There?
     • Idea 2: Pattern-based embeddings (a sketch follows the slide list)
       – Step 1: learn typical patterns that exist in a knowledge graph, e.g., graph pattern learning or Horn clauses
       – Step 2a: use those patterns as embedding dimensions (probably not low-dimensional)
       – Step 2b: compact the space, e.g., use dimensions for mutually exclusive patterns
  11. How are We Going to Get There?
     • Ideas 3 to n: yours (I'm happy to hear them)
  12. A New Design Space (figure: trade-off between quantitative performance and semantic interpretability)
  13. Make Embeddings Semantic Again! (Heiko Paulheim)
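The justification advantage from slide 4 can be sketched in a few lines. This is a hypothetical toy example, not the system described in the talk: items are described by knowledge-graph facts, and the overlap with facts of liked items doubles as a human-readable explanation.

```python
# Toy item descriptions as knowledge-graph facts (all data invented).
facts = {
    "Inception":    {("director", "Nolan"), ("genre", "SciFi")},
    "Interstellar": {("director", "Nolan"), ("genre", "SciFi")},
    "Notting Hill": {("director", "Michell"), ("genre", "Romance")},
}
liked = ["Inception"]

# Recommend items sharing facts with liked items; the shared facts
# are the justification ("you like movies by that director").
liked_facts = set().union(*(facts[i] for i in liked))
for item, fs in facts.items():
    if item in liked:
        continue
    overlap = sorted(fs & liked_facts)
    if overlap:
        reasons = ", ".join(f"{p}={v}" for p, v in overlap)
        print(f"Recommend {item}: you liked items with {reasons}")
```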
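One possible reading of Idea 1 from slide 9 (a posteriori learning), sketched with scikit-learn on invented stand-in data: each embedding dimension becomes the target of its own regression over interpretable graph features (here, hypothetical type memberships), and the fitted function serves only as an approximate explanation of that dimension.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Stand-in data: binary interpretable features per entity (e.g., rdf:type
# memberships) and a "pretrained" embedding matrix. Both are invented.
feature_names = ["Cartoon", "Superhero", "Movie"]
X = rng.integers(0, 2, size=(100, 3)).astype(float)
emb = X @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(100, 4))

# Idea 1: one learning problem per embedding dimension; the learned
# function is only an approximation used for explanations.
for d in range(emb.shape[1]):
    model = LinearRegression().fit(X, emb[:, d])
    top = np.argsort(-np.abs(model.coef_))[:2]
    terms = " ".join(f"{model.coef_[i]:+.2f}*{feature_names[i]}" for i in top)
    print(f"dim {d} ~ {terms}")
```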
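And a minimal sketch of Idea 2 from slide 10 (pattern-based embeddings), with hand-written patterns standing in for Step 1, which would really be mined, e.g., as graph patterns or Horn clauses. Step 2a makes each pattern a dimension; Step 2b compacts mutually exclusive patterns into one dimension. All names and data are invented.

```python
# Step 1 (stand-in): patterns as boolean tests over an entity's triples.
triples = {
    "Batman":   {("genre", "Superhero"), ("format", "Cartoon")},
    "Superman": {("genre", "Superhero"), ("format", "Movie")},
}
patterns = {
    "isSuperhero": lambda ts: ("genre", "Superhero") in ts,
    "isCartoon":   lambda ts: ("format", "Cartoon") in ts,
    "isMovie":     lambda ts: ("format", "Movie") in ts,
}

# Step 2a: one interpretable dimension per pattern (high-dimensional).
vectors = {e: [int(test(ts)) for test in patterns.values()]
           for e, ts in triples.items()}
print(vectors)  # {'Batman': [1, 1, 0], 'Superman': [1, 0, 1]}

# Step 2b: compact the space; isCartoon/isMovie are mutually exclusive,
# so they collapse into a single categorical "format" dimension.
compact = {e: [v[0], 0 if v[1] else 1] for e, v in vectors.items()}
print(compact)  # {'Batman': [1, 0], 'Superman': [1, 1]}
```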
