Discovery Hub: an exploratory search engine on top of DBpedia - Nicolas MARIE
Discovery Hub is an exploratory search engine (http://en.wikipedia.org/wiki/Exploratory_search) that helps you discover things you might like or be interested in. It widens your cultural and knowledge horizons by revealing and explaining unexpected information.
Want a film recommendation related to writers you like? Want to discover bands at the crossroads of the electro and rock record labels you like? Interested in more complex, composite recommendations based on your deepest interests: a writer, a film and a band combined? Or maybe something simpler? If you have a thirst for discovery and knowledge, Discovery Hub has answers for you.
Discovery Hub is based on leading-edge Semantic Web technologies. It lets you discover new and unknown items of interest starting from what you like. With Discovery Hub you interactively explore DBpedia, a huge knowledge graph derived from Wikipedia data: approximately 4 million entities linked by more than 270 million connections. DBpedia covers many topics such as arts, technology, science and sport.
Discovery Hub lets you perform queries in an innovative way and helps you navigate rich results. As a hub, it proposes redirections to other platforms so you can benefit from your discoveries (YouTube, Deezer and more). Results are explained in depth through three explanatory features. It supports composite explorations, i.e. starting from several items of interest, and proposes advanced exploration modes such as serendipitous, multilingual and fine-grained ones.
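Discovery Hub's published ranking approach is based on semantic spreading activation over the DBpedia graph. As a rough sketch of that general idea only (the mini-graph, entity names and weights below are all invented for illustration, not DBpedia data):

```python
# Toy spreading-activation sketch over a hypothetical mini knowledge graph.
# Entities, edges, decay factor and step count are all invented.
GRAPH = {
    "Edgar_Allan_Poe": ["The_Raven", "Gothic_fiction", "Tim_Burton"],
    "Tim_Burton": ["Gothic_fiction", "Vincent_(film)", "Edgar_Allan_Poe"],
    "Gothic_fiction": ["Edgar_Allan_Poe", "Tim_Burton", "Mary_Shelley"],
    "The_Raven": ["Edgar_Allan_Poe"],
    "Vincent_(film)": ["Tim_Burton"],
    "Mary_Shelley": ["Gothic_fiction"],
}

def spread(seeds, steps=2, decay=0.5):
    """Propagate activation from seed entities through the graph,
    then rank the reached entities (seeds excluded)."""
    activation = {s: 1.0 for s in seeds}
    for _ in range(steps):
        nxt = dict(activation)
        for node, value in activation.items():
            neighbours = GRAPH.get(node, [])
            for n in neighbours:
                # each neighbour receives a decayed share of the activation
                nxt[n] = nxt.get(n, 0.0) + decay * value / len(neighbours)
        activation = nxt
    return sorted((e for e in activation if e not in seeds),
                  key=lambda e: activation[e], reverse=True)

print(spread(["Edgar_Allan_Poe"]))
```

Passing several seeds (e.g. a writer and a film) mimics the composite explorations mentioned above, since activation from all seeds accumulates on shared neighbours.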
Discovery Hub V2 is more social! You can like a topic and share it on Twitter, and, more importantly, you can now share the searches and collections you have made with your Discovery Hub followers. And of course you can also follow your friends and other interesting people you find.
Improving Semantic Search Using Query Log Analysis - Stuart Wrigley
Despite the attention Semantic Search is continuously gaining, several challenges affecting tool performance and user experience remain unsolved. Among these are: matching user terms with the search space, adopting view-based interfaces in the Open Web, and supporting users while building their queries. This paper proposes an approach that moves a step towards tackling these challenges by creating models of usage of Linked Data concepts and properties, extracted from semantic query logs as a source of collaborative knowledge. We use two sets of query logs from the USEWOD workshops to create our models and show the potential of using them in the areas mentioned above.
The World Wide Web is moving from a Web of hyperlinked documents to a Web of linked data. Thanks to the Semantic Web technology stack and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data has been published in freely accessible datasets connected with each other to form the so-called LOD cloud. As of today, we have a wealth of RDF data available in the Web of Data, but only a few applications really exploit its potential. The availability of such data is certainly an opportunity to feed personalized information access tools such as recommender systems. We will show how to plug Linked Open Data into a recommendation engine in order to build a new generation of LOD-enabled applications.
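To give a flavour of the kind of LOD-enabled recommendation the lecture describes (this is not the lecture's actual method), here is a minimal content-based sketch: each item is represented by the DBpedia-style resources it links to, and similarity is plain Jaccard overlap of those links. All item names and links below are invented:

```python
# Toy LOD-flavoured recommender: items described by the resources they
# link to; similarity = Jaccard overlap of links. Data is invented.
ITEM_LINKS = {
    "film:Nosferatu":   {"dbr:German_Expressionism", "dbr:Vampire", "dbr:Silent_film"},
    "film:Metropolis":  {"dbr:German_Expressionism", "dbr:Silent_film", "dbr:Science_fiction"},
    "film:Dracula1931": {"dbr:Vampire", "dbr:Horror_film"},
}

def jaccard(a, b):
    """Overlap of two link sets, 0.0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(liked, k=2):
    """Rank the other items by link overlap with the liked item."""
    scores = {item: jaccard(ITEM_LINKS[liked], links)
              for item, links in ITEM_LINKS.items() if item != liked}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("film:Nosferatu"))
```

In a real LOD-enabled system the link sets would come from SPARQL queries against DBpedia rather than a hard-coded dict.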
(Lecture given @ the 11th Reasoning Web Summer School - Berlin - August 1, 2015)
Finding knowledge, data and answers on the Semantic Web - ebiquity
Web search engines like Google have made us all smarter by providing ready access to the world's knowledge whenever we need to look up a fact, learn about a topic or evaluate opinions. The W3C's Semantic Web effort aims to make such knowledge more accessible to computer programs by publishing it in machine understandable form.
As the volume of Semantic Web data grows, software agents will need their own search engines to help them find the relevant and trustworthy knowledge they need to perform their tasks. We will discuss the general issues underlying the indexing and retrieval of RDF-based information and describe Swoogle, a crawler-based search engine whose index contains information on over a million RDF documents.
We will illustrate its use in several Semantic Web related research projects at UMBC, including a distributed platform for constructing end-to-end use cases that demonstrate the Semantic Web's utility for integrating scientific data. We describe ELVIS (the Ecosystem Location Visualization and Information System), a suite of tools for constructing food webs for a given location, and Triple Shop, a SPARQL query interface which searches the Semantic Web for data relevant to a given query. ELVIS functionality is exposed as a collection of web services, and all input and output data is expressed in OWL, thereby enabling its integration with Triple Shop and other Semantic Web resources.
The (very) basics of AI for the Radiology resident - Pedro Staziaki
The (very) basics of AI for the Radiology resident.
Also on YouTube: https://youtu.be/ia90UKjlmBA
Artificial Intelligence, Machine Learning, Deep Learning, CNN, Convolutional Neural Networks, Support Vector Machine (SVM), GPU. Felipe Kitamura. Pedro Vinícius Staziaki.
Facets and Pivoting for Flexible and Usable Linked Data Exploration - Roberto García
The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, which makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making both the structure and the semantics of the data explicit. However, from the end-user viewpoint, such datasets remain monolithic files that are completely opaque or that can only be explored through tedious semantic queries. Our objective is to help users grasp what kinds of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.
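As a toy illustration of the facet-generation idea (not Rhizomer's actual code; the triples and property names below are invented), one can derive one facet per metadata property, with its observed values and their counts:

```python
from collections import defaultdict

# Toy facet builder: given (subject, property, value) triples, build a
# facet for each property listing its values and counts. Data invented.
TRIPLES = [
    ("item1", "type", "Film"), ("item1", "genre", "Horror"), ("item1", "year", "1922"),
    ("item2", "type", "Film"), ("item2", "genre", "Horror"), ("item2", "year", "1931"),
    ("item3", "type", "Film"), ("item3", "genre", "SciFi"),  ("item3", "year", "1927"),
]

def build_facets(triples):
    """Group values by property and count occurrences."""
    facets = defaultdict(lambda: defaultdict(int))
    for _, prop, value in triples:
        facets[prop][value] += 1
    return {p: dict(vals) for p, vals in facets.items()}

facets = build_facets(TRIPLES)
print(facets["genre"])  # value counts for the 'genre' facet
```

A faceted UI would render each property as a filter widget; pivoting then amounts to re-running the same grouping over the resources linked from the current selection.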
PyData 2015 Keynote: "A Systems View of Machine Learning" - Joshua Bloom
Despite the growing abundance of powerful tools, building and deploying machine-learning frameworks into production continues to be a major challenge, in both science and industry. I'll present some particular pain points and cautions for practitioners, as well as recent work addressing some of the nagging issues. I advocate for a systems view, which, when expanded beyond the algorithms and code to the organizational ecosystem, places some interesting constraints on the teams tasked with development and stewardship of ML products.
About: Dr. Joshua Bloom is an astronomy professor at the University of California, Berkeley, where he teaches high-energy astrophysics and Python for data scientists. He has published over 250 refereed articles, largely on time-domain transient events and telescope/insight automation. His book on gamma-ray bursts, a technical introduction for physical scientists, was published recently by Princeton University Press. He is also co-founder and CTO of wise.io, a startup based in Berkeley. Josh has been awarded the Pierce Prize from the American Astronomical Society; he is also a former Sloan Fellow, Junior Fellow at the Harvard Society of Fellows, and Hertz Foundation Fellow. He holds a PhD from Caltech and degrees from Harvard and Cambridge University.
Workshop presented at Webdagene 2013 (http://webdagene.no/en/) September 9, 2013; UX Lisbon (http://www.ux-lx.com), May 12, 2011; UX Hong Kong (http://www.uxhongkong.com/), February 17, 2011.
Keyword-Based Navigation and Search over the Linked Data Web - Luca Matteis
Keyword search approaches over RDF graphs have proven intuitive for users. However, these approaches rely on local copies of RDF graphs. In this paper, we present an algorithm that uses RDF keyword search methodologies to find information in the live Linked Data web rather than against local indexes.
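A minimal sketch of the general idea, not the paper's actual algorithm: assume a mocked-up "live" web in which each URI dereferences to a handful of triples, and do a follow-your-nose traversal that matches keywords against labels as it goes, instead of querying a local index. All URIs and triples below are invented:

```python
# Toy follow-your-nose keyword search over a mocked live Linked Data web.
# A dict lookup stands in for an HTTP GET on each URI. Data is invented.
WEB = {
    "ex:Rome":    [("ex:Rome", "label", "Rome"), ("ex:Rome", "country", "ex:Italy")],
    "ex:Italy":   [("ex:Italy", "label", "Italy"), ("ex:Italy", "capital", "ex:Rome"),
                   ("ex:Italy", "region", "ex:Tuscany")],
    "ex:Tuscany": [("ex:Tuscany", "label", "Tuscany")],
}

def dereference(uri):
    return WEB.get(uri, [])  # stands in for dereferencing the URI over HTTP

def keyword_search(start, keyword, max_hops=3):
    """Breadth-first traversal from a start URI, collecting resources
    whose label matches the keyword."""
    seen, frontier, hits = set(), [start], []
    for _ in range(max_hops):
        nxt = []
        for uri in frontier:
            if uri in seen:
                continue
            seen.add(uri)
            for s, p, o in dereference(uri):
                if p == "label" and keyword.lower() in o.lower():
                    hits.append(s)
                if o in WEB:  # object that is itself dereferenceable
                    nxt.append(o)
        frontier = nxt
    return hits

print(keyword_search("ex:Rome", "tuscany"))
```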
http://events.linkeddata.org/ldow2015/papers/ldow2015_paper_06.pdf
Building machine learning systems remains something of an art, from gathering and transforming the right data to selecting and fine-tuning the most fitting modeling techniques. If we want to make machine learning more accessible and foster skilful use, we need novel ways to share and reuse findings, and to streamline online collaboration. OpenML is an open science platform for machine learning, allowing anyone to easily share data sets, code, and experiments, and collaborate with people all over the world to build better models. It shows, for any known data set, which are the best models, who built them, and how to reproduce and reuse them in different ways. It is readily integrated into several machine learning environments, so that you can share results with the touch of a button or a line of code. As such, it enables large-scale, real-time collaboration, allowing anyone to explore, build on, and contribute to the combined knowledge of the field. Ultimately, this provides a wealth of information for a novel, data-driven approach to machine learning, where we learn from millions of previous experiments to either assist people while analyzing data (e.g., which modeling techniques will likely work well, and why), or automate the process altogether.
Domain Identification for Linked Open Data - Sarasi Sarangi
Linked Open Data (LOD) has emerged as one of the largest collections of interlinked structured datasets on the Web. Although the adoption of such datasets for applications is increasing, identifying relevant datasets for a specific task or topic is still challenging. As an initial step to make such identification easier, we provide an approach to automatically identify the topic domains of given datasets. Our method utilizes existing knowledge sources, more specifically Freebase, and we present an evaluation which validates the topic domains we can identify with our system. Furthermore, we evaluate the effectiveness of identified topic domains for the purpose of finding relevant datasets, thus showing that our approach improves reusability of LOD datasets.
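A toy sketch of the general idea only (not the authors' system): sample entities from a dataset, look up their topic domains in a background knowledge source (a plain dict stands in for Freebase here), and keep the dominant domains. All entity-to-domain mappings below are invented:

```python
from collections import Counter

# Toy domain identification: majority vote over the domains of a sample
# of the dataset's entities. The lookup table stands in for Freebase.
ENTITY_DOMAINS = {
    "Aspirin": "medicine", "Ibuprofen": "medicine",
    "Paris": "location", "Insulin": "medicine",
}

def identify_domains(dataset_entities, top_n=1):
    """Return the most frequent domains among the recognized entities."""
    counts = Counter(ENTITY_DOMAINS[e] for e in dataset_entities
                     if e in ENTITY_DOMAINS)
    return [d for d, _ in counts.most_common(top_n)]

print(identify_domains(["Aspirin", "Ibuprofen", "Paris", "Insulin"]))
```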
Metabolomic Data Analysis Workshop and Tutorials (2014) - Dmitry Grapov
Get more information:
http://imdevsoftware.wordpress.com/2014/10/11/2014-metabolomic-data-analysis-and-visualization-workshop-and-tutorials/
Recently I had the pleasure of teaching statistical and multivariate data analysis and visualization at the annual Summer Sessions in Metabolomics 2014, organized by the NIH West Coast Metabolomics Center.
Similar to last year, I’ve posted all the content (lectures, labs and software) for anyone to follow along at their own pace. I also plan to release videos for all the lectures and labs.
Developing in R - the contextual Multi-Armed Bandit edition - Robin van Emden
Attached are the slides of my presentation on how to create R packages, illustrated with lessons learned while developing "contextual": a package that enables you to easily simulate and analyze contextual multi-armed bandit algorithms.
Code: https://github.com/Nth-iteration-labs/contextual
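The package itself is R; as a language-agnostic illustration of the kind of simulation it supports, here is a minimal epsilon-greedy contextual bandit sketch in Python (the contexts, arms and reward probabilities are all invented, and this is not code from the "contextual" package):

```python
import random

# Toy contextual bandit simulation: two arms whose reward probability
# depends on a binary context; an epsilon-greedy agent learns per-context
# arm estimates. All probabilities are invented for illustration.
REWARD_PROB = {0: [0.8, 0.2], 1: [0.3, 0.7]}  # context -> per-arm reward prob

def run(steps=5000, eps=0.1, seed=42):
    rng = random.Random(seed)
    counts = [[0, 0], [0, 0]]  # pulls per (context, arm)
    wins = [[0, 0], [0, 0]]    # rewards per (context, arm)
    total = 0
    for _ in range(steps):
        ctx = rng.randint(0, 1)
        if rng.random() < eps:
            arm = rng.randint(0, 1)  # explore a random arm
        else:                        # exploit the best current estimate
            est = [wins[ctx][a] / counts[ctx][a] if counts[ctx][a] else 0.5
                   for a in (0, 1)]
            arm = max((0, 1), key=lambda a: est[a])
        reward = 1 if rng.random() < REWARD_PROB[ctx][arm] else 0
        counts[ctx][arm] += 1
        wins[ctx][arm] += reward
        total += reward
    return total / steps  # average reward per step

print(run())
```

With these invented probabilities the best arm differs per context, which is exactly what makes the problem contextual rather than a plain multi-armed bandit.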
Walking Our Way to the Web - Fabien Gandon
The Web: Scientific Creativity, Technological Innovation and Society
XXVIII Conference on Contemporary Philosophy and Methodology of Science
9 and 10 March 2023
University of A Coruña
The prospect of Walking our Way to the Web may sound strange to contemporary readers of this article, for whom the Web is omnipresent. However, the slogan of the World Wide Web Consortium (W3C) has been for years, and remains today, to lead “the Web to its full potential”, meaning we haven’t reached that potential yet, whatever it is. The first architect of the Web himself, Tim Berners-Lee, said in an interview in 2009: “The Web as I envisaged it, we have not seen it yet. The future is still so much bigger than the past”. And he is still very active, together with the W3C members and Web experts worldwide, in proposing evolutions of the Web architecture to improve its growing usages and applications. In this article we review the path that led us to the current Web, the shape it is taking now and the possible evolutions, good and bad, that we can identify today. This leads us to consider the distance between the initial vision and the reality of the Web today, and to reflect on the possible divergence between the potential we see in the Web and the directions it could take. Our goal in this article is to reflect on how we could walk the delicate path to the full potential of the Web, finding the missing links and avoiding the one too many links.
a shift in our research focus: from knowledge acquisition to knowledge augmentation - Fabien Gandon
EKAW 2022 keynote by Fabien GANDON: "a shift in our research focus: from knowledge acquisition to knowledge augmentation"
While EKAW started in 1987 as the European Knowledge Acquisition Workshop, in 2000 it transformed into a conference where we advance knowledge engineering and modelling in general. At the time, this transition also echoed shifts of focus such as moving from the paradigm of expert systems to the more encompassing one of knowledge-based systems. Nowadays, with the current strong interest in knowledge graphs, it is important to reaffirm again that our ultimate goal is not the acquisition of ever bigger siloed knowledge bases but to support knowledge requisition by and for all kinds of intelligence. Knowledge without intelligence is a highly perishable resource. Intelligence without knowledge is doomed to stagnation. We will defend the view that intelligence and knowledge, and their evolutions, have to be considered jointly, and that the Web provides a social hypermedia to link them in all their forms. Using examples from several projects, we will suggest that, just as intelligence augmentation and amplification insist on putting humans at the center of the design of artificial intelligence methods, we should think in terms of knowledge augmentation and amplification, and we should design a knowledge web to be an enabler of the futures we want.
A Never-Ending Project for Humanity Called “the Web” - Fabien Gandon
A Never-Ending Project for Humanity Called "the Web"
Fabien Gandon, Wendy Hall
https://hal.inria.fr/WIMMICS/hal-03633526
In this paper we summarize the main historical steps in the making of the Web, its foundational principles and its evolution. First we mention some of the influences and streams of thought that interacted to bring the Web about. Then we recall that its birthplace, CERN, had a need for a global hypertext system and at the same time was the perfect microcosm to provide a cradle for the Web. We stress how this invention required striking a balance between the integration of, and the departure from, the existing and emerging paradigms of the day. We then review the pillars of the Web architecture and the features that made the Web so viral compared to its competitors. Finally we survey the multiple mutations the Web underwent almost as soon as it was born, evolving in multiple directions. We conclude on the fact that the Web is now an architecture, an artefact, a science object and a research and development object, whose full potential we have not yet seen.
Covid-on-the-Web: COVID-19 linked data published on the Web - Fabien Gandon
The Covid-on-the-Web project aims to allow biomedical researchers to access, query and make sense of COVID-19 related literature. To do so, it adapts, combines and extends tools to process, analyze and enrich the "COVID-19 Open Research Dataset" (CORD-19), which gathers 50,000+ full-text scientific articles related to the coronaviruses. We report on the RDF dataset and software resources produced in this project by leveraging skills in knowledge representation, text, data and argument mining, as well as data visualization and exploration. The dataset comprises two main knowledge graphs describing (1) named entities mentioned in the CORD-19 corpus and linked to DBpedia, Wikidata and other BioPortal vocabularies, and (2) arguments extracted using ACTA, a tool automating the extraction and visualization of argumentative graphs, meant to help clinicians analyze clinical trials and make decisions. On top of this dataset, we provide several visualization and exploration tools based on the Corese Semantic Web platform, the MGExplorer visualization library, and the Jupyter Notebook technology. Throughout this initiative, we have engaged in discussions with healthcare and medical research institutes to align our approach with the actual needs of the biomedical community, and we have paid particular attention to complying with open and reproducible science goals and the FAIR principles.
Web open standards for linked data and knowledge graphs as enablers of EU digital sovereignty - Fabien Gandon
Web open standards for linked data and knowledge graphs as enablers of EU digital sovereignty
ENDORSE Keynote by Fabien GANDON, 19/03/2021
https://op.europa.eu/en/web/endorse
from linked data & knowledge graphs to linked intelligence & intelligence graphsFabien Gandon
ISWC Vision track talk "from linked data & knowledge graphs to linked intelligence & intelligence graphs or the potential of the semantic Web to break the walls between semantic networks and computational networks"
JURIX talk on representing and reasoning on the deontic aspects of normative rules relying only on standard Semantic Web languages.
The corresponding paper is at https://hal.inria.fr/hal-01643769v1
One Web of pages, One Web of peoples, One Web of Services, One Web of Data, O...Fabien Gandon
Keynote Fabien GANDON, at WIM2016: One Web of pages, One Web of peoples, One Web of Services, One Web of Data, One Web of Things…and with the Semantic Web bind them.
Wimmics Research Team 2015 Activity ReportFabien Gandon
Extract of the activity report of the Wimmics joint research team between Inria Sophia Antipolis - Méditerranée and I3S (CNRS and Université Nice Sophia Antipolis). Wimmics stands for web-instrumented man-machine interactions, communities and semantics. The team focuses on bridging social semantics and formal semantics on the web.
Feedback on the MOOC "Web Sémantique et Web de données"Fabien Gandon
Presentation of the characteristics and results of the first 2015 session of the MOOC "Web Sémantique et Web de données" by Inria, Université de Nice, FUN and UNIT.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop in which participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I have been wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent stance toward PHP frameworks, moving towards more flexible and future-proof PHP development.
4. related work
- Aemoo: exploratory search; data: DBpedia EN + external services; multi-domain: yes; query: entity search; algorithm: EKP filtered view; ranking: no; explanations: Wikipedia-based; offline processing: yes (EKP part)
- Kaminskas et al.: cross-domain recommendation; data: DBpedia EN subset; multi-domain: crosses two domains; query: entity selection in a pre-processed list; algorithm: weighted activation; ranking: yes; explanations: path-based; offline processing: yes
- LED: exploratory search on the ICT domain; data: DBpedia + external services; multi-domain: no; query: entity search; algorithm: DBpedia Ranker; ranking: yes; explanations: no; offline processing: yes
- MORE: film recommendation; data: DBpedia EN subset; multi-domain: no (cinema); query: entity search; algorithm: sVSM; ranking: yes; explanations: shared properties; offline processing: yes
- Seevl: musical recommendation; data: DBpedia EN subset; multi-domain: no (music); query: entity recognition from Youtube; algorithm: DBrec; ranking: yes; explanations: shared properties; offline processing: yes
- Yovisto: video exploratory search; data: DBpedia EN+DE subset; multi-domain: yes; query: entity recognition in keywords; algorithm: set of heuristics; ranking: yes; explanations: no; offline processing: yes
goal: domain-independent, customizable, on the fly, remote sources
5. composite interest queries
knowing my interest for X and Y, what can I discover/learn that is related to all these resources?
(example seeds: The Beatles, Ken Loach)
8. research questions
1. How can we discover linked resources of interest to be explored?
2. How to address remote LOD sources for this?
3. How to present and explain the results to the user for an exploratory objective?
http://fr.dbpedia.org/sparql
http://es.dbpedia.org/sparql
http://it.dbpedia.org/sparql
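The second research question concerns querying remote LOD sources such as the DBpedia endpoints listed above. As a minimal sketch, a neighbor-fetching SPARQL query for such an endpoint could be built as follows; the function name and the exact query shape are illustrative assumptions, not the actual Discovery Hub queries:

```python
from urllib.parse import urlencode

def neighbors_query(resource_uri, limit=100):
    """Build a SPARQL query returning the direct neighbors
    (outgoing and incoming) of a seed resource."""
    return f"""
SELECT DISTINCT ?neighbor WHERE {{
  {{ <{resource_uri}> ?p ?neighbor . FILTER(isIRI(?neighbor)) }}
  UNION
  {{ ?neighbor ?p <{resource_uri}> . }}
}} LIMIT {limit}
""".strip()

# The query can then be sent to an endpoint with a plain HTTP GET,
# asking for JSON results (endpoint choice is an example):
query = neighbors_query("http://dbpedia.org/resource/The_Beatles")
url = "http://fr.dbpedia.org/sparql?" + urlencode(
    {"query": query, "format": "application/sparql-results+json"})
```

Issuing one such query per active node is what makes it possible to explore a remote source without downloading the whole graph first.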
11. two example propagation domains
- Album, Band, Film, Musical Artist, Music Genre, Person, Radio Station, Single, Song, Television Show
- Company, Election, Film, Journalist, Musical Artist, Newspaper, Office Holder, Organisation, Politician, School, Single, Television Show, Writer
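The propagation domain restricts which node types the activation may reach, keeping results on-topic for the query. A minimal sketch of this idea follows, assuming a simple set-based representation of each node's rdf:type classes; the function names and the all-or-nothing weight are illustrative assumptions, not the actual Discovery Hub w(i,o) formula:

```python
# One of the slide's propagation domains, as a set of class names.
MUSIC_DOMAIN = {"Album", "Band", "Film", "MusicalArtist", "MusicGenre",
                "Person", "RadioStation", "Single", "Song", "TelevisionShow"}

def in_propagation_domain(node_types, domain):
    """A node stays in the propagation if at least one of its
    rdf:type classes belongs to the query's propagation domain."""
    return bool(set(node_types) & domain)

def weight(node_types, domain, base_weight=1.0):
    """Toy w(i,o): zero outside the domain, so activation
    never spreads to off-domain neighbors."""
    return base_weight if in_propagation_domain(node_types, domain) else 0.0
```

With a zero weight outside the domain, off-topic neighbors simply never receive activation during the pulses.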
13. sampling algorithm
sparqlEndpoint = http://xxx/sparql
seeds = xxx/The_Beatles, xxx/Ken_Loach
compute the propagation domain (w(i,o))
find a path between the seeds
import the path nodes and their neighbors
for (i = 1; i <= maxPulse; i++) {
  pulse();
  if (sampleSize <= maxSampleSize) {
    extend the sample
  }
}
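The pulse() step in the loop above performs spreading activation. A minimal, in-memory Python sketch of the idea follows; the toy graph, decay factor and function names are illustrative assumptions, and the incremental import of nodes from the remote endpoint between pulses is elided:

```python
from collections import defaultdict

def pulse(graph, activation, decay=0.8):
    """One spreading-activation pulse: each active node sends
    a decayed, evenly split share of its activation to its neighbors."""
    nxt = defaultdict(float)
    for node, value in activation.items():
        neighbors = graph.get(node, [])
        if not neighbors:
            continue
        share = decay * value / len(neighbors)
        for n in neighbors:
            nxt[n] += share
    return dict(nxt)

def activate(graph, seeds, max_pulses=3):
    """Start activation on the seeds, then propagate for a fixed
    number of pulses; highest activation first = most related."""
    activation = {s: 1.0 for s in seeds}
    for _ in range(max_pulses):
        activation = pulse(graph, activation)
    return sorted(activation.items(), key=lambda kv: -kv[1])

# Toy adjacency list standing in for a DBpedia sample.
toy = {
    "The_Beatles": ["Rock", "Liverpool"],
    "Ken_Loach": ["Liverpool", "Cinema"],
    "Liverpool": ["Rock"],
}
ranking = activate(toy, ["The_Beatles", "Ken_Loach"], max_pulses=1)
```

In this toy graph, Liverpool, being connected to both seeds, accumulates the most activation; this accumulation at shared neighbors is the mechanism behind composite interest queries.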
20. Discovery Hub 1.0
1. Start from what you like or are interested in
2. Explore, understand, discover
3. Be redirected to third-party platforms to continue the discovery experience
24. composite queries
• randomly combining the Facebook likes of 12 users
• two queries for each participant, who judged the top 20 results:
- The result interests me [Strongly Disagree … Strongly Agree]
- The result is unexpected [Strongly Disagree … Strongly Agree]
(figure: rating scale from "Not interesting at all" to "Very interesting")
25. overall
• 61.6% of the results were rated as strongly relevant or relevant by the participants.
• 65% of the results were rated as strongly unexpected or unexpected.
• 35.42% of the results were rated both as strongly relevant or relevant and as strongly unexpected or unexpected.
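As a quick sanity check, the three reported percentages are mutually consistent, since by inclusion-exclusion the overlap is bounded by the two marginal figures:

```python
# Reported figures from the user evaluation (in percent).
relevant, unexpected, both = 61.6, 65.0, 35.42

# By inclusion-exclusion, the share rated both relevant and unexpected
# must lie between relevant + unexpected - 100 and min(relevant, unexpected).
lower = relevant + unexpected - 100.0   # at least 26.6%
upper = min(relevant, unexpected)       # at most 61.6%
consistent = lower <= both <= upper
```

The observed 35.42% sits comfortably inside that [26.6%, 61.6%] interval.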
29. Discovery Hub: enabling exploratory search starting from several interests using linked data sources
• semantic spreading activation algorithm coupled to a graph sampling to address remote LOD sources
• faceted browsing and multiple explanations of the results
• on-going extensive user evaluation
• publicly available at http://discoveryhub.co
(figure: example activation values spreading over a small graph)
30. current work:
- propagation over multiple data sources in parallel
- redesign of the interface: Discovery Hub 2.0 released
perspective: other applications of semantic spreading activation