1) The document discusses building intelligent systems that can explain themselves and their decisions.
2) It proposes using existing knowledge sources on the Web as background knowledge from which systems can generate explanations.
3) Several examples are given of explanation types such systems could generate: explaining behaviours, scenes, neural attentions, inconsistencies, and more.
Research work presented at the Ontology Summit 2019 (http://ontologforum.org/index.php/ConferenceCall_2019_03_13) in the Narrative & Explanation sessions. Overview of how to automatically build explanations from knowledge graphs and examples of applications.
1. BUILDING INTELLIGENT SYSTEMS
(THAT CAN EXPLAIN)
Ilaria Tiddi
KR&R group, Faculty of Computer Science, VU Amsterdam
Cooperation Lab, Faculty of Behavioural and Movement Sciences
VU Amsterdam
2. Disclaimer
This is NOT a presentation on eXplainable AI (XAI) ...
...but rather on systems making sense of complex data
…we can argue at Q&A time whether they somewhat overlap
3. WHY DO WE NEED (SYSTEMS THAT) EXPLAIN?
● Learn new knowledge
● Find meaning: we reconcile the contradictions in our knowledge
● Socially interact: we create a shared meaning, we change/influence the others’ beliefs
● ...and because GDPR says so: users have a “right to explanation” for any decisions made about them
4. DEFINING EXPLANATIONS (ε)
Different disciplines, common features [1]:
● Generation of coherence between old and new knowledge
● Same elements (theory, anterior, posterior, circumstances)
● Same processes (psychological, linguistic)
[Timeline: Plato & Aristotle (V-IV BC) → Determinists (XVII c.) → Charles Peirce (1903) → Hempel & Oppenheim (1948) → Weber & Durkheim (1964) → ? (2015)]
[1] Tiddi et al. (2015), An Ontology Design Pattern to Define Explanations, K-CAP 2015
6. RESEARCH GOAL: INTELLIGENT SYSTEMS THAT CAN EXPLAIN
Which types?
● factual ε: why specific ‘everyday’ events occur
● scientific ε: generalising scientific theories
● reason ε: explaining behaviour and decision making
Which processes?
1) cognitive: determining the causes (explanans) of an event (explanandum) and relating these to a particular context
2) social: transferring knowledge between explainer and explainee
7. RESEARCH GOAL: INTELLIGENT SYSTEMS THAT CAN EXPLAIN
Which audience?
● engineers/scientists/experts
● end-users
Which characteristics?
● Transparency (traceability + verifiability)
● Intelligibility + explainability
Which language?
● Visual
● Written
● Spoken
8. APPROACH: REUSE AVAILABLE KNOWLEDGE SOURCES
Existing knowledge sources can serve as background (the “old”) knowledge to generate explanations:
● Plenty of available sources (not only RDF...)
● Connected, centralised hubs
● Multi-domain (serendipity!)
9. EXAMPLE: FACTUAL, WRITTEN ε
Generating explanations from the Web of Data [2]
Why do people search for “A Song of Ice and Fire” only in certain periods?
[2] Tiddi (2016), Explaining Data Patterns using Knowledge from the Web of Data, Ph.D. thesis. Demo: http://dedalo.kmi.open.ac.uk/
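The traversal behind this example (the Dedalo demo cited above walks Linked Data links outward from the observed items until it finds something they all share) can be sketched with a toy, hand-made graph. Every entity and relation name below is invented for illustration; real Linked Data access would go through SPARQL endpoints instead.

```python
# Toy sketch of explanation discovery by Linked Data traversal (invented
# mini-graph, NOT the actual Dedalo implementation): starting from items
# that behave alike (e.g. search peaks), follow outgoing links breadth-first
# until every item reaches the same value via the same property path.
from collections import deque

# Hypothetical knowledge graph: (subject, predicate) -> object
GRAPH = {
    ("peak_2011_07", "coincidesWith"): "book5_release",
    ("peak_2011_04", "coincidesWith"): "tv_season1_premiere",
    ("book5_release", "partOf"): "a_song_of_ice_and_fire",
    ("tv_season1_premiere", "partOf"): "a_song_of_ice_and_fire",
}

def explain(items, max_depth=3):
    """Return the first (property path, shared value) reached by all items."""
    frontier = deque((item, (), item) for item in items)  # (root, path, node)
    reached = {}  # (path, value) -> set of roots reaching it
    while frontier:
        root, path, node = frontier.popleft()
        if len(path) >= max_depth:
            continue
        for (subj, pred), obj in GRAPH.items():
            if subj != node:
                continue
            key = (path + (pred,), obj)
            reached.setdefault(key, set()).add(root)
            if reached[key] == set(items):
                return key  # candidate explanation: shared path + value
            frontier.append((root, path + (pred,), obj))
    return None

print(explain(["peak_2011_07", "peak_2011_04"]))
```

The sketch stops at the first value shared by all items; the actual system additionally ranks candidate explanations against the observed pattern.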
10. EXAMPLE: REASON, VISUAL ε
Explaining behaviours and recommending self-learners using online resources [3]
[3] http://afel-project.eu
11. EXAMPLE: REASON, SPOKEN ε
Robots finding explanations for their behavior in a smart-city datahub [4]
[4] http://sciroc.eu
[Shameless advert] 1st ERL Smart CIties RObotics Challenge, 16–22/09/2019, Milton Keynes, UK. No need to have a robot!!!
12. EXAMPLE: FACTUAL & REASON, SPOKEN ε
Explaining scenes in motion using ShapeNet [5] as background knowledge (and YOLO [6] for pre-processing)
[5] http://www.shapenet.org
[6] https://pjreddie.com/darknet/yolo/
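The division of labour on this slide (a detector proposes object labels, background knowledge adds common-sense facts, a sentence is generated for speech) can be sketched as follows. The detections and the mini knowledge base are invented stand-ins for YOLO output and ShapeNet-style annotations.

```python
# Sketch of turning detections + background knowledge into a spoken
# explanation. All data below is invented: real input would come from a
# YOLO detector and an annotated 3D-shape dataset such as ShapeNet.

detections = [("mug", 0.91), ("laptop", 0.88)]  # stand-in detector output

background = {  # stand-in common-sense annotations per object class
    "mug": {"affords": "drinking", "found_in": "kitchens and offices"},
    "laptop": {"affords": "working", "found_in": "offices"},
}

def spoken_explanation(dets, kb, threshold=0.5):
    """Build one sentence per confident detection that the KB knows about."""
    parts = []
    for label, confidence in dets:
        if confidence < threshold or label not in kb:
            continue
        info = kb[label]
        parts.append(f"I see a {label}, which affords {info['affords']} "
                     f"and is typically found in {info['found_in']}")
    return ". ".join(parts) + "." if parts else ""

print(spoken_explanation(detections, background))
```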
13. EXAMPLE: REASON, VISUAL ε
Explaining neural attentions
A multi-layer LSTM network to understand NL robotic commands [8]
Avoid training biases using linguistic corpora (FrameNet [7]) combined with domain-specific datasets
[7] https://framenet.icsi.berkeley.edu/fndrupal/
[8] Mensio et al., A Multi-layer LSTM-based Approach for Robot Command Interaction Modeling, Language and Robotics (LangRobo), IROS 2018.
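Inspecting what an attention layer focuses on boils down to normalising its alignment scores with a softmax and ranking the tokens. The scores below are made up; no LSTM or FrameNet is involved in this dependency-free sketch.

```python
# Sketch of attention inspection: alignment scores are softmax-normalised
# and the highest-weighted tokens are reported as the (visual) explanation
# of what the model attended to. All scores are invented.
import math

def softmax(scores):
    m = max(scores)                      # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_attended(tokens, scores, k=2):
    """Return the k (token, weight) pairs with the largest attention weight."""
    weights = softmax(scores)
    ranked = sorted(zip(tokens, weights), key=lambda tw: tw[1], reverse=True)
    return ranked[:k]

tokens = ["take", "the", "red", "mug", "to", "the", "kitchen"]
scores = [2.1, 0.1, 1.5, 2.8, 0.3, 0.1, 2.4]   # made-up alignment scores
for token, weight in top_attended(tokens, scores):
    print(f"{token}: {weight:.2f}")
```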
14. EXAMPLE: FACTUAL, WRITTEN ε
Explaining inconsistencies using an autonomous agent in a smart office [9]
Monitoring Health & Safety using SHACL-based model checking and behavioural trees
Centralised data integration, processing and reasoning
[9] Bastianelli et al., Meet HanS, the Health&Safety autonomous inspector, Posters & Demos track at ISWC 2018.
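SHACL shapes declare constraints over graph nodes and yield a validation report. The sketch below mimics that pattern in plain Python (invented facts, invented shapes, no actual SHACL engine such as pySHACL) to show how a violated constraint turns into a written inconsistency explanation of the kind described on this slide.

```python
# Plain-Python mimic of SHACL-style checking (illustration only; a real
# system would store facts as RDF and run a SHACL engine). Each "shape"
# targets a node type and forbids or requires a property.

facts = {  # invented smart-office observations
    "fire_exit_1": {"type": "FireExit", "blocked_by": "box_42"},
    "extinguisher_3": {"type": "Extinguisher", "last_inspected": "2019-01-10"},
}

shapes = [  # invented Health & Safety constraints
    {"target_type": "FireExit", "forbidden": "blocked_by",
     "message": "fire exits must be kept clear"},
    {"target_type": "Extinguisher", "required": "last_inspected",
     "message": "extinguishers must have an inspection record"},
]

def validate(facts, shapes):
    """Return one written explanation per violated constraint."""
    report = []
    for node, props in facts.items():
        for shape in shapes:
            if props.get("type") != shape["target_type"]:
                continue
            forbidden = shape.get("forbidden")
            if forbidden is not None and forbidden in props:
                report.append(f"{node} violates '{shape['message']}' "
                              f"({forbidden} = {props[forbidden]})")
            required = shape.get("required")
            if required is not None and required not in props:
                report.append(f"{node} violates '{shape['message']}'")
    return report

for line in validate(facts, shapes):
    print(line)
```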
15. MACHINE EXPLANATIONS NEED MACHINE ETHICS
Re-coding Black Mirror [10] workshops
Bringing social & computer scientists together to understand the threats of their own technologies, and to raise awareness of their methods’ explainability
Ethics by Design methodology
[10] https://kmitd.github.io/recoding-black-mirror/
16. WHAT ABOUT SCIENTIFIC EXPLANATIONS?
Databank
● Collection of meta-analyses to study human cooperation
● 3.5k works on social dilemmas (= benefitting the others vs. self-interest)
My goal
● Creation of a research platform generating explanations for human cooperation (+ search facilities)
● Generalising the methodology to Life & Medical Sciences (long-term)
Start by asking the question: why do we need systems that explain? (In brackets because it is the same reason why humans need explanations.)
First question is what is intended for explanation. Looking at history (work done as part of my PhD; we use ε as a symbol for “the concept of explanation”).
There is not a real definition, but people have looked at it from the perspective of their own discipline (hence the colour).
Plato & Aristotle (connecting Forms and Facts through logos vs. deducing the causes of why something happened). Determinists (Descartes, Leibniz, Newton, Huygens...): a deductive process.
Peirce, Lectures on Pragmatism (explanation = deduction + induction).
Carl Hempel & Oppenheim: Deductive-Nomological / statistical model for explanation. Weber & Durkheim: justifying social facts. Put myself just in case.
Removing some doubts: this is how I intend it (but this is arguable). Interpretation is often used as explanation, but IMO there is something like a subjective aspect added. People talk about just… Also about …ility (a degree of).
Once we give some definitions, the goal is finding out how to design & implement systems that generate explanations (“that can explain”). A number of subquestions arise / a number of things are needed to build such systems: which types, which processes. Reason = intentional / factual-scientific = unintentional.
Processes = one cognitive & one social.
But also things like audience and language, as these can change the form in which the explanation is generated / expressed. Scientists or researchers might want traceability, as this guarantees transparency. End users might prefer a simple explanation to a complicated one.
The approach I have been using is the reuse of external knowledge, and this is likely the main difference with XAI. Today billions of heterogeneous data sources exist (stored/real-time, personal/public terminals…).
We produce them: smart cities, the LOD, Google just released...
I am just going through some examples of how systems generate explanations…
Dedalo = the system I developed during my PhD. It was using the LOD (the big cloud of bubbles of the previous slide) as background knowledge to explain Google Trends. Trends = how much a term is searched over time (10 years). We found trends with patterns (repeated peaks) and tried to explain why. Explanations were presented to the user as natural language.
A project I was part of last year. We built a browser plugin to support “self-learners”: visually explaining their behaviours, and recommending courses to improve. You can use it too!
A different example: a project started this year to organise a robotics competition in a smart city. MK was part of a big data infrastructure project (2014-2017). We built a Datahub: a large-scale infrastructure aggregating the city’s heterogeneous data.
Idea of SciRoc: robots will use the info in the datahub for their tasks, and my research was about helping robots to find explanations for their behaviors in the datahub. No idea who’s going to do that now :)
Work of an RA who worked with me this summer on semantic mapping (introducing common-sense knowledge on a robot’s map).
Avoid time-expensive model training ---> using YOLO for segmentation.
Combining ShapeNet (a richly-annotated, large-scale dataset of 3D shapes) with robot sensorial info to perform object classification.
Another one worked on LSTMs to parse spoken robotic commands.
Analysis of the attention layers = semantic parsing of the small dataset is extremely biased by the little quantity of data. Trying to use FrameNet to improve the model = generating explanations using FrameNet.
That (supposedly) goes on a monitoring UI for the security to go and repair the problem.
One final thing. Nice thing is the use of vignettes.
If you have noticed that we were missing one type of ε, it is normal → I will do it here! First: the DataBank, then we try to generalise.