Ontology learning techniques and applications: computer science thesis writing... - Tutors India
At Tutors India, we offer Computer Science and Information Technology research guidance services. We deliver exceptional work, so your dissertation will deserve publication without significant reworking or alteration.
For #Enquiry
https://www.tutorsindia.com
info@tutorsindia.com
(Whatsapp): +91-8754446690
(UK): +44-1143520021
Geographic and linguistic normalization (OpenSym 2014 poster) - Hanteng Liao
Want to better analyze the geographic and linguistic outreach and dynamics of web traffic? We propose a method of geo-linguistic normalization to do so, with the multilingual Wikipedia projects as the example.
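The normalization idea can be illustrated with a toy calculation. This is a hypothetical sketch of the general approach (dividing raw traffic by the size of each language community), not the poster's actual method or data; all figures below are invented.

```python
# Hypothetical sketch of geo-linguistic normalization: raw page views are
# divided by the size of each language's online population, so that large
# languages do not dominate the comparison. All figures are made up.

raw_views = {"en": 9_000_000, "pl": 400_000, "sw": 15_000}        # monthly views
online_speakers = {"en": 1_100_000_000, "pl": 30_000_000, "sw": 12_000_000}

def normalize(views, population):
    """Return views per 1,000 online speakers for each language."""
    return {lang: 1000 * views[lang] / population[lang] for lang in views}

per_capita = normalize(raw_views, online_speakers)
for lang, rate in sorted(per_capita.items(), key=lambda kv: -kv[1]):
    print(f"{lang}: {rate:.2f} views per 1,000 online speakers")
```

On these invented numbers the smaller language community ranks highest once traffic is normalized, which is exactly the kind of dynamic the raw counts hide.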
A profile of Applied Data Analysis Lab (ADA Lab) at ICM, University of Warsaw. It contains a brief overview of our research interests, which are located at or near the intersection of text and data mining and open science. This is a version from December 2014.
Text and data mining in UK and France (ADBU - 13 Dec 16) - Rob Johnson
Slides from my presentation in Paris on 13 Dec 2016, summarising the findings of our study on text and data mining in public research for the ADBU. Full report available at http://adbu.fr/etude-tdm/.
Alliance Health Chief Data Scientist Deep Dhillon presentation to UW CS students on mining unstructured healthcare data. This talk describes technical information on a system designed to empower patients and health care professionals by automatically generating health care content that is fresh, authoritative, statistically relevant and not available via other means.
Doctoral Thesis Proposal: An Automatic Knowledge Discovery Strategy in Biomed... - Universidad de los Llanos
The growing variety, volume and velocity of public biomedical databases in recent years has generated an explosion of big data in biology and medicine. Most of these databases comprise structural, molecular and genetic information from different kinds of image acquisition modalities, together with associated metadata, and they have great potential, not yet exploited, as a source of information and knowledge that could impact biomedical research in different application fields. In fact, new research areas are emerging in this direction, known as bioimage informatics and computational pathology, which basically attempt to apply different methods of image processing, pattern recognition, machine learning and data mining to multimodal biomedical databases. However, the proposed tools and methods for image collection analysis face research challenges that come with the deluge of big data in biomedicine, such as: visual appearance variability, the semantic gap between image content and high-level meaning, structural and interpretable representation of image content, semantic inclusion of multimodal information sources, and scalability with the increasing volume of the databases. This research proposal therefore addresses the problem of automatic extraction of knowledge from biomedical image collections. Specifically, the goal is to devise methods to automatically find: visual patterns that compactly explain the visual richness of biomedical images, relationships between visual patterns, and relationships between visual patterns and their meaning in a particular biomedical context. The proposed methodology has three main stages: part-based bioimage representation, semantic bioimage representation, and biomedical knowledge discovery.
At each stage of the methodology, state-of-the-art methods from computer vision, image processing, machine learning and data mining will be explored to provide interpretable learning methods supported by high-performance computing.
Post 1: What is text analytics? How does it differ from text mini... (.docx) - stilliegeorgiana
Post 1:
What is text analytics? How does it differ from text mining?
Text analytics is the application of statistical and machine learning techniques to predict, prescribe or infer information from text-mined data. Text mining is a tool that helps in getting the data cleaned up.
Differences between Text Mining and Text Analytics:
• Text Mining and Text Analytics solve the same problems, but use different techniques and are complementary ways to automatically extract meaning from text.
• Text analytics developed within the field of computational linguistics. Its strength is the ability to encode human understanding into a series of linguistic rules. Rules generated by humans are high in precision, but they do not automatically adapt and are usually fragile when tried in new situations.
• Text mining is a newer discipline arising out of the fields of statistics, data mining, and machine learning. Its strength is the ability to inductively create models from collections of historical data. Because statistical models are learned from training data, they are adaptive and can identify “unknown unknowns”, leading to better recall. Still, they can be prone to missing something that would seem obvious to a human.
• Text analytics and text mining approaches have essentially equivalent performance. Text analytics requires an expert linguist to produce complex rule sets, whereas text mining requires the analyst to hand-label cases with outcomes or classes to create training data.
• Due to their different perspectives and strengths, combining text analytics with text mining often leads to better performance than either approach alone.
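The contrast drawn in the bullets above can be sketched in code: a hand-written rule set (the "text analytics" style) next to a tiny Naive Bayes classifier induced from hand-labelled examples (the "text mining" style). The rules, training sentences and labels are all invented for illustration.

```python
# Illustrative contrast between the two approaches on a toy sentiment task.
import math
from collections import Counter, defaultdict

def rule_based(text):
    """'Text analytics' style: hand-written linguistic rules, high precision."""
    if "refund" in text.lower() or "broken" in text.lower():
        return "negative"
    if "love" in text.lower() or "great" in text.lower():
        return "positive"
    return "unknown"          # rules are brittle outside what the expert wrote

class NaiveBayes:
    """'Text mining' style: a model induced from hand-labelled training data."""
    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)
        self.label_totals = Counter(labels)
        for doc, label in zip(docs, labels):
            self.counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, doc):
        def log_prob(label):
            total = sum(self.counts[label].values())
            score = math.log(self.label_totals[label])
            for w in doc.lower().split():
                score += math.log((self.counts[label][w] + 1) /
                                  (total + len(self.vocab)))   # Laplace smoothing
            return score
        return max(self.label_totals, key=log_prob)

train = ["great phone, love it", "love the screen", "broken on arrival",
         "want a refund now", "great value", "refund please, broken charger"]
labels = ["positive", "positive", "negative", "negative", "positive", "negative"]
model = NaiveBayes().fit(train, labels)

print(rule_based("I love it"))                  # rule fires: positive
print(rule_based("utterly disappointing"))      # no rule fires: unknown
print(model.predict("great screen, love it"))   # learned from data: positive
```

The rule set answers only what its author anticipated, while the learned model generalizes from its labelled examples; combining the two, as the last bullet suggests, is a common practical compromise.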
2. What technologies were used in building Watson (both hardware and software)?
Watson is an extraordinary computer system (a novel combination of advanced hardware and software) designed to answer questions posed in natural human language. It is an artificially intelligent computer system developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci, and named after IBM's first CEO, industrialist Thomas J. Watson. The system was specifically developed to answer questions on the quiz show Jeopardy! In 2011, Watson competed on Jeopardy! against former winners Brad Rutter and Ken Jennings and received the first prize of $1 million.
The goal was to advance computer science by exploring new ways for computer technology to affect science, business, and society. IBM undertook the challenge of building a computer system that could compete at the human-champion level in real time on the American TV quiz show Jeopardy! The extent of the challenge in ...
USING MACHINE LEARNING TO BUILD A SEMI-INTELLIGENT BOT - ecij
Nowadays, real-time systems and intelligent systems offer more and more control interfaces based on voice recognition or human-language recognition. Robots and drones will soon be mainly controlled by voice, and other robots will integrate bots to interact with their users; this can be useful both in industry and in entertainment. At first, researchers concentrated on "ontology reasoning". Given all the technical constraints brought by the treatment of ontologies, an interesting solution has emerged in recent years: the construction of a model based on machine learning to connect human language to a knowledge base (based, for example, on RDF). We present in this paper our contribution to building a bot that could be used on real-time systems and drones/robots, using recent machine learning technologies.
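A minimal sketch of the idea in this abstract (not the paper's actual system): a bot that maps a natural-language command onto a small RDF-style triple store. The triples, intent cue words and commands below are invented; in a real system a trained classifier would replace the keyword scoring.

```python
# Map an utterance to an intent, then answer it from an RDF-style triple store.
TRIPLES = [                       # (subject, predicate, object)
    ("drone1", "hasStatus", "airborne"),
    ("drone1", "hasBattery", "74%"),
    ("robot2", "hasStatus", "charging"),
]

# Each intent is scored by how many of its cue words appear in the utterance.
INTENTS = {
    "query_status":  {"cues": {"status", "doing", "state"}, "predicate": "hasStatus"},
    "query_battery": {"cues": {"battery", "charge", "power"}, "predicate": "hasBattery"},
}

def answer(utterance):
    words = set(utterance.lower().replace("?", "").split())
    intent = max(INTENTS, key=lambda i: len(INTENTS[i]["cues"] & words))
    subject = next((s for s, _, _ in TRIPLES if s in words), None)
    for s, p, o in TRIPLES:
        if s == subject and p == INTENTS[intent]["predicate"]:
            return f"{s} {p} {o}"
    return "no answer found"

print(answer("what is the battery of drone1?"))   # -> drone1 hasBattery 74%
print(answer("robot2 status"))                    # -> robot2 hasStatus charging
```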
IUI 2010: An Informal Summary of the International Conference on Intelligent ... - J S
Highlights from the main track, poster/demo session & the VISSW/UDISW/EGIHMI workshops. This is an informal compilation of personal notes from the conference & proceedings, Twitter (#iui2010), Ian Ozsvald's blog (http://ianozsvald.com/), and other sources. Coherent citations were not possible, so I chose to stick with links instead. Please let me know if you'd like to see your work more thoroughly referenced.
The 2011 IEEE/WIC/ACM International Conference on Web Intelligence » industry... - Francois Pouilloux
The industry day of the conference aims to bring together people from both academia and industry in a venue that highlights application and practical impact.
I'm pleased to present there on August 22nd 2011.
Stay tuned for the prez file after the event!
Knowledge Management Cultures: A Comparison of Engineering and Cultural Scien... - Ralf Klamma
This work in progress presents an approach to comparing patterns of communication and knowledge organization in cultural-science and engineering-science projects, with media use as the leading point of view. The goal of the underlying project is to gain a better understanding of similarities and differences between the two areas and to develop more appropriate information system support for both. Central to the comparative analysis approach is a process knowledge repository which was successfully used in two case studies of real-world information systems.
Semi-automatic Text Mining
Project Proposal for "Future and Emerging Technologies" in the EU-IST Programme

S. Staab, R. Studer (Karlsruhe University)
K. Markert, B. Webber (University of Edinburgh)
N. Kushmerick (University College Dublin)
B. Bremdal, R. Engels (CognIT a.s)

http://www.aifb.uni-karlsruhe.de/~sst/Research/Projects/TextMining/
1 Abstract
Motivation: The revolutionary step from printed text to digital documents has led to an explosive growth of the knowledge available (semi-)publicly through the internet or through community and corporate intranets. With this flood of potentially useful information comes the urgent need to sift through it, find the golden nuggets of information and analyze them for making informed decisions.
Problem: The vision in text understanding has been that of fully automatic techniques that may be exploited for purposes like detecting relevant information in texts, summarizing that information, or answering questions about texts. Nevertheless, fully automatic text mining appears to be as distant as ever. Approaches that actually work rely almost exclusively on information retrieval techniques, hardly exploit the fast progress in computational linguistics research, and thus exhibit well-known limitations that lead to inconclusive summarizations or to the abundance of hits in search engines like AltaVista. In addition, the connotation of text mining (the aggregation and analysis of information into a piece of knowledge that may lead to an informed action) has hardly been investigated so far.
Objectives: Our project proposal pursues a threefold objective. First, we want to bridge the gap between the techniques that are actually used for text mining and current and upcoming progress in the fields of knowledge acquisition, computational linguistics, information retrieval, information extraction and machine learning.
Second, we want to exploit the particularities found in current web documents. This implies that we need to consider new web standards for document structuring, viz. XML, and we must consider semi-structured information such as that given through layout, in tables or in lists.
Finally, we want to go beyond information extraction towards text information exploitation. This means we want to combine extracted information in order to deduce knowledge that may not have been in the minds of the authors of the texts.
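The second objective, exploiting document structure such as tables, can be illustrated with a short sketch. The XML document and element names below are invented; the point is only that markup lets an extractor use the header row to interpret the data cells instead of treating the text as a flat token stream.

```python
# Extract a table from an XML document, using its structure rather than
# flat text: the header row supplies field names for the data rows.
import xml.etree.ElementTree as ET

DOC = """
<report year="1999">
  <table name="revenue">
    <row><cell>Division</cell><cell>Revenue</cell></row>
    <row><cell>Mobile</cell><cell>120</cell></row>
    <row><cell>Fixed line</cell><cell>95</cell></row>
  </table>
</report>
"""

def extract_table(xml_text, table_name):
    """Return the named table as a list of dicts, using the header row as keys."""
    root = ET.fromstring(xml_text)
    table = root.find(f".//table[@name='{table_name}']")
    rows = [[c.text for c in row.findall("cell")] for row in table.findall("row")]
    header, *body = rows
    return [dict(zip(header, row)) for row in body]

for record in extract_table(DOC, "revenue"):
    print(record)
```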
Method: We consider text mining a semi-automatic process that is designed and set up with a particular application in mind. The design involves the construction of a domain ontology, the formulation and/or learning of interesting structures with computational linguistics and/or information retrieval techniques, and the exploration of the corresponding results. Once the domain-specific text mining application is set up, the naive user may run it to extract information and, in particular, to find associations and rules that were not present in the original texts but that can only be found by considering, integrating and comparing various text sources.

(Contact: Steffen Staab, AIFB, Karlsruhe University, D-76128 Karlsruhe, email: staab@aifb.uni-karlsruhe.de, Tel.: +49 (0)721 608 7363, Fax: +49 (0)721 693717)
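The core of this process, mapping surface terms through a hand-built domain ontology and then mining associations across several text sources, might be sketched as follows. The ontology and text snippets are invented for illustration; a real system would of course use far richer linguistic analysis.

```python
# Map terms to ontology concepts, then count concept co-occurrences
# across a collection of texts rather than within a single one.
from itertools import combinations
from collections import Counter

ONTOLOGY = {                       # surface term -> domain concept
    "merger": "RESTRUCTURING", "acquisition": "RESTRUCTURING",
    "handset": "PRODUCT", "phone": "PRODUCT",
    "revenue": "FINANCE", "profit": "FINANCE",
}

TEXTS = [
    "the merger boosted handset revenue",
    "profit grew after the acquisition",
    "new phone lines lifted revenue",
]

def concept_pairs(texts, ontology):
    """Count co-occurring concept pairs across all texts."""
    counts = Counter()
    for text in texts:
        concepts = sorted({ontology[w] for w in text.split() if w in ontology})
        counts.update(combinations(concepts, 2))
    return counts

for pair, n in concept_pairs(TEXTS, ONTOLOGY).most_common():
    print(pair, n)
```

Because the counting runs over concepts rather than words, an association such as FINANCE with RESTRUCTURING emerges even though no single text uses the same surface terms.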
Scenario: As an interesting case study we choose the mining of annual business reports and analysts' reports that comment on companies from a particular area (e.g., telecommunications). This scenario is very appropriate because:
1. It allows the observation of competitors and the detection of trends that are extremely important for decision makers, such as trends in organizational structures or in markets and products.
2. The understanding of these texts cannot be performed in isolation. Rather, the knowledge that needs to be found lies mostly in the annual changes that take place and in the comparisons between companies in the same trade.
3. The setting is well enough observed and understood by professionals to allow the techniques we develop to be verified.
2 Chances for Europe
The chances and possibilities arising from an application of semi-automatic text mining appear on several levels:
1 Informed Decisions: Results from our project may deliver critical information to European businesses, keeping them competitive and able to react quickly to new trends and possibilities.
2 Individual Learning: The more time the individual may spend on understanding interconnections, and the less time she spends searching for information and testing hypotheses, the more she profits from the information technology that is at hand now.
3 Research: Though our scenario develops a particular business case, many research issues may profit from semi-automatic text mining, too. Indeed, research hypotheses may become easier to (pre-)test or even to generate (cf. Hearst (1999)).
All these factors are critical for developing the potential of Europe and of Europeans. Informed decisions, faster learning and improved research all work together in keeping Europe competitive.
4 Partner Profile
We consider text mining a knowledge acquisition process that should be facilitated by learning approaches and by techniques from information retrieval and computational linguistics. Hence, the consortium includes people from these different communities:
Prof. Dr. Studer holds a chair for knowledge management at Karlsruhe University. He has carried out research and organized numerous activities in the fields of knowledge acquisition, knowledge management and data mining for over 20 years.
Dr. Steffen Staab is a senior researcher and lecturer at Karlsruhe University. His research interests include knowledge management, ontology engineering, information extraction, and data mining. He is currently project manager for Karlsruhe in the project GETESS (http://www.getess.de), which aims at a specific information extraction system for the tourism domain and which is funded by the German government.
Prof. Dr. Bonnie Webber...
Dr. Katja Markert....
Dr. Nicholas Kushmerick is College Lecturer in the Department of Computer Science, University College Dublin, Ireland. Dr. Kushmerick received his Ph.D. in 1997 at the University of Washington, and his dissertation was nominated for the ACM Distinguished Dissertation award. Dr. Kushmerick has worked in the areas of planning, machine learning, and information extraction, integration, and retrieval. His work has been published in several international journals, and he has served on the organizing committees of numerous conferences and workshops. Dr. Kushmerick's current work focuses on the use of machine learning to scale up knowledge engineering on the Internet, in service of problems such as information extraction and designing intelligent browsing assistants.
Dr. Bernt Arild Bremdal studied Marine Technology in Trondheim, Norway. After finishing his MSc at the NTNU he wrote his PhD thesis at the same university, receiving his PhD in 1988 on the application of artificial intelligence, rule-based and object-oriented programming in project planning. After having been affiliated with a variety of companies, he co-founded and now directs CognIT a.s. He is the author of more than 50 articles and published reports on computer applications in engineering and industry, design and planning, object-oriented technology and artificial intelligence. His most recent publication is Braunschweig and Bremdal, "AI in the Petroleum Industry", Volume 2, Edition Technip, 1996.
Dr. Robert Engels studied Artificial Intelligence, Psychology and (in part) Computer Science at the University of Amsterdam, NL. He conducted his MSc thesis on applications of Inductive Logic Programming in Stockholm, Sweden. In 1999 he received his PhD from the University of Karlsruhe for research conducted in the area of Knowledge Discovery and Data Mining. He has (co-)authored a variety of papers and organised several international and national (German) workshops on practical applications of Data Mining. He is currently affiliated with CognIT as a senior systems architect.
The work packages would be split along the following lines, across the fields of Knowledge Acquisition, Computational Linguistics, Machine Learning and Information Retrieval:

Univ. Karlsruhe: Ontology acquisition; Mining Information
Univ. Edinburgh: Information Extraction with Layout
Univ. College Dublin: Wrappers with Ontologies; Mining Information; Indexing and querying structured documents
CognIT: Ontology induction; Understanding XML Texts
5 Partner Addresses

Dr. Steffen Staab, Prof. Dr. Rudi Studer
Institute for Applied Computer Science and Formal Description Methods (AIFB),
Karlsruhe University, D-76128 Karlsruhe, Germany
http://www.aifb.uni-karlsruhe.de/WBS
Email: staab@aifb.uni-karlsruhe.de, studer@aifb.uni-karlsruhe.de

Dr. Katja Markert, Prof. Dr. Bonnie Webber
Division of Informatics, University of Edinburgh, 80 South Bridge,
Edinburgh EH1 1HN, Scotland
http://www.informatics.ed.ac.uk/research/irr/
Email: markert@cogsci.ed.ac.uk, bonnie@dai.ed.ac.uk

Dr. Nicholas Kushmerick
Department of Computer Science, University College Dublin, Dublin 4, Ireland
http://www.cs.ucd.ie/staff/nick/
Email: nick@ucd.ie

Dr. Robert Engels, Dr. Bernt Bremdal
CognIT a.s, P.B. 610, N-1754 Halden, Norway
http://www.cognit.no/
Email: robert.engels@cognit.no, bernt.bremdal@cognit.no