Knowledge representation: structured or unstructured?
1. KNOWLEDGE REPRESENTATION:
DESTRUCTURING
THE
STRUCTURED vs NON-STRUCTURED
DEBATE
Jean Rohmer
ESILV Paris
jean.rohmer@devinci.fr
Presented at ECAI 2012 Montpellier
Workshop on AI and KM
My personal background in CS, AI and KM
Started Computer Science 45 years ago
Started AI 32 years ago
Started KM 24 years ago
Management of Bull CEDIAG team
IDELIANCE Semantic Tool (1993)
Many Military Intelligence Applications Data + Text + Semantics
Blog: "PLEXUS LOGOS CALX"
See also SLIDESHARE Jean Rohmer
Progress in KR is slow.
Mesopotamia 5500 years ago:
Mesopotamia in the 21st century: still the Stone Age:
2. AI and KM: once a Love Story
In the late 80's a love story between AI and KM
Their wedding rings: Knowledge Representation and Inference
Importance of KRL languages, KADS modelling: Open Kads tool (1991)
Early 90's: economic crisis: the AI + KM couple almost starving
AI and KM were young, promising, but still immature
KM alone could earn some living in large corporations
The Web arrived and seduced KM
AI was left alone
3. <<< Tim Berners-Lee's paper proposing the Web was rejected at the 1991 ACM Hypertext Conference >>>
Hypertext was very close to KM
Catastrophe
2012: Large scientific Agencies manage all their projects with EXCEL
2012: Many Engineering Schools have no real information systems
2012: the ECAI program and proceedings are available only as PDF, without any tool for knowledge organization
2012: they swapped my last name and first name in SOME ECAI registration files
AI and KM are alone
AI lives with Automatic Learning Algorithms
KM flirts with wikis, blogs, social networks
The main tool for AI is SVM algorithm (sort of joke)
The main tool for KM is EXCEL + POWERPOINT (not a joke)
There is no paper on KR at ECAI 2012
Denial: "AI is hidden everywhere"
Laurence Danlos (NLP guru):
4. "We failed to make machines adapt to humans; we humans have learnt how to use windows
and menus"
History
In the early 80's, AI languages (LISP, PROLOG, KRL, Constraints later) were seen as the
promise of a revolution in programming computers: declarative programming
1982: Alain COLMERAUER declares that PROLOG is designed to replace COBOL
European Esprit programme: 1982: KIMS project "Knowledge and Information Management
System"
Earlier: Alan Turing tried to get funds from the UK government to build a sort of LISP MACHINE
Earlier: Leibniz and Descartes proposed universal knowledge representation and reasoning
languages.
PROJECT OF A COMPUTABLE UNIVERSAL LANGUAGE
INCLUDING UNIVERSAL ONTOLOGIES
WITH "COMBINATORIAL" MECHANISMS
DESCARTES:
"to establish an order among all the thoughts, ... just as there is one established among the numbers"
"this language would aid judgment, representing things to it so distinctly that it would be almost impossible for it to be mistaken"
"I hold that this language is possible ... but never hope to see it in use ... except in the Earthly Paradise ..."
LEIBNIZ:
"although this language depends on the true philosophy, it does not depend on its perfection"
"as the science of men grows, this language will grow too"
"then reasoning and calculating will be the same thing"
80's: Expert Systems with KNOWLEDGE ENGINEERS
1988 -1992: METAPEDIA project in SPAIN: a fully object-oriented encyclopaedia
5. 1990: Idea that future Corporate Information Systems would be Knowledge Based Systems
1991: MNEMOS EUREKA European project
1991 (Bull Cediag):
Corporate Intelligence = Corporate Memory + Corporate Decision + Corporate Visibility
PROLOG
6. In 2012 we celebrate the 40th anniversary of PROLOG
(Where is the cake ?)
Personal History
1984: “Alexander Method” (Foundation of Datalog / Deductive Databases)
For me, illuminated by Prolog , “Everything was logic predicates”
1990: Expert Systems were very successful
1990: Expert Systems demand much more intellectual energy than available
1993: Start developing IDELIANCE: a personal semantic networks manager for "everybody"
fr.slideshare.net/Jean_Rohmer/ideliance-semantic-network-2000
IDELIANCE: Personal Memory + “Intelligence Amplifier “
Mid 90's: sadness that AI languages disappear from education
2003: semantic networks are too complex a formalism for people; 99% reject them
2003: Idea of LITTERATUS CALCULUS: use plain natural language to represent knowledge
LITTERATUS CALCULUS:
express anything with "inferons": minimal and autonomous sentences in natural language
2001 +: Strong critique of Semantic Web à la W3C
Structured vs Unstructured
Unstructured is in fact HYPER-structured
Structured is in fact HYPO-structured
Natural Language is HYPER-structured
Natural language structures are so complex that we do not know how our brains master them
So-called structured information (databases, RDF triples) consists of trivial structures designed to match computer limitations
The whole problem of KR is that we are not able to write programs that understand natural language
Semantic networks are a good compromise between man and machine
7. Semantic networks were already used in the 16th century to represent complex information
Semantic networks are readable by humans if small enough
(Not billions of triples; leave that to NoSQL!)
Semantic networks are a 2D representation
A 2D representation avoids the variables used in formal logic
IDELIANCE semantic network editor: experience since 1993
Used by many non-CS professionals in large corporations
99% of people are reluctant to write semantic networks themselves
Use semantic networks with a Subject Verb Complement (SVC) paradigm
Let people use natural language to name S, V, C (never RDF, "Resources", URI ...)
8. Let people write "SVC on SVC" using a 4th ID field (NOT contexts, named graphs ...) (SVCI
format):
Please users, not standardization committees
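To make the SVC / SVCI idea concrete, here is a minimal sketch in Python. It is an illustration only, not the IDELIANCE implementation, and all names in it (Statement, SVCStore, say, about, find) are invented for the example: each statement is a plain Subject-Verb-Complement triple named in ordinary words, and the 4th ID field lets a statement itself be talked about, which is the "SVC on SVC" facility.

```python
# A minimal, illustrative SVC / SVCI statement store (not IDELIANCE).
# Every statement is Subject-Verb-Complement, named in plain natural
# language, plus a 4th ID field so a statement can itself be talked about.

from dataclasses import dataclass, field
from itertools import count

_next_id = count(1)

@dataclass(frozen=True)
class Statement:
    subject: str        # plain names, never "Resources" or URIs
    verb: str
    complement: str
    id: int = field(default_factory=lambda: next(_next_id))

class SVCStore:
    def __init__(self):
        self.statements = []

    def say(self, subject, verb, complement):
        statement = Statement(subject, verb, complement)
        self.statements.append(statement)
        return statement

    def about(self, statement, verb, complement):
        # "SVC on SVC": the ID field turns a statement into a subject
        return self.say(f"statement #{statement.id}", verb, complement)

    def find(self, subject=None, verb=None, complement=None):
        return [s for s in self.statements
                if (subject is None or s.subject == subject)
                and (verb is None or s.verb == verb)
                and (complement is None or s.complement == complement)]

kb = SVCStore()
s1 = kb.say("Ferdinand de Saussure", "wrote", "Cours de Linguistique Générale")
kb.about(s1, "noted by", "Jean")      # a statement about a statement
print(kb.find(verb="wrote"))
```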
Negative effects of the Web and Semantic Web on KR
Is the Semantic Web a bad joke?
SW 2001: "Machines understand and help Humans" (Scientific American Paper)
SW 2006: "A machine-to-machine Web of data"
SW 2011: Linked Data: "Humans help Machines"
SW 2016: ????
An endless loop / ping-pong of failures between manual and automatic, structured and unstructured
The notion of URI is just a physical addressing scheme without any natural support
The Web reinforces the notion of the (long) document
RDF has no "human face"
RDF is at best a low-level engineering and exchange format
Structured data publishing (DBpedia, Google) does not follow SW standards
9. Ontologies are too simplistic at the RDF level
Ontologies are too complex at the DL (Description Logics) level
What was difficult to solve in the 90's with powerful KR languages on limited problems cannot be solved in the 2010's with just Java and RDF at Web scale
What we have to do is install a good KR on the Internet, rethinking all the KM issues
The best (and only) KR available is natural language
Natural Language does not imply "Document"
Natural Language does not mean "non-structured"
Representation 1
A good KRL should be enjoyed by people
People should themselves write, query, and compute with their KRL
Example of a personal objective: take my reading notes directly in a KRL
Parable of the ship inside the bottle:
Knowledge must be cut into small, articulated parts
Example of a personal objective:
Summarize Ferdinand de Saussure's "Cours de Linguistique Générale" with my KRL
Tools are important! Never say "This is just a tool".
Intelligence is just a tool ... ????
Natural Language is just a tool ... ???
Many people say "The computer is just a tool" AND "Computers will change everything" …
Theory
Theory of the two black holes
10. Man-machine compromise schema
A good KR should be targeted at killing applications (an App-Killer, not a Killer App!)
Applications hide all knowledge:
they present users with a closed, limited, repressive view of the world
Replace applications with the way people will interact and compute with knowledge
A good KR should be targeted at killing the Document paradigm
The Document paradigm is a concept imposed by the technology of the "volumen" and the "codex" more than 2000 years ago
A good KR should aim at revolutionizing the Web (what else?)
Representation 2
People should enjoy using KR directly themselves
People should write KR instead of writing documents
Computations on KR done directly by users should replace applications
exactly as EXCEL does with numeric data
11. KR should be the backbone of "Semantic EXCEL" and "Semantic PowerPoint"
Collective KM fails if it is not grounded in personal KM, through a personal, intensive effort
to write, read, retrieve, combine, compute knowledge with a good KR
We must invent new ways of browsing, editing, computing on knowledge.
Examples of new computations:
"In between", "novelty detection", "how to", "what looks like" , "online graph mining"...
How to proceed towards a good KR?
Issue: what else do we have, besides progress in KR, to improve information systems?
We must abandon the paradigm of PRO-GRAMMING
PRO-GRAMMING means “WRITTEN IN ADVANCE”
We must practice IM-PRO-GRAMMING
IM-PRO-GRAMMING means IM-PRO-VE
IM-PRO-GRAMMING means IM-PRO-VISE
IM-PRO-GRAMMING needs the appropriate KR paradigm
LITTERATUS CALCULUS
The only thing you put in a computer is sentences in natural language
12. INFERON: minimal and autonomous sentence
Every piece of information is an INFERON
There are no entities
Entities emerge from sentences
Instead of “Sentences are built from entities”
Many tools to manage inferons: editing, browsing, query, inference, ...
My personal KB today has 70 000 inferons
A first version of a Litteratus Calculus tool is being implemented (since 2003 …)
Current work: how to install INFERONS on the Internet?
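To close, a minimal sketch of the inferon idea in Python, written as an assumption of how it could look in code rather than a description of the actual Litteratus Calculus tool: the only stored objects are whole natural-language sentences, entities are never declared but emerge as terms recurring across several inferons, and simple word queries retrieve inferons.

```python
# A minimal sketch of the inferon idea (not the Litteratus Calculus tool).
# The only stored objects are whole natural-language sentences (inferons);
# entities are never declared, they emerge as terms shared by several inferons.

import re
from collections import defaultdict

class InferonBase:
    def __init__(self):
        self.inferons = []              # the sentences themselves
        self.index = defaultdict(set)   # word -> ids of inferons that use it

    def add(self, sentence):
        idx = len(self.inferons)
        self.inferons.append(sentence)
        for word in re.findall(r"\w+", sentence.lower()):
            self.index[word].add(idx)
        return idx

    def emerging_entities(self, min_occurrences=2):
        # "Entities emerge from sentences": terms shared by several inferons
        # (very short words are crudely filtered out to skip "in", "of", ...)
        return sorted(word for word, ids in self.index.items()
                      if len(ids) >= min_occurrences and len(word) > 2)

    def query(self, *words):
        # All inferons mentioning every given word
        ids = set.intersection(*(self.index[w.lower()] for w in words))
        return [self.inferons[i] for i in sorted(ids)]

kb = InferonBase()
kb.add("Prolog was designed by Alain Colmerauer in Marseille.")
kb.add("Prolog celebrates its 40th anniversary in 2012.")
kb.add("Ideliance stores knowledge as semantic networks.")
print(kb.emerging_entities())            # 'prolog' recurs, so it emerges
print(kb.query("Prolog", "Marseille"))
```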