The document describes a method for learning relational grammars from sequences of actions performed by a user steering a robot. It induces grammars from sequences of state-action pairs by identifying repeated n-grams and generalizing rules. Experiments applying the method to navigation tasks and gesture recognition are discussed.
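The repeated-n-gram step lends itself to a small sketch. Below is a minimal, hypothetical illustration (not the paper's actual algorithm) of finding n-grams that recur in a sequence of robot actions, the kind of repetition the grammar induction generalizes into rules:

```python
from collections import Counter

def repeated_ngrams(sequence, n):
    """Return the n-grams that occur more than once in the sequence."""
    grams = Counter(tuple(sequence[i:i + n])
                    for i in range(len(sequence) - n + 1))
    return {g: c for g, c in grams.items() if c > 1}

# A toy action sequence from steering a robot.
actions = ["forward", "left", "forward", "left", "forward", "stop"]
print(repeated_ngrams(actions, 2))
```

Here the bigrams ("forward", "left") and ("left", "forward") repeat, so they would be candidates for generalization into a rule.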
The document discusses the history of programming and attempts to address issues of modularity and scale. It describes how programming evolved from individual statements to procedures to classes but each new concept introduced new issues. Modules and packages were introduced to group classes but were not fully satisfactory. The document proposes that object teams unify classes and packages by treating a team as both a class and package. This allows nesting modules at any level of scale to better support modular composition of systems from small building blocks.
The document discusses key concepts related to memory models in C#, including:
1. The compilation process involves lexical analysis, parsing, semantic analysis, optimization, and code generation.
2. Value types are stored on the stack while reference types are stored on the heap.
3. The garbage collector performs memory management by freeing up unused memory on the heap.
A Logic Meta-Programming Foundation for Example-Driven Pattern Detection in O... - Coen De Roover
Presentation at the Postdoctoral symposium of the 2011 International Conference on Software Maintenance, accompanying the paper
http://soft.vub.ac.be/Publications/2011/vub-soft-tr-11-11.pdf
A Recommender System for Refining Ekeko/X Transformations - Coen De Roover
This document discusses an automated recommender system for refining Ekeko/X transformations. It begins by introducing logic meta-programming and how it allows querying a "database" of program information using logic relations. Templates with meta-variables and directives are used to specify transformations, and formal operators define ways to mutate templates. A genetic search evaluates templates based on precision, recall, partial matches, and directive usage to recommend refinements for better specifying transformations.
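The template evaluation can be sketched as a simple F1-style fitness, balancing precision against recall; the function and set names below are illustrative stand-ins, not part of the Ekeko/X API:

```python
def fitness(matched, relevant):
    """Score a template by F1 over the program elements it matches.

    `matched` is the set of elements the template currently matches;
    `relevant` is the set it should match. Both are hypothetical
    inputs for illustration."""
    if not matched or not relevant:
        return 0.0
    tp = len(matched & relevant)           # true positives
    precision = tp / len(matched)
    recall = tp / len(relevant)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(fitness({"a", "b", "c"}, {"b", "c", "d"}))
```

A genetic search would mutate templates with the formal operators and keep the mutations that raise this score.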
This document provides an overview of BERT (Bidirectional Encoder Representations from Transformers) and how it works. It discusses BERT's architecture, which uses a Transformer encoder with no explicit decoder. BERT is pretrained using two tasks: masked language modeling and next sentence prediction. During fine-tuning, the pretrained BERT model is adapted to downstream NLP tasks through an additional output layer. The document outlines BERT's code implementation and provides examples of importing pretrained BERT models and fine-tuning them on various tasks.
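The masked language modeling pretraining task can be illustrated with a small sketch. This is a simplified, hypothetical version of the data preparation (real BERT masks about 15% of tokens and sometimes substitutes random tokens instead of [MASK]):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace tokens with [MASK]; return the masked sequence
    plus the positions and original tokens the model must predict."""
    rng = random.Random(seed)
    masked, labels = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels[i] = tok          # the model's prediction target
        else:
            masked.append(tok)
    return masked, labels

masked, labels = mask_tokens("the quick brown fox jumps".split(),
                             mask_prob=0.3)
print(masked, labels)
```

During pretraining the model sees `masked` and is trained to recover `labels`; fine-tuning then swaps the masked-LM head for a task-specific output layer.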
Basis for comparison programming languages - Abdo ELhais
This document compares several popular programming languages across various features and qualities. It discusses object orientation, static vs dynamic typing, generics, inheritance, renaming, overloading, operator overloading, higher order functions, garbage collection, uniform access, class variables/methods, reflection, access control, design by contract, multithreading, regular expressions, pointer arithmetic, language integration, and built-in security. The languages compared are Java, C#, C++, Python, and Visual Basic.
This document provides an introduction and background to a study on grammatical interference from Indonesian in the English writing of students in Indonesia. It discusses that Indonesian students face difficulties with English grammar due to differences between the languages. The study aims to identify the types of grammatical errors caused by interference from students' first language. It intends to help students, teachers, and future researchers understand the challenges students face and improve English teaching and learning. The scope is limited to analyzing writing samples from one class to identify grammatical structures affected by Indonesian interference.
The document discusses representational and denotational approaches to semantic analysis. Representational approaches emphasize discovering the conceptual structures underlying language, while denotational approaches emphasize the link between language and external reality. There are debates around whether meaning is constructed from smaller meaning components or depends on relations between words. The document also discusses different views on conceptualization, linguistic codification, and argument structure in language.
The Copenhagen School of Linguistics, also known as the Linguistic Circle of Copenhagen, was founded by Louis Hjelmslev and Viggo Brøndal in the mid-20th century. It developed Hjelmslev's theory of glossematics, which aimed to further structuralism by analyzing language as a formal system. The school was influential in linguistic structuralism along with the Geneva and Prague Schools. It studied the formal properties of language and their interrelations through successive works published by its members. In 1989, a new generation of linguists inspired by cognitive linguistics founded the School of Danish Functional Grammar to continue the Copenhagen School's tradition.
The Edge of Linguistics lecture series by Prof. Frederick J. Newmeyer
From Oct 7 to Oct 17, Prof. Newmeyer offered a lecture series on a wide range of linguistic topics at Beijing Language and Culture University.
Lecture 1: The Chomskyan Revolution
Lecture 2: Constraining the Theory
Lecture 3: The Boundary between Syntax and Semantics
Lecture 4: The Boundary between Competence and Performance
Lecture 5: Can One Language Be ‘More Complex’ Than Another?
Background:
Frederick J. Newmeyer is Professor Emeritus of Linguistics at the University of Washington and adjunct professor in the University of British Columbia Department of Linguistics and the Simon Fraser University Department of Linguistics. He has published widely in theoretical and English syntax.
1) The document discusses similarities between three linguistic schools: Hjelmslev's Glossematics, Lamb's Stratificational-Cognitive Linguistics, and Halliday's Systemic-Functional Linguistics.
2) Key similarities include their basis in Saussure's dichotomies, aim for descriptive adequacy, status as post-Bloomfieldian, adoption of formal syntax, view of language as incorporating non-linguistic phenomena, and use of progressive deductive analysis.
3) All three schools represent language as a relational network organized into strata, with primary linguistic strata and sometimes peripheral extra-linguistic strata, and analyze the relationships between elements.
This document provides an introduction to the field of linguistics. It defines linguistics as the scientific study of language and discusses how it differs from traditional grammar in being descriptive rather than prescriptive. The document outlines the scope of linguistics, dividing it into micro- and macrolinguistics. Microlinguistics includes the study of phonetics, phonology, morphology, syntax, semantics and pragmatics. Macrolinguistics encompasses sociolinguistics, psycholinguistics, neurolinguistics and other fields. It also discusses the usefulness of linguistics for students of language, teachers and researchers.
1) The presentation discusses Caffeine, a tool for dynamic analysis of Java programs that uses Prolog predicates to model execution events and perform analyses.
2) Caffeine models execution events like field accesses, method calls, and class loads. Queries can be written in Prolog to analyze program behavior, like counting method calls.
3) The implementation has performance issues due to overhead from event generation, requiring instrumentation tricks. Analyzing complex relationships like composition is discussed.
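Caffeine expresses such analyses as Prolog queries; the same idea can be sketched in Python over a recorded trace. The event names and trace format below are hypothetical, used only to illustrate the "query the execution history" style:

```python
from collections import Counter

# A hypothetical execution trace: (event_kind, detail) pairs, standing in
# for the Prolog facts Caffeine derives from a running Java program.
trace = [
    ("method_call", "List.add"),
    ("field_access", "Node.next"),
    ("method_call", "List.add"),
    ("class_load", "java.util.LinkedList"),
    ("method_call", "List.size"),
]

def count_calls(trace):
    """Analogue of a Prolog query counting method-call events per method."""
    return Counter(detail for kind, detail in trace if kind == "method_call")

print(count_calls(trace))
```

The overhead the slide mentions comes from generating such events for every field access and call, which is why instrumentation tricks are needed in practice.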
LISP: How I Learned To Stop Worrying And Love Parentheses - Dominic Graefen
The document discusses functional programming and compares it to object-oriented programming. It provides a brief history of functional programming languages like Lisp from the 1950s and newer languages that have become popular more recently like Clojure, F# and Scala. It explains some key aspects of functional programming like higher-order functions, recursion, pure functions and using functions as values. It also discusses why functional programming has become more popular again recently, in part due to multi-core processors and a need for concurrency.
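The key aspects listed above can be shown in a few lines of Python, which supports the functional style even though it is not a pure functional language:

```python
from functools import reduce

# Pure function: output depends only on the input, no side effects.
def square(x):
    return x * x

# Higher-order function: takes functions and returns a new function.
def compose(f, g):
    return lambda x: f(g(x))

inc_then_square = compose(square, lambda x: x + 1)

print(inc_then_square(3))                      # square(3 + 1)
print(list(map(square, [1, 2, 3])))            # functions as values
print(reduce(lambda a, b: a + b, [1, 2, 3], 0))
```

Because `square` is pure, calls like `map(square, ...)` over independent elements can safely run concurrently, which is the multi-core argument the document makes.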
The document summarizes work on Caffeine, a tool for dynamic analysis of Java programs. It discusses:
1) The motivation for dynamic analysis to understand program behavior at runtime.
2) Models used in Caffeine including a trace model to record execution events and an execution model using Prolog predicates.
3) An example analysis counting the number of backtracks in a n-queens algorithm to illustrate using Caffeine.
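The backtrack-counting example can be reproduced with a small generic solver. The sketch below is not Caffeine itself: it counts one "backtrack" per rejected queen placement, which is only one reasonable definition, so the numbers need not match the original analysis:

```python
def solve_n_queens(n):
    """Return (solutions, backtracks) for the n-queens problem."""
    solutions = 0
    backtracks = 0
    cols = []                      # cols[r] = column of the queen in row r

    def safe(col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place():
        nonlocal solutions, backtracks
        if len(cols) == n:
            solutions += 1
            return
        for col in range(n):
            if safe(col):
                cols.append(col)
                place()
                cols.pop()
            else:
                backtracks += 1    # rejected placement counts as a backtrack

    place()
    return solutions, backtracks

print(solve_n_queens(6))
```

With Caffeine, the same count would instead be obtained by querying the recorded execution events of an existing solver, without modifying its source.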
"APE: Learning User's Habits to Automate Repetitive Tasks" - butest
The document describes an adaptive programming environment called APE that aims to apply machine learning techniques to automatically perform repetitive tasks on a user's behalf. APE learns a user's habits by observing their actions and uses this knowledge to suggest performing repetitive tasks when appropriate. It contains three software agents: an Observer that monitors user actions, an Apprentice that learns habits from these actions, and an Assistant that suggests tasks based on the learned habits. Existing machine learning algorithms are not well-suited for this task, so the document also describes a new algorithm called IDHYS that is designed specifically for learning user habits from small datasets quickly and with low error rates. Experimental results show IDHYS outperforms the state-of-the-art.
The document discusses strategies for mid-level robot control and learning. It proposes using a common language called teleo-reactive (T-R) programs to represent robot control programs that were explicitly programmed, planned, or learned. The document describes T-R programs and some preliminary experiments on learning T-R programs through reinforcement learning. It found that perceptual imperfections like noise and aliasing pose challenges for learning but that T-R programs can still be learned to achieve robot tasks. Further experimentation is needed to develop the approach.
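A T-R program is essentially an ordered list of condition-action rules: on every perception cycle, the first rule whose condition holds fires. The predicates and actions below are hypothetical, chosen only to show the structure:

```python
# A teleo-reactive (T-R) program as an ordered list of (condition, action)
# rules; each cycle, the first rule whose condition holds determines the
# action. Higher rules are closer to the goal.
def make_tr_program(rules):
    def step(state):
        for condition, action in rules:
            if condition(state):
                return action
        return "idle"
    return step

goto_door = make_tr_program([
    (lambda s: s["at_door"], "stop"),
    (lambda s: s["door_visible"], "move_forward"),
    (lambda s: True, "rotate"),            # default: search for the door
])

print(goto_door({"at_door": False, "door_visible": True}))
```

The noise and aliasing problems the document mentions arise because the conditions are evaluated on imperfect percepts, so the wrong rule can fire.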
This document summarizes a lecture on object-oriented programming and virtual functions in C++. It discusses:
1) The difference between static and dynamic binding, and how virtual functions allow for dynamic/late binding by performing function binding at runtime based on the object's type.
2) An example showing that without virtual functions, calling a function through a base class pointer results in the base class function being called rather than the derived class function.
3) How declaring a function as virtual in the base class allows the correct overridden function to be called in the derived class when called through a base class pointer, enabling polymorphism.
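The same dispatch behavior can be illustrated in Python, where all method calls are late-bound by default, i.e. every method behaves like a C++ `virtual` function. The class names below are illustrative, not from the lecture:

```python
class Shape:
    def area(self):
        return 0.0

    def describe(self):
        # self.area() is resolved at runtime from the object's actual
        # type -- the behavior C++ needs `virtual` to obtain.
        return f"area = {self.area()}"

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):                 # overrides Shape.area
        return self.side * self.side

shapes = [Shape(), Square(3)]
print([s.describe() for s in shapes])
```

Calling `describe` on a `Square` reaches the derived `area`, which is exactly the polymorphism that a non-virtual C++ function called through a base-class pointer would miss.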
This document discusses the rise of dynamic programming languages. It provides examples of popular dynamic languages like JavaScript, Ruby, Python, and Lisp. It outlines key characteristics of dynamic languages like being dynamically typed, late binding, interpretive, reflective, and having lightweight syntax. The document uses R as a case study to illustrate how dynamic languages can be functional, support powerful data structures and graphics, are embeddable and extensible through packages. It argues dynamic languages are widely used and growing in popularity due to being interactive, portable and failure oblivious.
[IROS2017] Online Spatial Concept and Lexical Acquisition with Simultaneous L... - Akira Taniguchi
○Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, and Tetsunari Inamura, "Online Spatial Concept and Lexical Acquisition with Simultaneous Localization and Mapping", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2017), 2017.
Video: https://youtu.be/hVKQCdbRQVM
A tutorial on deep learning at ICML 2013 - Philip Zheng
This document provides an overview of deep learning presented by Yann LeCun and Marc'Aurelio Ranzato at an ICML tutorial in 2013. It discusses how deep learning learns hierarchical representations through multiple stages of non-linear feature transformations, inspired by the hierarchical structure of the mammalian visual cortex. It also compares different types of deep learning architectures and training protocols.
A robot may need to use a tool to solve a complex problem. Currently, tool use must be pre-programmed by a human. However, this is a difficult task and can be helped if the robot is able to learn how to use a tool by itself. Most of the work in tool use learning by a robot is done using a feature-based representation. Despite many successful results, this representation is limited in the types of tools and tasks that can be handled. Furthermore, the complex relationship between a tool and other world objects cannot be captured easily. Relational learning methods have been proposed to overcome these weaknesses [1, 2]. However, they have only been evaluated in a sensor-less simulation to avoid the complexities and uncertainties of the real world. We present a real world implementation of a relational tool use learning system for a robot. In our experiment, a robot requires around ten examples to learn to use a hook-like tool to pull a cube from a narrow tube.
Golem-II+ is the latest service robot developed by the Golem Group. We design and construct domain-independent service robots. Our developments are based on a theory of Human-Robot Communication centered on the specification of protocols representing the structure of service robots' tasks, called Dialogue Models (DMs).
The document discusses industrial robot applications and programming. It describes how robots are used for material handling, assembly, processing and inspection operations that are hazardous, repetitive or difficult for humans. It then covers various types of material handling applications including pick and place, palletizing, machine loading/unloading and stacking operations. The document also discusses robot programming methods, languages, accuracy, repeatability and resolution.
Parameterizing and Assembling IR-based Solutions for SE Tasks using Genetic A... - Annibale Panichella
This document proposes using genetic algorithms to parameterize and assemble information retrieval (IR)-based solutions for software engineering tasks. The goal is to find an optimal configuration of IR parameters such as term extraction, stopword removal, weighting, and distance functions. The approach evaluates different IR configurations based on how well they cluster related documents, as measured by the silhouette coefficient. Empirical tests on traceability recovery and duplicate bug detection tasks show that this "GA-IR" approach finds configurations that are comparable to ideal ones and outperform baselines. The clustering hypothesis - that better document clusters correlate with better IR performance - is also supported.
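The silhouette coefficient used as the fitness signal can be computed directly from its definition: for each point, a is the mean distance to its own cluster and b the smallest mean distance to any other cluster, and the score is (b - a) / max(a, b). A minimal sketch over 1-D points with absolute-difference distance (not the GA-IR implementation, which works on document vectors):

```python
def silhouette(clusters):
    """Mean silhouette over all points. `clusters` is a list (length >= 2)
    of non-empty lists of 1-D points; distance is absolute difference."""
    scores = []
    for ci, cluster in enumerate(clusters):
        for pi, p in enumerate(cluster):
            own = [abs(p - q) for qi, q in enumerate(cluster) if qi != pi]
            a = sum(own) / len(own) if own else 0.0
            b = min(                       # nearest neighboring cluster
                sum(abs(p - q) for q in other) / len(other)
                for cj, other in enumerate(clusters) if cj != ci
            )
            scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)

# Two well-separated clusters score close to 1.
print(silhouette([[1.0, 2.0], [8.0, 9.0]]))
```

A configuration whose clusters score higher is taken as evidence of a better IR setup, per the clustering hypothesis.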
An Introduction to C# and .NET Framework
Topics covered include: recursive method calls (a method can call itself), the parts of a method (access specifier, return type, method name, parameter list, method body), recursion, arrays, inheritance, C++-style logic, and polymorphism.
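The recursive-method idea from the topic list can be shown in a few lines. The sketch below uses Python rather than C# for brevity; the structure (base case plus self-call) is the same in both languages:

```python
def factorial(n):
    """A method that calls itself; recursion stops at the base case."""
    if n <= 1:                         # base case
        return 1
    return n * factorial(n - 1)        # recursive call

print(factorial(5))
```

Without the base case the calls would never terminate, which is the first thing to check in any recursive method.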
Ideal for learning the basics of Java, and can also be used as a school project for classes 10 and 12. The document contains illustrative pictures and carefully selected information to build a strong foundation in Java basics.
Similar to Learning Relational Grammars from Sequences of Actions
This document presents a brief summary of the history of software engineering from the 1960s to the 1990s. It began with the use of large computers and languages such as Fortran and COBOL, then evolved toward more structured languages such as Algol and Pascal, and finally led to the development of object-oriented programming, graphical user interfaces, and free software.
Blanca Vargas Govea reflects on her 10 year journey pursuing a PhD degree and career in research. She started as a research assistant and became a PhD student, focusing her thesis on robotics. After receiving her PhD, she moved to new cities for postdoc research, taking on challenges in data analysis, recommender systems, and the semantic web. Along the way, she discovered a passion for teaching and dealing with the ups and downs of academic work while relying on family and friends for support. Looking to the future, she hopes to continue learning, teaching, conducting research, generating new ideas, writing, and enjoying life beyond her specialized field of work.
This document presents the learning of basic and hierarchical teleo-reactive programs (TRPs) for controlling a mobile robot. It describes the representation of the environment, the learning of basic TRPs through behavioural cloning for tasks such as navigation, and the learning of hierarchical TRPs that can include other TRPs. The experiments show the robot's ability to perform tasks such as navigation and gesture classification under the control of the learned TRPs.
This document introduces the use of decision trees and attribute selection in WEKA. It briefly explains the origin of WEKA and its main interfaces. It then describes how to build decision trees by recursively splitting the data on the attribute that provides the highest information gain. Finally, it introduces the importance of attribute selection for improving data quality and the performance of machine-learning algorithms.
This document describes rule learning for a context-aware recommender system. It presents the methodology used, including attribute selection, data preparation, and automatic rule learning using ILP. The evaluation shows that the contextual approach obtains the lowest performance, possibly due to rules that are too general or overfitted. Rules that capture the data without these problems are needed to improve the recommendations.
This document proposes applying semantic mining techniques to generate rules that improve recommender systems. It describes the current state of recommender systems and the limitations of existing approaches. It then presents a proposal to apply semantic mining to extract rules from user, service, and environment attributes that better capture user preferences. Finally, it shows examples of generated rules that consider factors such as occupation, age, and cultural interests.
Effects of relevant contextual features in the performance of a restaurant re... (Blanca Alicia Vargas Govea)
The document describes experiments conducted to evaluate the effects of relevant contextual features on the performance of a restaurant recommender system called Surfeous. Key findings include:
- Using a reduced subset of attributes (hours, days, accepts, cuisine) performed as well or better than using all attributes, indicating feature selection can improve efficiency.
- For recall, subsets generally outperformed a context-free approach, suggesting contextual attributes enrich recommendations.
- Fusion achieved similar precision and NDCG as the context-free approach, while rules alone provided lower performance.
This document describes the challenges and approaches for evaluating recommender systems. It explains that evaluation is important to show that the goal is met, to analyze shortcomings, and to enable comparisons. However, there are difficulties such as the variety of algorithms, the lack of a standard methodology, and the scarcity of test data. The document proposes new metrics, user-centered approaches, and the consideration of context attributes. Finally, it presents precision and recall plots to illustrate the methodology.
Effects of relevant contextual features in the performance of a restaurant re... (Blanca Alicia Vargas Govea)
The document describes a restaurant recommender system called Surfeous that incorporates contextual information to make recommendations. It evaluates the impact of different contextual features on recommendation performance. Key findings include:
1) Feature selection identified a minimum relevant subset of 5 attributes (cuisine, hours, days, accepts, address) that achieved similar or better precision, recall, and NDCG scores compared to using all 23 attributes.
2) Incorporating contextual rules to match user profiles improved recommendation performance over a context-free baseline.
3) The best performing subsets were D for precision, C for recall, and D and G for NDCG, demonstrating the value of selective contextual attributes.
This document describes the types of recommender systems, including content-based, collaborative, and hybrid filtering, and the challenges of evaluating these systems. It also discusses approaches to evaluation, including metrics for precision, coverage, and user satisfaction.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed.pdf (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Building RAG with self-deployed Milvus vector database and Snowpark Container... (Zilliz)
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
20 Comprehensive Checklist of Designing and Developing a Website (Pixlogix Infotech)
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
3. Introduction
Motivation
"Robot, go to the living room"
Increasing demand of service robots:
◮ a robot has to be programmed according to the task to be accomplished
◮ the programming skills needed to specify tasks are difficult to acquire and, usually, users are not programmers or robotics experts
◮ a possibility is that the user shows the robot what to do
A mobile robot that learns to navigate and to identify gestures
(Robotics Lab INAOE, Mexico) CIARP 09 November 18, 2009 3 / 23
4. Introduction
Motivation
Sequences are used to describe different problems in many fields.
◮ The robot can learn from sequences of actions generated by the user.
◮ In this work the idea is to learn grammars from sequences of actions that can be used to execute tasks or as classifiers.
[Figure: a navigation trace toward a goal, avoiding a dynamic obstacle, with the actions turn-right, turn-left, orient, go-forward, and leave-a-trap]
Grammars are used as control programs and as classifiers.
5. Grammar learning
Grammar learning
FOSeq (First Order Sequence learning)
Input: a set of sequences of first order state-action pairs. FOSeq:
1. Induces a grammar for each sequence, identifying repeated elements (n-grams, i.e., sub-sequences of n items; in our case, n predicates).
2. Evaluates the sequences with each learned grammar.
3. Selects the best evaluated grammar and a list of candidate grammars for improving the evaluation of the best grammar.
4. Applies a generalization process to the best grammar.
Output: a grammar that can be applied as a controller or as a classifier.
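Once learned, a grammar can be run "forward" to reproduce a sequence by recursively expanding its non-terminals. A minimal Python sketch (illustrative only; the deck itself executes grammars as Prolog DCGs, and the rule names below are a small hand-made example):

```python
def expand(symbols, rules):
    """Recursively expand non-terminals until only terminals remain."""
    out = []
    for s in symbols:
        if s in rules:
            out.extend(expand(rules[s], rules))  # non-terminal: expand its body
        else:
            out.append(s)                        # terminal: keep as-is
    return out

# Hand-made grammar: R1 -> b c, R2 -> a R1 b, start rule R2 d b e R1
rules = {"R1": ["b", "c"], "R2": ["a", "R1", "b"]}
print(expand(["R2", "d", "b", "e", "R1"], rules))
# ['a', 'b', 'c', 'b', 'd', 'b', 'e', 'b', 'c']
```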
6. Grammar learning
Input
The user steers the robot to execute the task. Raw data from the sensors are transformed into predicates.
The input is a sequence of first order state-action pairs like the following:
pred1(State1,action1), pred2(State2,action1), pred1(State3,action2), ...
7. Grammar learning Induction
Induction
◮ The algorithm looks for n-grams that appear at least twice in the sequence.
◮ The candidate n-grams are searched incrementally by length.
◮ The n-gram with the highest frequency in each iteration is selected, generating a new grammar rule and replacing all occurrences of the n-gram in the sequence with a new non-terminal symbol.
8. Grammar learning Evaluation
Induction: example
S → a b c b c b c b a b c b d b e b c

Iteration 1: most frequent n-gram: b c (5 occurrences)
Add: R1 → b c
S1 → a R1 R1 R1 b a R1 b d b e R1
Removing repeated items: S1 → a R1 b a R1 b d b e R1

Iteration 2: repeated n-grams: R1 b (2), a R1 (2), a R1 b (2)
Add: R2 → a R1 b
S2 → R2 R2 d b e R1
Removing repeated items: S2 → R2 d b e R1

Resulting grammar: S2 → R2 d b e R1, R2 → a R1 b, R1 → b c
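The induction loop can be sketched in Python as follows. This is an illustrative reading of the algorithm, not the authors' implementation: it assumes non-overlapping n-gram counting, prefers higher frequency and then longer n-grams when frequencies tie, and collapses consecutive duplicates after each replacement ("removing repeated items"):

```python
def count_nonoverlapping(seq, gram):
    """Count left-to-right non-overlapping occurrences of gram in seq."""
    n, count, i = len(gram), 0, 0
    while i <= len(seq) - n:
        if tuple(seq[i:i + n]) == gram:
            count += 1
            i += n
        else:
            i += 1
    return count

def collapse_repeats(seq):
    """Drop consecutive duplicates ("removing repeated items")."""
    out = []
    for tok in seq:
        if not out or out[-1] != tok:
            out.append(tok)
    return out

def induce_grammar(seq):
    """Repeatedly replace the best repeated n-gram with a fresh rule."""
    seq, rules, k = list(seq), {}, 0
    while True:
        best = None  # ((count, length), gram) -- max frequency, then length
        for n in range(2, len(seq) // 2 + 1):
            for i in range(len(seq) - n + 1):
                gram = tuple(seq[i:i + n])
                c = count_nonoverlapping(seq, gram)
                if c >= 2 and (best is None or (c, n) > best[0]):
                    best = ((c, n), gram)
        if best is None:          # no n-gram repeats: induction is done
            break
        k += 1
        name, gram = "R%d" % k, best[1]
        rules[name] = list(gram)
        # Replace every non-overlapping occurrence with the new non-terminal.
        new, i, n = [], 0, len(gram)
        while i < len(seq):
            if tuple(seq[i:i + n]) == gram:
                new.append(name)
                i += n
            else:
                new.append(seq[i])
                i += 1
        seq = collapse_repeats(new)
    return rules, seq

rules, start = induce_grammar("a b c b c b c b a b c b d b e b c".split())
# rules: {'R1': ['b', 'c'], 'R2': ['a', 'R1', 'b']}
# start: ['R2', 'd', 'b', 'e', 'R1']
```

On the example sequence this recovers exactly R1 → b c, R2 → a R1 b, and the start rule R2 d b e R1 shown above.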
9. Grammar learning Evaluation
Induction: predicates
When the items of the sequence are first order predicates, the learned
grammar is a definite clause grammar (DCG). DCGs:
◮ are an extension of context free grammars,
◮ can have arguments, and
◮ are expressed and executed in Prolog.
10. Grammar learning Evaluation
Evaluation
◮ Every learned grammar is used to parse all the sequences in the set of traces provided by the user.
◮ For each parsed sequence, each grammar is evaluated using the following function:

eval(G_i) = Σ_{j=1}^{n} c_j / (c_j + f_j)    (1)

where G_i is the grammar being evaluated, and c_j and f_j are the numbers of items that the grammar is able or unable to parse, respectively.
◮ The best evaluated grammar is selected.
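Equation (1) is straightforward to compute from per-sequence parse counts. A minimal sketch; the only assumption is that the counts arrive as (c_j, f_j) pairs, one per parsed sequence:

```python
def eval_grammar(parse_counts):
    """Eq. (1): sum over sequences j of c_j / (c_j + f_j), where c_j and
    f_j are the items the grammar can and cannot parse, respectively."""
    return sum(c / (c + f) for c, f in parse_counts)

# A grammar that parses sequence 1 fully (4/4) and sequence 2 partially (3/4):
print(eval_grammar([(4, 0), (3, 1)]))  # 1.75
```

A grammar that fully parses all n sequences scores n, so higher is better.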
11. Grammar learning Generalization
Generalization
The key idea is to obtain a new grammar that improves the coverage of the best grammar.
1. Select the candidate grammar that provides the largest number of different instantiations of predicates.
2. Compute the lgg* between the best grammar and the selected candidate.
3. Accept the new grammar rule if it improves the original coverage; otherwise, discard it.
4. The process continues until a coverage threshold is reached or generalization no longer yields an improvement.
*Least general generalization (lgg) [Plotkin, 1970]
12. Grammar learning Generalization
lgg example
c1 = pred(State,action1) ←
     cond1(State,action1),
     cond2(State,action2).

c2 = pred(State,action3) ←
     cond1(State,action1),
     cond2(State,action3).

Computing lgg(c1,c2) yields the clause:

pred(State,Action) ←
     cond1(State,action1),
     cond2(State,Action).

where the constants action2 and action3 of predicate cond2 are replaced by the variable Action.
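A minimal sketch of the term-level generalization, assuming flat atoms represented as tuples (functor followed by its arguments). This is not Plotkin's full clause-level lgg, only its atom-level core; the key invariant is that the same pair of differing terms is always mapped to the same fresh variable:

```python
def lgg_atom(a1, a2, subst):
    """Least general generalization of two atoms with the same predicate
    symbol and arity. `subst` maps each pair of differing terms to the
    variable replacing it, and is shared across calls so that repeated
    pairs reuse the same variable."""
    assert a1[0] == a2[0] and len(a1) == len(a2)
    args = []
    for t1, t2 in zip(a1[1:], a2[1:]):
        if t1 == t2:
            args.append(t1)                      # identical terms are kept
        else:
            key = (t1, t2)
            if key not in subst:                 # fresh variable for a new pair
                subst[key] = "V%d" % (len(subst) + 1)
            args.append(subst[key])
    return (a1[0],) + tuple(args)

subst = {}
print(lgg_atom(("pred", "State", "action1"), ("pred", "State", "action3"), subst))
# ('pred', 'State', 'V1')
print(lgg_atom(("cond2", "State", "action1"), ("cond2", "State", "action3"), subst))
# ('cond2', 'State', 'V1')  -- same pair of constants, same variable
```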
13. Experiments
Experiments
◮ Learning navigation tasks
◮ Classification of dynamic gestures
[Figure: example gestures "Stop" and "Left"]
16. Experiments Navigation
Navigation experiments: Markovito (1/2)
A service ActivMedia robot equipped with a sonar ring, a SICK LMS200 laser, and a stereo vision system.
◮ Navigating to several places in the environment. Each place has a previously defined name (e.g., kitchen, sofa).
◮ Following a human under user commands.
◮ Finding one of a set of different objects in a house.
◮ Delivering messages and/or objects between different people.
18. Experiments Gesture recognition
Learning gesture grammars
Goal: learn grammars from sequences of dynamic gestures and use
them to classify new sequences.
[Figure: ten gestures, (a)-(j): (a) initial/final position, (b) attention, (c) come, (d) left, (e) right, (f) stop, (g) turn-right, (h) turn-left, (i) waving-hand, (j) point]
[Aviles, 2006]
19. Experiments Gesture recognition
Representation
Each sequence is a vector with sets of seven attributes describing the executed gesture. An example of a sequence is:
(+ + − − T F F), (+ + − + T F F), (+ + + 0 T F F), (+ + + + T F F), ...

Motion features:
◮ ∆area, ∆x, ∆y: each takes one of three possible values {+, −, 0}, indicating increment, decrement, or no change.

Posture features:
◮ form: horizontal (−), vertical (+), tilted (0).
◮ above (the head), right (to the head), torso (hand over the torso).

In predicate form:
hmov(State,right), vmov(State,up), size(State,inc), forma(State,vertical), right(State,yes), above_face(State,no), ...
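A hedged sketch of how one such attribute vector might be mapped to predicates. Both the attribute ordering (∆area, ∆x, ∆y, form, above, right, torso) and the value-to-predicate naming below are assumptions made for illustration, not taken from the paper:

```python
# Assumed value tables (illustrative, not from the paper):
DELTA = {"+": "inc", "-": "dec", "0": "none"}      # change in area
HMOV = {"+": "right", "-": "left", "0": "none"}    # change in x
VMOV = {"+": "up", "-": "down", "0": "none"}       # change in y
FORM = {"+": "vertical", "-": "horizontal", "0": "tilted"}
BOOL = {"T": "yes", "F": "no"}

def to_predicates(vec, state="S1"):
    """Map one 7-attribute gesture vector to first order predicates.
    The attribute order is an assumption: (darea, dx, dy, form, above,
    right, torso)."""
    darea, dx, dy, form, above, right, torso = vec
    return [
        "size(%s,%s)" % (state, DELTA[darea]),
        "hmov(%s,%s)" % (state, HMOV[dx]),
        "vmov(%s,%s)" % (state, VMOV[dy]),
        "forma(%s,%s)" % (state, FORM[form]),
        "above_face(%s,%s)" % (state, BOOL[above]),
        "right(%s,%s)" % (state, BOOL[right]),
        "over_torso(%s,%s)" % (state, BOOL[torso]),
    ]

print(to_predicates(("+", "+", "-", "-", "T", "F", "F")))
```

Each time step of a gesture then becomes a set of ground predicates over a shared State argument, which is what FOSeq consumes.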
21. Experiments Gesture recognition
Example of gesture grammars
Come:
S → ... R8, size(dec), R6, R1, R12, R7
R1 → above_head(State,no), over_torso(State,yes)
R8 → hmov(State,right), vmov(State,down)
...

Point:
S → ... R3, size(dec), form(horizontal), right(si), above_head(no), over_torso(no)
R1 → above_head(State,no), over_torso(State,yes)
R3 → hmov(State,right), vmov(State,down)
...
Relational grammars help to find similarities between different gestures
22. Conclusions
Conclusions
◮ We have introduced an algorithm called FOSeq that takes sequences of states and actions and induces a grammar able to parse and reproduce the sequences.
◮ FOSeq learns a grammar for each sequence, followed by a generalization process between the best evaluated grammar and the other grammars, to produce a generalized grammar covering most of the sequences.
◮ FOSeq has been applied to learn navigation tasks and to learn grammars from gesture sequences, with very competitive results.
23. Future work
Future work
◮ Learn more TRPs to solve other robotic tasks.
◮ Extend the experiments with gestures to obtain a general grammar for a gesture performed by more than one person.
◮ Reproduce gestures with a manipulator.
Thank you for your attention