This document discusses using Petri nets and the ICO modeling language to formally specify gestural interactions in games. It presents a case study of modeling the game of Pong, which uses gestures captured by a Kinect sensor to control a paddle. The modeling approach involves three layers - the first layer transforms raw Kinect data into hand position tokens, the second layer detects gestures like opening and closing hands, and the third layer generates interaction events from the gestures. This formal modeling approach aims to make the interaction specification reusable, maintainable and amenable to analysis. Future work includes further developing the models to handle additional gestures and game elements.
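The layered pipeline described above can be sketched in plain Python. This is a hypothetical illustration of the three-layer idea only; the class names, the frame format, and the gesture-to-event mapping are invented for the example and are not the actual ICO/Petri-net models.

```python
class HandPositionLayer:
    """Layer 1: turn raw sensor frames into hand-position tokens.
    The frame format here is an invented stand-in for Kinect data."""
    def tokenize(self, frame):
        return {"x": frame["hand_x"], "y": frame["hand_y"], "open": frame["open"]}

class GestureLayer:
    """Layer 2: detect open/close hand gestures from successive tokens."""
    def __init__(self):
        self.prev_open = None
    def detect(self, token):
        gesture = None
        if self.prev_open is not None and token["open"] != self.prev_open:
            gesture = "hand_open" if token["open"] else "hand_close"
        self.prev_open = token["open"]
        return gesture

class EventLayer:
    """Layer 3: map detected gestures to interaction events for the game.
    The event names are illustrative, not those of the Pong case study."""
    def to_event(self, gesture):
        mapping = {"hand_close": "grab_paddle", "hand_open": "release_paddle"}
        return mapping.get(gesture)
```

Feeding frames through the three layers in sequence yields a stream of interaction events, mirroring the separation of concerns the models formalize.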
The presentation briefly answers the questions:
1. What is Machine Learning?
2. What are the ideas behind Neural Networks?
3. What is Deep Learning, and how does it differ from NNs?
4. Practical examples of applications.

For more information:
https://www.quora.com/How-does-deep-learning-work-and-how-is-it-different-from-normal-neural-networks-and-or-SVM
http://stats.stackexchange.com/questions/114385/what-is-the-difference-between-convolutional-neural-networks-restricted-boltzma
https://www.youtube.com/watch?v=n1ViNeWhC24 - presentation by Ng
http://techtalks.tv/talks/deep-learning/58122/ - deep learning tutorial and slides - http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf
Deep learning for NLP - http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial
papers: http://www.cs.toronto.edu/~hinton/science.pdf
http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_ErhanCBV10.pdf
http://arxiv.org/pdf/1206.5538v3.pdf
http://arxiv.org/pdf/1404.7828v4.pdf
More recommendations - https://www.quora.com/What-are-the-best-resources-to-learn-about-deep-learning
Deep Learning for Computer Vision: A comparison between Convolutional Neural... - Vincenzo Lomonaco
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general.
However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of “intelligence”.
The dissertation focuses on answering these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning, making them good choices for understanding and pointing out the strengths and weaknesses of each.
CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm and a mainly unsupervised method that is more biologically inspired. It draws on insights from the computational neuroscience community in order to incorporate into the learning process concepts such as time, context and attention, which are typical of the human brain.
In the end, the thesis aims to show that in certain cases, with smaller quantities of data, HTM can outperform CNN.
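To make the CNN side of the comparison concrete, the core operation of a convolutional layer is a small, well-defined computation. Below is a minimal sketch in plain Python ("valid" mode, single channel, no stride), intended only to illustrate the idea, not any particular framework's implementation:

```python
def conv2d_valid(image, kernel):
    """Valid-mode 2D cross-correlation, the building block of a CNN layer.
    image and kernel are rectangular lists of lists of numbers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):          # slide the kernel vertically
        row = []
        for j in range(iw - kw + 1):      # ...and horizontally
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out
```

A CNN stacks many such filtered maps, interleaved with nonlinearities and pooling, and learns the kernel values from labelled data, which is what makes it a supervised method.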
"Mainstream access to deep learning technology will greatly impact most industries over the next three to five years."
So what exactly is deep learning? How does it work? And most importantly, why should you even care?
Deep learning is used in the research community and in industry to help solve many big data problems such as computer vision, speech recognition, and natural language processing.
Practical examples include:
-Vehicle, pedestrian and landmark identification for driver assistance
-Image recognition
-Speech recognition and translation
-Natural language processing
-Life sciences
What You Will Learn:
-Understand the intuition behind Artificial Neural Networks
-Apply Artificial Neural Networks in practice
-Understand the intuition behind Convolutional Neural Networks
-Apply Convolutional Neural Networks in practice
-Understand the intuition behind Recurrent Neural Networks
-Apply Recurrent Neural Networks in practice
-Understand the intuition behind Self-Organizing Maps
-Apply Self-Organizing Maps in practice
-Understand the intuition behind Boltzmann Machines
-Apply Boltzmann Machines in practice
-Understand the intuition behind AutoEncoders
-Apply AutoEncoders in practice
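As a taste of the first topic in the list, the forward pass of a small fully connected artificial neural network reduces to repeated weighted sums followed by a nonlinearity. A minimal sketch in plain Python, with hand-picked toy weights rather than a trained model:

```python
import math

def sigmoid(z):
    """Logistic activation, squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    """Forward pass of a small fully connected network.
    layers is a list of (weights, biases) pairs; weights is a list of
    rows, one row of input weights per output unit."""
    for weights, biases in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x
```

Training consists of adjusting the weights and biases to reduce a loss on labelled examples; deep learning stacks many such layers so that useful features are extracted automatically.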
Brain-Computer Interfacing, Consciousness, and the Global Brain: Towards the ... - ringoring
Brain implants to increase intellectual capabilities, control machines using thought alone, or transfer information directly into the brain are quite popular in sci-fi and have recently been adopted as concrete research goals by many science and engineering teams around the world. Currently developed prototypes adopt a "black box" model - they do not allow the user to experience what is going on inside the implant. Our brain, however, doesn't only process sensory input and calculate appropriate behaviors based on it, but also enables us to experience what is happening. In other words, brain activity is accompanied by consciousness.
Potentially, brain chips can also be designed to have consciousness inside them. Inserted into a human brain, such a conscious implant would expand the user's conscious experience with its own contents. For this, however, a new kind of brain-machine interface should be developed that would merge the consciousness of two separate systems - the chip and the brain - into a single, unified one.
Progress in conscious interfaces could eventually allow us to unify the consciousness of different human beings, leading to the emergence of a special kind of global brain (GB), in which every individual would experience being the GB itself, rather than becoming just one of the cogs in this huge super-intelligent system.
Deep learning is bringing artificial intelligence closer to human-level capabilities. Machine learning and deep artificial neural networks loosely model the human brain. Their success is due to large-scale storage and computation, combined with efficient algorithms that can handle more behavioral and cognitive problems.
What is "deep learning" and why is it suddenly so popular? In this talk I explore how Deep Learning provides a convenient framework for expressing learning problems and using GPUs to solve them efficiently.
"You Can Do It" by Louis Monier (Altavista Co-Founder & CTO) & Gregory Renard (CTO & Artificial Intelligence Lead Architect at Xbrain) for Deep Learning keynote #0 at Holberton School (http://www.meetup.com/Holberton-School/events/228364522/)
If you want to attend similar keynotes for free, check out http://www.meetup.com/Holberton-School/
Handwritten Recognition using Deep Learning with R - Poo Kuan Hoong
R User Group Malaysia Meet Up - Handwritten Recognition using Deep Learning with R
Source code available at: https://github.com/kuanhoong/myRUG_DeepLearning
Timo Honkela: From Patterns of Movement to Subjectivity of Understanding - Timo Honkela
The human visual system interprets information obtained through the eyes to build a model of the surrounding world. This channel is our main source for understanding the world. Walking on a street, reading a book, or watching a movie all rely on our visual system. The relationship between movement, visual perception and language is complex. Movement is a specific focus of this presentation for several reasons. It is a fundamental part of the human activities that ground our understanding of the world. Abstract meanings are often constructed as metaphoric extensions of movement schemas. As an increasing amount of video and motion-tracking data becomes available, the formation of semantic models of movement using computational methods is becoming feasible. In addition to movement, the multilinguality and subjectivity of understanding are also addressed.
Separating Gesture Detection and Application Control Concerns with a Multimo... - Luís Fernandes
Gesture-controlled applications are typically tied to specific gestures, and also tied to specific recognition methods and specific gesture-detection devices. We propose a concern-separation architecture, which mediates the following concerns: gesture acquisition; gesture recognition; and gestural control. It enables application developers to respond to gesture-independent commands, recognized using plug-in gesture-recognition modules that process gesture data via both device-dependent and device-independent data formats and callbacks. Its feasibility is demonstrated with a sample implementation.
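A minimal sketch of the concern-separation idea in Python, assuming a mediator with plug-in recognizers and command callbacks; all class and method names here are illustrative inventions, not the paper's actual API:

```python
class GestureRecognizer:
    """Plug-in interface: map device-independent gesture data to a command."""
    def recognize(self, data):
        raise NotImplementedError

class SwipeRecognizer(GestureRecognizer):
    """Toy recognizer: classify horizontal displacement as a swipe."""
    def recognize(self, data):
        if data.get("dx", 0) > 10:
            return "swipe_right"
        if data.get("dx", 0) < -10:
            return "swipe_left"
        return None

class GestureMediator:
    """Mediates between gesture acquisition and application control,
    so the application only sees gesture-independent commands."""
    def __init__(self):
        self.recognizers = []
        self.handlers = {}
    def register_recognizer(self, recognizer):
        self.recognizers.append(recognizer)
    def on_command(self, command, handler):
        self.handlers[command] = handler
    def feed(self, data):
        # Called by the device-specific acquisition layer.
        for r in self.recognizers:
            command = r.recognize(data)
            if command in self.handlers:
                self.handlers[command](command)
```

The application registers handlers for commands only; swapping the sensor or the recognition method then requires no application changes, which is the point of the architecture.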
"Mainstream access to deep learning technology will greatly impact most industries over the next three to five years."
So what exactly is deep learning? How does it work? And most importantly, why should you even care?
Deep learning is used in the research community and in industry to help solve many big data problems such as computer vision, speech recognition, and natural language processing.
Practical examples include:
-Vehicle, pedestrian and landmark identification for driver assistance
-Image recognition
-Speech recognition and translation
-Natural language processing
-Life sciences
-What You Will Learn
-Understand the intuition behind Artificial Neural Networks
-Apply Artificial Neural Networks in practice
-Understand the intuition behind Convolutional Neural Networks
-Apply Convolutional Neural Networks in practice
-Understand the intuition behind Recurrent Neural Networks
-Apply Recurrent Neural Networks in practice
-Understand the intuition behind Self-Organizing Maps
-Apply Self-Organizing Maps in practice
-Understand the intuition behind Boltzmann Machines
-Apply Boltzmann Machines in practice
-Understand the intuition behind AutoEncoders
-Apply AutoEncoders in practice
Brain-Computer Interfacing, Consciousness, and the Global Brain: Towards the ...ringoring
Brain implants to increase intellectual capabilities, control machines using thought alone, or transfer information directly into the brain are quite popular in sci-fi and have been recently adopted as concrete research goals by many science and engineering teams around the world. Currently developed prototypes adopt a "black box" model - they do not allow the user to experience what is going on inside the implant. Our brain, however, doesn\'t only process sensory input and calculate appropriate behaviors based on it, but also enables us to experience what is happening. In other words, brain activity is accompanied by consciousness.
Potentially, brain chips can also be designed to have consciousness inside them. Inserted into a human brain, such a conscious implant would expand the user\'s conscious experience with its own contents. For this, however, a new kind of brain-machine interface should be developed that would merge consciousness in two separate systems - the chip and the brain - into single, unified one.
Progress in conscious interfaces could eventually allow us to unify consciousness of different human beings, leading to the emergence of special kind of global brain, in which every individual will experience itself being a GB, and won\'t become just one of the cogs in this huge super-intelligent system.
Deep learning is now making the Artificial Intelligence near to Human. Machine Learning and Deep Artificial Neural Network make the copy of Human Brain. The success is due to large storage, computation with efficient algorithms to handle more behavioral and cognitive problem
What is "deep learning" and why is it suddenly so popular? In this talk I explore how Deep Learning provides a convenient framework for expressing learning problems and using GPUs to solve them efficiently.
"You Can Do It" by Louis Monier (Altavista Co-Founder & CTO) & Gregory Renard (CTO & Artificial Intelligence Lead Architect at Xbrain) for Deep Learning keynote #0 at Holberton School (http://www.meetup.com/Holberton-School/events/228364522/)
If you want to assist to similar keynote for free, checkout http://www.meetup.com/Holberton-School/
Handwritten Recognition using Deep Learning with RPoo Kuan Hoong
R User Group Malaysia Meet Up - Handwritten Recognition using Deep Learning with R
Source code available at: https://github.com/kuanhoong/myRUG_DeepLearning
Timo Honkela: From Patterns of Movement to Subjectivity of UnderstandingTimo Honkela
Human visual system interprets information obtained through eyes to build a model of the surrounding world. This channel is our main source for understanding the world. Walking on a street, reading a book, or watching a movie all rely on our visual system. The relationship between movement, visual perception and language is complex. Movement is a specific focus of this presentation for several reasons. It is a fundamental part of human activities that ground our understanding of the world. Abstract meanings are often constructed as metaphoric extensions of movement schemas. As there is an increasing amount of video and motion tracking data available, formation of semantic models based on movement using computational methods is becoming feasible. In addition to movement, multilinguality and subjectivity of understanding are also addressed.
Separating Gesture Detection and Application Control Concerns with a Multimo...Luís Fernandes
Gesture-controlled applications are typically tied to specific gestures, and also tied to specific recognition methods and specific gesture-detection devices. We propose a concern-separation architecture, which mediates the following concerns: gesture acquisition; gesture recognition; and gestural control. It enables application developers to respond to gesture-independent commands, recognized using plug-in gesture-recognition modules that process gesture data via both device-dependent and device-independent data formats and callbacks. Its feasibility is demonstrated with a sample implementation.
My introductory slides on interaction design and the basics of prototyping for the Intelligent Interactive Systems master's Information Science course given at the University of Amsterdam.
"Testing the “end of privacy” hypothesis in computer-mediated communication An agent-based modelling approach", Paola Tubaro & Antonio A. Casilli, presentation at the Fondation CIGREF, Paris, Nov 14th, 2011
Deep Learning has taken the digital world by storm. As a general-purpose technology, it is now present in all walks of life. Although fundamental developments in methodology have slowed down in the past few years, applications are flourishing, with major breakthroughs in Computer Vision, NLP and the Biomedical Sciences. The primary successes can be attributed to the availability of large labelled datasets, powerful GPU servers and programming frameworks, and advances in neural architecture engineering. This combination enables the rapid construction of large, efficient neural networks that scale to the real world. But the fundamental questions of unsupervised learning, deep reasoning, and rapid contextual adaptation remain unsolved. We shall call what we currently have Deep Learning 1.0, and the next possible breakthroughs Deep Learning 2.0.
This is part 2 of the Tutorial delivered at IEEE SSCI 2020, Canberra, December 1st (Virtual).
Network of Excellence in Internet Science (JRA1, Towards a Theory of Internet... - i_scienceEU
The Network of Excellence in Internet Science aims to achieve a deeper multidisciplinary understanding of the Internet as a societal and technological artefact.
More information: http://internet-science.eu/
Twitter: @i_scienceEU
Presentation at the Internet of Things Conference, 9 April 2013, by Ben van Lier of... - Centric
Ben van Lier, Internet of Things expert at Centric, spoke at the second Internet of Things conference, held on Tuesday 9 April in Rotterdam.
[JPDC,JCC@LMN22] Ad hoc systems Management and specification with distributed... - Universidad de los Andes
This presentation covers the main ideas behind Distributed ad hoc Petri nets (DaPNs), a specification and run-time model for managing and reasoning about ad hoc distributed systems.
The presentation describes the main contributions of our work validated through a case study on vehicular ad hoc distributed networks.
This work is available in the Journal of Parallel and Distributed Computing, and was presented at Lo Mejor de lo nuestro #LMN2022, part of the Jornadas Chilenas de Computación 2022.
Link to the paper: https://doi.org/10.1016/j.jpdc.2022.06.015
Keynote talk targeted to PhD students, during the BENEVOL 2023 research seminar (focused on software evolution) in Nijmegen, 27 November 2023, by Tom Mens (full professor in software engineering at University of Mons, Belgium). The keynote aims to provide tips, tricks and practical advice on how to become successful as a PhD student.
Recognising bot activity in collaborative software development - Tom Mens
Presentation by Natarajan Chidambaram during the International ICSE Workshop on Bots in Software Engineering (BotSE 2023) in Australia. Joint work with Mehdi Golzadeh, Tom Mens, Alexandre Decan of the Software Engineering Lab of the University of Mons and with Eleni Constantinou.
A Dataset of Bot and Human Activities in GitHub - Tom Mens
Presentation at the IEEE International Conference on Mining Software Repositories (MSR 2023) by Natarajan Chidambaram (Software Engineering Lab, University of Mons, Belgium) of a dataset of bot and human activities extracted from GitHub.
In this presentation we explore how the CI/CD landscape on GitHub has evolved since the introduction of GitHub Actions. This presentation is based on several peer-reviewed articles published in 2022 and 2023.
Nurturing the Software Ecosystems of the Future - Tom Mens
In January 2018, four Software Engineering research groups located in different Belgian universities launched a five-year research project to nurture the software ecosystems of the future. We assembled a diverse team of about a dozen researchers and embarked on an exciting journey leading to a rich and diverse suite of papers, tools and datasets. Halfway into the project the corona pandemic intervened, but despite several months of lockdown, we succeeded in increasing inter-university collaboration. In this paper we share our achievements so that the BENEVOL community may benefit from our experience.
How to program a robot in 30 minutes? - Tom Mens
How can you learn to program a robot in 30 minutes? A workshop organised by Tom Mens (in collaboration with Pierre Zielinski, Gauvain Devillez and Sebastien Bonte) during the Journées Math-Sciences of the Printemps des Sciences 2022 at the University of Mons.
On the rise and fall of CI services in GitHub - Tom Mens
Presentation of SANER 2022 conference article "On the rise and fall of CI services in GitHub" by Mehdi Golzadeh (co-authored with Alexandre Decan and Tom Mens).
On backporting practices in package dependency networks - Tom Mens
Presentation at FOSDEM 2022 Composition and Dependency Management DevRoom of empirical research on backporting practices in package dependency networks, published in the IEEE Transactions in Software Engineering in 2021 (https://doi.org/10.1109/TSE.2021.3112204)
Joint work by Alexandre Decan, Tom Mens, Ahmed Zerouali and Coen De Roover as part of the Belgian Excellence of Science research project SECOASSIST (https://secoassist.github.io).
Comparing semantic versioning practices in Cargo, npm, Packagist and Rubygems - Tom Mens
Presentation by Tom Mens at PackagingCon 2021 on Wednesday 10 November 2021.
Abstract: Semantic versioning (semver) is a commonly accepted open source practice, used by many package management systems to inform whether new package releases introduce possibly backward incompatible changes. Maintainers depending on such packages can use this practice to reduce the risk of breaking changes in their own packages by specifying version constraints on their dependencies. Depending on the amount of control a package maintainer desires to assert over her package dependencies, these constraints can range from very permissive to very restrictive. We empirically compared the evolution of semver compliance in four package management systems: Cargo, npm, Packagist and Rubygems. We discuss to what extent ecosystem-specific characteristics influence the degree of semver compliance, and we suggest to develop tools adopting the wisdom of the crowds to help package maintainers decide which type of version constraints they should impose on their dependencies.
We also studied to what extent the packages distributed by these package managers still use a 0.y.z release number, which suggests less stable, immature packages. We explore the effect of such "major zero" packages on semantic versioning adoption.
Our findings shed light on some important differences between package managers with respect to package versioning policies.
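The kind of version constraint the abstract refers to can be illustrated with a simplified npm-style caret range. This sketch handles only plain `major.minor.patch` versions and glosses over pre-release tags and the `0.0.z` corner case of the full semver specification:

```python
def parse(version):
    """Parse 'major.minor.patch' into a tuple of ints (pre-release tags dropped)."""
    return tuple(int(p) for p in version.split("-")[0].split("."))

def caret_allows(constraint, candidate):
    """Simplified caret range: ^1.2.3 allows >=1.2.3 <2.0.0.
    For major-zero packages (^0.2.3), even a minor bump may break,
    so only patch-level updates within the same minor are allowed."""
    base = parse(constraint.lstrip("^"))
    cand = parse(candidate)
    if cand < base:
        return False
    if base[0] > 0:
        return cand[0] == base[0]
    return cand[0] == 0 and cand[1] == base[1]
```

A caret constraint is thus semver-permissive for stable packages but restrictive for major-zero ones, which is exactly where ecosystem policies diverge in practice.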
Our empirical results have been published in two peer-reviewed academic journals: the IEEE Transactions in Software Engineering (https://doi.org/10.1109/TSE.2019.2918315) and Elsevier Science of Computer Programming (https://doi.org/10.1016/j.scico.2021.102656).
Acknowledgments: Research conducted in the context of the SECOASSIST "Excellence of Science" Research Project.
Presentation by Tom Mens at FOSDEM21 (Free Open Source Developers Meeting, February 2021). Published in Science of Computer Programming, August 2021.
https://doi.org/10.1016/j.scico.2021.102656
Abstract: When developing open source end-user applications or reusable software packages, developers depend on software packages distributed through package managers such as npm, Packagist, Cargo and RubyGems. Empirical evidence has shown that these package managers adhere to a large extent to semantic versioning principles. Packages that are still in major version zero are considered unstable according to semantic versioning, as some developers regard such packages as immature and still under initial development.
This presentation reports on large-scale empirical evidence on the use of dependencies towards 0.y.z versions in four different software package distributions: Cargo, npm, Packagist and RubyGems. We study to what extent packages get stuck in the zero-version space, never crossing the psychological barrier of major version zero. We compare the effect of the policies and practices of the package managers on this phenomenon. We do not reveal our findings in this abstract, as that would spoil the fun of the presentation.
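The "major zero" notion is easy to operationalize: a release is in the 0.y.z space when its major component is 0. A toy sketch of the kind of measurement involved (the actual study is far more elaborate):

```python
def is_major_zero(version):
    """True if a release is still in the 0.y.z 'initial development' space."""
    return version.split(".")[0] == "0"

def share_major_zero(versions):
    """Fraction of releases in a list that are still major-zero."""
    zero = sum(1 for v in versions if is_major_zero(v))
    return zero / len(versions) if versions else 0.0
```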
Evaluating a bot detection model on git commit messages - Tom Mens
Detecting the presence of bots in distributed software development activity is very important in order to prevent bias in socio-technical empirical studies. In previous work, we proposed a classification model to detect bots in GitHub repositories based on the pull request and issue comments of GitHub accounts. The current study generalises the approach to git contributors based on their commit messages. We train and evaluate the classification model on a large dataset of 6,922 git contributors. The original model based on pull request and issue comments obtained a precision of 0.77 on this dataset, whereas retraining the classification model on git commit messages increased the precision to 0.80. As a proof-of-concept, we implemented this model in BoDeGiC, an open source command-line tool to detect bots in git repositories.
Is my software ecosystem healthy? It depends! - Tom Mens
QUATIC 2020 keynote presentation by Tom Mens (University of Mons) on dependency-related health issues in software ecosystems and research advances to address such health issues. Part of the presented research has been conducted as part of the Belgian SECO-ASSIST Excellence of Science Research Project.
Bot or not? Detecting bots in GitHub pull request activity based on comment s... - Tom Mens
Presentation by Mehdi Golzadeh (Software Engineering Lab, University of Mons) of an article published at the 2nd International ICSE Workshop on Bots In Software Engineering (BotSE). See https://doi.org/10.1145/3387940.3391503
Abstract: Many empirical studies focus on socio-technical activity in social coding platforms such as GitHub, for example to study the onboarding, abandonment, productivity and collaboration among team members. Such studies face the difficulty that GitHub activity can also be generated automatically by bots of a different nature. It therefore becomes imperative to distinguish such bots from human users. We propose an automated approach to detect bots in GitHub pull request activity. Relying on the assumption that bots contain repetitive message patterns in their pull request comments, we analyse the similarity between multiple messages from the same GitHub identity, using a clustering method that combines the Jaccard and Levenshtein distance. We empirically evaluate our approach by analysing 20,090 comments of 250 users and 42 bots in 1,262 GitHub repositories. Our results show that the method is able to clearly separate bots from human users.
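The two distances mentioned in the abstract can be sketched in plain Python; the clustering step and the exact way the paper combines them are omitted here:

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two comments."""
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,      # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```

Intuitively, a bot posting templated comments yields message pairs with high Jaccard overlap and low edit distance, so they cluster tightly, whereas human comments are far more varied.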
Comparing dependency issues across software package distributions (FOSDEM 2020) - Tom Mens
This talk reports on our findings based on multiple empirical studies that we have conducted to understand different aspects of dependency management and their practical implications. This includes:
* the outdatedness of package dependencies, the transitive impact of such "technical lag", and its relation to the presence of bugs and security vulnerabilities.
* the impact of using either more permissive or more restrictive version constraints on dependencies.
* the virtues and limitations of compliance with semantic versioning, a common policy to inform dependents whether new releases of software packages introduce possibly backward incompatible changes.
* the impact of specific characteristics, policies and tools used by the packaging ecosystem and its supporting community on all of the above.
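Technical lag, mentioned in the first bullet, can be illustrated with a crude version-based proxy: the component-wise difference between the deployed release and the latest available one. The cited framework is far more general (it also covers time- and vulnerability-based lag); this is only a toy illustration:

```python
def technical_lag(deployed, latest):
    """Version-based technical lag as a (major, minor, patch) delta.
    A simplified proxy: negative components are clamped to zero."""
    d = [int(x) for x in deployed.split(".")]
    l = [int(x) for x in latest.split(".")]
    return tuple(max(0, li - di) for di, li in zip(d, l))
```

Summing such deltas over a whole dependency tree gives a rough feel for how outdated a deployment is, and why transitive dependencies amplify the problem.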
The content of the talk is primarily based on the following peer-reviewed scientific articles:
* What do package dependencies tell us about semantic versioning? Alexandre Decan, Tom Mens. IEEE Transactions on Software Engineering, 2019. https://doi.org/10.1109/TSE.2019.2918315
* An empirical comparison of dependency network evolution in seven software packaging ecosystems. Alexandre Decan, Tom Mens, Philippe Grosjean. Empirical Software Engineering 24(1):381-416, 2019. https://doi.org/10.1007/s10664-017-9589-y
* A formal framework for measuring technical lag in component repositories and its application to npm. Ahmed Zerouali, Tom Mens, Jesus Gonzalez‐Barahona, Alexandre Decan, Eleni Constantinou, Gregorio Robles. Journal of Software: Evolution and Process 31(8), 2019. https://doi.org/10.1002/smr.2157
* On the Impact of Security Vulnerabilities in the npm Package Dependency Network. Alexandre Decan, Tom Mens, Eleni Constantinou. International Conference on Mining Software Repositories, 2018. https://doi.org/10.1145/3196398.3196401
* On the Evolution of Technical Lag in the npm Package Dependency Network. Alexandre Decan, Tom Mens, Eleni Constantinou. International Conference on Software Maintenance and Evolution, 2018. https://doi.org/10.1109/ICSME.2018.00050
Measuring Technical Lag in Software Deployments (CHAOSScon 2020) - Tom Mens
Presentation at CHAOSSCon Europe 2020 about the generic technical lag software measurement framework. Technical lag measures the increasing difference between deployed software components and the ideal upstream software components.
For more information, see https://doi.org/10.1002/smr.2157
This presentation reports on the research results achieved in the context of the interuniversity interdisciplinary research project SECOHealth "Vers une méthodologie et analyse socio-technique interdisciplinaire de la santé des écosystèmes logiciels" co-financed by FRS-FNRS Belgium and FRQ (FRSC - FRNT, Québec) with principal investigators Tom Mens (UMONS), Bram Adams (Polytechnique Montréal) and Josianne Marsan (Université Laval).
Introduction to the research seminar on empirical analysis of open source software ecosystems, organised by the SECO-ASSIST "excellence of science" research project, on September 4th, 2019 at the University of Mons, Belgium. With invited presentations by Alexander Serebrenik, Jesus Gonzalez-Barahona, Dario Di Nucci and Henrique Nucci. The seminar concludes with the public PhD defense of Ahmed Zerouali (supervised by Tom Mens) on the topic of "A Measurement Framework for Analyzing Technical Lag in Open-Source Software Ecosystems"
Empirically Analysing the Socio-Technical Health of Software Package ManagersTom Mens
Invited presentation at Concordia University (Montreal, Canada) by Eleni Constantinou and Tom Mens on recent research about the socio-technical health issues in software package management ecosystems.
Abstract: The large majority of today’s software is relying on open software software components. Such components are typically distributed through package managers for a wide variety of programming languages, and developed and maintained through online distributed software development services like GitHub. Software component repositories are perceived as software ecosystems that constitute complex and evolving socio-technical software dependency networks. Because of their complexity and evolution, these ecosystems tend to suffer from a wide variety of software health issues that can be either technical or social in nature. Examples of such issues include the ecosystem fragility due to exponential growth and transitive dependencies; the abundance of outdated, unmaintained or obsolete software components; the prolonged presence of unfixed bugs and security vulnerabilities; the abandonment or high turnover of key contributors, suboptimal collaboration between contributors, and many more. This presentation will report on our past and ongoing empirical research that studies such health factors within and across different software packaging ecosystems (such as npm, RubyGems, Cargo, CRAN, CPAN). We provide empirical evidence of some of the health problems, compare their presence across different ecosystems, and suggest ways to reduce their potential impact by providing concrete guidelines and tools. The presented research Is being conducted by researchers of the Software Engineering Lab at the University of Mons in the context of two ongoing projects SECOHealth and SECO-ASSIST, aiming to analyse and improve the health of software ecosystems.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
Safalta Digital marketing institute in Noida, provide complete applications that encompass a huge range of virtual advertising and marketing additives, which includes search engine optimization, virtual communication advertising, pay-per-click on marketing, content material advertising, internet analytics, and greater. These university courses are designed for students who possess a comprehensive understanding of virtual marketing strategies and attributes.Safalta Digital Marketing Institute in Noida is a first choice for young individuals or students who are looking to start their careers in the field of digital advertising. The institute gives specialized courses designed and certification.
for beginners, providing thorough training in areas such as SEO, digital communication marketing, and PPC training in Noida. After finishing the program, students receive the certifications recognised by top different universitie, setting a strong foundation for a successful career in digital marketing.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
PetriNect: Modeling gestural interactions with executable Petri nets
1. PetriNect: Modeling gestural interactions with executable Petri nets
Romuald Deshayes¹, Philippe Palanque², Tom Mens¹
¹ Université de Mons – UMONS, Belgium (firstname.name@umons.ac.be)
² Université de Toulouse, France (name@irit.fr)
4 September 2012
Romuald Deshayes – UMONS 1 / 31
2. Table of contents
1 Introduction
    Goal
    Case Study
    Tools and formalisms used
2 Petshop and ICO
3 Modeling of the case study
4 Discussion
5 Future Work and improvements
6 Conclusion
3. Introduction
Goal
Case Study
Tools and formalisms used
4. What is it about?
(Informal) games vs. formal specification techniques
5. Goal of the talk
Why and how to use formal specification techniques for interaction with games?
Why? Because it is not more complicated than coding, and it is safer: games also have a right to security.
How? By using tools that allow executing formal models. See what's next!
6. Case study
Pong game
Almost like the classic game
Uses Kinect
Played with free hands only (gestural interaction) ⇒ very simple to play
Move the hand to control the paddle
Close it to grab the ball
Interaction modeled with executable Petri nets
7. Gestural interaction
In general: controlling a computer with the hands
In our case study: moving or manipulating virtual objects displayed on the screen with gestures
Possible gestures:
FlyOver
Grab
Release
Drag
Zoom (in/out)
8. Kinect: what and why?
3D sensor
Equipped with powerful real-time tracking algorithms
Detects hand position in metric space (a point in 3D space)
Homemade algorithm to detect whether a hand is open or closed
For our case study, we use the position and status of the hands and the head
9. Tools and formalisms used
Work done at the IRIT lab
Petri net rules
A formalism called ICO, based on Petri nets
A tool called Petshop to create ICOs and execute them
10. Petshop and ICO
11. Petri nets
Directed bipartite graph
Nodes correspond to places and transitions
Directed arcs connect places and transitions
Tokens travel from place to place
The state of the Petri net is represented by the distribution of tokens over the places
Non-deterministic execution
Locality of transitions
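The vocabulary on this slide (places, transitions, tokens, markings, non-deterministic firing) can be sketched in a few lines of Python. The `PetriNet` class below is purely illustrative, not Petshop's API:

```python
# Minimal place/transition Petri net sketch (illustrative, not Petshop's API).
import random

class PetriNet:
    def __init__(self, places, transitions):
        # places: dict name -> token count (the marking, i.e. the net's state)
        # transitions: dict name -> (input place names, output place names)
        self.marking = dict(places)
        self.transitions = transitions

    def enabled(self):
        # A transition is enabled when every input place holds a token.
        return [t for t, (ins, outs) in self.transitions.items()
                if all(self.marking[p] > 0 for p in ins)]

    def fire(self, t):
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1   # consume a token from each input place
        for p in outs:
            self.marking[p] += 1   # produce a token in each output place

    def run(self, steps):
        # Non-deterministic execution: fire any enabled transition.
        for _ in range(steps):
            choices = self.enabled()
            if not choices:
                break
            self.fire(random.choice(choices))

# The open/closed hand from the case study as a two-place net.
net = PetriNet({"open": 1, "closed": 0},
               {"close_hand": (["open"], ["closed"]),
                "open_hand": (["closed"], ["open"])})
net.fire("close_hand")
print(net.marking)  # {'open': 0, 'closed': 1}
```

The marking always keeps exactly one token between the two places, mirroring the fact that a hand is either open or closed.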
12. ICO: main key concepts
A formal description technique designed for the specification, modeling and implementation of interactive systems
Uses high-level Petri nets (HLPN) to describe the behavioural aspects of interactive systems
In HLPN, tokens can carry information
In ICO, tokens can store Java objects (e.g. Kinect data)
In ICO, transitions can contain Java code (e.g. some processing of Kinect data)
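The HLPN idea on this slide, tokens that carry data and transitions that run code on that data, can be illustrated as follows; the names are hypothetical and do not reflect the actual ICO/Petshop API (where tokens hold Java objects):

```python
# Illustrative sketch of high-level Petri net tokens, not the ICO/Petshop API.
class Token:
    def __init__(self, data):
        self.data = data  # in ICO this would be a Java object (e.g. Kinect data)

def fire(transition_code, tokens):
    # A transition consumes its input tokens, runs code on their data,
    # and produces output tokens carrying the results.
    return [Token(transition_code(t.data)) for t in tokens]

# Example transition code: convert a raw position in millimetres to metres.
to_metres = lambda pos: tuple(v / 1000.0 for v in pos)
out = fire(to_metres, [Token((120.0, -450.0, 2300.0))])
print(out[0].data)  # (0.12, -0.45, 2.3)
```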
13. ICO example (figure)
14. Petshop
Tool suite dedicated to the engineering of interactive systems
Supports ICO
Allows the editing, simulation and dynamic execution of models
We use Petshop to create our models, link them together, execute them, and interact with Java code
15. Petshop (figure)
16. Modeling of the case study
17. Modeling of the case study
Models are used to process raw Kinect data
Data is transformed into abstract events
Events are interpreted by the virtual object models
18. Modeling of the case study
We use Petshop to create a 3-layered ICO model
The data of each player's hands and head (position and state) is encapsulated in tokens
The models are independent of the number of players (each player has an ID)
Only one instance of each layer, whatever the number of virtual objects to control
Each virtual object can be modeled with Petshop (a 4th layer)
19. Layer 1: Raw data to positions
Receives the positions of each player's hands and head at regular time intervals
3 tokens are generated at each Kinect frame refresh (1 for the head, 1 for each hand)
Information about head and hand positions is stored in the tokens
The ICO model joins the head and hand data and creates a new token for each hand
Tokens are sent to the next layer
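One way to sketch this layer's join under the stated assumptions: per frame, the head data is attached to each hand, yielding one token per hand. The frame and token field names below are invented for illustration and are not the actual Petshop model:

```python
# Illustrative sketch of layer 1; field names are hypothetical.
def layer1(frame):
    """Join the head data with each hand, producing one token per hand."""
    tokens = []
    for side in ("left", "right"):
        tokens.append({
            "player": frame["player"],         # player ID keeps the layer
            "hand": side,                      # independent of the player count
            "hand_pos": frame[side + "_hand"],
            "head_pos": frame["head"],         # head position travels along
        })                                     # with the hand token
    return tokens

frame = {"player": 1, "head": (0, 1.6, 2.0),
         "left_hand": (-0.3, 1.2, 1.8), "right_hand": (0.3, 1.2, 1.8)}
tokens = layer1(frame)
print(len(tokens))        # 2, one token per hand
print(tokens[0]["hand"])  # left
```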
20. Raw data to positions (figure)
21. Layer 2: Positions to gestures
Receives absolute hand positions (1 token per hand per player)
Uses 2 consecutive tokens to calculate the relative movement
Detects state changes of the hands (open/closed)
Sends open/close/move events to the 3rd layer
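The relative-movement and state-change detection described above could look like the sketch below; the token fields and event tuples are assumptions for illustration, not the actual ICO model:

```python
# Illustrative sketch of layer 2; token fields and events are hypothetical.
def layer2(prev, curr):
    """Turn two consecutive hand tokens into gesture events."""
    events = []
    # Relative movement between two consecutive positions.
    dx = curr["pos"][0] - prev["pos"][0]
    dy = curr["pos"][1] - prev["pos"][1]
    if (dx, dy) != (0, 0):
        events.append(("move", dx, dy))
    # A change of the open/closed flag becomes an open or close event.
    if prev["open"] != curr["open"]:
        events.append(("open",) if curr["open"] else ("close",))
    return events

prev = {"pos": (0.25, 1.5), "open": True}
curr = {"pos": (0.75, 1.5), "open": False}
print(layer2(prev, curr))  # [('move', 0.5, 0.0), ('close',)]
```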
22. Positions to gestures (figure)
23. Layer 3: Gestures to events
Receives move/open/close events from layer 2
Stores the state of each player in a 4-place sub-Petri net
Joins information about the player with the events from layer 2 to generate events
Generates 5 different events:
Grab
Release
FlyOver
Drag
Zoom (in/out)
Sends events to the modeled objects
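The slide says the real model keeps each player's state in a four-place sub-net; the small state machine below is an illustrative guess at such rules (Zoom is omitted), not the actual Petshop model:

```python
# Illustrative guess at layer 3's per-player state machine; states and
# rules are hypothetical, not the actual four-place sub-net.
def layer3(state, event):
    """Map (current state, layer-2 event) to (new state, emitted event)."""
    kind = event[0]
    if state == "idle" and kind == "move":
        return "idle", ("FlyOver",) + event[1:]      # moving with an open hand
    if state == "idle" and kind == "close":
        return "grabbing", ("Grab",)                 # closing the hand grabs
    if state == "grabbing" and kind == "move":
        return "grabbing", ("Drag",) + event[1:]     # moving while closed drags
    if state == "grabbing" and kind == "open":
        return "idle", ("Release",)                  # opening the hand releases
    return state, None                               # no event generated

state = "idle"
for ev in [("close",), ("move", 0.5, 0.0), ("open",)]:
    state, out = layer3(state, ev)
    print(out)  # ('Grab',), then ('Drag', 0.5, 0.0), then ('Release',)
```

In the Pong case study, a Grab on the ball would let the paddle hold it, and a Release would let it go again.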
24. Gestures to events (figure)
25. Discussion
26. Discussion
Modeling has many advantages:
Reusable models, easy to maintain or extend (dynamicity)
Models hide some technical complexity
Amenable to formal analysis
Can deal with multiple users
Easy to integrate with many different virtual objects
Goal: a toolkit for gestural interaction
27. Future work and improvements
28. Future work and improvements
Specify the game (rules and graphics) with Petri nets
Performance issues
Thread tweaking (constant token flow)
Compile for deployment
Extend the models to deal with more gestures
Create a DSL on top of ICO to simplify the specification of virtual object behaviour
29. Conclusion
30. Conclusion
It is possible to specify the gestural interaction of a game with formal methods
Formal methods might also help with safety issues in gaming
Many other advantages of resorting to executable modeling instead of coding:
Dynamicity leads to a productivity increase
Hides some technical details
Easy to read and extend
Petshop is a cool tool suite, use it!
31. Thank you for your attention! Questions?