This presentation was given by Thomas Winters on 26 May 2023 in Lieven Scheire's show about Artificial Intelligence.
More information: https://thomaswinters.be/talk/2023nerdlandfestival-2
TorfsBot or Not? Evaluating User Perception on Imitative Text Generation (CLIN 2023) – Thomas Winters
This talk was presented by Thomas Winters at the 33rd Computational Linguistics in the Netherlands conference.
Abstract:
Mimicking an individual's writing style using automated text generation approaches presents a challenge in determining the optimal level of imitation versus exaggeration. This study investigates the believability and user interaction of a text generator employing Markov models and dynamic templates in comparison to source data. Using the TorfsBot Twitterbot, which emulates the writing style of the Belgian professor Rik Torfs, we developed a secondary Twitterbot, "TorfsBot or Not?", that conducted daily polls asking users to identify whether a tweet originated from Rik Torfs or TorfsBot. The study collected 43K votes from approximately 500 polls. The findings reveal that participants correctly identified the source of the tweets 71% of the time, with majority votes inaccurately attributing the source in 14% of the polls. Furthermore, we observed a positive correlation between the number of interactions on the source tweet and the perceived believability that the tweet originated from Rik Torfs. Our results suggest that even relatively simple text generation models can approximately replicate a target's writing style, and that a closer approximation to the source material may positively influence user engagement.
Prompt engineering: de kunst van het leren communiceren met AI (Juni 2023) – Thomas Winters
This talk was given at the VRT on 23 June 2023 by Thomas Winters.
The presentation teaches basic prompt engineering techniques for ChatGPT and GPT-4, as well as for image generators such as DALL-E, StableDiffusion & Midjourney.
This presentation was given on 27 May 2023 at the Nerdland Festival by Thomas Winters, as part of the children's show of Jeroen Baert and Els Aerts.
We visualise children's stories after the kids first invent them sentence by sentence, and show how AI can nowadays draw such pictures itself. Then we let them write stories with AI, and explain how such language models learned by predicting next words.
More information: https://thomaswinters.be/talk/2023nerdlandfestival-3
This presentation was given by Thomas Winters on 27 May 2023 at the Nerdland Festival.
More info at https://thomaswinters.be/talk/2023nerdlandfestival
How do you teach computers humor + Text Generators as Creative Partners (May 2023) – Thomas Winters
This guest lecture was given on the 10th of May 2023 at the "Computational Creativity" class of prof. Tim Van de Cruys & on the 16th of May 2023 for the "Humor and Creativity in Language" class of prof. Kurt Feyaerts.
The first part provides an overview of computational humor with a focus on the research work of Thomas Winters.
The second part shows how to generate humor using prompt engineering on GPT models like GPT-4 and ChatGPT.
Hoe schrijven computers zelf tekst? (Kinderlezing) – Thomas Winters
This children's lecture on language technology was given by Thomas Winters on 26 March 2023 at ScienceVille in Leuven.
It had the following description:
Do you enjoy writing stories? Do you sometimes improvise whole scenes? Did you know that computers can do that nowadays too? Discover how they do it in this interactive lecture full of improvisation and creative writing with artificial intelligence!
This talk is about using AI as your creative partner, both for text and for images.
It was presented by Thomas Winters on the 14th of February 2023 to the VRT Creative Lab.
This talk was presented at "Night of the Prompts" by Thomas Winters on the 21st of December 2022.
The talk covers a brief history of image generators, along with prompt engineering tricks and applications.
How to Attract & Survive Media Attention as a PhD – Thomas Winters
This talk was given on the 24th of November 2022 for the Leuven.AI institute, with tips & tricks for attracting & surviving media attention as a PhD student, based on Thomas Winters' own experiences.
How can AI be a creative partner for PR & marketing? – Thomas Winters
This talk was given on 15 November 2022 by Thomas Winters at Ketchum to teach them about the capabilities of current creative AI techniques for marketing & PR purposes.
Beter leren praten met Artificiële Intelligentie – Thomas Winters
This presentation was given on 12 November 2022 at the Nacht van de Vrijdenker in the Vooruit in Ghent.
It covers the latest generation of artificial intelligence, such as GPT-3 and DALL-E, and how best to talk to these systems (prompt engineering). Afterwards, some tools are presented for unmasking AI.
Computational Humor: Can a machine have a sense of humor? (2022) – Thomas Winters
Can computers have a sense of humor? In this talk, we discuss humor theory, some symbolic humor generation methods and then showcase how prompt engineering can help generate humor automatically.
This talk has been given multiple times by Thomas Winters. This particular version was personalized as the keynote talk for the postgraduate AI students' graduation on the 13th of September 2022.
More information about this talk at https://thomaswinters.be/talk/2022kulak
Thomas Winters and Jeroen Baert played "TorfsBot Or Not?" at the Nerdland Opening Night 2022 for 1500+ nerds! https://www.nerdlandfestival.be/programma-item/nerdland-opening-night-met-alle-nerdlanders
This slideshow was originally presented by Thomas Winters on the 21st of May 2022 at the ALAN hackathon, where they built an AI actor. More info: https://www.facebook.com/events/1386001341878142
The slideshow shows some techniques that might inspire the creation of AI-powered theatre robots & generators.
Computational Humor: Can a machine have a sense of humor? (2020) – Thomas Winters
Can computers have a sense of humor? In this talk, we discuss some dimensions of computational humor and where the research field stands, and showcase some of Thomas Winters' work in this field.
This talk has been given multiple times by Thomas Winters, in particular on the 11th of December 2020 as a DTAI seminar (KU Leuven's Declarative Languages and Artificial Intelligence research lab).
More information about this talk at https://thomaswinters.be/talk/2020dtai
Abstract: Can a machine have a sense of humor? At first glance, this question may seem paradoxical, given that humor is an intrinsically human trait. By limiting the scope to specific types of jokes and by hand-coding rules for them, researchers generally have been able to create several methods for detecting and generating humor. Recently, large scale pre-trained language models like BERT, GPT-2/GPT-3 and our own Dutch RobBERT model opened the way for learning even better insights into humor. In this talk, we provide an overview of the history of computational humor, discuss several types of humor tasks that have been automated using artificial intelligence, illustrate several useful applications of computational humor and position several of our own research projects in this field.
Humor Workshop: Hoe schrijf je satire? (KU Leugen) – Thomas Winters
How do you write a joke? How do you improve a joke? And how do you write satire efficiently?
This presentation by Thomas Winters (improviser and computational humor researcher) opens up the hood of humor, to show that writing humor really is a learnable skill. It teaches where you can find inspiration for a humorous connection, some basic principles for turning that connection into a joke, and how to then polish that joke into a great one.
The presentation also contains numerous exercises to try out yourself, so you can sharpen your own humor skills.
More information about this presentation at https://thomaswinters.be/talk/2022veto
To invite Thomas Winters to give this workshop live, you can get in touch via https://thomaswinters.be/contact
Survival of the Wittiest: Evolving Satire with Language Models – Thomas Winters
This talk was originally presented by Thomas Winters on the 18th of September 2021 at the 12th International Conference on Computational Creativity.
More information can be found on thomaswinters.be/talk/2021iccc
Abstract: Large pre-trained transformer-based language models have revolutionized the field of natural language processing in recent years. While BERT-like models perform exceptionally well for analytical tasks such as classification and regression, their text generation capabilities are usually limited to predicting tokens within a given context. In this paper, we introduce GALMET, a model that generates text by using genetic algorithms with BERT-like language models for evolving text. We use GALMET with the RoBERTa language model to automatically evolve real headlines into more satirical headlines. This is achieved by adapting the masked language head to the headlines domain for the mutation operator and finetuning a regression head to distinguish headlines from satire for the fitness function. We evaluated our system by comparing generated satirical headlines against human-edited headlines and just the fine-tuned masked language head. We found that while humans generally outperform the model, generations by GALMET are also often preferred over human-edited headlines. However, we also found that only using the fine-tuned masked language model gives slightly preferred satire due to generating more readable sentences. GALMET is thus a first step towards a new way of creating text generators using masked language models by transforming text guided by scores from another language model.
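To make the evolutionary loop above concrete, here is a minimal sketch (not the GALMET code itself) that uses a masked language model as mutation operator and a scoring function as fitness. The model name, the population settings and the random placeholder fitness are assumptions; a real setup would plug in a regression head fine-tuned to score how satirical a headline is.

```python
# Minimal sketch: evolve a headline with a masked LM as mutation and a fitness stub.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")  # illustrative model choice

def mutate(headline: str) -> str:
    """Mask one random word and let the masked language model propose a replacement."""
    words = headline.split()
    i = random.randrange(len(words))
    masked = " ".join(words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:])
    candidates = fill_mask(masked, top_k=5)
    return random.choice(candidates)["sequence"]

def fitness(headline: str) -> float:
    """Placeholder: a real setup would score 'satiricalness' with a fine-tuned regression head."""
    return random.random()

def evolve(seed: str, generations: int = 10, population_size: int = 8) -> str:
    population = [seed] * population_size
    for _ in range(generations):
        offspring = [mutate(h) for h in population]
        population = sorted(population + offspring, key=fitness, reverse=True)[:population_size]
    return population[0]

print(evolve("Government announces new plan to fix the economy"))
```

With a real fitness model, headlines that score as more satirical survive each generation, which is the selection pressure the abstract describes.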
Discovering Textual Structures: Generative Grammar Induction using Template Trees – Thomas Winters
This talk was originally presented by Thomas Winters on the 10th of September 2020 at the 11th International Conference on Computational Creativity (ICCC20).
More information can be found on https://thomaswinters.be/talk/2020iccc
Abstract: Natural language generation provides designers with methods for automatically generating text, e.g. for creating summaries, chatbots and game content. In practice, text generators are often either learned and hard to interpret, or created by hand using techniques such as grammars and templates. In this paper, we introduce a novel grammar induction algorithm for learning interpretable grammars for generative purposes, called Gitta. We also introduce the novel notion of template trees to discover latent templates in corpora to derive these generative grammars. By using existing human-created grammars, we found that the algorithm can reasonably approximate these grammars using only a few examples. These results indicate that Gitta could be used to automatically learn interpretable and easily modifiable grammars, and thus provide a stepping stone for human-machine co-creation of generative models.
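To illustrate what such an interpretable generative grammar looks like, here is a toy sketch (a hand-written example, not Gitta's induction algorithm) of the kind of grammar Gitta could induce from a few example sentences, expanded by random sampling.

```python
import random

# Hand-written example grammar, of the kind that could be induced from
# examples like "I like my cat" and "I like my dog".
grammar = {
    "S": [["I", "like", "my", "ANIMAL"]],
    "ANIMAL": [["cat"], ["dog"], ["hamster"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a symbol; anything not in the grammar is a terminal word."""
    if symbol not in grammar:
        return symbol
    production = random.choice(grammar[symbol])
    return " ".join(expand(s) for s in production)

for _ in range(3):
    print(expand("S"))
```

Because the grammar is just a readable dictionary of rules, a human can easily inspect and modify it, which is the interpretability argument the abstract makes.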
Dutch Humor Detection by Generating Negative Examples – Thomas Winters
This talk was originally presented by Thomas Winters on the 20th of November 2020 at the 29th Belgian-Dutch Conference on Machine Learning (Benelearn 2020). The conference awarded this presentation the "Best Video Award".
A video of this talk is also available on https://www.youtube.com/watch?v=U1cShms67ec
For more information, see https://thomaswinters.be/talk/2020benelearn
Modelling Mutually Interactive Fictional Character Conversational Agents – Thomas Winters
This talk was originally presented by Thomas Winters on the 6th of November 2019 at BNAIC19, the 31st Benelux Conference on Artificial Intelligence.
The focus of the talk is on modelling interactive Twitterbots. These are based on the Belgian children's TV show Samson & Gert, to create the Samsonbots.
We also show and release our new probabilistic context-free grammar modeling tool called Babbly.
Paper abstract: Conversational agents, such as chatbots and virtual assistants, are typically modelled to have a broad, generic personality, which they employ in their communication with single human beings. However, by framing a conversational agent as existing fictional characters, humans can imagine a shallow agent to have a larger personality than without this framing. Using multiple such agents allows for conversational interactions that help construct stories with or without human intervention, leading to multi-agent human-computer interactive story telling. In this paper, we model six semi-independent Twitterbots based on fictional characters based on the Belgian children’s TV show Samson & Gert, which are mutually interactive with each other as well as with other Twitter users. To achieve this, we first introduce a new language for modelling generative weighted context-free grammars called Babbly and a new framework for easily specifying complex Twitterbot behaviour. We found that these bots were not only well received by users, but also created lots of interesting, unexpected positive interactions. Using fictional characters as framing for conversational agents can thus help achieving interesting personalities and shows potential in interactive computational story telling.
Generating Philosophical Statements using Interpolated Markov Models and Dynamic Templates – Thomas Winters
This slideshow was originally presented by Thomas Winters on the 7th of August 2019 at the 31st European Summer School in Logic, Language and Information (ESSLLI).
More information: https://thomaswinters.be/talk/2019esslli
Abstract: Automatically imitating input text is a common task in natural language generation, often used to create humorous results. Classic algorithms for learning to imitate text, e.g. simple Markov chains, usually have a trade-off between originality and syntactic correctness. We present two ways of automatically parodying philosophical statements from examples overcoming this issue, and show how these can work in interactive systems as well. The first algorithm uses interpolated Markov models with extensions to improve the quality of the generated texts. For the second algorithm, we propose dynamically extracting templates and filling these with new content. To illustrate these algorithms, we implemented TorfsBot, a Twitterbot imitating the witty, semi-philosophical tweets of professor Rik Torfs, the previous KU Leuven rector. We found that users preferred generative models that focused on local coherent sentences, rather than those mimicking the global structure of a philosophical statement. The proposed algorithms are thus valuable new tools for automatic parody as well as template learning systems.
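As a rough, hypothetical illustration of the "dynamic templates" idea (not TorfsBot's actual implementation), the sketch below blanks out a few hand-chosen slot words in an example sentence and refills them with words from a toy corpus; the real system selects slots and fillers automatically, e.g. based on part of speech.

```python
import random

# Toy stand-ins: a few corpus nouns and a hand-made set of replaceable "slot" words.
corpus_nouns = ["kerk", "universiteit", "museum", "mens", "politiek", "toekomst"]
slots = {"kerk", "museum"}

def parody(sentence: str) -> str:
    """Treat the sentence as a template and refill its slot words with corpus words."""
    return " ".join(
        random.choice(corpus_nouns) if word.lower() in slots else word
        for word in sentence.split()
    )

print(parody("De kerk is geen museum maar een huis van mensen"))
```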
Towards a General Framework for Humor Generation from Rated Examples – Thomas Winters
This talk was originally presented by Thomas Winters on the 21st of June 2019 at the 10th International Conference on Computational Creativity.
More information: https://thomaswinters.be/talk/2019iccc
Abstract: Many computer systems are becoming increasingly tailored to their users, customizing and optimizing their experience. However, most conversational agents do not follow this trend when it comes to humorous interactions. Instead, they employ pre-written answers regardless of whether the user liked previous similar interactions. While there already exist several computational humor systems that can successfully generate jokes, their joke generation models, parameters or even both are often fixed. In this paper, we propose GOOFER, a general framework for computational humor that learns joke structures and parameterizations from rated example jokes. This framework uses metrical schemas, a new notion we introduce, which are a generalization of several types of other schemas. This new type of schema makes regular schemas compatible with machine learning techniques. We also propose a strategy for identifying useful humor metrics based on humor theory, which can be used as features for the machine learning algorithm. The GOOFER framework uses these novel concepts to construct a pipeline with new components around previous generators. Using a mapping to our previous work on analogy jokes, we show that this framework can not only generate this type of joke well, but also find the importance of specific humor metrics for template values. This indicates that it is on the right track towards joke generation systems that can automatically learn new templates and schemas from rated examples. This work thus forms a stepping stone towards creating programs with a sense of humor that is adaptable to the user.
Generating Dutch Punning Riddles about Current Affairs – Thomas Winters
This talk was presented by Thomas Winters at CLIN29, the conference for Computational Linguistics in the Netherlands in 2019.
More info: https://thomaswinters.be/talk/2019clin
Abstract: Computational humor is a field within computational creativity, pushing computers towards the understanding and generation of one of the quintessentially human aspects of communication, humor. Giving conversational agents the ability to create relevant jokes could help them transform from tools to friends. While existing punning riddle generators (e.g. JAPE and STANDUP) already perform well, they only have limited support for steering the topic and only operate in English. We present MopjesBot, a bot posting series of generated Dutch punning riddles about the news on a daily basis. It generates puns such as "Het is een Belgisch politica en komt tot net boven de enkel: Maggie De Sok". The generator achieves this by first scraping news articles and performing named entity recognition. Afterwards, it chooses the most frequent, still unused, suitable name, detects the syllable boundaries, and employs rhyme dictionaries, unigrams and syllable counters to insert a pun into the chosen name. It then uses online knowledge bases and sentence analysis to describe both the famous person and the inserted word in riddle form. Finally, it posts a subset of the resulting punning riddles about the chosen news-worthy person online for human users to enjoy. We found that this method works well for a large number of popular Belgian figures (e.g. TV personalities, politicians). This research shows how linking several existing NLP tools can lead to generation of humor based on current affairs. By extension, it helps open the path to giving conversational agents a temporally sensitive sense of humor.
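The core pun-insertion step can be sketched as follows. This is a simplified, hypothetical illustration rather than MopjesBot's actual code: a tiny hard-coded rhyme table stands in for real rhyme dictionaries, and it glosses over syllable detection, unigram filtering and the riddle-generation step.

```python
# Tiny hard-coded stand-in for a Dutch rhyme dictionary.
RHYMES = {"block": ["sok", "rok", "klok"]}

def insert_pun(name: str) -> list[str]:
    """Replace the final part of a famous name with a rhyming common noun."""
    *head, last = name.split()
    return [" ".join(head + [noun.capitalize()]) for noun in RHYMES.get(last.lower(), [])]

print(insert_pun("Maggie De Block"))  # ['Maggie De Sok', 'Maggie De Rok', 'Maggie De Klok']
```

The first result matches the example pun from the abstract ("Maggie De Sok"); the riddle text around it would then be generated from knowledge bases describing both the person and the inserted word.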
This is the DeepStochLog presentation, published at AAAI22 (Association for the Advancement of Artificial Intelligence 2022).
Authors: Thomas Winters*, Giuseppe Marra*, Robin Manhaeve, Luc De Raedt
*equal contribution
Code: https://github.com/ml-kuleuven/deepstochlog
Abstract: Recent advances in neural symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural symbolic framework based on stochastic definite clause grammars, a type of stochastic logic program, which defines a probability distribution over possible derivations. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural symbolic learning tasks.
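As a small numerical illustration of "a probability distribution over possible derivations", the toy sketch below multiplies the probabilities of the grammar rules used in one derivation of a tiny stochastic grammar. It is not the DeepStochLog library; in DeepStochLog some of these rule probabilities would be produced by neural networks rather than being fixed numbers.

```python
# Toy stochastic grammar: probability of a derivation = product of its rule probabilities.
rule_probabilities = {
    ("E", ("E", "+", "N")): 0.3,  # E -> E + N
    ("E", ("N",)): 0.7,           # E -> N
    ("N", ("1",)): 0.5,           # N -> 1
    ("N", ("2",)): 0.5,           # N -> 2
}

def derivation_probability(rules_used) -> float:
    p = 1.0
    for rule in rules_used:
        p *= rule_probabilities[rule]
    return p

# One derivation of "1 + 2": E -> E + N -> N + N -> 1 + N -> 1 + 2
print(derivation_probability([
    ("E", ("E", "+", "N")),
    ("E", ("N",)),
    ("N", ("1",)),
    ("N", ("2",)),
]))  # 0.3 * 0.7 * 0.5 * 0.5 = 0.0525
```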
Hoe werken tekstgenerators? (Special Guest in Lieven Scheire's AI voorstelling)
TorfsBot
1. Count, in all of Rik Torfs' texts, how often each word follows the previous few words.
2. Start with Rik's typical opening words & repeatedly pick a random possible next word.
(Slide illustration: a word "followed by" its counted successors, 4: een, 2: zijn, 1: iemand, 1: acht; the example text on the slide starts with "Beste,".)
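A minimal sketch of these two steps (an assumption of how such a Markov-style generator roughly works, not TorfsBot's actual code): count which word follows the previous few words in a small corpus, then generate by repeatedly sampling a next word.

```python
import random
from collections import defaultdict, Counter

ORDER = 2  # "the previous few words"

def train(corpus: list[str]):
    """Step 1: count which word follows each context of ORDER words."""
    counts = defaultdict(Counter)
    for text in corpus:
        words = text.split()
        for i in range(len(words) - ORDER):
            counts[tuple(words[i:i + ORDER])][words[i + ORDER]] += 1
    return counts

def generate(counts, start: tuple[str, ...], max_words: int = 20) -> str:
    """Step 2: start from some opening words and repeatedly sample a possible next word."""
    words = list(start)
    while len(words) < max_words:
        context = tuple(words[-ORDER:])
        if context not in counts:
            break
        successors = counts[context]
        words.append(random.choices(list(successors), weights=successors.values())[0])
    return " ".join(words)

corpus = ["de kerk is een huis van mensen", "de kerk is geen museum"]  # stand-in for Rik Torfs' tweets
counts = train(corpus)
print(generate(counts, ("de", "kerk")))
```

The interpolated Markov models from the ESSLLI talk above extend this idea by combining counts from several context lengths, trading off originality against syntactic correctness.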