Producing a variant of existing code is highly challenging, particularly for individuals unfamiliar with programming. This demonstration introduces a novel use of generative AI to help end-users customize code. We first describe how generative AI can customize code through prompts and instructions, and then demonstrate its potential for building end-user tools that configure code. We showcase how to transform an undocumented, technical, low-level TikZ program into a user-friendly, configurable, Web-based customization tool, written in Python, HTML, CSS, and JavaScript, that is itself configurable. We discuss how generative AI can support this transformation process as well as traditional variability-engineering tasks, such as the identification and implementation of features, the synthesis of a template-based code generator, and the development of end-user configurators. We believe this is a first step towards democratizing variability programming, opening a path for end-users to adapt code to their needs.
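To make the demonstrated pipeline concrete, here is a minimal sketch (not the demo's actual code) of the final stage: a Python generator that renders TikZ variants from a handful of configuration options. The template, option names, and defaults are all hypothetical.

```python
# Minimal sketch of a template-based TikZ generator, as could be synthesized
# by the approach; the template and the options are hypothetical examples.
from string import Template

TIKZ_TEMPLATE = Template(r"""\begin{tikzpicture}
  \draw[fill=$fill_color, thick] (0,0) circle ($radius cm);
  $label_command
\end{tikzpicture}""")

def render_variant(fill_color="blue", radius=1.0, with_label=False):
    """Derive one TikZ variant from a configuration of features."""
    label = r"\node at (0,0) {X};" if with_label else "% no label"
    return TIKZ_TEMPLATE.substitute(fill_color=fill_color,
                                    radius=radius,
                                    label_command=label)

print(render_variant(fill_color="red", radius=2.0, with_label=True))
```

A Web-based configurator then only has to map form widgets (a color picker, a slider, a checkbox) to these parameters.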
OpenGL Fixed Function to Shaders - Porting a fixed function application to "modern" OpenGL - Webinar Mar 2016 (ICS)
Watch the video here: http://bit.ly/1TA24fU
Tips And Tricks For Bioinformatics Software Engineering (jtdudley)
This is a talk I've given twice at Stanford recently. It's essentially a brain dump of my thoughts on being a Bioinformatician with lots of links to useful tools.
Overview
React, Angular, TypeScript… Over the past few years, all these names took part in a fashionable phenomenon, pushed by the big names of the web industry. But what if a parallel world exists beyond this marketed mirror? A better world where you can easily build flexible applications with reusable components.
Objective
Bourre will explain the right philosophy for building flexible applications with reusable components. He will also showcase the right tools to better achieve this goal with the dark side of the force.
Target Audience
Every developer looking for inspiration about code design and ways to push forward the limits of the JS eco-system.
Assumed Audience Knowledge
Developers who have already dealt with scalability and maintainability over a long life cycle.
Five Things Audience Members Will Learn
Build modular and cross-platform applications
Create configurable architectures driven by DSL
Separate configurability from reusability
Design by contract with dependency injection
Use Haxe metaprogramming to boost code performance
Deep Anomaly Detection from Research to Production Leveraging Spark and Tensorflow (Databricks)
Anomaly detection has numerous applications in a wide variety of fields. In banking, with ever-growing heterogeneity and complexity, the difficulty of discovering deviating cases using conventional techniques and scenario definitions is on the rise. In our talk, we'll present an outline of Swedbank's ways of constructing and leveraging scalable pipelines based on Spark and Tensorflow, in combination with an in-house tailor-made platform, to develop, deploy and monitor deep anomaly detection models. In summary, this talk will present Swedbank's approach to building, unifying and scaling an end-to-end solution using large amounts of heterogeneous imbalanced data. The talk includes sections on the following topics: feature engineering (transactions2vec); anomaly detection and its applications in banking; deep anomaly detection methods (Deep SVDD and generative adversarial networks), with a model overview and code snippets in the Tensorflow estimator API; and model deployment, an overview of how the different puzzle pieces outlined above are put together and operationalized to create an end-to-end deployment.
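As a rough illustration of the Deep SVDD method mentioned above (a sketch of the idea, not Swedbank's code): map inputs to a latent space, fix a center from normal data, and score anomalies by their squared distance to that center. A frozen random projection stands in for the trained deep encoder.

```python
# Deep SVDD idea in miniature: anomaly score = squared distance of the
# latent representation phi(x) to a center c fitted on normal data.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))            # stand-in "encoder": 10-d input -> 4-d latent

def encode(X):
    return np.tanh(X @ W)               # phi(x); a real setup trains a deep net

X_train = rng.normal(size=(500, 10))    # mostly normal transactions
c = encode(X_train).mean(axis=0)        # hypersphere center from normal data

def svdd_score(X):
    """Anomaly score: squared distance of phi(x) to the center c."""
    return ((encode(X) - c) ** 2).sum(axis=1)

print(svdd_score(rng.normal(size=(1, 10))))      # in-distribution: small score
print(svdd_score(8 * rng.normal(size=(1, 10))))  # outlier: larger score
```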
Takes the reader through the various components of windowing systems, and how to develop and benchmark various Graphics applications using OpenGL and other toolsets. Also includes a Cheatsheet that covers various terminologies used in the Graphics world.
The migration and reengineering of existing variants into a software product line (SPL) is an error-prone and time-consuming activity. Many extractive approaches have been proposed, spanning different activities from feature identification and naming to the synthesis of reusable artefacts. In this paper, we explore how large language model (LLM)-based assistants can support domain analysts and developers. We revisit four illustrative cases from the literature where the challenge is to migrate variants written in different formalisms (UML class diagrams, Java, GraphML, statecharts). We systematically report on our experience with ChatGPT-4, describing our strategy for prompting LLMs and documenting positive aspects but also failures. We compare the use of LLMs with a state-of-the-art approach, BUT4Reuse. While LLMs offer potential in assisting domain analysts and developers in transitioning software variants into SPLs, their intrinsic stochastic nature and restricted ability to manage large variants or complex structures necessitate a semi-automatic approach, complete with careful review, to counteract inaccuracies.
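The prompting strategy can be pictured with a sketch like the following; the wording is illustrative, not the authors' exact prompt.

```python
# Sketch of a feature-identification prompt over two variants. The resulting
# string would be sent to a chat model such as GPT-4; the template is an
# illustrative assumption, not the paper's actual prompt.
FEATURE_ID_PROMPT = """\
You are a domain analyst migrating software variants into a product line.
Below are two variants of the same program. List the features you identify,
give each a short name, and indicate which variant(s) implement it.

Variant A:
{variant_a}

Variant B:
{variant_b}
"""

def build_feature_id_prompt(variant_a: str, variant_b: str) -> str:
    """Build the prompt text for one pair of variants."""
    return FEATURE_ID_PROMPT.format(variant_a=variant_a, variant_b=variant_b)

print(build_feature_id_prompt("class Car { Engine e; }",
                              "class Car { Engine e; Gps gps; }"))
```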
Simplifying observability for Serverless projects (Luciano Mammino)
Have you ever considered that your lambda functions might fail without you even noticing? If the answer is "yes", it's probably because you've already been "burned" playing with the cloud, where errors and failures are always around the corner. Unfortunately we cannot prevent every failure, but we can be notified when something goes wrong so that we can react promptly. But how do we configure our AWS environment to reach a good level of "observability"? If you have already tried to use CloudWatch, you know how complex it can be. In this talk, we will explore the topic of observability for Serverless applications on AWS. We will discuss problems and best practices. Finally, I will propose a tool that automates the CloudWatch configuration for 80% of the needs in a few minutes!
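As a taste of what such automation has to produce (a hedged sketch, not the tool presented in the talk), here is a CloudWatch alarm on Lambda errors set up via boto3; the function and alarm names are placeholders.

```python
# Sketch: alert whenever a Lambda function reports any error.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="my-function-errors",            # placeholder name
    Namespace="AWS/Lambda",                    # built-in Lambda metrics
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    Statistic="Sum",
    Period=60,                                 # 1-minute evaluation windows
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",           # no invocations is not a failure
)
```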
The development of modern IDEs is still a challenging and time-consuming task, which requires implementing the support for language-specific features such as syntax highlighting or validation.
When the IDE targets a graphical language, its development becomes even more complex due to the rendering and manipulation of the graphical notation symbols.
To simplify the development of IDEs, the Language Server Protocol (LSP) proposes a decoupled approach based on language-agnostic clients and language-specific servers.
LSP clients communicate changes to LSP servers, which validate and store language instances.
However, LSP only addresses textual languages (i.e., character as atomic unit) and neglects the support for graphical ones (i.e., nodes/edges as atomic units).
In this paper, we present our vision to decouple graphical language IDEs, discussing the alternatives for integrating LSP's ideas into their development. Moreover, we propose a novel LSP infrastructure to simplify the development of new graphical modeling tools, in which Web technologies may be used for editor front-ends while leveraging existing modeling frameworks to build language servers.
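For concreteness, here is a sketch of the protocol asymmetry the paper addresses. The first message is a standard (textual) LSP notification; the second is a hypothetical graphical analogue where the atomic units are nodes and edges rather than characters; LSP itself defines no such message.

```python
# Real LSP notification: changes expressed over characters of a text document.
text_did_change = {
    "jsonrpc": "2.0",
    "method": "textDocument/didChange",        # standard LSP method
    "params": {
        "textDocument": {"uri": "file:///model.dsl", "version": 2},
        "contentChanges": [{"text": "node A -> node B"}],
    },
}

# Hypothetical graphical counterpart, for illustration only: changes
# expressed over nodes/edges, the atomic units of a graphical language.
graph_did_change = {
    "jsonrpc": "2.0",
    "method": "graphDocument/didChange",       # invented method name
    "params": {
        "graphDocument": {"uri": "file:///model.graph", "version": 2},
        "changes": [{"op": "addEdge", "source": "A", "target": "B"}],
    },
}
```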
Are you considering a microservices architecture for your project? In these slides I've collected a list of aspects to take into account when approaching the microservices world.
These slides were presented at the Microscope event held in Madrid on Jan 1st, 2016.
Modern Workplace Conference: create an immersive experience with Office 365 ... (Alexander Meijers)
Think of provisioning information on real-life objects, or strewn through cloud data, like persons, related contacts, documents and other items. This allows you to build rich applications containing information you normally process in a 2D world, like your browser. By extending it to a 3D world, you are able to process the data in a completely different way. Think of creating teams of people within your organization and grouping them based on specialties, getting a clearer inside view of your site structure in SharePoint, or having a 3D model of the Microsoft Graph entities and related objects.
In the field of touch-based interaction, many people use touch phones, keypads, etc., whose main drawback is the touch screen itself. To overcome this problem, we develop a project based on a touchless device that is used to access and process data with minimal time complexity, optimizing the sourced data. Our project uses a marker to highlight the required key term. The touchless system detects both the size and the location of the marker's gestures for writing purposes, while the camera plays the key role of selecting the object, i.e., an image, for processing. We instruct the camera to identify gestures through the Touchless SDK, reducing manual effort.
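A rough sketch of the marker-tracking step described above, written here with OpenCV (our assumption for illustration; the project itself relies on the Touchless SDK). The HSV colour range is a placeholder to be tuned for the actual marker.

```python
# Track a coloured marker: threshold each camera frame, then recover the
# marker's location and size from the largest contour.
import cv2

cap = cv2.VideoCapture(0)                     # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 120, 70), (130, 255, 255))  # blue-ish marker
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)      # marker location and size
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("marker", frame)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```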
The Next Mainstream Programming Language: A Game Developer's Perspective (guest4fd7a2)
Tim Sweeney's talk at the Symposium on Principles of Programming Languages 2006. Tim is the founder of Epic Games and the lead architect of the Unreal series of engines.
Consequences of using the Copy-Paste method in C++ programming and how to deal with it (Andrey Karpov)
I develop the PVS-Studio analyzer, which detects errors in the source code of C/C++/C++0x software. As a result, I review large amounts of source code from various applications in which PVS-Studio has flagged suspicious fragments. I have collected many examples demonstrating that an error occurred because a code fragment was copied and then modified. Of course, it has been known for a long time that using Copy-Paste in programming is a bad thing. But let's try to investigate this problem closely instead of limiting ourselves to just saying "do not copy the code".
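A typical specimen of the bug family the article targets, transposed here to Python (the example is ours, not taken from the article): a branch copied from another and left incompletely renamed.

```python
# Classic copy-paste defect: the y-branch was copied from the x-branch,
# but one occurrence of `x` was never renamed to `y`.
def clamp_point(x, y, width, height):
    if x < 0:
        x = 0
    elif x >= width:
        x = width - 1
    if y < 0:
        y = 0
    elif x >= height:        # BUG: copied condition, should test `y`
        y = height - 1
    return x, y
```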
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
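As a toy illustration of the sampling strategies mentioned above (a sketch under invented option names, not a real benchmark), one can draw configurations uniformly at random, measure each variant, and keep the best observed:

```python
# Uniform random sampling of a boolean configuration space, with a stubbed
# measurement function standing in for building/running the variant.
import random

OPTIONS = ["opt_O3", "lto", "debug", "threads_4"]   # hypothetical options
rng = random.Random(42)

def sample_config():
    """Draw one configuration uniformly at random."""
    return {opt: rng.random() < 0.5 for opt in OPTIONS}

def measure(config):
    """Stub: build and run the variant, return a property (e.g., runtime)."""
    return sum(config.values()) + rng.random()      # fake measurement

results = [(cfg, measure(cfg)) for cfg in (sample_config() for _ in range(30))]
best_cfg, best_val = min(results, key=lambda cw: cw[1])
print("best sampled configuration:", best_cfg, best_val)
```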
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
Slides presented at MODEVAR 2024 co-located with VaMoS 2024: https://modevar.github.io/program/ Thanks to Jessie Galasso and Chico Sundermann for the invitation Abstract: "Variability models (e.g., feature models) are widely considered in software engineering research and practice, in order to develop software product lines, configurable systems, generators, or adaptive systems. Numerous success stories of variability models have been reported in different domains (e.g., automotive, aerospace, avionics) and the applicability broadens. However, the use of variability models is not yet universally adopted… Why? Some examples: variability models are not continuously extracted from projects and artefacts; each time a variability model is used, its expressiveness and language should be specialized; learning-based models are decoupled from variability models; modeling software variability should cover different layers and their interactions, etc. The list is arguably opinionated, incomplete, and based on my own practical experience, but also on observations of existing works. I will give 24 reasons to occupy your day as a researcher or practitioner in the field of variability modeling."
Slides: https://github.com/acherm/24RWVMANYU-VaMoS-MODEVAR/blob/main/vamos2024.md (Markdown content) PDF: https://github.com/acherm/24RWVMANYU-VaMoS-MODEVAR/blob/main/VaMoS2024-MODEVAR.pdf (slides in PDF format)
Programming variability is central to the design and implementation of software systems that can adapt to a variety of contexts and requirements, providing increased flexibility and customization. Managing the complexity that arises from having multiple features, variations, and possible configurations is known to be highly challenging for software developers. In this paper, we explore how large language model (LLM)-based assistants can support the programming of variability. We report on new approaches made possible with LLM-based assistants, such as: implementing features and variations as prompts; augmenting variability with LLM-based domain knowledge; and seamlessly implementing variability in different kinds of artefacts, programming languages, and frameworks, at different binding times (compile-time or run-time). We are sharing our data (prompts, sessions, generated code, etc.) to support the assessment of the effectiveness and robustness of LLMs for variability-related tasks.
https://inria.hal.science/hal-04153310/
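The "features and variations as prompts" idea can be sketched as follows (illustrative only; the actual prompts are in the shared data):

```python
# A variation point expressed as a prompt template: feature flags select
# what the LLM is asked to generate, and at which binding time variability
# must be resolved. Feature names are invented for illustration.
def variant_prompt(language: str, with_logging: bool, binding: str) -> str:
    features = ["core sorting routine"]
    if with_logging:
        features.append("logging of each comparison")
    return (
        f"Generate a {language} implementation with the following features: "
        + "; ".join(features)
        + f". Variability must be resolved at {binding}."  # compile-time or run-time
    )

print(variant_prompt("Python", with_logging=True, binding="run-time"))
```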
talk @ DiverSE coffee (informal, work in progress)
abstract: Cheating in chess is a serious issue, mainly due to the use of chess engines during play.
Chess engines like Stockfish or AlphaZero-like variants can give the cheater a decisive advantage, since near-perfect moves can be played.
Already in 2006, Topalov accused Kramnik of cheating during the world championship.
The Sébastien Feller case (https://en.wikipedia.org/wiki/S%C3%A9bastien_Feller) at the 2010 Chess Olympiad was a shock.
With the rise of online games and $$$, cheating is even more problematic.
A few weeks ago, Magnus Carlsen (https://en.wikipedia.org/wiki/Magnus_Carlsen) accused Hans Niemann (HN) of cheating over the board and refused to ever play him again.
You may have seen headlines about an "anal bead" supposedly helping HN.
In fact, I'm not specifically aiming to talk about chess (and cheating).
I rather want to discuss how science has been (and will be) at the heart of the anti-cheating chess problem.
I will first argue that many people (chess hobbyists/experts, data nerds, etc.), most of them non-scientists, have actually done science in trying to demonstrate or refute the cheating case.
The basic idea is to confront moves played by humans (players) with those of computer engines.
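That idea can be sketched with the python-chess library: replay a game and count how often the played move matches the engine's first choice. The depth and the "stockfish" binary path are placeholders, and serious analyses also control for forced moves, opening theory, etc.

```python
# Move-matching sketch: fraction of moves in a PGN game that coincide with
# the engine's top choice at a fixed depth.
import chess
import chess.engine
import chess.pgn

def engine_match_rate(pgn_path: str, depth: int = 18) -> float:
    """Fraction of all moves in the game matching the engine's choice."""
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    board = game.board()
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path placeholder
    matches = total = 0
    try:
        for move in game.mainline_moves():
            best = engine.play(board, chess.engine.Limit(depth=depth)).move
            matches += (move == best)
            total += 1
            board.push(move)
    finally:
        engine.quit()
    return matches / total if total else 0.0
```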
With the sharing of data (analyses of chess games like those played by HN), scripts, and methods, numerous results and conclusions have emerged, popularized through social media (Twitch, YouTube, Twitter, etc.).
On the one hand, I've been quite excited to see all this energy put into trying to advance our understanding and propose interesting ideas/analyses.
On the other hand, there have been failures in the quality of some analyses, or in the choice of closed systems computing unclear metrics.
In between, there have been a report by chess.com and the analyses of the computer scientist Ken Regan (https://cse.buffalo.edu/~regan/), the world-renowned specialist.
I still think the problem is open (e.g., Regan's method is too conservative and misses many cases; chess.com's methodology, though unclear and opaque at some points, is certainly effective for online cheating detection, but not for over-the-board detection).
I will present a variability model of the space of experiments/methods that can be considered to address the problem.
This model can be used to pilot the collaborative effort, to reproduce, replicate or reject some experiments, and to gain confidence or robustness in some conclusions.
Another last point I want to discuss is that most probably the anti-cheating chess problem cannot be resolved solely with retrospective computational analysis.
It's just too uncertain, especially if cheaters are "smart".
(Cyber-)security experts, psychologists, chess players, and of course computer science nerds/professionals can contribute to address this multidisciplinary problem.
preprint: https://hal.inria.fr/hal-03720273
presented at SPLC 2022 research track
Linux kernels are used in a wide variety of appliances, many of them having strong requirements on kernel size due to constraints such as limited memory or instant boot. With more than nine thousand configuration options to choose from, developers and users of Linux actually spend significant effort to document, understand, and eventually tune (combinations of) options to meet a target kernel size. In this paper, we describe a large-scale endeavour automating this task and predicting the binary size of a given Linux kernel out of unmeasured configurations. We first show experimentally that state-of-the-art solutions specifically made for configurable systems, such as performance-influence models, cannot cope with that number of options, suggesting that software product line techniques may need to be adapted to such huge configuration spaces. We then show that tree-based feature selection can learn a model achieving low prediction errors over a reduced set of options. The resulting model, trained on 95,854 kernel configurations, is fast to compute, simple to interpret and even outperforms the accuracy of learning without feature selection.
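A sketch of that recipe on synthetic data (a stand-in for the 95,854 measured kernel configurations), using scikit-learn's tree-based feature selection; the model choice here is illustrative, not the paper's exact pipeline.

```python
# Tree-based feature selection over a large boolean option space, followed
# by a regressor trained only on the options that were kept.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 500))        # configs over 500 boolean options
y = X[:, :10] @ rng.uniform(1, 5, 10) + rng.normal(0, 0.1, 2000)  # few options matter

selector = SelectFromModel(
    GradientBoostingRegressor(n_estimators=100), threshold="mean").fit(X, y)
X_sel = selector.transform(X)                   # reduced option set
model = GradientBoostingRegressor(n_estimators=200).fit(X_sel, y)
print("options kept:", X_sel.shape[1], "R^2:", model.score(X_sel, y))
```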
Keynote VariVolution/VM4ModernTech@SPLC 2022
At compile-time or at runtime, varying software is a powerful means to achieve optimal functional and performance goals. An observation is that considering only the software layer might be naive when tuning the performance of the system or testing that the functionality behaves correctly. In fact, many layers (hardware, operating system, input data, build process, etc.), themselves subject to variability, can alter the performance of software configurations. For instance, configuration options may have very different effects on execution time or energy consumption when used with different input data, depending on the way the system has been compiled and the hardware on which it is executed.
In this talk, I will introduce the concept of “deep software variability” which refers to the interactions of all external layers modifying the behavior or non-functional properties of a software system. I will show how compile-time options, inputs, and software evolution (versions), some dimensions of deep variability, can question the generalization of the variability knowledge of popular configurable systems like Linux, gcc, xz, or x264.
I will then argue that machine learning (ML) is particularly suited to managing very large variant spaces. The key idea of ML is to build a model based on sample data -- here, observations about software variants in variable settings -- in order to make predictions or decisions. I will review state-of-the-art solutions developed in software engineering and software product line engineering while connecting with works in ML (e.g., transfer learning, dimensionality reduction, adversarial learning). Overall, the key challenge is to leverage the right ML pipeline in order to harness all variability layers (and not only the software layer), leading to more efficient systems and variability knowledge that truly generalizes to any usage and context.
From this perspective, we are starting an initiative to collect data, software, reusable artefacts, and body of knowledge related to (deep) software variability: https://deep.variability.io
Finally, I will open a broader discussion on how machine learning and deep software variability relate to the reproducibility, replicability, and robustness of scientific, software-based studies (e.g., in neuroimaging and climate modelling).
Software has eaten the world and will continue to impact society, spanning numerous application domains, including the way we're doing science and acquiring knowledge.
Software variability is key since developers, engineers, entrepreneurs, and scientists can explore different hypotheses, methods, and custom products to hopefully fit a diversity of needs and usages.
In this talk, I will first illustrate the importance of software variability using different concrete examples.
I will then show that, though highly desirable, software variability also introduces an enormous complexity due to the combinatorial explosion of possible variants.
For example, 10^6000 variants of the Linux kernel may be built, far more than the estimated number of atoms in the universe (10^80).
Hence, the big picture and challenge for the audience: any organization capable of mastering software variability will have a competitive advantage.
Summer School EIT Digital, Rennes, 5th July 2022.
With large-scale and complex configurable systems, it is hard for users to choose the right combination of options (i.e., configurations) in order to obtain the wanted trade-off between functionality and performance goals such as speed or size. Machine learning can help in relating these goals to the configurable system options, and thus predict the effect of options on the outcome, typically after a costly training step. However, many configurable systems evolve at such a rapid pace that it is impractical to retrain a new model from scratch for each new version. In this paper, we propose a new method to enable transfer learning of binary size predictions among versions of the same configurable system. Taking the extreme case of the Linux kernel with its ≈14,500 configuration options, we first investigate how predictions of kernel binary size degrade over successive versions. We show that the direct reuse of an accurate prediction model from 2017 quickly becomes inaccurate as Linux evolves, up to a 32% mean error by August 2020. We thus propose a new approach for transfer evolution-aware model shifting (TEAMS). It leverages the structure of a configurable system to transfer an initial predictive model towards its future versions with a minimal amount of extra processing for each version. We show that TEAMS vastly outperforms state-of-the-art approaches over the 3-year history of Linux kernels, from 4.13 to 5.8.
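The model-shifting idea can be pictured as follows (a toy reconstruction, not the TEAMS implementation): reuse a model trained on version N and learn only a cheap correction from a few measured configurations of version N+1.

```python
# Transfer by model shifting: keep the expensive model for version N and fit
# a linear correction from its predictions to version N+1 measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X_old = rng.integers(0, 2, size=(5000, 50))              # synthetic configs
w = rng.uniform(0, 3, 50)
y_old = X_old @ w                                        # "size" on version N
old_model = DecisionTreeRegressor(max_depth=10).fit(X_old, y_old)

X_new = rng.integers(0, 2, size=(100, 50))               # few samples of N+1
y_new = X_new @ (w * 1.1) + 4.0                          # version has drifted
shift = LinearRegression().fit(
    old_model.predict(X_new).reshape(-1, 1), y_new)      # correct old predictions

def predict_new(X):
    """Prediction for version N+1: shifted output of the version-N model."""
    return shift.predict(old_model.predict(X).reshape(-1, 1))
```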
Biology, medicine, physics, astrophysics, chemistry: all these scientific domains need to process large amounts of data with more and more complex software systems. For achieving reproducible science, there are several challenges ahead involving multidisciplinary collaboration and socio-technical innovation with software at the center of the problem. Despite the availability of data and code, several studies report that the same data analyzed with different software can lead to different results. I see this problem as a manifestation of deep software variability: many factors (operating system, third-party libraries, versions, workloads, compile-time options and flags, etc.), themselves subject to variability, can alter the results, up to the point that they can dramatically change the conclusions of some scientific studies. In this keynote, I argue that deep software variability is both a threat and an opportunity for reproducible science. I first outline some works about (deep) software variability, reporting preliminary evidence of complex interactions between variability layers. I then link the ongoing works on variability modelling and deep software variability in the quest for reproducible science.
Most modern software systems are subject to variation or come in many variants. Web browsers like Firefox or Chrome are available on different operating systems and in different languages, while users can configure 2000+ preferences or install numerous third-party extensions (or plugins). Web servers like Apache, operating systems like the Linux kernel, or a video encoder like x264 are other examples of software systems that are highly configurable at compile-time or at run-time for delivering the expected functionality and meeting the various desires of users. Variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") is therefore a crucial property of software systems. Organizations capable of mastering variability can deliver high-quality variants (or products) in a short amount of time and thus attract numerous customers, new use-cases or usage contexts. A hard problem for end-users or software developers is to master the combinatorial explosion induced by variability: hundreds of configuration options can be combined, each potentially with distinct functionality and effects on execution time, memory footprint, quality of the result, etc. The first part of this course will introduce variability-intensive systems, their applications and challenges, in various software contexts. We will use intuitive examples (like a generator of LaTeX paper variants) and real-world systems (like the Linux kernel). A second objective of this course is to show the relevance of Artificial Intelligence (AI) techniques for exploring and taming such enormous variability spaces. In particular, we will introduce how (1) satisfiability and constraint programming solvers can be used to properly model and reason about variability; and (2) how machine learning can be used to discover constraints and predict the variability behavior of configurable systems or software product lines.
http://ejcp2019.icube.unistra.fr/
Presented at SIGCSE 2018: https://sigcse2018.sigcse.org/
Software Product Line (SPL) engineering has emerged to provide the means to efficiently model, produce, and maintain multiple similar software variants, exploiting their common properties, and managing their variabilities (differences). With over two decades of existence, the community of SPL researchers and practitioners is thriving as can be attested by the extensive research output and the numerous successful industrial projects. Education has a key role to support the next generation of practitioners to build highly complex, variability-intensive systems. Yet, it is unclear how the concepts of variability and SPLs are taught, what are the possible missing gaps and difficulties faced, what are the benefits, or what is the material available. Also, it remains unclear whether scholars teach what is actually needed by industry. In this article we report on three initiatives we have conducted with scholars, educators, industry practitioners, and students to further understand the connection between SPLs and education, i.e., an online survey on teaching SPLs we performed with 35 scholars, another survey on learning SPLs we conducted with 25 students, as well as two workshops held at the International Software Product Line Conference in 2014 and 2015 with both researchers and industry practitioners participating. We build upon the two surveys and the workshops to derive recommendations for educators to continue improving the state of practice of teaching SPLs, aimed at both individual educators as well as the wider community.
Please find the pre-print online:
https://hal.inria.fr/hal-01334851
Feature models are widely used to encode the configurations of a software product line in terms of mandatory, optional and exclusive features as well as propositional constraints over the features. Numerous computationally expensive procedures have been developed to model check, test, configure, debug, or compute relevant information of feature models. In this paper we explore the possible improvement of relying on the enumeration of all configurations when performing automated analysis operations. The key idea is to pre-compile configurations so that reasoning operations (queries and transformations) can then be performed in polytime. We tackle the challenge of how to scale the existing enumeration techniques. We show that the use of distributed computing techniques might offer practical solutions to previously unsolvable problems and opens new perspectives for the automated analysis of software product lines.
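A toy sketch of the idea: pay once to enumerate all legal configurations, then answer analysis queries with simple scans. Features and constraints are invented for illustration.

```python
# Pre-compiled enumeration of a tiny feature model; queries such as model
# counting, dead features and core features become linear scans.
from itertools import product

FEATURES = ["base", "gui", "cli", "themes"]

def is_legal(cfg):
    # mandatory root; gui and cli exclusive; themes requires gui
    return cfg["base"] and (cfg["gui"] != cfg["cli"]) and \
           (not cfg["themes"] or cfg["gui"])

ALL_CONFIGS = [cfg for cfg in
               (dict(zip(FEATURES, bits))
                for bits in product([False, True], repeat=len(FEATURES)))
               if is_legal(cfg)]

count = len(ALL_CONFIGS)                                   # model counting
dead = [f for f in FEATURES if not any(c[f] for c in ALL_CONFIGS)]
core = [f for f in FEATURES if all(c[f] for c in ALL_CONFIGS)]
print(count, "configurations; dead:", dead, "core:", core)
```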
Product derivation is a key activity in Software Product Line Engineering. During this process, derivation operators modify or create core assets (e.g., model elements, source code instructions, components) by adding, removing or substituting them according to a given configuration. The result is a derived product that generally needs to conform to a programming or modeling language. Some operators lead to invalid products when applied to certain assets, while others do not; knowing this in advance can help to use them better, but it is challenging, especially if we consider assets expressed in extensive and complex languages such as Java. In this paper, we empirically answer the following question: which product line operators, applied to which program elements, can synthesize variants of programs that are incorrect, correct or perhaps even conforming to test suites? We implement source code transformations based on the derivation operators of the Common Variability Language. We automatically synthesize more than 370,000 program variants from a set of 8 real large Java projects (up to 85,000 lines of code), obtaining an extensive panorama of the sanity of the operations.
Paper was presented at SPLC'15
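For intuition, here is a sketch of a derivation operator in this spirit, transposed to Python's ast module (the paper itself operates on Java with CVL-based operators): deriving a variant by removing a named function.

```python
# A removal operator: derive a program variant by deleting one function.
import ast

def remove_function(source: str, name: str) -> str:
    """Derive a variant of `source` without the function `name`."""
    class Remover(ast.NodeTransformer):
        def visit_FunctionDef(self, node):
            return None if node.name == name else node
    tree = Remover().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)                 # requires Python 3.9+

program = "def pay(): ...\ndef log(): ...\n"
print(remove_function(program, "log"))       # variant without the log feature
```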
Many real-world product lines are only represented as non-hierarchical collections of distinct products, described by their configuration values. As the manual preparation of feature models is a tedious and labour-intensive activity, some techniques have been proposed to automatically generate boolean feature models from product descriptions. However, none of these techniques is capable of synthesizing feature attributes and relations among attributes, despite the huge relevance of attributes for documenting software product lines. In this paper, we introduce for the first time an algorithmic and parametrizable approach for computing a legal and appropriate hierarchy of features, including feature groups, typed feature attributes, domain values and relations among these attributes. We have performed an empirical evaluation by using both randomized configuration matrices and real-world examples. The initial results of our evaluation show that our approach can scale up to matrices containing 2,000 attributed features, and 200,000 distinct configurations in a couple of minutes.
Paper has been presented at SPLC'15 (Nashville, USA)
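One ingredient such synthesis relies on can be sketched simply: mining implications f => g from a configuration matrix, which constrain the legal feature hierarchies. The matrix and feature names below are invented.

```python
# Mine binary implications (f => g) holding in every configuration row.
FEATURES = ["car", "engine", "electric", "gearbox"]
MATRIX = [  # one row per product configuration
    {"car": 1, "engine": 1, "electric": 0, "gearbox": 1},
    {"car": 1, "engine": 1, "electric": 1, "gearbox": 0},
    {"car": 1, "engine": 1, "electric": 1, "gearbox": 1},
]

def implications(matrix, features):
    """All pairs (f, g), f != g, such that f => g in every configuration."""
    return [(f, g)
            for f in features for g in features if f != g
            if all(not row[f] or row[g] for row in matrix)]

for f, g in implications(MATRIX, FEATURES):
    print(f"{f} => {g}")   # e.g., electric => engine, a candidate parent
```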
Variability is omnipresent in numerous kinds of artefacts (e.g., source code, product matrices) and in different shapes (e.g., conditional compilation, differences among product descriptions). For understanding, reasoning about, maintaining or evolving variability, practitioners usually need an explicit encoding of variability (i.e., a variability model). As a result, numerous techniques have been developed to reverse engineer variability (e.g., through the mining of features and constraints from source code) or for migrating a set of products as a variability system. In this talk we will first present tool-supported techniques for synthesizing variability models from constraints or product descriptions.
Practitioners can build Boolean feature models with an interactive environment for selecting a meaningful and sound hierarchy.
Attributes can also be synthesized for encoding numerical values and constraints among them.
We will present key results we obtain through experiments with the SPLOT repository and product comparison matrices coming from Wikipedia and BestBuy.
Finally we will introduce OpenCompare.org, a recent initiative for editing, reasoning about, and mining product comparison matrices.
This talk was given at the REVE'15 workshop co-located with SPLC'15 (the software product line conference): http://www.isse.jku.at/reve2015/program.html
Pandoc (http://pandoc.org/) is a universal document converter. Taking as input Markdown/Wiki-like syntax, you can obtain an .odt, .docx, .tex, or even Beamer slides (like here)!
In this presentation we introduce Pandoc.
We then zoom in on some interesting technical details (parameters, template language, the ability to embed LaTeX into Markdown, ways to rewrite your AST with external bindings, etc.).
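For example, a minimal invocation driven from Python; the flags shown are standard Pandoc options, while the file names and the Beamer theme are examples of ours.

```python
# Convert Markdown to Beamer slides by shelling out to pandoc.
import subprocess

subprocess.run(
    ["pandoc", "talk.md",            # Markdown input
     "-t", "beamer",                 # target format: Beamer slides
     "-V", "theme=metropolis",       # variable passed to the template
     "-o", "talk.pdf"],              # output file
    check=True,
)
```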
We believe Pandoc is a very useful piece of software that is worth studying (including for education/research purposes).
The slides have been written by Erwan Bousse (http://people.irisa.fr/Erwan.Bousse/) in Pandoc (of course!) and presented in the context of the DiverSE coffee (a weekly meeting of the DiverSE team: http://diverse.irisa.fr).
External or internal domain-specific languages (DSLs) or (fluent) APIs? Whoever you are – a developer or a user of a DSL – you usually have to choose a side; you should not! What about metamorphic DSLs that change their shape according to your needs? Our 4-year journey of providing the "right" support (in the domain of feature modeling) led us to develop an external DSL, different shapes of an internal API, and to maintain all these languages. A key insight is that there is no one-size-fits-all solution and no clear superiority of one solution compared to another. On the contrary, we found that it does make sense to continue the maintenance of both an external and an internal DSL. Based on our experience and on an analysis of the DSL engineering field, the vision that we foresee for the future of software languages is their ability to self-adapt to the most appropriate shape (including the corresponding integrated development environment) according to a particular usage or task. We call such a language, able to change from one shape to another, a metamorphic DSL.
The talk has been presented at SPLASH conference in Portland (USA), Onward! Essays track.
Paper is here: https://hal.archives-ouvertes.fr/hal-01061576/fr
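To picture the two shapes the essay contrasts, here is the same toy feature model as an external DSL (text) and as an internal fluent API; both notations are invented for illustration, not any tool's actual syntax.

```python
# External shape: a textual DSL that a dedicated parser would process.
EXTERNAL_DSL = """
featuremodel car
  mandatory engine
  optional gps
  xor gearbox: manual, automatic
"""

# Internal shape: a fluent builder API embedded in the host language.
class FeatureModel:
    def __init__(self, root):
        self.root, self.children = root, []
    def mandatory(self, name):
        self.children.append(("mandatory", name)); return self
    def optional(self, name):
        self.children.append(("optional", name)); return self
    def xor(self, name, *alternatives):
        self.children.append(("xor", name, alternatives)); return self

fm = (FeatureModel("car")
      .mandatory("engine")
      .optional("gps")
      .xor("gearbox", "manual", "automatic"))
```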
3D printing is gaining more and more momentum to build customized products in a wide variety of fields. We conduct an exploratory study of Thingiverse, the most popular website for sharing user-created 3D design files, in order to establish a possible connection with software product line (SPL) engineering. We report on the socio-technical aspects and current practices for modeling variability, implementing variability, configuring and deriving products, and reusing artefacts. We provide hints that SPL-like techniques are practically used in 3D printing and are thus relevant. Finally, we discuss why customization in the 3D printing field represents a challenging playground for SPL engineering.
Feature Models (FMs) are the de-facto standard for documenting, model checking, and reasoning about the configurations of a software system. This paper introduces WebFML, a comprehensive environment for synthesizing FMs from various kinds of artefacts (e.g., propositional formulas, dependency graphs, FMs or product comparison matrices). A key feature of WebFML is an interactive support (through ranking lists, clusters, and logical heuristics) for choosing a sound and meaningful hierarchy. WebFML opens avenues for numerous practical applications (e.g., merging multiple product lines, slicing a configuration process, reverse engineering configurable systems).
tinyurl.com/WebFMLDemo
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. (Juraj Vysvader)
I didn't get rich from it, but the extensions had 63K downloads (and powered possibly tens of thousands of websites).
Exploring Innovations in Data Repository Solutions - Insights from the U.S. Geological Survey (Globus)
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
May Marketo Masterclass, London MUG, May 22 2024 (Adele Miller)
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Understanding Globus Data Transfers with NetSage (Globus)
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Paketo Buildpacks: the best way to build OCI images? DevopsDa... (Anthony Dahanne)
Buildpacks have been around for more than 10 years! At first, they were used to detect and build an application before deploying it on certain PaaS. Then, with their latest generation, the Cloud Native Buildpacks (a CNCF incubating project), we could create Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
SOCRadar Research Team: Latest Activities of IntelBroker (SOCRadar)
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar's Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Navigating the Metaverse: A Journey into Virtual Evolution"Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms."
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
A Comprehensive Look at Generative AI in Retail App Testing.pdfkalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
3. Generative AI and variability: rising interests!
Mathieu Acher and Jabier Martinez. 2023. Generative AI for Reengineering Variants into Software Product Lines: An Experience Report. VariVolution@SPLC 2023
Mathieu Acher, Jean-Marc Jézéquel, and José A. Galindo. 2023. On Programming Variability with Large Language Model-based Assistant. SPLC 2023
Sandra Greiner, Klaus Schmid, Thorsten Berger, Sebastian Krieter, and Kristof Meixner. Generative AI and Software Variability – A Research Vision. VaMoS 2024
Galindo et al. Large Language Models to Generate Meaningful Feature Model Instances. SPLC 2023
4. LLM
Hypothesis: Large language models (LLMs) act as a new variability compiler, capable of transforming a high-level specification (the “prompt”) into variable code, features, generators, configurable systems, etc., written in a given technological space.
Motto: “features as prompts”
Mathieu Acher, Jean-Marc Jézéquel, and José A. Galindo. 2023. On Programming Variability with Large Language Model-based Assistant. SPLC 2023
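To make the “features as prompts” motto concrete, here is a minimal sketch in Python. Everything in it is illustrative: llm_complete is a hypothetical placeholder for whatever LLM client you use (no vendor API is assumed), and squid.tikz is a hypothetical file holding the original TikZ code.

# Minimal "features as prompts" sketch.
# llm_complete is a hypothetical placeholder: wire up any LLM client here.
def llm_complete(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its completion."""
    raise NotImplementedError("plug in an LLM client of your choice")

FEATURE_SPEC = """You act as a variability compiler.
Given the TikZ code below, synthesize a Python/Jinja2 template exposing two features:
- blue_eye (Boolean): render the squid's eye in blue instead of the original colors
- random_spots (Boolean): draw the spots in a random color; otherwise keep the default
Return only the template."""

with open("squid.tikz") as f:   # hypothetical file with the original TikZ code
    tikz_source = f.read()

print(llm_complete(FEATURE_SPEC + "\n\n" + tikz_source))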
5. LLM
“End-user software engineering focuses on empowering individuals who are not professional programmers to design, implement, and maintain software applications.”
Two scenarios:
● (1) end-users can implement variability with the assistance of LLMs;
● (2) LLMs can be used to construct specialized, configurable tools for end-users.
Motto: “features as prompts”
10. What is this TikZ code doing?
Code understanding
The level of detail is too low, and the explanations are not really actionable, especially for relating code elements to visual aspects.
12. “can you vary the eye of the squid with the blue color? tell me what to vary in the TikZ code”
Feature location
To change the color of the squid’s eye to blue, you’ll need to modify the \fill commands that define the eye. The lines that define the eye are:
\fill [Grey50] (64, 16) circle [radius=32];
\fill [BlueGrey900] (64, 16) circle [radius=12];
To change the eye color to blue, you can replace the colors:
...
\fill [blue] (64, 16) circle [radius=32];
\fill [blue!60!white] (64, 16) circle [radius=12];
...
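This feature-location step can be partially cross-checked before trusting the LLM. A rough sketch of our own (not shown in the demo): grep the TikZ source for the \fill commands the answer points at, i.e. the circles centered on (64, 16) that draw the eye. The file name squid.tikz is again a hypothetical placeholder.

import re

# Cross-check the LLM's feature location: list every \fill command
# drawing a circle centered on (64, 16), the coordinates of the eye.
with open("squid.tikz") as f:
    source = f.read()

eye_pattern = re.compile(r"\\fill\s*\[[^\]]*\]\s*\(64,\s*16\)\s*circle\s*\[[^\]]*\];")
for match in eye_pattern.findall(source):
    print(match)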
13. “would it be possible to vary the original code using a templating engine in Python in such a way it is possible to have either eye of the squid with the blue color or the original eye (as in the original TikZ code)? ... can you parameterize the spots with a random color? if not random mode, then it’s the default color”
Feature and variation implementation
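A minimal sketch of what such a Python templating solution can look like, using Jinja2. The feature names (blue_eye, random_spots) and the spot coordinates are illustrative assumptions, not the exact generator produced in the demo.

import random
from jinja2 import Template

# Illustrative Jinja2 template: eye color and spot color become features.
TIKZ_TEMPLATE = Template(r"""
\fill [{{ 'blue' if blue_eye else 'Grey50' }}] (64, 16) circle [radius=32];
\fill [{{ 'blue!60!white' if blue_eye else 'BlueGrey900' }}] (64, 16) circle [radius=12];
\fill [{{ spot_color }}] (20, 100) circle [radius=6];  % hypothetical spot
""")

def render(blue_eye: bool = False, random_spots: bool = False) -> str:
    # If random mode is off, fall back to the default spot color.
    spot_color = random.choice(["red", "green", "blue", "orange"]) if random_spots else "Grey50"
    return TIKZ_TEMPLATE.render(blue_eye=blue_eye, spot_color=spot_color)

print(render(blue_eye=True, random_spots=True))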
16. Here is some TikZ code:
\tikzset{%
  cat/.pic={
    \tikzset{x=3cm/5,y=3cm/5,shift={(0,-1/3)}}
    \useasboundingbox (-1,-1) (1,2);
    \fill [BlueGrey900] (0,-2)
      .. controls ++(180:3) and ++(0:5/4) .. (-2,0)
      arc (270:90:1/5)
      .. controls ++(0:2) and ++(180:11/4) .. (0,-2+2/5);
    \foreach \i in {-1,1}
      \scoped[shift={(1/2*\i,9/4)}, rotate=45*\i]{
        \clip [overlay] (0, 5/9) ellipse [radius=8/9];
        \clip [overlay] (0,-5/9) ellipse [radius=8/9];
        \fill [BlueGrey900] ellipse [radius=1];
        \clip [overlay] (0, 7/9) ellipse [radius=10/11];
        \clip [overlay] (0,-7/9) ellipse [radius=10/11];
        \fill [Purple100] ellipse [radius=1];
      };
    \fill [BlueGrey900] ellipse [x radius=3/4, y radius=2];
    \fill [BlueGrey100] ellipse [x radius=1/3, y radius=1];
    \fill [BlueGrey900]
      (0,15/8) ellipse [x radius=1, y radius=5/6]
      (0, 8/6) ellipse [x radius=1/2, y radius=1/2]
      {[shift={(-1/2,-2)}, rotate= 10] ellipse [x radius=1/3, y radius=5/4]}
      {[shift={( 1/2,-2)}, rotate=-10] ellipse [x radius=1/3, y radius=5/4]};
    \fill [BlueGrey500]
      (-1/9,11/8) ellipse [x radius=1/5, y radius=1/5]
      ( 1/9,11/8) ellipse [x radius=1/5, y radius=1/5];
    \fill [Purple100]
      (0,12/8) ellipse [x radius=1/10, y radius=1/5]
      (0,12/8+1/9) ellipse [x radius=1/5 , y radius=1/10];
    \foreach \i in {-1,1}
      \scoped[shift={(1/2*\i,2)}, rotate=35*\i]{
        \clip [overlay] (0, 1/7) ellipse [radius=2/7];
        \clip [overlay] (0,-1/7) ellipse [radius=2/7];
        \fill [Yellow50] ellipse [radius=1];
      };
    \scoped{
      \clip (-1,-2) rectangle ++(2,1);
      \fill [BlueGrey900] (0,-2) ellipse [radius=1/2];
      \fill [Grey100]
        (-1/2,-2) ellipse [x radius=1/3, y radius=1/4]
        ( 1/2,-2) ellipse [x radius=1/3, y radius=1/4];
    };
    \foreach \i in {-1,1}
      \foreach \j in {-1,0,1}
        \fill [Grey100, shift={(0,11/8)}, xscale=\i, rotate=\j*15, shift=(0:1/2)]
          ellipse [x radius=1/3, y radius=1/64];
  },
}
I’d like to change the color of the cat’s tail in red
Feature location: beware! Inaccurate feature location and/or incorrect code synthesis is entirely possible (even given the same prompt).
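One pragmatic guard against incorrect synthesis, sketched below as our own suggestion (the slides only show the failure): recompile every LLM-edited variant and reject edits that break the build. This assumes pdflatex is on the PATH and that the named colors (BlueGrey900, etc.) are defined in the preamble.

import pathlib
import subprocess
import tempfile

def compiles(tikz_body: str) -> bool:
    """Wrap a TikZ snippet in a standalone document and try to compile it."""
    doc = ("\\documentclass{standalone}\n"
           "\\usepackage{tikz}\n"   # color definitions would also go here
           "\\begin{document}\n\\begin{tikzpicture}\n"
           + tikz_body +
           "\n\\end{tikzpicture}\n\\end{document}\n")
    with tempfile.TemporaryDirectory() as tmp:
        tex = pathlib.Path(tmp) / "variant.tex"
        tex.write_text(doc)
        result = subprocess.run(
            ["pdflatex", "-interaction=nonstopmode", tex.name],
            cwd=tmp, capture_output=True)
        return result.returncode == 0

# Usage: reject an LLM edit whenever compiles(edited_variant) is False.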
17. To change the color of the squid’s eye to blue, you’ll need to modify the \fill commands that define the eye. The lines that define the eye are:
\fill [Grey50] (64, 16) circle [radius=32];
\fill [BlueGrey900] (64, 16) circle [radius=12];
To change the eye color to blue, you can replace the colors:
...
\fill [blue] (64, 16) circle [radius=32];
\fill [blue!60!white] (64, 16) circle [radius=12];
...
“can you vary the eye of the squid with the blue color? tell me what to vary in the TikZ code”
Lots of back and forth for integrating the code!
18. “can you vary the eye of the squid with the blue color? tell me what to vary in the TikZ code”
“would it be possible to vary the original code using a templating engine in Python in such a way it is possible to have either eye of the squid with the blue color or the original eye (as in the original TikZ code)? ... can you parameterize the spots with a random color? if not random mode, then it’s the default color”
Prompts flow: tension between high-level specification and implementation details
19. Conclusion
LLMs can be leveraged to support end-users in the customization of code and technical artefacts (customizing TikZ without knowing TikZ?)
Transformation of an undocumented, technical, low-level TikZ picture into a user-friendly, configurable, Web-based customization tool written in Python, HTML, CSS, and JavaScript, and itself customizable.
LLMs can support traditional variability engineering tasks, such as identification and implementation of features, synthesis of a template code generator, and development of end-user configurators (a minimal configurator sketch follows at the end of this section).
The rise and fall of LLMs: many caveats and failures at the different steps!
Future work: thorough evaluation and more robust, automated, end-to-end customization.
Exciting times: democratization of programming and software-related customization!
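To make the configurator point concrete, here is a deliberately minimal sketch of an end-user configurator: a hypothetical Flask app with one checkbox per feature and the rendered TikZ variant below it. The demo’s actual Python/HTML/CSS/JavaScript tool is richer; the feature name blue_eye is our illustrative assumption.

from flask import Flask, request

app = Flask(__name__)

# One checkbox per feature; the rendered TikZ variant appears below the form.
PAGE = """
<form method="get">
  <label><input type="checkbox" name="blue_eye" {checked}> Blue eye</label>
  <button>Render</button>
</form>
<pre>{tikz}</pre>
"""

@app.route("/")
def configure():
    # A single Boolean feature; further features follow the same pattern.
    blue_eye = "blue_eye" in request.args
    eye_color = "blue" if blue_eye else "Grey50"
    tikz = rf"\fill [{eye_color}] (64, 16) circle [radius=32];"
    return PAGE.format(checked="checked" if blue_eye else "", tikz=tikz)

if __name__ == "__main__":
    app.run(debug=True)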