This document discusses maintaining and evolving large software systems over time. It presents an approach using reverse engineering, metrics, analysis, and visualization tools to help understand large and complex codebases. The Moose environment provides analyses, queries, and visualizations to extract and represent information from source code in a language-independent way. This helps support software maintenance and evolution by improving understanding of large systems.
This document provides biographical information about Tudor Gîrba, including his date of birth, education history, and career. It also discusses software engineering challenges like project failure rates and the complexity of software. Additionally, it describes tools and techniques for software analysis like metrics, queries, and visualization. Finally, it provides information on the Moose software analysis platform and its community of contributors over many years.
This document discusses assessing software systems and provides examples of visualization tools that can be used for software assessment. It begins with background information on the author and dates. It then provides examples of different types of visualizations including class hierarchies, distribution maps, feature maps, code cities, duplication views, hierarchy evolution patterns, ownership maps, and clone evolution. The document emphasizes that software assessment relies more on visualization tools than traditional metrics and fan-in/fan-out analysis. It concludes by restating the author's name and website.
Reverse engineering is analyzing a subject system to identify its components and relationships and create more abstract representations. It is often used to understand legacy software systems that are valuable but complex. Reverse engineering processes typically involve disassembling code, understanding its behavior through testing, and reassembling it at a higher level of abstraction. This allows engineers to regain control over complex legacy systems and transform them when needed.
Holistic software assessment at the University of Zurich - Tudor Girba
The document discusses holistic software assessment, which involves understanding a software system to support decision making. It involves crafting analyses, hypothesizing about existing analyses, interpreting results confidently, and acting on insights. The assessment approach aims to be explicit, integrated, tailored, and able to reshape software development. Various case studies are cited that apply holistic assessment techniques to open source software projects.
Moose is an analysis, modeling, and visualization platform built by researchers over 100 person-years since 1997. It includes tools like FAMIX for language-independent meta-modeling, Mondrian for scripting visualizations, and EyeSee for creating charts. Moose has been used in various research groups and has resulted in over 150 publications. The platform and its components are under continuous development by researchers at the University of Bern and other institutions.
This document discusses long living software systems and approaches to supporting their maintenance over time. It outlines how a majority of effort is spent on maintenance through perfective changes and new functionality. The author's research focuses on areas like metamodeling, metrics, program understanding, and visualizations to help teams maintain large software systems through analyses of code, people and practices. Their open-source Moose environment is designed to be extensible and has been applied to real-life industrial systems of various sizes written in different languages.
This document provides an overview of C++ Essentials, a book that introduces the C++ programming language. The book is divided into 12 chapters that cover topics such as variables, expressions, statements, functions, arrays, pointers, classes, inheritance, templates, exceptions, input/output streams, and the preprocessor. Each chapter presents the concepts through explanations and examples in a concise tutorial style suitable for beginners to learn C++.
The document introduces OCaml×Scope, a new API search engine for OCaml similar to Hoogle for Haskell. It summarizes the limitations of existing OCaml API search tools and describes how OCaml×Scope addresses these by scraping documentation from cmt/cmti files of over 100 OPAM packages. Future work includes improving search results grouping, adding a better web GUI, and integrating remote querying. The tool provides name and type-based searching of OCaml APIs to help programmers.
Using functional programming within an industrial product group: perspectives... - Anil Madhavapeddy
We present a case study of using OCaml within a large product development project, focussing on both the technical and non-technical issues that arose as a result. We draw comparisons between the OCaml team and the other teams that worked on the project, providing comparative data on hiring patterns and cross-team code contribution.
A student discusses how becoming a student of Stephane Ducasse has led to a new lifestyle, including new styles of excursions and making new friends, concluding that it has truly been a new way of life.
The document provides an overview of Stéphane Ducasse's expertise in software evolution and reengineering. It discusses challenges in maintaining large software projects over time. It introduces Moose, an open-source reengineering platform developed by Ducasse to help with tasks like program understanding, metrics analysis, visualization, and detecting duplicated code. The document provides examples of how Moose can be used to analyze software structure, identify patterns of change, understand class hierarchies and how they evolve, and characterize how properties spread across packages over multiple versions of a system.
SLE/GPCE Keynote: What's the value of an end user? Platforms and Research: Th... - Stéphane Ducasse
This talk will present the synergy arising from building platforms on top of which we do our research. RMOD, our team [1], is developing two platforms: Pharo (a dynamic reflective object-oriented language supporting live programming) and Moose (an open-source software analysis platform [2]). Developing platforms forces us to build really usable systems. While some activities are more engineering than research per se, it is really interesting to deeply understand problems or the impact of certain design decisions. Developing platforms is rewarding because it is a longer-term effort and ensures a degree of stability. Platforms also often exhibit non-linear growth, which is really exciting. Finally, this setup raises many interesting questions, such as "What is the value, in terms of citations or published papers, of a couple of end users?" or "Is it not really stupid not to work on the latest hype language?" To try to open our minds, I will draw parallels with the notion of the wealth of an ecosystem in biology. In the second part of the talk I will present some selected results around Pharo and Moose, such as automatic minimal system core generation, dynamic core updates, selector namespaces, dependencies in past commit branches, and automatic migration rule generation.
[1] http://rmod.lille.inria.fr/
[2] http://www.moosetechnology.org/
[3] http://www.pharo.org/
Succeeding with Functional-first Programming in Enterprise - dsyme
This document provides an overview of how functional-first programming languages like F# can help teams developing analytical components in finance. It notes that the recurring business problems for such teams are time to market, efficiency, correctness, and managing complexity. Functional-first languages help address these problems by enabling simple, correct, and robust code. They also allow for rapid integration of components through strong interoperability. Additionally, their strong typing helps maintain efficiency while empowering developers to tackle more complex problems. The document provides several examples of successful uses of F# in finance, insurance, biotech, advertising, and other domains to illustrate how it helps solve problems faster and more robustly.
Greenfield projects are awesome – you can develop the highest-quality applications using the best practices on the market. But what if your bread and butter is actually legacy projects?
Does that mean you need to descend into the darkness of QA absence? Does it mean you can't use Agile or modern communication practices like BDD?
This talk will show you how to be successful even with the oldest legacy projects out there, through the use of Agile processes and tools like Impact Mapping, Feature Mapping, Example Workshops, and Story and Spec BDD.
This document summarizes a study of CEO succession events among the largest 100 U.S. corporations between 2005-2015. The study analyzed executives who were passed over for the CEO role ("succession losers") and their subsequent careers. It found that 74% of passed over executives left their companies, with 30% eventually becoming CEOs elsewhere. However, companies led by succession losers saw average stock price declines of 13% over 3 years, compared to gains for companies whose CEO selections remained unchanged. The findings suggest that boards generally identify the most qualified CEO candidates, though differences between internal and external hires complicate comparisons.
This document discusses software metrics and design problems. It provides examples of common code metrics like lines of code, cyclomatic complexity, and coupling between objects. However, it notes that metrics have limitations in that they measure symptoms rather than causes of problems and don't directly lead to improvement actions on their own due to issues with thresholds and granularity. Metrics are better used to assess and improve quality when combined with design principles rather than in isolation. Reverse engineering is proposed as a way to better understand large, existing systems by analyzing components and relationships at a more abstract level.
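The point that metrics measure symptoms rather than causes can be made concrete with a small sketch. The following is a minimal illustration, not a tool from any of the decks listed here; the counting rules (which AST nodes count as branch points, which lines count as code) are simplifying assumptions:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of branch points.

    Simplification: each if/for/while/try/boolean operator counts once.
    """
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

def lines_of_code(source: str) -> int:
    """Non-blank, non-comment lines (a crude LOC measure)."""
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

example = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        pass
    return "non-negative"
"""
print(cyclomatic_complexity(example))  # 3: base + if + for
print(lines_of_code(example))          # 6
```

Both numbers are easy to compute, but neither says whether the branching is justified or where to refactor, which is exactly the symptoms-versus-causes limitation described above.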
Pragmatic Design Quality Assessment (Tutorial at ICSE 2008) - Tudor Girba
This set of slides was used for the tutorial given by Tudor Girba, Michele Lanza and Radu Marinescu at International Conference on Software Engineering (ICSE) 2008.
Metrics and problem detection in software systems can help assess quality and facilitate improvement. While metrics provide measurable values, they have limitations including arbitrary thresholds and lack of context. Design problems like god classes and duplicated code negatively impact quality and maintenance. Detection strategies apply metric-based rules to find issues like high complexity, low cohesion, or excessive foreign data usage. Code duplication is analyzed at lexical, syntactical and semantic levels using techniques like string matching and tree comparison to identify significant duplicated code blocks. Metrics and detection strategies provide an initial understanding but their interpretation requires relating findings to design principles and domain knowledge.
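A detection strategy of the kind described above combines several metric conditions into one rule. The sketch below illustrates a god-class check in that spirit; the metric names (WMC, ATFD, TCC) follow the common literature, but the concrete thresholds and class names here are illustrative assumptions, not values taken from the tutorial:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    wmc: int     # weighted methods per class (sum of method complexities)
    atfd: int    # accesses to foreign data (other classes' attributes)
    tcc: float   # tight class cohesion, in [0.0, 1.0]

def is_god_class(m: ClassMetrics,
                 wmc_high: int = 47, atfd_few: int = 5,
                 tcc_low: float = 0.33) -> bool:
    # A god class centralizes logic (high WMC), reaches into other
    # classes' data (ATFD above "few"), and is incohesive (low TCC).
    return m.wmc >= wmc_high and m.atfd > atfd_few and m.tcc < tcc_low

# Hypothetical model of two classes extracted from a system:
model = [
    ClassMetrics("ReportEngine", wmc=82, atfd=12, tcc=0.1),
    ClassMetrics("DateRange", wmc=9, atfd=0, tcc=0.8),
]
suspects = [m.name for m in model if is_god_class(m)]
print(suspects)  # ['ReportEngine']
```

The rule only flags suspects; as the abstract notes, interpreting a hit still requires relating it to design principles and domain knowledge.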
Software understanding in the large (EVO 2008) - Tudor Girba
This document discusses software understanding at a large scale. It notes that systems are complex with many facets, but simple tools can provide useful information. Every system and technology is unique, requiring specialized understanding approaches. Reverse engineering and visual queries can help analyze a system, but one must consider findings in full perspective as results may be biased by the method or goal.
Modeling History to Understand Software Evolution with Hismo 2008-03-12 - Tudor Girba
This document discusses modeling software history and evolution. It presents several techniques for visualizing and analyzing changes in software over time, including matrices to show class evolution, metrics to detect design flaws, and strategies that consider historical context. Evolution information can provide insights into patterns of change, but modeling history poses challenges due to the large amount of data involved.
Modeling History to Understand Software Evolution With Hismo 2008-02-25 - Tudor Girba
This document discusses modeling software history and evolution. It presents several techniques for visualizing and analyzing changes in software systems over time, including class evolution matrices, metrics for measuring changes in attributes and methods, and strategies for detecting design flaws based on historical patterns of change. The goal is to gain insights into how software systems evolve in order to support tasks like maintenance, comprehension, and reverse engineering.
History analysis provides useful information about when systems change, how they change, what changes, what may change in the future, and who made changes. Modeling history and changes over time can reveal patterns and dependencies within large and complex systems.
Enhancing agile development through software assessment - Tudor Girba
This document discusses enhancing agile development with software assessment. It advocates that assessment should be a continuous and contextual discipline. Various software visualization and analysis techniques are presented that can provide assessments of code quality, complexity, duplication and dependencies to support iterative improvement.
Understanding software systems is hampered by their sheer size and complexity. Software visualization encodes the data found in these systems into pictures and enables the human eye to interpret it. In this lecture we present the concepts of software visualization and we show several examples of how visualizations can help in understanding software systems.
Humane assessment with Moose at Benevol 2010 - Tudor Girba
This document discusses concepts related to software engineering, including humane assessment, feedback, reverse engineering, and tailored vs. generic approaches. It points to websites on humane assessment and Moose technology, and contains code snippets and diagrams showing nesting and relationships between concepts.
This document summarizes the key points about using history to understand software evolution:
1. Software evolution can be modeled and measured explicitly using system history, class history, and version history.
2. Metrics like ENOM, LENOM, and EENOM can quantify changes in attributes like the number of methods over time and identify patterns like balanced, late, or early changers.
3. Detection strategies are metric-based queries that use historical data to detect design flaws or assess stability over time.
4. Visualizing hierarchy evolution over multiple versions can reveal common evolution patterns and the persistence or instability of different parts of the system.
5. Understanding historical patterns may help predict where future changes are likely to occur.
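The history metrics in point 2 can be sketched in a few lines. ENOM sums the change in the number of methods across versions, while LENOM and EENOM weight changes toward late or early versions respectively. The exponential weighting scheme below is an assumption approximating the published definitions, not code from the slides:

```python
def enom(nom_history):
    """Evolution of Number of Methods: total change across versions."""
    return sum(abs(b - a) for a, b in zip(nom_history, nom_history[1:]))

def lenom(nom_history):
    """Late ENOM: recent changes weigh more (halved per version back)."""
    n = len(nom_history) - 1
    return sum(abs(b - a) * 2 ** -(n - i)
               for i, (a, b) in enumerate(zip(nom_history, nom_history[1:]), 1))

def eenom(nom_history):
    """Early ENOM: old changes weigh more (halved per version forward)."""
    return sum(abs(b - a) * 2 ** -(i - 1)
               for i, (a, b) in enumerate(zip(nom_history, nom_history[1:]), 1))

# Number of methods of one class over four versions:
history = [10, 10, 12, 15]
print(enom(history))   # 5: the class changed by 2 then 3 methods
print(lenom(history))  # high, because the changes happened late
print(eenom(history))  # low, for the same reason
```

Comparing LENOM against EENOM for the same history is what separates late changers from early changers in point 2; a balanced changer scores similarly on both.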
This document discusses software evolution analysis and visualization. It begins with definitions of software and key concepts in software evolution. It then discusses mining software repositories to understand who made changes and when. Visualization techniques are presented to help understand the evolutionary process, such as the Evolution Matrix and CodeCity tools. Analyzing bugs as first-class entities over their lifecycle is also discussed. The goal of the analysis is to understand past evolution and help predict future changes.
1) Restructuring involves transforming a program to fit current needs rather than just changing external behavior like refactoring does.
2) Software should be habitable, with a right place for everything, so take a critical look at design issues like God classes and feature envy.
3) Common restructuring techniques include splitting God classes, removing middlemen, eliminating duplicate code, and moving behavior closer to data.
Reverse engineering techniques involve metrics, queries, and visualizations to analyze software systems. Metrics compress a system into measurable properties like lines of code and complexity. Queries detect patterns and flaws through metric-based rules. Visualizations compress a system into graphical representations like UML diagrams and polymetric views to show relationships and changes over time.
Testing and Migration
1. Legacy systems often lack tests, but writing tests enables safe evolution by allowing incremental changes and constant feedback.
2. Migration is a restructuring that changes a system's infrastructure, and big-bang migrations often fail due to user resistance to change.
3. Incremental migration with a bridge between old and new systems preserves familiarity while building confidence in the new system through prototyping and testing after every small change.
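The bridge idea in point 3 can be sketched as a thin facade that runs both systems side by side and cross-checks their results before the switch-over. All names here are hypothetical; this is a minimal illustration of the pattern, not code from the lecture:

```python
class LegacyBilling:
    """The old system: trusted behavior, hard to change."""
    def total(self, items):
        return sum(price for _, price in items)

class NewBilling:
    """The replacement, built and verified incrementally."""
    def total(self, items):
        return sum(price for _, price in items)

class BillingBridge:
    """Routes calls to the old or new system.

    Running both and comparing results gives the constant feedback
    point 1 asks for, while users keep seeing legacy behavior until
    confidence in the new system is established.
    """
    def __init__(self, use_new=False):
        self.old, self.new = LegacyBilling(), NewBilling()
        self.use_new = use_new

    def total(self, items):
        old_result = self.old.total(items)
        new_result = self.new.total(items)
        assert new_result == old_result, "new system diverges from legacy"
        return new_result if self.use_new else old_result

bridge = BillingBridge(use_new=False)
print(bridge.total([("tea", 3), ("mug", 9)]))  # 12
```

Flipping `use_new` per feature, rather than all at once, is what makes the migration incremental instead of big-bang.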
The humane software assessment (Choose Forum 2009) - Tudor Girba
The document discusses the "humane software assessment" which is a human activity involving 3 steps: 1) humans build the system, 2) humans perform the assessment, and 3) humans consume the assessment. It notes that the shape of an organization influences the shape of the software system it builds, and discusses how humans identify patterns, form hypotheses, and ask questions when performing assessments rather than processing information like machines. The assessment is presented as a human-centered process.
Moose is an analysis, modeling, visualization, and tool building platform built in Berne and used by several research groups. It includes various modeling, visualization, and analysis tools that use metrics and detection strategies to analyze software systems at different levels of abstraction.
Similar to Helping you reengineering your legacy (20)
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
5. Software is complex.
29% Succeeded
18% Failed
53% Challenged
The Standish Group, 2004
6. How large is your project?
1'000'000 lines of code
× 2 seconds per line = 2'000'000 seconds
÷ 3600 = ~560 hours
÷ 8 hours per day = ~70 days
÷ 20 working days per month = ~3.5 months
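The slide's back-of-envelope arithmetic, which assumes roughly 2 seconds just to read each line, can be reproduced directly:

```python
# Back-of-envelope from the slide: reading 1,000,000 lines of code at
# about 2 seconds per line, with 8-hour days and 20 working days a month.
lines = 1_000_000
seconds = lines * 2            # 2,000,000 seconds
hours = seconds / 3600         # ~556 hours (the slide rounds to 560)
days = hours / 8               # ~69 days (the slide rounds to 70)
months = days / 20             # ~3.5 months
print(round(hours), round(days), round(months, 1))
```

The point is not the exact figures but the order of magnitude: merely reading a million-line system, without understanding anything, already costs months of full-time work.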
7. Software development is more than forward engineering.
[Slide graphic: a straight "Forward engineering" arrow contrasted with the twisting path of "Actual development".]
8. Maintenance is needed to evolve the code.
[Slide graphic: the same "Actual development" path, now spanned by both forward engineering and reverse engineering.]
9. Roadmap
• Some facts
• Our approach
• Supporting maintenance
• Moose, an open platform
• Some visual examples
• Conclusion
LSE
S.Ducasse 9
10. Supporting the evolution of applications
Our research goal and agenda are grounded in reality:
• How do we help companies maintain their large software?
• What is the x-ray for software (code, people, practices)?
• Which analyses?
• How can you monitor your system (dashboards, ...)?
• How to present extracted information?
12. Research topics and publications (slide map)
• Software Metrics [LMO99, OOPSLA00]
• Duplicated Code Identification
• Understanding Large Systems [ICSM99, ICSM02]
• Group Identification [WCRE99, TSI00, TSE03]
• Static/Dynamic Information [ASE03]
• Test Generation [ICSM99]
• Feature Analysis [CSMR 06]
• Concept Identification [JSME 06]
• Analyses [WCRE 06]
• Class Understanding [OOPSLA01, TSE04]
• Package Blueprints [ICSM 07]
• Distribution Maps [ICSM 06]
• Language Independent Refactorings [IWPSE 00]
• Language Independent Meta Model (FAMIX) [UML99]
• Reengineering Patterns
• Version Analyses [ICSM 05]
• An Extensible Reengineering Environment (Moose) [Models 06]
• HISMO metamodel [JSME 05]
13. One Example: who is responsible for what?
(1) Extraction
(2) Model
(3) Analyses
(4) Visualization
Distribution Map of authors on JBoss
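The four-step pipeline can be illustrated with a toy Python sketch (the package names, file names, and authors are invented; a real Distribution Map is computed from facts mined out of a version repository):

```python
# Toy version of the slide's pipeline: (1) extract facts, (2) build a
# model, (3) analyse, (4) render a crude textual "distribution map" of
# who is responsible for what. All data here is illustrative.
from collections import Counter, defaultdict

# (1) Extraction: (package, file, author) facts, e.g. mined from a VCS.
facts = [
    ("core", "a.java", "anne"), ("core", "b.java", "anne"),
    ("core", "c.java", "bob"),  ("ui", "d.java", "bob"),
    ("ui", "e.java", "bob"),
]

# (2) Model: group the per-file authors by package.
model = defaultdict(list)
for pkg, f, author in facts:
    model[pkg].append(author)

# (3) Analysis: determine the dominant author of each package.
owners = {pkg: Counter(authors).most_common(1)[0][0]
          for pkg, authors in model.items()}

# (4) Visualization: one row per package, one cell per file's author.
for pkg, authors in model.items():
    cells = "".join(a[0] for a in authors)
    print(f"{pkg:5} [{cells}] main author: {owners[pkg]}")
```

Real tools render step (4) as colored rectangles rather than text, but the separation of extraction, modeling, analysis, and presentation is the same.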
14. Moose is a powerful environment
[Slide graphic: example metric values (McCabe = 21, NOM = 102, LOC = 3,075) feeding the three pillars of Moose: Metrics, Queries, Visualizations.]
15. Metrics compress the system into numbers
Cyclomatic complexity = 21
NOM = 102
LOC = 3,075
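As an illustration of how metrics compress code into numbers, here is a small Python sketch that computes LOC, NOM, and a crude McCabe-style complexity estimate for a toy class. This is not how Moose computes its metrics; it only conveys the general idea:

```python
# Compute three classic metrics over a toy class using Python's ast module:
# LOC (lines of code), NOM (number of methods), and a rough McCabe-style
# cyclomatic complexity estimate (1 + number of branching statements).
import ast

source = """
class Cart:
    def add(self, item):
        if item.price > 0:
            self.items.append(item)

    def total(self):
        t = 0
        for item in self.items:
            t += item.price
        return t
"""

tree = ast.parse(source)
cls = tree.body[0]  # the ClassDef node

loc = len(source.strip().splitlines())
nom = sum(isinstance(n, ast.FunctionDef) for n in cls.body)
branches = sum(isinstance(n, (ast.If, ast.For, ast.While))
               for n in ast.walk(tree))
mccabe = 1 + branches
print(f"LOC={loc} NOM={nom} McCabe~{mccabe}")
```

A whole class collapses into three numbers; that compression is what makes metrics cheap to compute over millions of lines, and also why they need queries and visualizations alongside them.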
28. Behavioral patterns based on similar commit signatures
Monologue, Familiarization, Dialogue, Edit, Takeover
29. How can we predict changes?
Common wisdom says that what changed yesterday will change today, but is it true?
In the Sahara the weather is constant: tomorrow there is a 90% chance that it is the same as today.
In Belgium the weather changes really fast (sea influence): there is only a 30% chance that it is the same as today.
30. With history analysis we can get the climate of a software system

Version i scores a "hit" when the past's top late changers overlap the future's top early changers:

YW_i(S) = 1 if TopLENOM_{1..i}(S, t1) ∩ TopEENOM_{i..n}(S, t2) ≠ ∅, otherwise 0

YW(S, t1, t2) = ( Σ_i YW_i(S, t1, t2) ) / (n - 2)

(Past versions, the present version, and future versions; n is the number of versions.)
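The Yesterday's Weather idea can be sketched in a few lines of Python. The ranked sets below are invented; in a real analysis the top late and early changers would be derived from LENOM/EENOM values over the version history:

```python
# Toy sketch of Yesterday's Weather: for each intermediate version i,
# check whether any class among the top recent changers up to version i
# ("late changers") also appears among the top changers right after
# version i ("early changers"). The score is the fraction of hits.
def yesterdays_weather(late_top, early_top):
    """late_top[i] / early_top[i]: top-ranked class sets at version i."""
    n = len(late_top)
    hits = sum(1 for i in range(1, n - 1)
               if late_top[i] & early_top[i])   # non-empty intersection
    return hits / (n - 2)

# Invented example data over five versions of a system.
late = [{"A"}, {"A", "B"}, {"B"}, {"C"}, {"C"}]
early = [{"B"}, {"B"}, {"C"}, {"C"}, {"D"}]
print(yesterdays_weather(late, early))
```

A score near 1 means the system's "weather" is Sahara-like (yesterday's hot spots predict today's); a score near 0 means recent change is a poor predictor and other indicators are needed.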
31. Roadmap
• Some facts
• Our approach
• Supporting maintenance
• Moose, an open platform
• Some visual examples
• Conclusion
33. Moose has been validated on real-life systems written in different languages
• Several large, industrial case studies (under NDA): Harman-Becker, Nokia, Daimler, Siemens
• Different implementation languages (C++, Java, Smalltalk, Cobol)
• Different sizes
34. Team and contributors (~100 years of effort)
Current team: Stéphane Ducasse, Tudor Gîrba, Adrian Kuhn
Previous team: Serge Demeyer, Michele Lanza, Sander Tichelaar
Current and previous contributors: Hani Abdeen, Ilham Alloui, Tobias Aebi, Frank Buchli, Gabriela Arevalo, Mihai Balint, Thomas Bühler, Calogero Butera, Philipp Bunge, Marco D’Ambros, Daniel Frey, Georges Golomingi, Orla Greevy, Markus Hofstetter, David Gurtner, Reinout Heeck, Matthias Junker, Adrian Lienhard, Markus Kobel, Michael Locher, Martin von Löwis, Mircea Lungu, Pietro Malorgio, Michael Meer, Michael Meyer, Damien Pollet, Laura Ponisio, Daniel Ratiu, Sara Sellos, Lucas Streit, Matthias Rieger, Azadeh Razavizadeh, Toon Verwaest, Roel Wuyts, Andreas Schlapbach, Daniel Schweizer, Richard Wettel, Mauricio Seeberger, Lukas Steiger, Daniele Talerico, Herve Verjus, Violeta Voinescu
35. Possible New Research Directions
• Remodularization
• Clustering analysis
• Open and Modular modules
• SOA - Service Identification
• Architecture Extraction/Validation
• Software Quality
• Cost prediction
• EJB Analysis
• Business rules extraction
• Model transformation
36. Evolution is difficult
• We are experts in reengineering
• We are interested in your problems!
• Moose is open-source: you can use it, extend it, change it
• We can collaborate!
Example query: NOM > 10 & LOC > 100
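A detection-strategy query like NOM > 10 & LOC > 100 is just a predicate over a model of measured entities. A minimal Python sketch, with invented class records standing in for a model extracted from real source code, could look like:

```python
# Metric-based query: select classes whose number of methods (NOM)
# exceeds 10 AND whose lines of code (LOC) exceed 100. The records
# below are invented for illustration.
classes = [
    {"name": "Parser",  "NOM": 42, "LOC": 812},
    {"name": "Token",   "NOM": 6,  "LOC": 55},
    {"name": "Printer", "NOM": 15, "LOC": 90},
]

suspects = [c["name"] for c in classes
            if c["NOM"] > 10 and c["LOC"] > 100]
print(suspects)
```

Combining several simple metric thresholds this way is what turns raw numbers into detection strategies for design flaws, such as potential god classes.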