In this paper, we develop a vision of software evolution based
on a feature-oriented perspective. From the fact that features
provide a common ground to all stakeholders, we derive a
hypothesis that changes can be effectively managed in a
feature-oriented manner. Assuming that the hypothesis holds,
we argue that feature-oriented software evolution relying
on automatic traceability, analyses, and recommendations
reduces existing challenges in understanding and managing
evolution. We illustrate these ideas using an automotive
example and raise research questions for the community.
The document discusses using design patterns and constraints to automate the detection and correction of inter-class design defects. It presents a tool called Ptidej that models design patterns as abstract models with constraints, detects distorted concrete models in code, and helps correct defects. The tool is evaluated on several open source projects, demonstrating its ability to assess flexibility and understandability by making code conform more closely to design patterns. Future work aims to improve semantics, scalability, and automation.
The document presents a thesis defense on task scheduling for partially dynamically reconfigurable devices. It defines the scheduling problem, proposes an ILP formulation and a heuristic scheduler called Napoleon to solve the problem. Experimental results show that on average, Napoleon obtains 18.6% better schedule length than other algorithms. Future work includes integrating Napoleon into a scheduling framework and testing new anti-fragmentation techniques.
This document summarizes a study of variability model coevolution patterns in the Linux kernel. The researchers analyzed over 4,000 changes to identify 13 patterns of how the variability model, mappings, and implementation coevolved together. The most common patterns were adding and removing optional modular and non-modular features. A key finding was that merges, where a feature is removed from the model but still supported, require analyzing all coevolving artifacts. The patterns provide concrete operations for tool builders to mirror real-world coevolution.
The document discusses several topics related to integrated circuit design flow:
- It outlines the basic steps in analog and digital IC design flows, including schematic design, simulation, layout, DRC/LVS checks, and post-layout simulation.
- It reviews concepts like feature size, which refers to the minimum dimensions that can be reliably manufactured, and how feature size has decreased over time according to Moore's Law.
- It covers reliability challenges like defects that can cause circuit failures and how yield is modeled based on die area and defect density.
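The yield discussion above can be sketched numerically. As an illustration (the slides may use a different model), the widely used Poisson yield model estimates the fraction of defect-free dies from die area and defect density:

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Poisson yield model: fraction of defect-free dies, Y = exp(-A * D)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# A 1 cm^2 die at 0.5 defects/cm^2 yields about 61% good dies:
print(round(poisson_yield(1.0, 0.5), 3))  # 0.607
```

The exponential form shows why yield falls off sharply with die area, which is one reason large dies are expensive.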
The document discusses test and measurement system architecture. It begins by introducing some LabVIEW tools and frameworks. It then defines software architecture and discusses key aspects like different design levels, architectural patterns, and considerations for configuration, maintenance, and managing product variants. Examples of simplifying complex test systems through use of XML configuration and separating generic test code from product-specific code are provided. Strategies for background measurement processes and sequential "hit and run" testing are also contrasted. The conclusion emphasizes that architecture involves more than just software design and to keep designs simple.
Documented Requirements are not Useless After All! (Lionel Briand)
The document discusses challenges related to requirements in agile software development projects, including dealing with natural language requirements, domain knowledge, frequent changes, and configuring requirements for product families. It outlines research conducted to address these challenges through natural language processing techniques like analyzing compliance with boilerplate templates, extracting domain models from requirements, and analyzing change impact when requirements change. The talk aims to discuss when documented requirements are important in practice and how they can be supported to effectively handle common challenges.
In this webinar, "Simulate Functional Models: Generate Cost, Schedule, Performance, and Resource Analytics Rapidly," Dr. Steven Dam walks you through scripting and simulation using Innoslate's Discrete Event and Monte Carlo Simulators. This allows you to plan ahead by analyzing your system or project’s cost, schedule, and performance.
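The kind of schedule analysis described above can be illustrated with a minimal Monte Carlo sketch. This is not Innoslate's actual simulator or API, just a hypothetical example of the underlying idea: sample uncertain task durations many times and summarize the distribution of outcomes.

```python
import random

def simulate_schedule(task_ranges, trials=10000, seed=42):
    """Monte Carlo estimate of total duration for sequential tasks.

    task_ranges: list of (min_days, max_days) per task, sampled uniformly.
    Returns (mean, p90) of total duration across all trials.
    """
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.uniform(lo, hi) for lo, hi in task_ranges)
        for _ in range(trials)
    )
    mean = sum(totals) / trials
    p90 = totals[int(0.9 * trials)]  # 90th-percentile completion time
    return mean, p90

mean, p90 = simulate_schedule([(2, 5), (3, 8), (1, 4)])
```

The same sampling loop works for cost or resource usage: replace day ranges with cost ranges and report whichever percentiles drive the planning decision.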
This presentation briefly introduces the use of Docker for Data Science.
It presents topics such as the management of containers and the creation of new Docker images.
The following topics will be addressed in the presentation:
Motivation and goals of splitting monolith application
Criteria and markers for starting the splitting process: is it necessary at all?
Optimal order of extracting microservices
How to organize the whole process in closed iterative steps?
What can be done with common libraries and shared code?
Options for technology and deployment of target microservices
How to organize and motivate the teams, and convince management?
Speaker Bio
Andrei is a Software Architect at VMware Tanzu Labs. His areas of interest include REST API design, microservices, cloud, resilient distributed systems, security, and agile development. Andrei is a PMC member and committer of Apache CXF and a committer on the Apache Syncope project.
Mike Bartley - Innovations for Testing Parallel Software - EuroSTAR 2012 (TEST Huddle)
EuroSTAR Software Testing Conference 2012 presentation on Innovations for Testing Parallel Software by Mike Bartley.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
The document discusses some real needs for and limits of reconfigurable computing systems. It describes how partial dynamic reconfiguration can provide flexibility and enhance performance but introduces drawbacks. Simulation and verification tools are needed to design such systems. Reconfiguration times significantly impact latency so tasks should be reused and reconfiguration hidden when possible through techniques like relocation.
Clipper: A Low-Latency Online Prediction Serving System: Spark Summit East ta... (Spark Summit)
Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and robust predictions under heavy query load. However, most machine learning frameworks and systems only address model training and not deployment.
In this talk, we present Clipper, a general-purpose low-latency prediction serving system. Interposing between end-user applications and a wide range of machine learning frameworks, Clipper introduces a modular architecture to simplify model deployment across frameworks. Furthermore, by introducing caching, batching, and adaptive model selection techniques, Clipper reduces prediction latency and improves prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. We evaluated Clipper on four common machine learning benchmark datasets and demonstrate its ability to meet the latency, accuracy, and throughput demands of online serving applications. We also compared Clipper to the TensorFlow Serving system and demonstrate comparable prediction throughput and latency on a range of models while enabling new functionality, improved accuracy, and robustness.
Traceability Beyond Source Code: An Elusive Target? (Lionel Briand)
This document discusses traceability beyond source code, which is an elusive target. It provides an overview and examples from industrial research projects on requirements-requirements, requirements-design, requirements-test cases traceability. The examples show challenges in capturing changes precisely and change rationales. Automated traceability approaches can reduce effort but may lack accuracy required for certification. Traceability is important for certification, change management and economic decision making.
Yiannis Kanellopoulos, Management Consultant for Sustainable Software, Software Improvement Group, at the Startup Founder 101 event that took place on Wednesday, February 12th, 2014 in the context of the "4th Infocom Mobiles & Apps 2014" conference.
Distributed systems in practice, in theory (ScaleConf Colombia) (Aysylu Greenberg)
This document discusses distributed systems concepts in both theory and practice. It covers operating systems research into concurrency and synchronization primitives. It then discusses distributed consensus algorithms from the 1970s and 1980s and the use of leases for distributed locking and leader election. It also discusses how staged event-driven architectures, leases, and allowing some inaccuracy can help build robust, scalable systems pipelines in industry. The document advocates using computer science research to inform design decisions and mitigate technical risks.
SCM Transformation Challenges and How to Overcome Them (Compuware)
If your enterprise is focused on continuously improving quality, velocity and efficiency, you’re going to win against those that aren’t. Driving improvements on the mainframe, and in turn throughout the business, requires the transformation of three things: culture, processes and tools. In other words, changing mindsets, implementing modern practices (Agile, DevOps, CI/CD) and replacing outdated technology.
Mainframe source code management is currently a critical area in need of modernization and should be one of the initial tooling changes organizations make when setting out to improve mainframe systems delivery.
During this session, Compuware specialist Lars-Erik Berglund shares the challenges organizations face with mainframe source code management and what you can do to overcome those.
The document discusses key topics in computer architecture including instruction set architecture, pipelining, memory hierarchy, parallelism, and performance evaluation metrics. It notes that computer performance is measured by execution time, throughput, or latency. Trends show that logic, DRAM, and disk capacities double every 2-3 years while speeds improve more slowly. Amdahl's Law and the CPI equation are introduced as quantitative principles for evaluating performance improvements and tradeoffs.
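The two quantitative principles mentioned above are easy to make concrete. A short sketch of Amdahl's Law and the classic CPU performance (CPI) equation:

```python
def amdahl_speedup(enhanced_fraction, enhancement_factor):
    """Amdahl's Law: overall speedup when a fraction of execution
    time is sped up by a given factor."""
    return 1.0 / ((1.0 - enhanced_fraction)
                  + enhanced_fraction / enhancement_factor)

def cpu_time(instruction_count, cpi, clock_hz):
    """CPU performance equation: time = IC * CPI / clock rate."""
    return instruction_count * cpi / clock_hz

# Speeding up 80% of a program by 10x gives far less than 10x overall:
print(round(amdahl_speedup(0.8, 10), 2))  # 3.57
```

The example illustrates the law's main lesson: the unimproved 20% of execution time caps the achievable speedup at 5x no matter how large the enhancement factor grows.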
This document provides information about a computer organization and design course. It includes details about the instructor, tutors, grade determination, textbook, and course content. The course will cover major computer system components, CPU design, techniques for improving performance and energy efficiency, and multiprocessor architecture. It aims to teach how computer hardware and software interact and how design choices affect performance.
Improving Software Maintenance using Unsupervised Machine Learning techniques (Valerio Maggio)
"Improving Software Maintenance using Unsupervised Machine Learning techniques": Ph.D. defence presentation.
Unsupervised Machine Learning techniques have been used to face different software maintenance issues such as Software Modularisation and Clone detection.
Balancing Infrastructure with Optimization and Problem Formulation (Alex D. Gaudio)
- How do we currently think about Data Science?
- Why is infrastructure important to our field?
- Two tools we've built on Sailthru's Data Science team to deal with these problems are "Stolos" and "Relay.Mesos".
VLSI and Embedded System Design: provides an overview of VLSI design and embedded systems. It discusses the challenges in VLSI design including power dissipation and interconnect issues. It then defines embedded systems and describes their characteristics like being single-purposed, tightly constrained, and reactive in real-time. Key technologies for embedded systems design are discussed including processor technology, integrated circuit technology, and design technologies which allow a unified view of hardware and software co-design. Optimization of design metrics like cost, performance, power, and flexibility is a major challenge.
Provenance for Data Munging Environments (Paul Groth)
Data munging is a crucial task across domains ranging from drug discovery and policy studies to data science. Indeed, it has been reported that data munging accounts for 60% of the time spent in data analysis. Because data munging involves a wide variety of tasks using data from multiple sources, it often becomes difficult to understand how a cleaned dataset was actually produced (i.e. its provenance). In this talk, I discuss our recent work on tracking data provenance within desktop systems, which addresses problems of efficient and fine-grained capture. I also describe our work on scalable provenance tracking within a triple store/graph database that supports messy web data. Finally, I briefly touch on whether we will move from ad hoc data munging approaches to more declarative knowledge representation languages such as Probabilistic Soft Logic.
Presented at Information Sciences Institute - August 13, 2015
This document discusses the history of computer development and trends in computer hardware over time. It provides examples of early mainframe computers from 1965 that occupied entire rooms and cost millions, compared to modern laptops from 2008 that are thousands of times more powerful yet small and inexpensive. It outlines Moore's Law and trends related to transistor counts doubling every 1-2 years and processor performance doubling every 18 months. The document also discusses shrinking chip sizes over time and the limits of chip manufacturing.
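The doubling trends above amount to simple compound growth. A back-of-the-envelope sketch (the 1971 Intel 4004 starting point is an illustrative figure, not taken from the document):

```python
def projected_count(initial, years, doubling_period_years=2.0):
    """Compound doubling: value after the given number of years,
    doubling once per doubling period (Moore's Law style projection)."""
    return initial * 2 ** (years / doubling_period_years)

# 2,300 transistors doubling every 2 years for 40 years (20 doublings):
print(int(projected_count(2300, 40, 2.0)))  # 2411724800
```

Twenty doublings multiply the starting count by about a million, which is why room-sized 1965 machines compare so poorly with 2008 laptops.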
A seminar in advanced Software Engineering concerning using models to guide the development process, and QVT to transfer a model into another model automatically
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Similar to Feature-Oriented Software Evolution (20)
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Feature-Oriented Software Evolution
1. Feature-Oriented Software Evolution
(Vision paper)
Leonardo Passos1 Krzysztof Czarnecki1 Sven Apel2
Andrzej Wąsowski3 Christian Kästner4 Jianmei Guo1
Claus Hunsen2
1University of Waterloo 2University of Passau 3IT University 4CMU
The Seventh International Workshop on Variability Modelling of Software-intensive Systems
7-9. Example
Automotive embedded software:
• Changing regulations
◦ ABS is now mandatory in the EU
• Market-differentiating enhancements
◦ Electronic stability control (SC) improves ABS by preventing skidding
• New technology availability
◦ Laser-based distance sensors are more precise than radio-based ones
13-17. Scenario
• Integration can scatter different artifacts
• Different levels of abstraction are not mastered by all stakeholders
[Figure: three abstraction levels and their primary stakeholders: code level (developers), system level (system engineers), management level (project managers)]
20-23. In practical settings...
Complex and large software systems have:
• Diverse set of stakeholders
• Diverse set of artifacts
• Different stakeholders have particular “views” over the software
Stakeholders need a common meeting point
33-37. Hypothesis
Arguments favouring the hypothesis:
• Feature = cohesive requirements bundle
• Requirements are a common point among all stakeholders
• Features are more coarse-grained than individual requirements
◦ Facilitates understanding
• Evolution can be put in simple terms
◦ Add new features, retire old ones, etc.
42. Purpose of our work
Research agenda based on our vision for feature-oriented software evolution
This presentation covers part of that agenda
(see paper for more details)
57-60. Tracing
• Traceability has to be recovered from a multi-space setting:
◦ Recover traceability of different artifacts (e.g., FM, build files, C code)
◦ Integrate the evolution history of those artifacts over time
◦ Draw an evolution history (timeline)
[Figure: evolution timeline over t1-t3 relating features YS and YRS to variants YRS-M1 and YRS-M2 through mappings (M)]
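The build-file side of this tracing can be sketched for the common declarative case. Below is a hypothetical, minimal extractor assuming Kbuild-style rules of the form `obj-$(CONFIG_FOO) += foo.o`; since make is Turing-complete, real build systems need far more than pattern matching, which is exactly what makes the recovery hard.

```python
import re

# Hypothetical sketch: recover feature-to-file traceability from Kbuild-style
# build rules ("obj-$(CONFIG_FOO) += foo.o"). Only the common declarative
# pattern is handled; make is Turing-complete, so this cannot be complete.
RULE = re.compile(r"\s*obj-\$\((CONFIG_\w+)\)\s*\+=\s*(.+)")

def recover_traces(makefile_text):
    """Map each feature (Kconfig symbol) to the .c files it controls."""
    traces = {}
    for line in makefile_text.splitlines():
        m = RULE.match(line)
        if m:
            feature, objs = m.group(1), m.group(2).split()
            files = {o[:-2] + ".c" for o in objs if o.endswith(".o")}
            traces.setdefault(feature, set()).update(files)
    return traces

makefile = """
obj-$(CONFIG_ABS) += abs.o
obj-$(CONFIG_SC)  += sc.o esc_core.o
obj-y             += main.o
"""
```

Unconditional rules (`obj-y`) carry no feature information and are deliberately skipped here.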
63-66. Tracing (Research questions)
• Tracing certain artifacts can be daunting
◦ Individual build rules in build files (e.g., make is Turing-complete)
◦ Fine-grained variability analysis in code is costly
RQ: How to recover traceability links in build files and source code in variability-aware systems?
RQ: Once recovered, how to update them to reflect the temporal evolution in place?
68-71. Tracing (Research questions)
• Different artifacts = different sources to draw the evolution in place
◦ Mailing lists
◦ Commit patches and log messages
◦ Bug reports in bug tracking systems
RQ: Which sources are trustworthy?
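As a sketch of mining one such source, the snippet below scans commit log messages for Kconfig-style feature symbols to assemble a per-feature timeline. The `CONFIG_*` naming convention and the sample commits are assumptions for illustration; mailing lists and bug trackers would need their own parsers, and the trustworthiness question above remains open.

```python
import re

# Hypothetical sketch: commit logs as one source for a feature's evolution
# timeline. Messages are scanned for Kconfig-style symbols; other sources
# (mailing lists, bug trackers) would need dedicated parsers.
FEATURE_REF = re.compile(r"\bCONFIG_[A-Z0-9_]+\b")

def feature_timeline(commits):
    """commits: iterable of (timestamp, message); returns feature -> sorted timestamps."""
    timeline = {}
    for ts, msg in commits:
        for feat in set(FEATURE_REF.findall(msg)):
            timeline.setdefault(feat, []).append(ts)
    return {f: sorted(stamps) for f, stamps in timeline.items()}

commits = [
    (1, "Add CONFIG_ABS support"),
    (2, "Fix skidding bug in CONFIG_SC; adjust CONFIG_ABS glue"),
    (3, "Retire CONFIG_ABS_LEGACY"),
]
```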
76-78. Analyses
After the evolution of the SPL, stakeholders noticed that:
• Maintenance is taking longer
• Productivity has decreased
• Bugs are starting to rise
Well-known phenomena of software aging
93-95. Consistency checking (Research questions)
• Variability-aware analysis is costly.
RQ: Do existing approaches for variability-aware type checking, flow analysis, and model checking scale to large systems?
• Existing flow analysis is intra-procedural.
RQ: How to adapt existing inter-procedural analyses to handle variability?
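One way to see why variability-aware analysis is costly: even the simple check "every reference to a symbol is covered by its definition" quantifies over all configurations. The brute-force sketch below (the features and presence conditions are hypothetical) enumerates all 2^n configurations; SAT- or BDD-based reasoning is what the scalability question is really about.

```python
from itertools import product

def implies(use_cond, def_cond, features):
    """Check use_cond -> def_cond over every feature configuration."""
    for values in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, values))
        if use_cond(cfg) and not def_cond(cfg):
            return False  # a product exists where the reference dangles
    return True

features = ["ABS", "SC"]
# hypothetical: a symbol defined under SC, referenced under SC && ABS -- safe
assert implies(lambda c: c["SC"] and c["ABS"], lambda c: c["SC"], features)
# referencing it under ABS alone breaks products with ABS but without SC
assert not implies(lambda c: c["ABS"], lambda c: c["SC"], features)
```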
98-99. Impact analysis
Goal: assess impact of changes
Scenario:
• To identify bugs, stakeholders in our SPL have created formal specifications of the system’s features
• Support for cruise control (CC)
[Feature-model figure: the Car SPL with features such as ABS, SC, CC, BRA, and Conv, and variants YRS-M1 and YRS-M2]
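A feature model like the one in the figure can be read as propositional constraints over feature selections, so a proposed product configuration can be validated mechanically. The encoding below is a minimal, hypothetical sketch (the specific rules are illustrative, not taken from the paper):

```python
# Hypothetical sketch: a feature model as propositional constraints.
# Feature names mirror the example model; the rules are illustrative.
def valid(cfg):
    rules = [
        cfg["Car"],                      # root feature is always present
        not cfg["SC"] or cfg["ABS"],     # SC improves ABS, so it requires ABS
        not cfg["CC"] or cfg["Car"],     # CC is an optional child of Car
    ]
    return all(rules)

assert valid({"Car": True, "ABS": True, "SC": True, "CC": False})
# selecting SC without ABS violates the cross-tree constraint:
assert not valid({"Car": True, "ABS": False, "SC": True, "CC": False})
```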
101-103. Impact analysis
Stability-control behaviour property: No subsystem increases acceleration when SC is engaged
1. CC cruise speed is set
2. Engage CC
3. Driver loses control
4. Engage SC
5. CC accelerates to achieve cruise speed
Adding CC violates the given property
(Impact analysis aims to detect that promptly)
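The violated property can be checked mechanically on a trace. The sketch below replays the numbered scenario against the rule "no subsystem accelerates while SC is engaged"; the event encoding is hypothetical, and a real impact analysis would explore all interleavings of subsystem behaviour rather than a single trace.

```python
# Hypothetical sketch: replay one event trace against the stability-control
# property "no subsystem increases acceleration while SC is engaged".
def check_trace(events):
    sc_engaged = False
    for subsystem, action in events:
        if subsystem == "SC":
            sc_engaged = (action == "engage")
        elif action == "accelerate" and sc_engaged:
            return (False, subsystem)  # property violated by this subsystem
    return (True, None)

trace = [
    ("CC", "set_cruise_speed"),
    ("CC", "engage"),
    ("SC", "engage"),        # driver loses control, SC kicks in
    ("CC", "accelerate"),    # CC tries to restore cruise speed
]
print(check_trace(trace))  # → (False, 'CC')
```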
104-105. Impact analysis (Research questions)
• Currently, checking consistency between implementation assets (code) and the system’s specified properties is mostly intractable.
RQ: How to verify that the system implementation does not break its specified properties?
107. Architectural analysis
• Feature model = view of the system architecture
• From the recovered traces, one can track the “health of the system”
• Different indicators can be collected to assess the system evolution:
◦ code metrics
◦ process metrics
◦ feature-based metrics
◦ feature-model based metrics
◦ product-line based metrics
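As an example of a feature-based indicator, the sketch below computes a feature's scattering degree by counting the `#ifdef` blocks that reference it across the code base. The `CONFIG_*` convention and the sample sources are assumptions; a robust implementation would use a preprocessor-aware parser rather than a regular expression.

```python
import re
from collections import Counter

# Hypothetical sketch of one feature-based metric: scattering degree,
# counted as the number of #ifdef blocks referencing a feature.
IFDEF = re.compile(r"#\s*if(?:def)?\s+.*?\b(CONFIG_\w+)\b")

def scattering_degree(sources):
    """sources: mapping file name -> contents. Returns a Counter per feature."""
    degree = Counter()
    for text in sources.values():
        degree.update(IFDEF.findall(text))
    return degree

sources = {
    "brake.c": "#ifdef CONFIG_ABS\n...\n#endif\n#ifdef CONFIG_SC\n...\n#endif",
    "ctrl.c": "#if defined(CONFIG_SC)\n...\n#endif",
}
```

A highly scattered feature is a candidate signal of decay, which connects this metric to the scattering-and-defects question on the next slide.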
109. Architectural analysis
• Evidence relating scattering and defects is rather preliminary.
RQ: Can we provide more evidence for the relationship between scattering and defects?
115-118. Recommendations + research questions
Consistency:
• Fix recommendations for different artifact types
RQ: How to devise a fixing recommender integrating different artifacts, with different abstraction levels?
Impact analysis:
• Point out which features are more likely to have defects after a change
RQ: Which feature-based metrics are good defect predictors?
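A defect-prediction recommender could start as simply as ranking features by a weighted combination of feature-based metrics. The metrics and weights below are entirely made up for illustration; the research question above is precisely which metrics carry real predictive power.

```python
# Hypothetical sketch: rank features by defect risk using two illustrative
# feature-based metrics (scattering degree and recent churn). The weights
# are made up; learning them from history is the actual research problem.
def rank_features(metrics, w_scatter=0.6, w_churn=0.4):
    """metrics: feature -> (scattering_degree, churn). Riskiest feature first."""
    scores = {f: w_scatter * s + w_churn * c for f, (s, c) in metrics.items()}
    return sorted(scores, key=scores.get, reverse=True)

metrics = {"ABS": (2, 1), "SC": (9, 5), "CC": (4, 8)}
print(rank_features(metrics))  # → ['SC', 'CC', 'ABS']
```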
120-123. Recommendations + research questions
Architectural analysis:
• Propose merges (features are too similar)
• Suggest feature retirement
• Suggest which features to modularize
RQ: Which scenarios should be supported (are required in practice)?
124. Conclusion
• We hypothesized that feature-oriented evolution can mitigate existing challenges in evolving large, complex systems
• From that hypothesis, we presented our vision based on tracing, analyses, and recommendations
• We have started working on the realization of that vision