Tutorial given at ER 2019. It provides a state of the art of data-driven requirements engineering, covering topics such as feedback management and decision making, among others.
The document discusses data-driven requirements engineering (RE), which uses data collected from system usage to inform the RE process. It describes traditional RE and how data-driven RE aims to supplement it using explicit feedback from users and implicit feedback from usage data. The data is analyzed using natural language processing, machine learning, and topic modeling to categorize issues and identify topics. Challenges of data-driven RE include integrating it into development processes, collecting and processing heterogeneous data, and gaining user trust. Overall, it presents an opportunity to deliver more business value but requires the right approach and consideration of traditional RE methods.
The document discusses how to derive dependency structures for legacy J2EE applications. It proposes analyzing all application tiers together using a language-independent model and parsing various artifacts. Configuration files and limited data flow analysis are used to understand dependencies. Container dependencies are explicitly codified by studying technology specifications and codifying dependency rules to apply when certain code patterns are detected in applications. This allows completing an application's dependency graph.
This document outlines a research plan to collect and identify microservices patterns and anti-patterns. The plan involves extracting patterns from literature and existing software, categorizing the patterns, and creating an automated tool to identify patterns by analyzing deployment scripts and source code. The contributions will be an exhaustive catalog of microservices patterns and anti-patterns, and a fully automated tool to identify them. Challenges include the newness of microservices and lack of open source projects. The goal is to support microservices development and maintenance.
This document presents an approach for automatically identifying antipatterns in microservice-based systems. It defines a meta-model with 13 components to capture necessary information about a system and its microservices. It also identifies 15 common microservice antipatterns. Detection rules are defined for each antipattern based on analyzing the system's source code, dependencies, configuration and other artifacts. The goal is to develop a tool based on this approach to help developers minimize antipatterns in microservice systems and improve their maintenance and evolution.
The document discusses the state of practices of service identification in the industry for migrating legacy systems to service-oriented architectures (SOA). It finds that while service identification is seen as important, it remains primarily a manual process focused on identifying coarse-grained business services from source code and business processes. Wrapping and clustering functionalities are common techniques. Fully automating service identification is still challenging due to the need to understand complex legacy system dependencies. The document recommends service identification be business-driven and follow proven methodologies.
Building a Quality Modelio with Q-Rapids by Softeam (aabherve)
This document discusses Softeam's use of the Q-RAPIDS tool for monitoring software quality metrics across multiple projects. It describes Softeam's development process, tools currently used, initial results from the Q-RAPIDS deployment, and areas for future improvement, including aggregating data from different sources, monitoring multiple parallel projects, and automating management of data collectors.
How to define Quality Models for Measuring Software Quality (uqasar)
Presentation about Software Quality Assurance and the use of the U-QASAR methodology to create a Quality Model that reflects the company's priorities and needs
This document discusses an analysis of factors that affect the productivity of enterprise software projects. It presents the results of regression and variance analyses conducted on a database of over 3,000 Japanese software projects. The regression analysis found that a project's size, number of test cases, and number of faults explained 75% of the variability in project effort. Several qualitative factors were also found to significantly impact productivity based on one-dimensional and two-dimensional variance analyses, including clarity of roles and objectives, working space conditions, and how quality assurance was conducted.
Data Science at Roche: From Exploration to Productionization - Frank Block (Rising Media Ltd.)
The document summarizes the data science process at Roche Diagnostics from initial ideas through productionization. It discusses how the data science team works end-to-end from initial proofs-of-value (POVs) through several selection gates to deploy models into production. Examples are provided of how data insights led to identifying issues in production processes and developing predictive models for applications like sensor image processing, case classification, and advanced service analytics. Key lessons highlighted include the importance of business proximity, developing business literacy, and focusing on innovative ideas that maximize impact to successfully transition data science projects to production.
Melbourne materials institute miicrc rapid productisation (UTSBusinessSchool)
The Rapid Productisation program involves 4 projects over 7 years to create platforms for more efficient manufacturing:
1) Plug and Play manufacturing will develop modular production line components.
2) Manufacturing in the Cloud will provide a design/production portal.
3) Critical Components and Platforms will develop an ASIC chip and bionics foundry.
4) Testing and Compliance will create tools to ensure regulatory approval and quality control.
The goal is to decrease costs and provide resources to help businesses innovate and grow.
FOCUS: A Recommender System for Mining API Function Calls and Usage Patterns (Davide Ruscio)
Software developers interact with APIs on a daily basis and, therefore, often face the need to learn how to use new APIs suitable for their purposes. Previous work has shown that recommending usage patterns to developers facilitates the learning process. Current approaches to usage pattern recommendation, however, still suffer from high redundancy and poor run-time performance. In this paper, we reformulate the problem of usage pattern recommendation in terms of a collaborative filtering recommender system. We present a new tool, FOCUS, which mines open-source project repositories to recommend API method invocations and usage patterns by analyzing how APIs are used in projects similar to the current project. We evaluate FOCUS on a large number of Java projects extracted from GitHub and Maven Central and find that it outperforms the state-of-the-art approach PAM with regards to success rate, accuracy, and execution time. Results indicate the suitability of context-aware collaborative-filtering recommender systems to provide API usage patterns.
Lec01 inroduction to software cost estimation ver1.ppt (JuwieKaren)
This document discusses software cost estimation and productivity measurement. It covers fundamental estimation questions around effort, time and costs. Software cost components include hardware, software, travel, training and effort costs. Productivity can be measured in lines of code, function points or object points. The document also discusses challenges in estimation and productivity comparisons related to language level, quality, and changing requirements.
Curiosity and Xray present - In sprint testing: Aligning tests and teams to r... (Curiosity Software Ireland)
This webinar was co-hosted by Xray and Curiosity Software on 18th May 2021. Watch the on demand recording here: https://opentestingplatform.curiositysoftware.ie/xray-in-sprint-testing-webinar
In-sprint testing must tackle three pressing problems:
1. You must know exactly what needs testing before each release. There’s not time to test everything.
2. You need up-to-date and aligned test assets, including test cases, data, scripts and CI/CD artefacts.
3. Test teams must know what needs testing, when, and have on demand access to environments, tests and data.
These problems are near-impossible to crack at organisations who struggle with application complexity, rapid system change, and overly-manual testing processes. Challenges include:
1. Test creation time. Manually creating test cases, data and scripts is slow and unsystematic, resulting in low coverage tests.
2. Slow test maintenance. Changes break tests, with little time in sprints to check test cases, scripts, and data.
3. Knowing when testing is “done”. There is little measurability or peace of mind when systems “go live”.
This webinar will set out how maintaining a “digital twin” of the system under test prioritises testing time AND maintains rigorous tests in-sprint. You will see how:
1. Intuitive flowcharts generate optimised test cases, scripts, and data.
2. Feeding changes into the models maintains up-to-date tests.
3. Pushing the tests to agile test management tooling then makes sure that teams know which tests to run, when, with full traceability and a measurable definition of ‘done’.
James Walker, Curiosity’s Director of Technology, and Sérgio Freire, Head of Product Evangelism for Xray, will set out this cutting-edge approach to in-sprint testing. Günther-Matthias Bär, Test Automation Engineer at Sogeti, will then draw on implementation experience to discuss the value of the proposed approach.
Benchmarking for Big Data Applications with the DataBench Framework, Arne Ber... (DataBench)
The document discusses benchmarking for big data applications using the DataBench framework. It provides an overview of business and technical benchmarking, describes how DataBench links the two through its workflow and toolbox, and outlines some early results from DataBench's business user survey. It also discusses identifying relevant benchmarks based on the BDVA reference model and introduces some benchmarks that could be integrated into DataBench's toolbox, including HiBench, SparkBench, and YCSB.
2014 Asdenca - Capability-driven development of a soa platform, a case study (CaaS EU FP7 Project)
These slides describe the EVR case study, which focuses on capability modelling within a service-oriented architecture development project. The paper discusses the lessons learned, as well as open challenges that feed back into the improvement of the CDD methodology.
This document discusses techniques for estimating the cost of software projects. It explains that software cost estimation aims to predict the effort, time and total cost required. The key components of software costs are outlined as labor costs, hardware/software costs, and overhead costs. The document then examines various techniques for measuring programmer productivity and estimating project size, including lines of code, function points, and object points. Finally, it analyzes different estimation techniques like algorithmic modeling, expert judgment, analogy, and top-down vs. bottom-up approaches.
Eccenca provides an open source semantic information logistics architecture called SMILA that standardizes interfaces for search, integration, and information management. SMILA creates a large pool of reusable connectors and add-ins through an open source community. It offers a standardized, flexible and cost-effective solution for information logistics projects compared to proprietary alternatives.
This document discusses various software metrics that can be used to measure and improve software development processes and products. It describes several traditional metrics like lines of code and function points. It also discusses more modern frameworks like the Capability Maturity Model Integration and Six Sigma that use a metrics-driven approach. The document provides examples of how different metrics can provide insights into areas like project effort, cost, schedule, quality and productivity. It compares traditional and modern software development techniques and their use of metrics.
Abhishank Gaba has a BASc in Mechatronics Engineering from the University of Waterloo with a GPA of 4.0. He has experience leading projects involving machine learning and computer vision to detect critical points in pipes and identify tissue patterns. His relevant work experience includes product management and software development roles at startups focused on ignition interlock devices and smart underwear. He also has experience in quality assurance and software development.
Requirements traceability ensures that source code is consistent with documentation and that all requirements have been implemented. During software evolution, as features are added, removed, or modified, the code drifts away from its original requirements. Traceability recovery approaches thus become necessary to re-establish the traceability relations between requirements and source code.
This paper presents an approach (Coparvo) complementary to existing traceability recovery approaches for object-oriented programs. Coparvo reduces the false positive links recovered by traditional traceability recovery processes, thus reducing the manual validation effort.
Coparvo assumes that information extracted from different entities (e.g., class names, comments, class variables, or method signatures) constitutes different information sources; these sources may have different levels of reliability in requirements traceability, and each may act as a different expert recommending traceability links.
We applied Coparvo on three data sets, Pooka, SIP Communicator, and iTrust, to filter out false positive links recovered via an information retrieval approach, i.e., the vector space model. The results show that Coparvo significantly improves the accuracy of the recovered links and also reduces up to …
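To make the idea concrete, here is a rough sketch (my own illustration, not the paper's implementation) of vector space model link recovery combined with a simple majority vote across information sources; the toy requirements, class texts, and the 0.1 similarity threshold are all assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def vsm_links(requirements, documents, threshold=0.1):
    """Return {(req_index, doc_index)} pairs whose cosine similarity exceeds the threshold."""
    matrix = TfidfVectorizer().fit_transform(requirements + documents)
    sims = cosine_similarity(matrix[:len(requirements)], matrix[len(requirements):])
    return {(r, d) for r in range(len(requirements)) for d in range(len(documents))
            if sims[r, d] >= threshold}

# Toy data: one textual representation of each class per information source (camel case already split).
requirements = ["the user can save a document", "the user logs in with a password"]
sources = {
    "class names": ["document saver", "login manager"],
    "comments": ["saves the current document to disk", "checks the password of a user at login"],
    "method signatures": ["save document", "authenticate user password"],
}

votes = [vsm_links(requirements, docs) for docs in sources.values()]
accepted = {link for link in set.union(*votes)
            if sum(link in vote for vote in votes) >= 2}   # keep links confirmed by a majority of sources
print(accepted)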
This document is a seminar report submitted by a student named Shahbaz Khan to Visvesvaraya Technological University in partial fulfillment of a bachelor's degree in electronics and communication engineering. The report describes a project to predict house prices in Mumbai using machine learning models. It explores a dataset of Mumbai house listings, applies techniques like data visualization, transformation and several regression models to predict prices. It finds that linear regression has the best performance and can be used to build a house price prediction application.
How to drive real business value from your virtual Supply Chain twin? (Bluecrux)
This is the full presentation of Anneleen Tronquo (Partner bluecrux) & Valerie Vandenbroucke (Product Manager LightsOutPlanning bluecrux), presented at Logipharma 2019 (Wednesday 10 April, 2019). Learn how a virtual twin can enlighten your Supply Chain, using practical case studies.
A Method for Evaluating End-User Development Technologies (Claudia Melo)
Presentation at Americas Conference on Information Systems, 2017. Paper abstract:
End-user development (EUD) is a strategy that can reduce a considerable amount of business demand on IT departments. Empowering the end-user in the context of software development is only possible through technologies that allow them to manipulate data and information without the need for deep programming knowledge. The successful selection of appropriate tools and technologies is highly dependent on the context in which the end-user is embedded. End-users should be a central piece in any software package evaluation, being key in the evaluation process in the end-user development context. However, little research has empirically examined software package evaluation criteria and techniques in general, and in the end-user development context in particular. This paper aims to provide a method for technology evaluation in the context of end-user development and to present the evaluation of two platforms. We conclude our study proposing a set of suggestions for future research.
Presentation at ACM Conference - Semantics2017, September 11--14, 2017, Amsterdam, Netherlands
This work was supported by grants from the EU H2020 Framework Programme provided for the project HOBBIT (GA no. 688227).
Risk and Engineering Knowledge Integration in Cyber-physical Production Syste... (SEAA 2022)
Felix Rinker 1,2
Kristof Meixner 1,2
Sebastian Kropatschek 3
Elmar Kiesling 4
Stefan Biffl 1,3
1 ISE TU Wien
2 CDL SQI TU Wien
3 CDP Wien
4 IDPKM WU Wien
5 OvGU Magdeburg
Managing an Experimentation Platform by LinkedIn Product Leader (Product School)
Main Takeaways:
-Establishing a culture of experimentation at scale
-Developing the product vision and strategy
-Backlog prioritization based on Impact Score formula
This document discusses using use case points (UCP) to estimate software development effort. UCP involves classifying use cases and actors based on complexity, then calculating unadjusted use case and actor weights. Technical and environmental factors are also assessed. These variables are used in an equation to determine the adjusted use case points and estimated effort in hours or weeks. The document presents this method and tools to automate it. It also compares UCP to function points and shares results from applying UCP in three industry projects, finding the estimates were close to expert assessments.
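For illustration, a minimal sketch of the standard use case points calculation usually attributed to Karner; the weights, constants, and 20 hours-per-UCP conversion are the textbook defaults, not values reported in this document.

def use_case_points(uucw, uaw, technical_factor_sum, environmental_factor_sum, hours_per_ucp=20):
    tcf = 0.6 + 0.01 * technical_factor_sum       # technical complexity factor
    ecf = 1.4 - 0.03 * environmental_factor_sum   # environmental complexity factor
    ucp = (uucw + uaw) * tcf * ecf                # adjusted use case points
    return ucp, ucp * hours_per_ucp               # estimated effort in person-hours

# uucw: sum of use case weights (simple = 5, average = 10, complex = 15)
# uaw:  sum of actor weights   (simple = 1, average = 2,  complex = 3)
ucp, effort = use_case_points(uucw=150, uaw=9, technical_factor_sum=30, environmental_factor_sum=20)
print(f"UCP = {ucp:.1f}, effort = {effort:.0f} person-hours")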
This document discusses feature engineering and machine learning approaches for predicting customer behavior. It begins with an overview of feature engineering, including how it is used for image recognition, text mining, and generating new variables from existing data. The document then discusses challenges with artificial intelligence and machine learning models, particularly around explainability. It concludes that for smaller datasets, feature engineering can improve predictive performance more than complex machine learning models, while large datasets are better suited to machine learning approaches. Testing on a small travel acquisition dataset confirmed that traditional models with feature engineering outperformed neural networks.
On the use of requirement patterns to analyse RfP documents - ER 2019 (Xavier Franch)
This paper proposes using requirement patterns to analyze Request for Proposal (RFP) documents from the perspective of technology providers responding to calls for tenders. The researchers conducted a case study analyzing RFPs in the railway domain. They identified requirement patterns, added attributes to patterns to indicate domain and compliance rules, and created a pattern catalogue. They then evaluated the benefits of using patterns through a questionnaire with experts. Respondents were cautiously optimistic about productivity gains but concerned about output quality. Future work includes improving tool support and integrating customers and providers into the bidding process.
CIbSE-RET 2019 keynote - The Road towards Data-Driven RE (Xavier Franch)
The keynote presentation discusses moving from traditional requirements engineering to data-driven requirements engineering. It outlines the data-driven requirements engineering cycle which involves gathering feedback, analyzing usage data, mining repositories, and using analytics for decision making. Feedback can be gathered explicitly from users and implicitly through monitoring quality of service. Both forms of feedback need to be analyzed, categorized, and summarized. Repository mining involves defining quality metrics and evaluating software attributes. All gathered and analyzed data can then be used to support strategic decision making about requirements through analytics tools and stakeholder prioritization. While data-driven requirements engineering offers benefits, challenges also exist in terms of resources, expertise, and transparency.
Priore 2017 - release planning and project management tools (Xavier Franch)
This document summarizes the current state of practice in software release planning (SRP). It analyzes 7 major project management tools and finds that while they all offer basic task scheduling capabilities, none provide advanced AI-assisted features for dynamic feature release planning and task scheduling. The document observes that academic approaches to SRP have struggled to enter the market due to competition and challenges demonstrating benefits compared to established tools under real-world conditions. It suggests a strategy of developing AI plugins for popular tools like JIRA.
This document outlines a study investigating how companies deal with non-functional requirements (NFRs) in model-driven development (MDD) processes. The study involves surveying companies across Europe using semi-structured interviews. The objective is to understand the context in which companies adopt MDD, the extent to which their MDD approaches support NFRs, and how companies deal with NFRs that are not supported. The document describes the research questions, context of the study team, discussions around key decisions, study protocol, threats to validity, and next steps to complete the study and produce results.
This document outlines the agenda for a tutorial on modeling and analyzing business and software ecosystems. It begins with definitions of business ecosystems, software ecosystems, and open source software ecosystems. It discusses modeling approaches for ecosystems from both a value and software architecture perspective. The tutorial will then cover intentional modeling and analysis techniques, a hands-on exercise for applying these techniques, and modeling open source software ecosystems from an intentional perspective.
This document discusses modeling and analyzing open source software (OSS) ecosystems. It presents an agenda that includes an introduction to ecosystems, the case of OSS ecosystems, ecosystem modeling using the i* modeling language, ecosystem analysis using i*-based techniques, and a final discussion. It provides background on business ecosystems and software ecosystems, and the roles that different actors can play within ecosystems, such as keystone, dominator, developer, and niche player. The document aims to demonstrate how the i* modeling language can be used to model and analyze OSS ecosystems.
This document discusses open source software (OSS) adoption risks and risk modeling. It provides examples of common OSS risks like component selection errors and integration failures. It also outlines measures and indicators of risks in OSS ecosystems, like bug fix times and forum activity. The document then describes how risks can be modeled as entities with relationships, and shows how measures can provide evidence for situations and events. Finally, it discusses using statistical analysis and social network analysis to study OSS projects and communities, and using Bayesian networks to represent links between measures and risks.
1) The document describes a case study where a group of non-technical university stakeholders used the i* modeling language to develop an enterprise architecture model for their university.
2) The case study found that providing basic training, guidelines for quality, and feedback was important for non-technical users to learn and apply i*. Managing the size of the model and not over-constraining users' creativity was also important.
3) The lessons learned were organized into induction, execution, and consolidation phases. Overall, the use of i* by non-experts was found to be challenging but the lessons helped conduct a successful case study.
A layered approach to risk management in OSS projects - presented at OSS 2014 (Xavier Franch)
This document presents a 3-layer approach to managing risks in open source software projects. Layer 1 involves collecting data through scenario-based assessments of risk drivers and their distributions. Layer 2 computes risk indicators for projects and communities and links them to business risks. Layer 3 uses goal reasoning to analyze the impact of risks on business goals. The approach separates concerns in risk analysis and the authors are working to improve automation and apply the approach through a platform called RISCOSS.
15. Preprocessing (ER 2019, Xavier Franch)
• From text to lexical/syntactical units
  Tokenization: splitting the input into parts
  Stemming / Lemmatization: lemmatization is more accurate
  Phrasing
• Part-of-speech tagging
  “I have a problem when saving the document, please check it”
  I/PRP have/VBP a/DT problem/NN when/WRB saving/VBG the/DT document/NN ,/, please/VBP check/VB it/PRP
  Speech act: Requestive
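A minimal sketch of these preprocessing steps applied to the slide's example sentence, using NLTK as an illustrative (assumed) library choice:

import nltk
from nltk.stem import WordNetLemmatizer

# Resource names may differ slightly across NLTK versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("wordnet", quiet=True)

feedback = "I have a problem when saving the document, please check it"

tokens = nltk.word_tokenize(feedback)            # tokenization: splitting the input into parts
tagged = nltk.pos_tag(tokens)                    # part-of-speech tagging (Penn Treebank tags)
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(t.lower()) for t in tokens]   # lemmatization

print(tagged)    # e.g. [('I', 'PRP'), ('have', 'VBP'), ('a', 'DT'), ('problem', 'NN'), ...]
print(lemmas)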
20. Sentiment analysis (ER 2019, Xavier Franch)
• Process of assigning a quantitative value to a piece of text expressing an affect or mood
• Use of dictionaries
  But of course, not easy...: “Great, I love this new feature that gives me this wonderful headache”
• Models using advanced ML techniques
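A minimal sketch of dictionary-based sentiment scoring using NLTK's VADER lexicon (an assumed choice, not one named in the tutorial); note how the sarcastic example can still receive a misleadingly positive score:

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

feedback = [
    "I love this new feature",
    "Great, I love this new feature that gives me this wonderful headache",  # sarcasm fools pure dictionaries
]
for text in feedback:
    scores = analyzer.polarity_scores(text)   # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
    print(f"{scores['compound']:+.3f}  {text}")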
41. 1. Definition of metrics (ER 2019, Xavier Franch)
• A raw metric is obtained from an attribute plus a data source, selected according to business value and data availability; dividing it by a size metric yields a normalized metric.
• Example: attribute = code complexity, data source = SonarQube, raw metric = function’s cyclomatic complexity, size metric = number of functions, normalized metric = average cyclomatic complexity. Another data source and raw metric pair: GitHub, number of commits.
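A minimal sketch of the raw/size/normalized relationship from the example above; the input data structure is an assumption, since real values would come from a collector such as SonarQube:

from dataclasses import dataclass
from typing import List

@dataclass
class FunctionMeasure:
    name: str
    cyclomatic_complexity: int   # raw metric, measured per function (e.g. by SonarQube)

def average_cyclomatic_complexity(functions: List[FunctionMeasure]) -> float:
    """Normalized metric = raw metric summed over functions / size metric (number of functions)."""
    if not functions:
        return 0.0
    total = sum(f.cyclomatic_complexity for f in functions)   # raw metric
    size = len(functions)                                     # size metric
    return total / size

measures = [FunctionMeasure("save", 7), FunctionMeasure("load", 3), FunctionMeasure("render", 5)]
print(average_cyclomatic_complexity(measures))   # 5.0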
54. Data-driven RE in context (ER 2019, Xavier Franch)
• How to connect data-driven RE to the software process
• How can practitioners use this information and integrate it into their processes and tools to decide about what should be done
58. Lessons learned (ER 2019, Xavier Franch)
• Organizational
  Incremental adoption
  Monitor progress with strategic indicators
  Involve experts
• Value
  Transparency as a business value
  Tailoring to different scopes
• Technological
  Single access point to software quality related data
59. Online Controlled Experimentation (ER 2019, Xavier Franch)
• Collecting data from users based on two competing versions that differ in some change D
• Difference in metrics attributed to D (modulo statistical significance)
• Intensively used by big players such as Google, Microsoft, etc.
  Not so easy for smaller companies
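A minimal sketch of the statistical-significance check behind such an experiment, comparing a conversion-style metric between two competing versions with a two-proportion z-test; the counts are made-up illustrative numbers:

from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: versions A and B have the same rate."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (success_b / n_b - success_a / n_a) / se
    return z, 2 * norm.sf(abs(z))

# Version A (control) vs. version B (containing the difference D)
z, p = two_proportion_ztest(success_a=420, n_a=5000, success_b=480, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")   # attribute an effect to D only if p is below the chosen significance level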
65. Data-driven RE (ER 2019, Xavier Franch)
• Offers a great opportunity for delivering more business value to systems’ stakeholders
• But…
  Not a hammer for every nail
  Data-driven needs data
• Still traditional methods, at least to start with
• The role of traditional RE in the loop is a matter of debate
66. Beyond RE (ER 2019, Xavier Franch)
• Data is prevalent in many fields…
  … and also in conceptual modeling
• We had a couple of examples yesterday…