Slides of my keynote at QUATIC 2019.
Abstract: Uncertainty is the quality or state that involves lacking information or insufficient knowledge. Uncertainty can be due to different reasons, including incomplete or inaccurate information, inexact data or measurements, imprecise human judgments, or approximate estimations. The explicit representation of uncertainty is gaining attention among software engineers in order to provide more faithful systems representations, more accurate design methods, and better estimations of the development processes. However, incorporating uncertainty into our systems models is not enough. Uncertainty also affects many aspects related to the quality of systems, products, processes, and data, including how uncertainty is taken into account when designing our systems, measured when evaluating their quality, and perceived by customers and users. In fact, uncertainty – and, more specifically, the lack of knowledge about the system, our measuring tools, and our potential users – should be incorporated into our quality models, too. This talk identifies several kinds of uncertainties that have a direct impact on quality, and discusses some challenges on how quality needs to be planned, modeled, designed, measured and ensured in the presence of uncertainty.
Java vs. C#
The document compares Java and C# programming languages. It discusses some key differences:
1. Syntax elements such as main method signatures, print statements, and array declarations differ slightly between the two languages.
2. Some Java concepts are modified in C#, such as polymorphism requiring the virtual keyword, restrictions on operator overloading, and switch statements allowing string cases.
3. C# introduces new concepts not in Java like enumerations, foreach loops, properties to encapsulate fields, pointers in unsafe contexts, and passing arguments by reference.
The document provides information about differences between C# and C++ programming languages. It discusses key differences in areas such as pointers, references, classes and structs, accessing native code, destruction handling, operator overloading, preprocessor directives, and exceptions. It also covers C# features like delegates, events, attributes, properties, and configuration management using XML files. The document is intended to help C++ programmers transition to C# development.
This document discusses how to organize and manipulate files in Python. It introduces the shutil module, which contains functions for copying, moving, renaming, and deleting files. It describes how to use shutil functions like copy(), copytree(), move(), rmtree() to perform common file operations. It also introduces the send2trash module as a safer alternative to permanently deleting files. Finally, it discusses walking directory trees using os.walk() to perform operations on all files within a folder and its subfolders.
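The operations described above can be sketched in a few lines. This is a minimal illustration using a throwaway temporary directory and hypothetical file names, not code from the document itself:

```python
# Minimal sketch of the shutil operations described above (hypothetical paths).
import os
import shutil
import tempfile

# Work inside a throwaway directory so nothing real is touched.
root = tempfile.mkdtemp()
src = os.path.join(root, "notes.txt")
with open(src, "w") as f:
    f.write("hello")

# copy() duplicates a single file; move() renames or relocates it.
dup = shutil.copy(src, os.path.join(root, "notes_copy.txt"))
moved = shutil.move(dup, os.path.join(root, "archive.txt"))

# os.walk() visits every folder and file under a directory tree.
found = []
for dirpath, dirnames, filenames in os.walk(root):
    found.extend(filenames)

print(sorted(found))
shutil.rmtree(root)  # rmtree() deletes a whole tree -- use with care
```

For trash-aware deletion the summary's safer alternative is send2trash.send2trash(path), a third-party module that must be installed separately.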
Software Development Best Practices: Separating UI from Business Logic (ICS)
One of the most effective software engineering approaches involves separating the user interface (frontend) from the business logic (backend), especially when developing embedded devices. This practice makes it far easier to code for a single specific piece of functionality than to code an overall product.
In this webinar we’ll explain not only what’s involved in separating the UI from business logic in your next Qt project, but explore some of the key benefits of this approach, including:
Parallel development
Modularity
Enhanced testability
Accelerated development
Architecture that easily accommodates future changes
We’ll also touch on a few of the drawbacks, chief among them the need to implement new strategies for independent testing, build, and deployment, tasks that take extra time and resources.
[PR12] PR-050: Convolutional LSTM Network: A Machine Learning Approach for Pr... (Taegyun Jeon)
PR-050: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting
Original Slide from http://home.cse.ust.hk/~xshiab/data/valse-20160323.pptx
Youtube: https://youtu.be/3cFfCM4CXws
Continual/Lifelong Learning with Deep Architectures (Vincenzo Lomonaco)
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning. One of the grand goals of AI is building an artificial continually learning agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex skills and knowledge.
"Continual Learning" (CL) is indeed a fast emerging topic in AI concerning the ability to efficiently improve the performance of a deep model over time, dealing with a long (and possibly unlimited) sequence of data/tasks. In this workshop, after a brief introduction of the topic, we’ll implement different Continual Learning strategies and assess them on common vision benchmarks. We’ll conclude the workshop with a look at possible real world applications of CL.
TinyML: Machine Learning for Microcontrollers (Robert John)
This document discusses machine learning on embedded edge devices with constraints. It notes that edge devices have limited resources like small storage, no OS, and lack floating point support. This makes pre-trained models difficult to deploy as-is on microcontrollers. The document recommends keeping models simple and optimized for the specific task by using quantization during and after training to reduce model size. Resources for learning TinyML and deploying to microcontrollers are also provided.
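The quantization step mentioned above can be illustrated with a toy affine (scale and zero-point) mapping from floats to 8-bit integers. This is a hedged sketch of the general idea, not the document's own implementation, and the weight values are invented:

```python
# Toy post-training quantization: map float weights to 8-bit integers
# using a scale and zero point, then map back to see the error.
def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against constant weights
    zero_point = round(qmin - lo / scale)
    q = [min(qmax, max(qmin, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize(w)
restored = dequantize(q, s, z)
print(q)
print(restored)
```

Each weight now fits in one byte instead of four, at the cost of a small reconstruction error; frameworks like TensorFlow Lite apply the same idea with per-tensor or per-channel parameters.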
This document provides an introduction and overview of the Python programming language. It discusses Python's origins, philosophy, features, and uses. Key points covered include Python's simplicity, power, object-oriented approach, and wide portability. Examples are provided of basic Python syntax and constructs like strings, lists, functions, modules, and dictionaries.
The document discusses several design patterns including creational, structural, and behavioral patterns. Creational patterns (like factory method) help create objects in a way that decouples object creation from use. Structural patterns (like adapter) help manage relationships between entities. Behavioral patterns (like chain of responsibility) help define communication between objects to distribute behavior. Many patterns provide flexibility, reuse, and loose coupling between classes. Some patterns may introduce more complexity or have limitations in certain situations.
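Two of the patterns named above can be sketched briefly. The class and function names here are hypothetical examples, not taken from the document:

```python
# Factory method: object creation is decoupled from use.
import json

class JsonExporter:
    def export(self, data):
        return json.dumps(data)

class CsvExporter:
    def export(self, data):
        return ",".join(str(v) for v in data.values())

def make_exporter(kind):
    # Callers ask for a kind of exporter; they never name a concrete class.
    return {"json": JsonExporter, "csv": CsvExporter}[kind]()

# Adapter: wrap an incompatible interface behind the expected one.
class LegacyWriter:
    def write_record(self, text):
        return f"legacy:{text}"

class LegacyExporterAdapter:
    def __init__(self, legacy):
        self.legacy = legacy

    def export(self, data):
        return self.legacy.write_record(str(data))

print(make_exporter("csv").export({"a": 1, "b": 2}))
print(LegacyExporterAdapter(LegacyWriter()).export({"a": 1}))
```

Client code depends only on the export() interface, which is the loose coupling the summary describes.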
Golang basics for Java developers - Part 1 (Robert Stern)
This document provides an overview of Golang basics for Java developers. It covers Golang's history, features, syntax, data types, flow control, functions and interfaces, concurrency, and differences from Java. Key points include Golang being a compiled, statically typed language created at Google in 2007, its use of packages and imports, basic types like strings and integers, slices for dynamic arrays, maps for key-value pairs, functions with receivers, errors instead of exceptions, and goroutines for concurrency with channels.
Qt5 is a cross-platform application development framework that allows developers to write applications once and deploy them across many operating systems. It provides tools like Qt Creator for building graphical user interfaces, libraries for tasks like networking, multimedia, and data storage, and it uses C++ for application logic with the option of QML and JavaScript for declarative user interface development.
DetectoRS for Object Detection/Segmentation
On COCO test-dev, DetectoRS achieves state-of-the-art 55.7% box AP for object detection, 48.5% mask AP for instance segmentation, and 50.0% PQ for panoptic segmentation.
(2020.07)
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
The slide covers a few state of the art models of word embedding and deep explanation on algorithms for approximation of softmax function in language models.
JVM Mechanics: Understanding the JIT's Tricks (Doug Hawkins)
In this talk, we'll walk through how the JIT optimizes a piece of Java code step by step. In doing so, you'll learn some of the amazing feats of optimization that JVMs can perform, but also some surprisingly simple things that prevent your code from running fast.
Polymorphism allows objects of different types to be treated as a common type. It is implemented by adding a virtual pointer (VPTR) to objects that points to a virtual function table (VTABLE) containing pointers to each object's virtual methods. This allows calling the same method on different types of objects in a polymorphic way while executing the correct implementation based on the object's actual type at runtime.
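In C++ the dispatch table is hidden inside the object layout; the mechanism can be made visible by modeling the VTABLE as an explicit dict of function pointers. This is a rough conceptual sketch with invented shape types, not how any real runtime implements it:

```python
# Model each type's VTABLE as a dict of function pointers, and each
# object's VPTR as a reference to its type's table.
def circle_area(obj):
    return 3.14159 * obj["r"] ** 2

def square_area(obj):
    return obj["side"] ** 2

CIRCLE_VTABLE = {"area": circle_area}
SQUARE_VTABLE = {"area": square_area}

shapes = [
    {"vptr": CIRCLE_VTABLE, "r": 1.0},
    {"vptr": SQUARE_VTABLE, "side": 2.0},
]

# The same call site runs a different implementation per object:
# look up "area" through each object's VPTR and call it.
areas = [s["vptr"]["area"](s) for s in shapes]
print(areas)
```

The single expression s["vptr"]["area"](s) is the polymorphic call; which function runs is decided by the table the object points to, exactly as the VPTR/VTABLE description above explains.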
Constructors and destructors in Python.
Constructors are special methods that are called automatically when an object is created. They initialize variables and ensure objects are properly initialized. There are two types of constructors: default and parameterized. Default constructors take no arguments; parameterized constructors do.
Destructors are called when an object is destroyed. Defined using __del__(), they are useful for releasing resources like closing files before a program exits.
The document then provides code examples of classes with constructors, parameterized constructors, and destructors. It also discusses Python's garbage collection and how the collector deletes unneeded objects to free memory space.
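A minimal sketch of the constructor and destructor behavior described above follows. The FileLogger class is a hypothetical example; in Python a "default" versus "parameterized" constructor is usually a single __init__ with default argument values:

```python
class FileLogger:
    def __init__(self, name="log"):  # default value covers the no-argument case
        self.name = name
        self.events = ["opened"]

    def __del__(self):  # destructor, called when the object is destroyed
        # A real class would release resources here, e.g. close a file.
        self.events.append("closed")

default_logger = FileLogger()        # no-argument construction
named_logger = FileLogger("audit")   # parameterized construction
print(default_logger.name, named_logger.name)
```

Note that Python's garbage collector decides when __del__ actually runs, so it should not be relied on for time-critical cleanup; context managers are the idiomatic alternative.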
This document discusses classes and objects in Python. It defines a Calculator class and demonstrates how to create class attributes, methods, and instances. It explains the __init__ method, self keyword, and how to access attributes and methods. It also covers data attributes versus class attributes, inheritance, method overriding, and calling parent methods. The document provides examples to illustrate these object-oriented programming concepts in Python.
This document provides an overview of the Python programming language. It discusses Python's history and evolution, its key features like being object-oriented, open source, portable, having dynamic typing and built-in types/tools. It also covers Python's use for numeric processing with libraries like NumPy and SciPy. The document explains how to use Python interactively from the command line and as scripts. It describes Python's basic data types like integers, floats, strings, lists, tuples and dictionaries as well as common operations on these types.
1. The document discusses recent developments in transformer architectures in 2021. It covers large transformers with models of over 100 billion parameters, efficient transformers that aim to address the quadratic attention problem, and new modalities like image, audio and graph transformers.
2. Issues with large models include high costs of training, carbon emissions, potential biases, and static training data not reflecting changing social views. Efficient transformers use techniques like mixture of experts, linear attention approximations, and selective memory to improve scalability.
3. New modalities of transformers in 2021 include vision transformers applied to images and audio transformers for processing sound. Multimodal transformers aim to combine multiple modalities.
Restricted Boltzmann Machine (RBM) presentation of fundamental theory (Seongwon Hwang)
The document discusses restricted Boltzmann machines (RBMs), a type of neural network that can learn probability distributions over its input data. It explains that RBMs define an energy function over hidden and visible units, with no connections between units within the same group. This conditional independence allows efficient computation of conditional probabilities. RBMs are trained using maximum likelihood, minimizing the negative log-likelihood of the training data by gradient descent.
Over the last year there has been a lot of buzz about Clean Architecture in the Android community, but what is Clean Architecture? How does it work? And should I be using it? Recently at Badoo we decided to rewrite our messenger component.
Over the years this core piece of functionality in our app has become large and unwieldy. We wanted to take a fresh approach to try and prevent this from happening again. We chose to use Clean Architecture to achieve our goal. This talk intends to share our journey from theory to implementation in an application with over 100 million downloads. By the end, you should not only understand what Clean Architecture is, but how to implement it, and whether you should.
Expressing Confidence in Model and Model Transformation Elements (Lola Burgueño)
The expression and management of uncertainty, both in the information and in the operations that manipulate it, is a critical issue in those systems that work with physical environments. Measurement uncertainty can be due to several factors, such as unreliable data sources, tolerance in the measurements, or the inability to determine if a certain event has actually happened or not. In particular, this contribution focuses on the expression of one kind of uncertainty, namely the confidence on the model elements, i.e., the degree of belief that we have on their occurrence, and on how such an uncertainty can be managed and propagated through model transformations, whose rules can also be subject to uncertainty.
Representing and generating uncertainty effectively presentation (Azdeen Najah)
Prof. Frank H. Knight (1921) proposed that "risk" is randomness with knowable probabilities, and "uncertainty" is randomness with unknowable probabilities. However, risk and uncertainty both share features with randomness. The illustration here explains the relationship of the concepts better than words...
The document discusses representing belief uncertainty in software models. It proposes using a Bayesian probability approach to quantify belief uncertainty, where degrees of belief for model statements are assigned by belief agents. A UML profile and operational semantics are defined to explicitly represent belief agents and propagate credence values through dependent statements. This allows querying credence values to understand the level of confidence in different parts of the model based on the assessing agent. Future work includes associating evidence with beliefs and applying this approach to other model types.
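The propagation of credence values through dependent statements can be sketched with a toy rule: if the beliefs in the premises are treated as independent, the credence in a statement that depends on all of them is their product. This is an illustrative assumption, not the paper's actual operational semantics, and the agents and statements are invented:

```python
# Toy credence propagation: independent beliefs combine by multiplication.
def propagate(credences):
    """Credence in a statement that depends on all given statements."""
    result = 1.0
    for c in credences:
        result *= c
    return result

# Two belief agents assign different credences to the same premises,
# so they reach different confidence in the derived statement.
agent_a = {"sensor_ok": 0.9, "model_valid": 0.8}
agent_b = {"sensor_ok": 0.6, "model_valid": 0.95}

for name, agent in [("A", agent_a), ("B", agent_b)]:
    print(name, propagate(agent.values()))
```

The point the summary makes survives the simplification: the same model element carries different credence depending on which agent is assessing it.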
This document provides an overview of programmatic risk management. It discusses:
1. The importance of managing risk to cost, schedule, and technical performance for project success.
2. How single point estimates are not sufficient and statistical estimates are needed to build a credible cost and schedule model given the uncertainty inherent in projects.
3. The key aspects of risk management including identifying risk, analyzing risk probability and impact, and communicating risk as an ongoing process for decision making.
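Point 2 above, replacing single-point estimates with statistical ones, can be shown with a tiny Monte Carlo schedule model. The task durations are hypothetical three-point estimates, and triangular distributions are just one common modeling choice:

```python
# Monte Carlo schedule estimate: sample total duration from per-task
# triangular distributions instead of summing single-point guesses.
import random

random.seed(42)

# Three tasks, each as (optimistic, most likely, pessimistic) days.
tasks = [(2, 4, 9), (5, 7, 14), (1, 2, 5)]

totals = []
for _ in range(10_000):
    # random.triangular takes (low, high, mode).
    totals.append(sum(random.triangular(lo, hi, mode)
                      for lo, mode, hi in tasks))

totals.sort()
p50 = totals[len(totals) // 2]
p80 = totals[int(len(totals) * 0.8)]
print(f"P50 = {p50:.1f} days, P80 = {p80:.1f} days")
```

The sum of the "most likely" values is 13 days, yet the P80 confidence level sits well above that, which is exactly why a single-point estimate is not credible on its own.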
Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size (nQuery)
Title: Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch Here: http://bit.ly/2ndRG4B
In this webinar you’ll learn about:
Benefits of Sensitivity Analysis: What does the researcher gain by conducting a sensitivity analysis?
Why isn't Sensitivity Analysis formalized: Why does sensitivity analysis still lack the type of formalized rules and grounding to make it a routine part of sample size determination in every field?
How Bayesian Assurance works: Using Bayesian Assurance provides key contextual information on what is likely to happen over the total range of possible values rather than the small number of fixed points used in a sensitivity analysis.
Elicitation & SHELF: How expert opinion is elicited and then how to integrate these opinions with each other plus prior data using the Sheffield Elicitation Framework (SHELF)
Why use in both Frequentist or Bayesian analysis: How and why these methods can be used for studies which will use Frequentist or Bayesian methods in their final analysis
Plus more
The document discusses detecting unknown insider threat scenarios. It proposes an ensemble-based, unsupervised technique to robustly detect potential insider threats, including scenarios not previously identified. The approach uses a variety of individual detectors combined using anomaly detection ensemble techniques. It explores factors like the number and variety of detectors, and incorporating existing knowledge from scenario-based detectors. The technique is evaluated on its ability to detect unknown scenarios in real data. Several new insider threat scenarios and solutions are presented, such as wearable technologies, outsourced systems, knowing detection methods, and activity outside work.
Managing in the presence of uncertainty (Glen Alleman)
Uncertainty is the source of risk. Uncertainty comes in two types, aleatory and epistemic. It is important to understand both and deal with both in distinct ways, in order to produce a credible risk handling strategy.
Statistical inference is a process of making conclusions about a population based on a sample of data. It involves using statistical methods to draw inferences about the population parameters based on sample data. There are two main types of statistical inference: estimation and hypothesis testing. Estimation involves using sample data to estimate population parameter values like the mean or standard deviation, while hypothesis testing involves specifying and testing hypotheses about population parameters.
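The two kinds of inference described above can be shown side by side on one sample. The data below is invented, and the interval uses a normal approximation (z = 1.96) rather than a t distribution, to keep the sketch stdlib-only:

```python
# Estimation and hypothesis testing on one small (illustrative) sample.
import math
import statistics

sample = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1, 4.9, 5.0]

# Estimation: point estimate plus an approximate 95% confidence
# interval for the population mean.
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))
ci = (mean - 1.96 * sem, mean + 1.96 * sem)

# Hypothesis testing: is the population mean 5.0?
# |z| > 1.96 would reject the null hypothesis at the 5% level.
z = (mean - 5.0) / sem
print(f"mean={mean:.2f} CI=({ci[0]:.2f}, {ci[1]:.2f}) z={z:.2f}")
```

With so few observations a t-based interval would be slightly wider; the structure of the reasoning, sample statistic plus a measure of its uncertainty, is the same either way.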
Data science is a field that involves using statistical and computational methods to analyze and extract insights from data. It plays a crucial role in various industries, from business and healthcare to finance and technology.
Computers have played an increasingly important role in pharmaceutical research and development since the 1960s. In the 1960s, computational chemistry was still primarily conducted in academia. Programs were shared through repositories like the Quantum Chemistry Program Exchange. In the 1970s, pharmaceutical companies like Lilly and Merck began adopting computational techniques. The 1980s saw further growth with advances in computing power from technologies like mainframe computers and personal computers. Statistical modeling and optimization techniques became more widely used in the 1990s to aid drug discovery and development. Population modeling also emerged as a tool to understand variability in drug exposure and response.
This document provides guidance on measurement uncertainty and detection/quantification limits according to international standards. It discusses key concepts like types of uncertainty evaluation, propagation of uncertainty, and expanded uncertainty. It recommends following the ISO Guide to the Expression of Uncertainty in Measurement for terminology and methods. It also discusses definitions of minimum detectable concentration and minimum quantifiable concentration from IUPAC guidance. The document aims to unify approaches to uncertainty and detection/quantification limits and provide practical recommendations and examples.
This document proposes a new technique called LIME (Local Interpretable Model-agnostic Explanations) that can explain the predictions of any classifier or regressor in an interpretable and faithful manner. It does this by learning an interpretable model locally around the prediction. It also proposes a method called SP-LIME to select a set of representative individual predictions and their explanations in a non-redundant way to help evaluate whether a model as a whole can be trusted before being deployed. The authors demonstrate LIME on different models for text and image classification and show through experiments that explanations can help humans decide whether to trust a prediction, choose between models, improve an untrustworthy classifier, and identify cases where a classifier should not be
This document provides an overview of uncertainty and probability theory concepts for artificial intelligence. It discusses acting under uncertainty, utility theory, basic probability notation, and the axioms of probability. Key concepts covered include prior and conditional probability, propositions, atomic events, random variables, joint probability distributions, and the probability axioms. The document is intended to introduce foundational probability and decision theory concepts for agents operating with uncertain knowledge.
1) This document outlines an agenda for a workshop on programmatic risk management that covers topics such as risk management principles, basic statistics, Monte Carlo simulation theory, using Microsoft Project and Risk+ software, risk ranking, and building a credible schedule.
2) It discusses five key principles of managing programmatic risk: having a strategy rather than relying on hope, understanding that single point estimates are inaccurate without variance data, integrating cost, time and technical performance, using a risk management process and model rather than "driving in the dark," and ensuring effective risk communication.
3) The mechanics section describes how to set up a Risk+ simulation integrated with
hisory of computers in pharmaceutical research presentation.pptxDhanaa Dhoni
Computers have been used in pharmaceutical research and development since the 1940s. Early computers were large mainframe systems that were expensive and shared between organizations. By the 1960s, some pharmaceutical companies had acquired early computers like the IBM 650 to assist with scientific tasks. Today, computers are essential for tasks across the pharmaceutical industry from drug design and clinical trials to manufacturing, sales, and more. Advanced statistical modeling and software continue to be important tools in pharmaceutical research and development.
The document discusses efficient reasoning in artificial intelligence systems. It describes how reasoning systems use stored information to derive conclusions and answers to queries. However, as reasoning systems become more expressive, they can also become less efficient or even undecidable. The document surveys techniques for addressing this tradeoff between expressiveness and efficiency in both logic-based and probabilistic reasoning systems. These techniques allow systems to sacrifice some correctness, precision, or expressiveness to gain efficiency.
This document discusses measurement in research and provides examples and guidelines. It covers topics such as selecting observable events, assigning numbers or symbols to represent aspects of events, applying mapping rules, and different levels of measurement including nominal, ordinal, interval and ratio scales. Reliability and validity are important criteria for good measurement. The document also discusses sampling methods like probability and non-probability designs as well as factors to consider for determining sample size.
Dynamic Rule Base Construction and Maintenance Scheme for Disease Predictionijsrd.com
Business and healthcare application are tuned to automatically detect and react events generated from local are remote sources. Event detection refers to an action taken to an activity. The association rule mining techniques are used to detect activities from data sets. Events are divided into 2 types' external event and internal event. External events are generated under the remote machines and deliver data across distributed systems. Internal events are delivered and derived by the system itself. The gap between the actual event and event notification should be minimized. Event derivation should also scale for a large number of complex rules. Attacks and its severity are identified from event derivation systems. Transactional databases and external data sources are used in the event detection process. The new event discovery process is designed to support uncertain data environment. Uncertain derivation of events is performed on uncertain data values. Relevance estimation is a more challenging task under uncertain event analysis. Selectability and sampling mechanism are used to improve the derivation accuracy. Selectability filters events that are irrelevant to derivation by some rules. Selectability algorithm is applied to extract new event derivation. A Bayesian network representation is used to derive new events given the arrival of an uncertain event and to compute its probability. A sampling algorithm is used for efficient approximation of new event derivation. Medical decision support system is designed with event detection model. The system adopts the new rule mapping mechanism for the disease analysis. The rule base construction and maintenance operations are handled by the system. Rule probability estimation is carried out using the Apriori algorithm. The rule derivation process is optimized for domain specific model.
Similar to Modeling and Evaluating Quality in the Presence of Uncertainty (20)
Modeling the behavior of complex systems that operate in real environments, deal with physical elements, or interact with humans is a challenging task. It involves the explicit representation of aspects of behavioral uncertainty that are inherent in the system but generally neglected in software models. In this paper, we focus on the explicit representation of the behavior of objects of complex systems, considering their motivations, randomness, and the different types of underlying uncertainty that affect their actions. We show how such uncertain behaviors can be effectively modeled in UML and OCL, and how the specifications produced can be used to simulate and analyze these systems.
Knowledge-based applications that deal with uncertainty usually represent it by means of a confidence score that expresses the probability that a given fact is true. However, different users may have distinct opinions about the same fact, something that is not considered in existing proposals. This is critical in a number of areas where individual opinions need to be taken into account when making informed decisions, particularly when these are to be made by consensus. This paper introduces Subjective Knowledge Graphs (SKG), an extension to Probabilistic Knowledge Graphs that considers the individual opinions of separate users about the same facts, and allows reasoning about them. We show how SKGs can be implemented using standard graph databases and how the results of the queries can be enriched with the associated degrees of uncertainty.
Using UML and OCL Models to realize High-Level Digital TwinsAntonio Vallecillo
Digital twins constitute virtual representations of physically existing systems. However, their inherent complexity makes them difficult to develop and prove correct. In this paper, we explore the use of UML and OCL, complemented with an executable language, SOIL, to build and test digital twins at a high level of abstraction. We also show how to realize the bidirectional connection between the UML models of the digital twin in the USE tool with the physical twin, using an architectural framework centered on a data lake. We have built a prototype of the framework to demonstrate our ideas, and validated it by developing a digital twin of a Lego Mindstorms car. The results allow us to show some interesting advantages of using high-level UML models to specify virtual twins, such as simulation, property checking, and some other types of tests.
Modeling behavioral deontic constraints using UML and OCLAntonio Vallecillo
This paper proposes modeling behavioral deontic constraints using UML and OCL. It introduces deontic tokens that reify permissions and obligations as objects. It also uses filmstrip models to represent system behavior as a sequence of snapshots. Operations become structural invariants between snapshots. The approach is demonstrated on a student grading case study. Behavioral analysis can then be done on filmstrips to check properties like reachability and accountability. Modeling deontic constraints explicitly aims to better support analysis, implementation and evolution compared to implicit representations in modal logic.
1. Research evaluation in Spain has improved with new agencies adopting the GGS conference rating system, equating computer science conferences to journals.
2. The Spanish Informatics Societies play a key role in harmonizing evaluation criteria, building on successes like GGS, but challenges remain like dependence on journal impact factors.
3. Opportunities now exist to promote positive changes with momentum from initiatives addressing issues, and experiences can be shared across Europe for further improvement.
This slides correspond to the talk we gave at the MODEVVA'17 workshop. This work presents an extension of OCL to allow modellers to deal with random numbers and probability distributions in their OCL specifications. We show its implementation in the tool USE and discuss some advantages of this new feature for the validation and verification of models.
Extending Complex Event Processing to Graph-structured InformationAntonio Vallecillo
Complex Event Processing (CEP) is a powerful technology in realtime distributed environments for analyzing fast and distributed streams of data, and deriving conclusions from them. CEP permits defining complex events based on the events produced by the incoming sources in order to identify complex meaningful circumstances and to respond to them as quickly as possible. However, in many situations the information that needs to be analyzed is not structured as a mere sequence of events, but as graphs of interconnected data that evolve over time. This paper proposes an extension of CEP systems that permits dealing with graph-structured information. Two case studies are used to validate the proposal and to compare its performance with traditional CEP systems. We discuss the benefits and limitations of the CEP extensions presented.
Towards a Body of Knowledge for Model-Based Software EngineeringAntonio Vallecillo
Model-based Software Engineering (MBSE) is now accepted as a Software Engineering (SE) discipline and is being taught as part of more general SE curricula. However, an agreed core of concepts, mechanisms and practices — which constitutes the Body of Knowledge of a discipline — has not been captured anywhere, and is only partially covered by the SE Body of Knowledge (SWEBOK). With the goals of characterizing the contents of the MBSE discipline, promoting a consistent view of it worldwide, clarifying its scope with regard to other SE disciplines, and defining a foundation for a curriculum development on MBSE, this paper provides a proposal
for an extension of the contents of SWEBOK with the set of fundamental concepts, terms and mechanisms that should constitute the MBSE Body of Knowledge.
La Ingeniería Informática no es una Ciencia -- Reflexiones sobre la Educación...Antonio Vallecillo
Charla invitada en Jenui 2017: En esta charla cuestionamos la formación actual que damos a nuestros alumnos de ingeniería informática, más propia de una disciplina científica que una ingeniería. De hecho, a pesar de los esfuerzos llevados a cabo durante los últimos años en nuestras Escuelas de Ingeniería Informática para mejorar la formación que se da a sus alumnos, la sociedad sigue sin percibirnos como ingenieros ni reconoce las competencias propias de nuestra disciplina. Partiendo de las características que debería tener la profesión de ingeniero informático, y que nuestra misión como Universidad debe ser la de formar profesionales, se analizan las fortalezas y debilidades de nuestra educación y se identifican algunos aspectos tanto de contenidos como de metodología que sería preciso plantear si realmente queremos formar ingenieros informáticos y mejorar la percepción que tiene de nosotros la Sociedad.
La Ética en la Ingeniería de Software de Pruebas: Necesidad de un Código ÉticoAntonio Vallecillo
En esta charla se analiza la necesidad de un código ético en el desarrollo de la actividad profesional en el ámbito de la ingeniería de software y, consecuentemente, en las pruebas.(mpartida en el Primer Congreso del Comité Español de Empresas de Pruebas Software (SSTQB). Sevilla, 16/6/2016. http://www.sstqb.es/eventos/gira2016sstqbetapasevilla.html)
La ingeniería del software en España: retos y oportunidadesAntonio Vallecillo
Este documento trata sobre los retos y oportunidades de la Ingeniería del Software. Brevemente describe cómo el software juega un papel clave en aplicaciones críticas y cómo la fiabilidad debe venir del software. También menciona la complejidad creciente de los requisitos y la rápida evolución de las tecnologías como desafíos para la industria del software.
El documento proporciona información sobre los estudios de posgrado de la Universidad de Málaga. Ofrece 61 másteres oficiales verificados por la ANECA con una alta tasa de empleo y satisfacción de los estudiantes. También cuenta con 21 programas de doctorado con 105 líneas de investigación y 62 títulos propios de posgrado como másteres, diplomas y expertos orientados al mercado laboral. La universidad fomenta la movilidad internacional a través de varios programas de intercambio.
El papel de los MOOCs en la Formación de Posgrado. El reto de la Universidad...Antonio Vallecillo
Este documento resume el papel de los MOOCs en la formación de posgrado y los retos actuales de la universidad. Brevemente describe los antecedentes de los MOOCs y cómo han cambiado los estudiantes, la tecnología y el mundo, pero los métodos de enseñanza y los profesores no han cambiado. También define qué es un MOOC, los nuevos modelos de trabajo y enseñanza que proponen, y las plataformas disponibles para crearlos. Finalmente, analiza la experiencia de la Universidad de Málaga creando sus
La enseñanza digital y los MOOC en la UMA. Presentación en el XV encuentro de...Antonio Vallecillo
Presentación realizada en el XV Encuentro de Rectores del Grupo Tordesillas (http://www.grupotordesillas.net/) celebrado en Lisboa en octubre de 2014, en el Seminario sobre nuevos instrumentos de aprendizaje digital y cursos masivos abiertos online (MOOC)
El doctorado en Informática: ¿Nuevo vino en viejas botellas? (Charla U. Sevil...Antonio Vallecillo
RESUMEN: El nuevo Real Decreto 99/2011 ha supuesto un cambio sustancial en el tercer ciclo de los estudios universitarios y en las prácticas que conducen al desarrollo de la tesis. Estos cambios son especialmente significativos en los doctorados de ciencias e ingenierías, y en particular en Informática, con la aparición de nuevas formas de comunicación social y de evaluación de la actividad investigadora, las bases de datos de publicaciones y los índices de impacto, la reputación online de los investigadores, y la profesionalización de los doctorados.
Esta charla está dedicada a presentar, y debatir, lo que representan estas novedades para los estudiantes de doctorado en Informática, y sugerir algunos aspectos que es importante tener en cuenta a la hora de plantear el desarrollo de la tesis y construir nuestra carrera profesional.
Accountable objects: Modeling Liability in Open Distributed SystemsAntonio Vallecillo
As an increasing amount of commercial activity becomes automated, the importance of techniques for providing complete system specifications, checking the correctness of interactions and flagging incorrect behaviour increases. The aim throughout is to generate more complete information about the system and so to produce IT solutions that reflect the business requirements accurately. So far, most efforts have been placed on the appropriate specification of the system behaviour and then on the non-functional requirements that constitute the contract between a system and its users. But in fully-automated commercial systems, such as Cloud Computing or SOA systems, we should also consider the liability of the different parties, since we should be able that assign responsibility to objects and, more importantly, to know in case of problems or contact violations, which one should be blamed.
The consequence of these considerations is that we need the ability to express more directly the necessary obligations and other deontic concepts, such as permissions and prohibitions, giving the designer the tools for extending the behavioural information to make it clear where obligations apply and with what detailed properties. In this talk we describe current activities within the International Organization for Standardization (ISO) to extend the ODP family of standards for the expression of policies using deontic logic, and on how to improve support for deontic concepts based on their reification.
The document discusses assigning meaning to models, noting that models must have precise meanings in order to understand systems, analyze properties, and drive implementation. It explores how domain-specific modeling languages that are intuitive and close to the problem domain can help assign meaning, and suggests current modeling notations may not be optimal for this task. Precise yet abstract notations are needed to allow formal analysis of modeled systems.
Slides of the talk at ECMDA 2011, Brimingham, June 2011
ABSTRACT:
The package is one of the basic UML concepts. It is used both to group model elements and to provide a namescope for its members. However, combining these two tasks into a single UML concept can become not only too restrictive but also a source of subtle problems. This paper presents some improvements to the current UML naming and grouping schemata, using the ideas proposed in the reference model of Open Distributed Processing (ODP). The extensions try to maintain backwards compatibility with the existing UML concepts, while allowing more flexible grouping and naming mechanisms.
On the Combination of Domain Specific Modeling LanguagesAntonio Vallecillo
This are the slides of the presentation at ECMFA 2010 of paper:
"On the Combination of Domain Specific Modeling Languages". LNCS 6138, pp. 301-316, Paris, June 16-18, 2010.
ABSTRACT: Domain Specific Modeling Languages (DSMLs) are essential elements in Model-based Engineering. Each DSML allows capturing certain properties of the system, while abstracting other properties away. Nowadays DSMLs are mostly used in silos to solve specific problems. However, there are many occasions when multiple DSMLs need to be combined to design systems in a modular way. In this paper we discuss some scenarios of use and several mechanisms for DSML combination. We propose a general framework for combining DSMLs that subsumes them, based on the concept of viewpoint unification, and its realization using model-driven techniques.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Modeling and Evaluating Quality in the Presence of Uncertainty
1. Modeling and Evaluating Quality in the Presence of Uncertainty
QUATIC 2019
Ciudad Real, September 13, 2019
Antonio Vallecillo
Universidad de Málaga, Spain
2. Uncertainty
It applies to: predictions of future events, estimations, physical measurements, or properties of a system, its elements or its environment
due to:
Underspecification of the problem or solution domains
Lack of knowledge of the system, its environment, or its underlying physics
Lack of precision in measurements
Imperfect, incorrect, or missing information
Numerical approximations
Values and parameters indeterminacy
Different interpretations of the same evidence by separate parties
Uncertainty: Quality or state that involves imperfect and/or unknown information
“There is nothing certain, but the uncertain” (proverb)
3. Uncertainty in Software Engineering
Ziv’s Uncertainty Principle: “Uncertainty is inherent and inevitable in software development processes and products” (1996)
All projects, no matter the domain, processes, or technology, operate in the presence of uncertainty – reducible (epistemic) and irreducible (aleatory)
Humphrey’s Requirements Uncertainty Principle: “For a new software system, the requirements will not be completely known until after the users have used it.”
The true role of design is thus to create a workable solution to an ill-defined problem.
Software engineering variables affected by Uncertainty:
Cost
Schedule
Performance
Capacity for work
Productivity
Quality of results
4. Software development methodologies and uncertainty
“The Uncertainty Principle OR How to Choose the Right Methodology”. https://kosmothink.wordpress.com/2010/12/31/the-uncertainty-principal-or-how-to-choose-the-right-methodology
6. Different kinds of uncertainty in Complex Event Processing (CEP) systems
Selection phase:
Uncertain events in the stream: Missing events (false negatives, FN); or wrongly
inserted (false positives, FP).
Uncertainty in the values of the attributes (including their timestamps!) due to
imprecision of the measuring methods or tools (measurement uncertainty, MU).
Matching phase:
Uncertainty of comparison operators (=, <, >, ≤, ...) between uncertain values.
Uncertainty of logical composition operators (or, and, not) between uncertain
statements
Production phase:
Lack of precision in the values of the attributes of derived events, due to the
propagation of uncertainty in their calculation.
Lack of confidence in the derived event, due to incomplete or erroneous
assumptions about the environment in which the system operates, which may
influence the rule’s confidence.
Nathalie Moreno, Manuel F. Bertoa, Loli Burgueño, Antonio Vallecillo: “Managing Measurement and Occurrence Uncertainty
in Complex Event Processing Systems.” IEEE Access 7: 88026-88048 (2019)
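To illustrate how occurrence uncertainty propagates in the production phase, the following sketch (an illustration of this text, not code from the cited paper; it assumes the matched events and the rule are independent) computes the confidence of a derived event from the confidences of its source events and of the rule itself:

```python
def derived_event_confidence(source_confidences, rule_confidence):
    """Confidence of a derived (complex) event, assuming the matched
    source events and the rule's confidence are independent: the
    product of all the individual occurrence probabilities."""
    p = rule_confidence
    for c in source_confidences:
        p *= c
    return p

# Two source events with confidences 0.9 and 0.95, rule confidence 0.95:
p = derived_event_confidence([0.9, 0.95], 0.95)   # ≈ 0.812
```

Under this independence assumption confidence decays quickly as rules are chained, which is one reason tool support for propagating uncertainty matters.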
7. Many different formalisms and theories to quantify uncertainty
Bayesian Belief Networks (BBN)
Monte Carlo simulations
Decision theory/trees
Probabilities
Fuzzy Logic
…
11. A classification of uncertainty (according to its nature)
Aleatory Uncertainty – A kind of uncertainty that refers to the inherent
uncertainty due to the probabilistic variability or randomness of a
phenomenon
Examples: measuring the speed of a car, or the duration of a software
development process
This type of uncertainty is irreducible, in that there will always be variability in
the underlying variables.
Epistemic Uncertainty – A kind of uncertainty that refers to the lack of
knowledge we may have about the system (modeled or real).
Examples: Ambiguous or imprecise requirements about the expected system
functionality, its envisioned operating environment, etc.
This type of uncertainty is reducible, in that additional information or knowledge
may reduce it.
A. Der Kiureghian and O. Ditlevsen: "Aleatory or epistemic? Does it matter?" Structural Safety 31(2):105-112, 2009
12. Reducing the uncertainty
1. Certainty: There is no reducible uncertainty and information is
complete
2. Fully reducible imprecision: There is no full certainty, but
uncertainty can be reduced by collecting additional information
until achieving full certainty (no irreducible uncertainty present)
3. Partially reducible imprecision: There is no full certainty, but
uncertainty can be reduced by collecting additional information.
However, there is still irreducible uncertainty
4. Irreducible imprecision: There is no full certainty, and it cannot
be reduced (only margins can be used)
[Diagram: spectrum from epistemic (reducible) to aleatory (irreducible) uncertainty]
13. Uncertainty and Knowledge…
Borrowed from “Introduction to Uncertainty Modeling” presentation at the OMG by T. Yue, S. Ali, B. Selic and A. Watson, 2016
14. Knowledge vs. Belief
Each “knowledge” statement here is based on real evidence!
When dealing with uncertainty, perhaps it is best to avoid the notion of “knowledge” altogether!
Borrowed from “Introduction to Uncertainty Modeling” presentation at the OMG by T. Yue, S. Ali, B. Selic and A. Watson, 2016
15. Belief
Belief: An implicit or explicit opinion or conviction held by a belief agent about a
topic, expressed by one or more belief statements
Belief agent: An entity (human, institution, even a machine) that holds one or
more beliefs
Topic: a possible phenomenon or notion belonging to a given subject area.
Belief Statement: An explicit specification of some belief held by a belief agent.
It represents a belief, and therefore it is a subjective concept
It may not always be possible to determine whether or not a belief statement is valid.
A belief statement may not necessarily correspond to objective reality.
This means that it could be completely false, or only partially true, or completely true.
The validity of a statement may only be meaningfully defined within a given context
or purpose.
Thus, the statement that “the Earth can be represented as a perfect sphere” may be perfectly
valid for some purposes but invalid or only partly valid for others.
OMG. “Precise Semantics for Uncertainty Modeling” Request For Proposals. OMG Document: ad/2017-12-01, 2017.
16. The OMG PSUM initiative (Precise Semantics for Uncertainty Modeling)
17. Related concepts
Risk – The effect of uncertainty on objectives [ISO/IEC 31000].
An uncertainty may have an associated risk; a high risk indicates a difficulty or
danger associated with that uncertainty that deserves special attention.
“Risk does not exist by itself. Risk is created when there is uncertainty.”
Evidence – Objective information that may be used to justify a belief
It can be an observation, a record of a real-world event occurrence or,
alternatively, the conclusion of some formalized chain of logical inference that
provides information that can contribute to determining the validity
(truthfulness) of a belief statement
OMG. “Precise Semantics for Uncertainty Modeling” Request For Proposals. OMG Document: ad/2017-12-01, 2017.
18. Types of uncertainty (according to their sources)
Measurement uncertainty: A kind of aleatory uncertainty that refers to a set of possible
states or outcomes of a measurement, where probabilities are assigned to each possible
state or outcome
Occurrence uncertainty: a kind of epistemic uncertainty that refers to the degree of belief
that we have in the actual existence of an entity, i.e., in the real entity that a model
element represents
Belief uncertainty: A kind of epistemic uncertainty in which a belief agent is uncertain about
any of the statements made about the system or its environment.
Design uncertainty: A kind of epistemic uncertainty that refers to a set of possible design
decisions or options, where probabilities are assigned to each decision or option
Environment uncertainty: lack of certainty about the surroundings, boundaries and usages
of a system and of its elements
Location uncertainty: lack of certainty about the geographical or physical location of a
system, its elements or its environment
Time uncertainty: lack of certainty about the time properties expressed in a statement
about the system or its environment
Based on M. Zhang, B. Selic, S. Ali, T. Yue, O. Okariz, and R. Norgren, "Understanding Uncertainty in Cyber-Physical Systems: A
Conceptual Model" In Proc. of ECMFA 2016, LNCS vol. 9764, pp. 247-264. Springer, 2016.
20. Measurement uncertainty
Engineers naturally think about uncertainty
associated with measured values
Uncertainty is explicitly defined in their models and
considered in model-based simulations
Precise notations permit representing and
operating with uncertain values and confidences
21. Measurement uncertainty
Measurement uncertainty: A kind of aleatory uncertainty that refers to a set
of possible states or outcomes of a measurement
Normally expressed by a parameter, associated with the result of a measurement x,
that characterizes the dispersion of the values that could reasonably be attributed to
the measurand: the standard deviation u of the possible variation of the values of x
Representation: x ± u, or (x, u)
Examples:
JCGM 100:2008. Evaluation of measurement data – Guide to the expression of uncertainty in measurement (GUM).
http://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf
• Normal distribution: (x, σ), with mean x and standard deviation σ
• Interval [a, b]: a uniform distribution is assumed, giving (x, u) with x = (a + b)/2 and u = (b − a)/(2√3)
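The interval rule above can be sketched in a few lines of Python (function name is illustrative; the formula is the standard GUM Type B evaluation for a rectangular distribution):

```python
import math

def interval_to_xu(a, b):
    """GUM Type B evaluation for an interval [a, b] with an assumed
    uniform (rectangular) distribution: returns the (x, u) pair, i.e.,
    the midpoint and the standard uncertainty (b - a) / (2*sqrt(3))."""
    x = (a + b) / 2
    u = (b - a) / (2 * math.sqrt(3))
    return x, u

x, u = interval_to_xu(17.6, 18.0)   # x = 17.8, u ≈ 0.115
```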
24. Some problems with Measurement Uncertainty
Computations with uncertain values have to respect the propagation of
uncertainty (uncertainty analysis)
In general this is a complex problem, which cannot be manually managed
Comparison of uncertain values is no longer a Boolean property!
How to compare 17.7 ± 0.2 with 17.8 ± 0.2?
Other primitive datatypes are also affected by uncertainty
Strings (OCR)
Enumerations
Collections
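For the comparison question above, one common answer (assuming both values follow independent normal distributions) is to return a probability instead of a Boolean; this sketch is an illustration, not the exact operationalization used in the cited work:

```python
import math

def prob_lt(x1, u1, x2, u2):
    """P(A < B) for independent A ~ N(x1, u1) and B ~ N(x2, u2): the
    difference B - A is normal with mean x2 - x1 and standard
    deviation sqrt(u1^2 + u2^2)."""
    mu = x2 - x1
    sigma = math.sqrt(u1 * u1 + u2 * u2)
    return 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))

p = prob_lt(17.7, 0.2, 17.8, 0.2)   # ≈ 0.64: "less than" only with ~64% confidence
```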
25. Primitive datatypes extended with Uncertainty
Extended primitive datatypes
Real -> UReal UReal(17.8,0.2) ≡ 17.8 ± 0.2
Boolean -> UBoolean UBoolean(true, 0.8)
String -> UString UString(“Implementaci6n”, 0.93)
Enum -> UEnum UColor{ (#red,.9), (#orange,0.09), (#purple,0.01) }
An algebra of operations on uncertain datatypes extending OCL/UML types
Operations are closed in this algebra and automatically propagate uncertainty
M. F. Bertoa, N. Moreno, L. Burgueño, A. Vallecillo. “Incorporating Measurement Uncertainty into OCL/UML Primitive
Datatypes.” Software and Systems Modeling (Sosym), 2019. https://doi.org/10.1007/s10270-019-00741-0
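A minimal sketch of such an extended datatype in Python (hypothetical names; first-order propagation with independent operands — the actual UReal algebra in the cited paper is richer):

```python
import math

class UReal:
    """Uncertain real x ± u with linear (first-order) propagation of
    measurement uncertainty, assuming independent operands."""
    def __init__(self, x, u=0.0):
        self.x, self.u = x, u

    def __add__(self, other):
        # uncertainties of a sum add in quadrature
        return UReal(self.x + other.x, math.hypot(self.u, other.u))

    def __mul__(self, other):
        # for a product, propagate via the partial derivatives
        u = math.hypot(other.x * self.u, self.x * other.u)
        return UReal(self.x * other.x, u)

    def __repr__(self):
        return f"UReal({self.x:.4g}, {self.u:.4g})"

s = UReal(17.8, 0.2) + UReal(2.0, 0.1)   # ≈ UReal(19.8, 0.224)
```

Because every operation returns another uncertain value, the operations are closed and the uncertainty propagates automatically, as the slide states.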
27. Occurrence uncertainty
Occurrence uncertainty: a kind of epistemic uncertainty that refers to the
degree of belief (confidence) that we have in the actual existence of an
entity, i.e., in the real entity that a model element represents
Assigned to individual objects
Permit dealing with false positives (elements in the model that do not exist in
the real system) and false negatives (elements in the real system not
captured in the model)
Normally measured by (Bayesian) probabilities
L. Burgueño, M. F. Bertoa, N. Moreno, A. Vallecillo: “Expressing Confidence in model and in model transformation
elements.” In Proc of MODELS 2018: 57-66, 2018.
28. Uncertainty related to OCL invariants (system integrity constraints)
Degree of fulfilment of an OCL invariant
Occurrence uncertainty of the elements of the system (confidence)
[Image borrowed from Mihai Lica Pura “Ad Hoc Networks and Their Security: A Survey”, 2012]
Constraints
inv EnoughSensors: Sensor.allInstances()->size() >= 3000
inv Sorrounded: Enemy.allInstances()->select(e|e.distanceTo(self))->size() < 50
M. Gogolla, A. Vallecillo: “On Softening OCL invariants” Journal of Object Technology 18(2): 6:1-22 (2019).
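The idea of softening an invariant such as EnoughSensors can be illustrated as follows (a sketch under the assumption of independent occurrence confidences, not the construction of the cited paper): instead of a Boolean verdict, compute the probability that at least 3000 of the modeled sensors really exist.

```python
import math

def prob_at_least(confidences, n):
    """Probability that at least n modeled objects actually exist, given
    each object's independent occurrence confidence. Uses a normal
    approximation (with continuity correction) to the Poisson-binomial
    distribution of the count of existing objects."""
    mu = sum(confidences)
    var = sum(c * (1.0 - c) for c in confidences)
    if var == 0.0:
        return 1.0 if mu >= n else 0.0
    z = (mu - n + 0.5) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# 3200 sensors in the model, each with occurrence confidence 0.95:
p = prob_at_least([0.95] * 3200, 3000)   # high, but not 1.0
```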
32. Belief uncertainty
Belief uncertainty: A kind of epistemic uncertainty in which the modeler, or
any other belief agent, is uncertain about any of the statements made about
the system or its environment.
By nature, it is always subjective
Belief agent: An entity (human, institution, even a machine) that holds one or
more beliefs
Belief statement: Statement qualified by a degree of belief
Degree of belief: Confidence assigned to a statement by a belief agent.
Normally expressed by quantitative or qualitative methods (e.g., a grade or a
probability “credence”)
Loli Burgueño, Robert Clarisó, Jordi Cabot, Sébastien Gérard, Antonio Vallecillo. “Belief uncertainty in software models.” Proc.
of MiSE 2019@ICSE, pp. 19-26. ACM, 2019. https://dl.acm.org/citation.cfm?id=3340709
33. A simple example of a hotel room
Temp. sensor Smoke detector
Alarm center
CO detector
Loli Burgueño, Robert Clarisó, Jordi Cabot, Sébastien Gérard, Antonio Vallecillo. “Belief uncertainty in software models.” Proc.
of MiSE 2019@ICSE, pp. 19-26. ACM, 2019. https://dl.acm.org/citation.cfm?id=3340709
36. Some Belief Statements about the (model of the) system
The CO and smoke detectors that we bought have a reliability of 90% (i.e., 10% of
their readings are not meaningful)
We cannot be completely sure that the precision of the Temperature sensor is ±0.5°,
as indicated in its datasheet
We are only 95% confident that the presence of high temperature, high CO level and
smoke really means that there is a fire in the room
Bob is from the South, so he only assigns a credibility of 50% to the operations that
indicate if the room is hot or cold. In contrast, Mary thinks they are mostly accurate
Room #3 is close to the kitchen and frequently emits alarms. Everybody thinks that
most of them are false positives
Joe the modeler doubts that the type of attribute “number” of class “Room” is Integer.
He thinks it may contain characters different from digits.
Lucy the modeler is unsure if an “AlarmCenter” has to be attached to only one single
Room. She thinks they can also be attached to several.
[About the credibility of the values]
[From individual belief agents]
[About individual instances]
[About the model itself: relations]
[About the behavioral rules]
[About the uncertainty of the values]
[About the model itself: types]
>> How to represent these uncertainties in the system specifications?
>> How to incorporate them into the system structural and behavioral models?
38. Operationalization
A list of pairs (BeliefAgent,credence) for every model statement subject to
Belief Uncertainty
Operations to add and remove pairs from the list of pairs
Query operation to know the credence of a statement
isHot_Beliefs : Set(Tuple(beliefAgent : BeliefAgent, degreeOfBelief : Real))

isHot_BeliefsAdd(ba : BeliefAgent, d : Real)
  post: self.isHot_Beliefs = self.isHot_Beliefs@pre->reject(t | t.beliefAgent = ba)->
        including(Tuple{beliefAgent : ba, degreeOfBelief : d})

isHot_credence(a : BeliefAgent) : Real =
  let baBoD : … = self.isHot_Beliefs->select(t | t.beliefAgent = a) in
  let baBoDnull : … = self.isHot_Beliefs->select(t | t.beliefAgent = null) in
  if baBoD->isEmpty() then          -- no explicit credence by “a”
    if baBoDnull->notEmpty() then   -- but a default value exists
      baBoDnull->collect(degreeOfBelief)->any(true)
    else 1.0 endif
  else baBoD->collect(degreeOfBelief)->any(true) endif
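The same operationalization can be mirrored in Python to make the lookup semantics explicit (class and method names are hypothetical; `None` plays the role of OCL's `null` default agent):

```python
class BeliefStore:
    """Per-statement collection of (belief agent, degree of belief) pairs.
    A pair whose agent is None acts as the default credence; if no pair
    applies, full credence (1.0) is assumed."""
    def __init__(self):
        self._beliefs = {}              # agent -> degree of belief

    def add(self, agent, degree):
        self._beliefs[agent] = degree   # replaces any previous pair (reject + including)

    def remove(self, agent):
        self._beliefs.pop(agent, None)

    def credence(self, agent):
        if agent in self._beliefs:      # explicit credence by this agent
            return self._beliefs[agent]
        return self._beliefs.get(None, 1.0)   # default pair, else full credence

is_hot = BeliefStore()
is_hot.add("Bob", 0.5)    # Bob only half-trusts the isHot statement
is_hot.add(None, 0.95)    # default credence for everyone else
is_hot.credence("Bob")    # 0.5
is_hot.credence("Mary")   # 0.95 (falls back to the default)
```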
42. Design uncertainty
Design uncertainty: A kind of epistemic uncertainty that refers to a set of
possible design decisions about the system
It refers to the uncertainty that the developer has about what the system should
be like, rather than about what conditions it may face during its operation
(environment uncertainty).
M. Famelis, M. Chechik: “Managing design-time uncertainty.” Software and Systems Modeling 18(2): 1249-1284 (2019)
43. The Design-Time Uncertainty Management (DeTUM) model
M. Famelis, M. Chechik: “Managing design-time uncertainty.” Software and Systems Modeling 18(2): 1249-1284 (2019)
44. This is similar to the “Cone of Uncertainty” (CoU)
It represents the best-case uncertainty needed to inform decision makers
of the probability of project success at specific phases of the project
45. Further types of uncertainty: Environment
Environment uncertainty: lack of certainty about the surroundings,
boundaries and usages of a system and of its elements
Tackled by approaches such as self-adaptation, probabilistic behavior, or
identifying and explicating operational assumptions.
“Uncertainty-aware” software
47. Further types of uncertainty: Location
Location uncertainty: lack of certainty about the geographical or physical
location of a system, its elements or its environment
The submarine can now be somewhere in the Mediterranean Sea
Cyber-attacks can come from anywhere
48. Further types of uncertainty: Time
Time uncertainty: lack of certainty about the time properties expressed in a
statement about the system or its environment
Mañana (i.e., “not today”)
“We will call you soon”
“A man with a watch knows
what time it is. A man with two
watches is never sure.”
(Segal's law)
51. Quality evaluation – Prediction Models
1. Identify your target entities and your target stakeholders
Examples of entities: COTS components, Data stored in DBs, Internet Delivery Service.
Examples of stakeholders: Developers, Advanced Users, Novice Users, …
2. Choose a Quality Model for evaluating your entities
E.g., ISO/IEC 25010 “Product” QM
3. Customize the Quality Model
Select the Characteristics and Subcharacteristics relevant to these entities and
stakeholders
4. Select the Measurable Attributes of the entities relevant to the Quality Model
5. Select the appropriate measures for those measurable attributes
6. Run experiments with samples of entities and groups of stakeholders to:
Empirically evaluate the “perceived” (subjective) quality subcharacteristic of these
entities
Empirically evaluate the “objective” quality subcharacteristic of these entities
7. Run regression analyses to identify the set of measures that better explain each
quality subcharacteristic, and define appropriate quality indicators
52. Quality evaluation – Prediction Models
1. Identify your target entities and your target stakeholders
Examples of entities: COTS components, Data stored in DBs, Internet Delivery Service.
Examples of stakeholders: Developers, Advanced Users, Novice Users, Any kinds of users.
Evaluate the quality of software components that are candidates to
be integrated in a software system
Our target stakeholders are system developers and maintainers, who
need to select the best candidate components to form part of their
systems
Manuel F. Bertoa, José M. Troya, Antonio Vallecillo: “Measuring the usability of software components”, Journal of Systems and
Software, 79(3):427-439, March 2006
53. Quality evaluation – Prediction Models
2. Choose a Quality Model for evaluating your entities
ISO/IEC 9126
54. Quality evaluation – Prediction Models
3. Customize the Quality Model
Select the relevant characteristics and subcharacteristics
55. Quality evaluation – Prediction Models
4. Select the relevant Measurable Attributes of the entities w.r.t. the Quality
model
56. Quality evaluation – Prediction Models
5. Select the appropriate measures for those measurable attributes
Measures related to “Quality of Documentation”
57. Quality evaluation – Prediction Models
5. Select the appropriate measures for those measurable attributes
Measures related to “Design Complexity”
58. Quality evaluation – Prediction Models
6. Run experiments with samples of entities and groups of stakeholders to
empirically evaluate the “perceived” (subjective) and “objective” quality
59. Quality evaluation – Prediction Models
6. Run experiments with samples of entities and groups of stakeholders to
empirically evaluate the “perceived” (subjective) and “objective” quality
60. Quality evaluation – Prediction Models
7. Run regression analyses to identify the set of measures that better explain
each quality subcharacteristic, and define appropriate quality indicators
61. Quality evaluation – Prediction Models
7. Run regression analyses to identify the set of measures that better explain
each quality subcharacteristic, and define appropriate quality indicators
62. Quality evaluation – Prediction Models
7. Run regression analyses to identify the set of measures that better explain
each quality subcharacteristic, and define appropriate quality indicators
Maint = α·Und + β·Learn + γ·Oper
Maintainability = high if Maint > 0.8; low if Maint < 0.4; medium otherwise
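Operationally, the indicator above amounts to a thresholded linear model; the sketch below uses made-up coefficient values and scores purely for illustration (the real α, β, γ come from the regression analysis):

```python
def maintainability_indicator(und, learn, oper, alpha, beta, gamma):
    """Linear estimate Maint = alpha*Und + beta*Learn + gamma*Oper,
    mapped onto the qualitative scale via the thresholds above."""
    maint = alpha * und + beta * learn + gamma * oper
    if maint > 0.8:
        return "high"
    if maint < 0.4:
        return "low"
    return "medium"

# hypothetical coefficients and subcharacteristic scores:
maintainability_indicator(0.8, 0.7, 0.6, 0.4, 0.3, 0.3)   # 'medium'
```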
63. Maintainability of models
F. Basciani, J. Di Rocco, D. Di Ruscio, L. Iovino, A. Pierantonio: “A tool-supported approach for assessing the quality of modeling
artifacts.” Journal of Computer Languages 51:173-192, April 2019.
M. Genero, M. Piattini. “Empirical validation of measures for class diagram structural complexity through controlled
experiments.” Proc. of QAOOSE WS at ECOOP 2001.
64. Maintainability of model transformations
64
F. Basciani, J. Di Rocco, D. Di Ruscio, L. Iovino, A. Pierantonio: “A tool-supported approach for assessing the quality of modeling
artifacts.” Journal of Computer Languages 51:173-192, April 2019.
66. Quality evaluation – Sources of uncertainty
1. Identify your target entities and your target stakeholders
Examples of entities: COTS components, Data stored in DBs, Internet Delivery Service.
Examples of stakeholders: Developers, Advanced Users, Novice Users, …
2. Choose a Quality Model for evaluating your entities
E.g., ISO/IEC 25010 “Product” QM
3. Customize the Quality Model
Select the Characteristics and Subcharacteristics relevant to these entities and
stakeholders
4. Select the Measurable Attributes of the entities relevant to the Quality Model
5. Select the appropriate measures for those measurable attributes
6. Run experiments with samples of entities and groups of stakeholders to:
Empirically evaluate the “perceived” (subjective) quality subcharacteristic of these
entities
Empirically evaluate the “objective” quality subcharacteristic of these entities
7. Run regression analyses to identify the set of measures that better explain each
quality subcharacteristic, and define appropriate quality indicators
67. Sources of uncertainty
Selection of the subset of the quality model
Selection of incorrect, inappropriate or missing quality subcharacteristics
Selection of quality measures
Selection of incorrect or inappropriate quality measures
Imprecise measurements of quality measures
Empirical experiments
Confidence in the entity samples
Confidence in the selected groups of stakeholders
Evaluation of perceived and objective quality
Incorrect or imprecise experiment results
Statistical analyses and regression tests
Confidence of estimation models
Definition of quality indicators
Confidence in thresholds
Propagation of measurement uncertainty in decision models
68. Maintainability of models
M. Genero, M. Piattini, E. Manso, G. Cantone. “Building UML Class Diagram Maintainability Prediction Models based on Early
Metrics.” Proc. of IEEE METRICS 2003.
70. Estimating quality with uncertainty
Maintainability = { (low, 0.8), (medium, 0.18), (high, 0.02) }
with a credence of (0.95)
71. Estimating quality with uncertainty
Use > ?system1.maintainability()
-> ULevel((#low, 0.8), (#medium, 0.18), (#high, 0.02)) : ULevel
Use > ?system1.maintainability_credence(agent1)
-> 0.5 : Real
Use > ?r1.maintainability_credence(agent2)
-> 0.99 : Real
Use > ?r1.maintainability_credence(null)
-> 0.95 : Real
72. Summary (on Evaluating Quality in the presence of Uncertainty)
Identify the kinds of uncertainty (and their nature) that affect
Your entities and their attributes
The quality characteristics you need to evaluate
Your target stakeholders’ particular needs and backgrounds
Your quality (base and derived) measures
Your quality indicators
Model uncertainty
Include uncertainty in your quality models and measures as first-class elements
(measurement uncertainty, degrees of belief, credence, etc.)
Evaluate uncertainty
Use tools for quantifying and propagating uncertainty
Document uncertainty
Produce estimates of the magnitude and impact of these uncertainties
Manage your quality considering uncertainty
Make sure decision processes take into account the estimated uncertainties
73. Uncertainty as a first class concept in quality modeling and evaluation
From “correctness” to “utility”
Useful, beneficial and profitable to users, instead of objectively correct
Utility permits accommodating trade-offs between different dimensions
From “precise” to “approximate”
Need to evaluate possible deviations and estimate margins
“How accurate are my models and estimations, and how confident am I in them?”
From “open-loop” to “closed-loop”
Need to (self-)adapt as new information is available, or conditions change
“How do I change when the level of uncertainty changes?”
David Garlan “Software Engineering in an Uncertain World.” In Proc. of FoSER 2010: 125-128.
74. Takeaways (on Uncertainty)
“Uncertainty” is not a single concept, it encompasses many different types of
uncertainties (measurement, belief, environment, …)
Each type of uncertainty requires its own notations, underlying logics and
propagation mechanisms
Uncertainty can be aleatory or epistemic (irreducible or reducible)
Uncertainty does not depend so much on knowledge, but on belief
It is mainly subjective, and different people may hold different degrees of belief
about the same statement
Learn to manage in the presence of uncertainty; it cannot be eliminated.
You can try to reduce it (when it is epistemic) with testing, verification, validation,
redundancy and other knowledge acquisition processes.
Aleatory uncertainty and its risks cannot be reduced. It needs to be calculated,
and its values and risks bounded. Margins and bounds can be used to handle it.
75. Open problems for Quatic
From the QUATIC 2019 Call for Papers:
Quality Aspects in Requirements Engineering
Quality Aspects in Model-Driven Engineering
Quality Aspects in DevOps Development
Quality Aspects in Process Improvement and Assessment
Quality Aspects in Verification and Validation
Quality Aspects in Evidence-Based Software Engineering
Quality Aspects in Security & Privacy
Quality Aspects in Cloud-based Platforms and Services
Quality Aspects in Business Processes
Quality Aspects in Data Science & Artificial Intelligence
Quality Aspects in Software Maintenance and Comprehension
78. Modeling and Evaluating Quality in the
Presence of Uncertainty
QUATIC 2019
Ciudad Real, September 13, 2019
Antonio Vallecillo
Universidad de Málaga, Spain