Automatic selection of object recognition methods using reinforcement learning (Shunta Saito)
The document discusses using reinforcement learning to automatically select between two object recognition methods. The goal is for a robot to decide which method to use depending on current conditions. It describes using Q-learning to choose between Lowe's feature matching and a vocabulary tree algorithm. The state is defined from image attributes, and Q-learning updates the value of state-action pairs so that the better recognition method is selected over time.
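The selection scheme described above can be sketched with tabular Q-learning. This is a minimal illustration, not the paper's code: the two actions stand in for the two recognizers, and the state labels, rewards, and parameters are assumptions for the example.

```python
import random

# Illustrative sketch: tabular Q-learning that picks between two
# hypothetical recognition methods per image "state".
ACTIONS = ["feature_matching", "vocabulary_tree"]

def update_q(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def select_action(q, state, epsilon=0.1):
    """Epsilon-greedy selection over the two recognition methods."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

q = {}
# Pretend highly textured scenes reward feature matching:
for _ in range(100):
    update_q(q, "textured", "feature_matching", reward=1.0, next_state="textured")
    update_q(q, "textured", "vocabulary_tree", reward=0.0, next_state="textured")
print(select_action(q, "textured", epsilon=0.0))  # -> feature_matching
```

With epsilon set to zero the greedy policy deterministically returns the method whose state-action value is higher.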
[5 minutes LT] Brief Introduction to Recent Image Recognition Methods and Cha... (Shunta Saito)
This document provides a brief introduction to recent image recognition methods and ChainerCV. It first introduces the presenter Shunta Saito and their background and research interests. It then outlines several major image recognition problems including image classification, object detection, semantic segmentation, instance-aware segmentation, image captioning, and visual question answering. For each problem, it lists some popular datasets and example methods that have been proposed. It also provides an overview and link to ChainerCV, an open source framework for computer vision research. Finally, it mentions some datasets and methods for computer vision applications in fashion.
Infrastructure for forensic analysis of multi-agent systems (Emilio Serrano)
This paper aims to lay the groundwork for forensic analysis of multi-agent system (MAS) runs. It proposes a general approach for open-source agent platforms, consisting of techniques to store, order, and represent messages based on conventional observation of events in a distributed system, specialized for the case of MAS in which agents can be distributed across a number of machines or even be mobile.
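One common basis for ordering messages by observed events in a distributed system is a Lamport logical clock; the following sketch uses that technique purely as an illustration (the paper's exact ordering mechanism may differ, and the `Agent` class here is invented for the example).

```python
# Illustrative sketch: ordering stored messages from distributed agents
# with Lamport logical clocks, for later forensic replay.
class Agent:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def send(self, payload):
        self.clock += 1  # tick before sending
        return {"sender": self.name, "clock": self.clock, "payload": payload}

    def receive(self, msg):
        # Lamport rule: take max of local and message clock, then tick.
        self.clock = max(self.clock, msg["clock"]) + 1

a, b = Agent("a"), Agent("b")
log = []
m1 = a.send("hello"); log.append(m1)
b.receive(m1)
m2 = b.send("reply"); log.append(m2)
a.receive(m2)
m3 = a.send("ack"); log.append(m3)

# Forensic ordering: sort stored messages by (clock, sender) for a
# total order consistent with causality.
ordered = sorted(log, key=lambda m: (m["clock"], m["sender"]))
print([m["payload"] for m in ordered])  # -> ['hello', 'reply', 'ack']
```

The sort key breaks clock ties by sender name, which is one standard way to extend the partial causal order into a total one.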
The document discusses implementing computer vision algorithms using OpenCV on an FPGA. Specifically, it explores a stitching algorithm for biomedical applications. The algorithm involves capturing images, extracting features, selecting features, matching features, warping, blending, and merging images. It notes that this algorithm is embarrassingly parallel and well-suited for an FPGA implementation. It then profiles the different steps and discusses tools like OpenCV, OpenCL, and SDAccel for FPGA programming. Current implementations using OpenCV and OpenCL on a Xilinx FPGA are also mentioned. Contact information is provided for the presenters.
This document outlines the structure and curriculum of the proposed B.E. Information Technology 2008 Course.
It consists of two parts - Part I and Part II, each spanning two semesters. Part I covers subjects related to information assurance and security, object oriented modeling, software testing, quality assurance, and computer lab practices. Part II focuses on distributed systems, information retrieval, electives, and a major project work.
The course aims to impart fundamental knowledge in the domains of information security, software engineering principles, and practical skills through laboratory sessions and projects. A variety of electives allow students to specialize in their area of interest. Overall, the program is designed to equip students with skills for careers in information technology
IRJET - Human Pose Detection using Deep Learning (IRJET Journal)
This document discusses using deep learning for human pose detection. It begins with an introduction to human pose detection and challenges in the field. It then describes how deep learning can be used for this task by training neural networks on large datasets of images annotated with body joint locations; specifically, models trained on datasets such as COCO and MPII are used to identify and locate body parts. OpenCV and Flask were used to process video frames and build a graphical interface. The trained models were able to detect poses and provide feedback on proper form for exercises, with graphs and skeletal representations visualizing the poses and joint angles. The system performed human pose detection in real time with low hardware requirements. In conclusion, it achieved an effective low-cost software model for motion
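The joint angles mentioned above are typically computed from triples of detected 2D keypoints. A minimal sketch, with made-up coordinates standing in for real detections:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, given 2D keypoints a-b-c
    (e.g. shoulder-elbow-wrist)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# A right angle at the "elbow" for hypothetical coordinates:
print(round(joint_angle((0, 0), (1, 0), (1, 1))))  # -> 90
```

Comparing such angles against target ranges is one simple way to give the exercise-form feedback the abstract describes.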
Human pose detection using machine learning by Grandel (GrandelDsouza)
This document discusses using deep learning for human pose detection. It begins with an introduction to human pose detection and challenges in the field. It then describes how deep learning can be used for this task by training neural networks on large datasets of images annotated with body joint locations; specifically, models trained on datasets such as COCO and MPII are used to identify and locate body parts. OpenCV and Flask were used to process video frames and build a graphical interface. The trained models were able to detect poses and provide feedback on proper form for exercises, with graphs and skeletal representations visualizing the poses and joint angles. The system performed human pose detection in real time with low hardware requirements. In conclusion, it achieved an effective low-cost software model for motion
Principles for Engineering Elastic IoT Cloud Systems (Hong-Linh Truong)
This document discusses principles for engineering elastic Internet of Things (IoT) cloud systems. It outlines the key concepts of elasticity for IoT elements and cloud platform services. It then presents several engineering principles for IoT cloud systems, including enabling virtualization and composition of IoT components, dynamic provisioning of resources, and providing coherence across all levels from IoT elements to cloud services. The document also describes models and techniques for programming elasticity, such as software-defined machines for IoT and frameworks for controlling elastic objects. Finally, it overviews several tools developed by the authors for monitoring, analyzing and controlling elasticity in IoT cloud systems.
Digital Catapult Centre Brighton - Dr Nour Ali (wired_sussex)
At The Digital Catapult Centre Brighton event, Tech Beyond The Screen: Connectivity & Infrastructure on Wednesday 2nd March, Dr Nour Ali from The University of Brighton spoke about mobile and self adaptive ambients in service oriented architecture.
Automated Image Captioning – Model Based on CNN – GRU Architecture (IRJET Journal)
This document presents a model for automated image captioning using deep learning techniques. The model uses a CNN-GRU architecture, where a CNN encoder extracts image features and a GRU decoder generates captions. The model is trained on the Flickr30K dataset and achieves a BLEU score of 0.5625. Experimental results show the model can accurately identify objects, animals, and relationships between objects in images and generate descriptive captions. The authors also integrate text-to-speech functionality to help describe images to visually impaired people.
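The GRU decoder at the heart of the architecture above repeatedly applies one gated update per generated token. A toy scalar version of that update (real decoders use vectors and weight matrices; the weights here are made-up):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, w):
    """One scalar GRU update:
    z = sigmoid(Wz*x + Uz*h)        # update gate
    r = sigmoid(Wr*x + Ur*h)        # reset gate
    h~ = tanh(Wh*x + Uh*(r*h))      # candidate state
    h' = (1-z)*h + z*h~             # interpolate old and candidate state
    """
    z = sigmoid(w["Wz"] * x + w["Uz"] * h)
    r = sigmoid(w["Wr"] * x + w["Ur"] * h)
    h_tilde = math.tanh(w["Wh"] * x + w["Uh"] * (r * h))
    return (1 - z) * h + z * h_tilde

w = {k: 1.0 for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}  # toy weights
h = 0.0
for x in (1.0, 1.0, -1.0):  # a tiny stand-in "token" sequence
    h = gru_step(h, x, w)
print(-1.0 < h < 1.0)  # -> True
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden state stays in (-1, 1), which is why GRUs train stably over long caption sequences.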
This document is a resume for Jaehoon Jeong. It summarizes his professional experience, education, skills, and publications. Some key details include:
- He is currently a Software Engineer at Brocade Communications Systems, working on IPv6 and IPsec implementations.
- He received a Ph.D. in Computer Science from the University of Minnesota in 2009, with a focus on wireless sensor networking.
- He has published over 15 papers in international conferences on topics related to wireless sensor networks, vehicular networks, and IPv6 networking.
- His experience includes research positions at the University of Minnesota and ETRI in Korea, as well as internships at SGI and McData.
Model-Driven Generation of MVC2 Web Applications: From Models to Code (IJEACS)
Computer systems engineering is increasingly based on models. These models describe the systems under development and their environment at different abstraction levels, which makes it possible to design applications independently of target platforms. For a long time, models were only an aid for human users, who manually developed the final application code. The Model-Driven Engineering (MDE) approach consists of programming at the level of models, represented as instances of a meta-model, and using them to generate the final application code. MDA (Model-Driven Architecture) is a typical model-driven engineering approach to application design; it relies on the UML standard to define models and on the MOF meta-modeling environment [1] for model-level programming and code generation. Code generation is the subject of this paper: we explain the generation of an MVC2 Web application by applying a model-to-model (M2M) transformation written in the ATL transformation language, followed by a model-to-text (M2T) transformation implemented with the Acceleo generator language. The M2T transformation takes as input the Struts2 PSM model already produced by the M2M transformation. The approach is validated by a case study; the main goal of this paper is to achieve end-to-end code generation.
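The model-to-text step can be illustrated in plain Python (the paper itself uses Acceleo templates over the Struts2 PSM; the toy model shape and the `generate` helper below are invented for the illustration):

```python
from string import Template

# Illustration of the M2T idea: walk a model and instantiate a text
# template per model element. Real M2T engines such as Acceleo do this
# declaratively over EMF models.
ACTION_TEMPLATE = Template("""\
public class ${name}Action extends ActionSupport {
    public String execute() {
        return "${result}";
    }
}
""")

def generate(model):
    """Emit one Java class per action in a toy PSM model (a dict)."""
    return {a["name"]: ACTION_TEMPLATE.substitute(a) for a in model["actions"]}

psm = {"actions": [{"name": "Login", "result": "success"}]}
code = generate(psm)
print(code["Login"].splitlines()[0])  # -> public class LoginAction extends ActionSupport {
```

The essential pattern matches the paper's pipeline: a platform-specific model goes in, and target-platform source text comes out.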
The document discusses using artificial intelligence and machine learning for lower-cost motion capture animation. It proposes a system that uses OpenCV and Unity to extract coordinates from uploaded video frames and generate animated character models without expensive motion capture suits. A Python script would detect 33 body points from a video and save the coordinates to a text file. Unity software would then use those coordinates to create animated spheres representing the body points and linking them to form a moving skeleton. The goal is to use AI and external software to enable affordable and innovative motion capture for the general public.
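The hand-off described above, a Python script writing detected body points to a text file that Unity then reads, can be sketched as follows; the one-point-per-line `index,x,y` format is an assumption, not the document's actual format:

```python
import os
import tempfile

# Sketch of the Python side of the pipeline: persist detected body-point
# coordinates to a text file for the Unity side to consume.
def save_points(path, points):
    with open(path, "w") as f:
        for i, (x, y) in enumerate(points):
            f.write(f"{i},{x:.3f},{y:.3f}\n")

def load_points(path):
    with open(path) as f:
        return [(float(x), float(y))
                for _, x, y in (line.strip().split(",") for line in f)]

points = [(0.5, 0.25), (0.75, 0.5)]  # two of the 33 points, made up here
path = os.path.join(tempfile.gettempdir(), "pose_points.txt")
save_points(path, points)
print(load_points(path))  # -> [(0.5, 0.25), (0.75, 0.5)]
```

A plain text file keeps the two tools decoupled: any detector that emits the agreed format can drive the Unity skeleton.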
AI Infra Day | Model Lifecycle Management Quality Assurance at Uber Scale (Alluxio, Inc.)
AI Infra Day
Oct. 25, 2023
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Sally (Mihyoung) Lee (Senior Staff Engineer, TLM, @Uber)
Machine learning models power Uber’s everyday business. However, developing and deploying a model is not a one-time event but a continuous process that requires careful planning, execution, and monitoring. This session will highlight Uber’s practice on the machine learning lifecycle to ensure high model quality.
Designing Cross-Domain Semantic Web of Things Applications (Amélie Gyrard)
The document discusses designing cross-domain semantic web of things applications. It introduces challenges including how to interpret IoT data, combine data from different domains, and reuse domain knowledge. The proposed M3 framework addresses these challenges through components like a SWoT generator template, M3 language and ontology, sensor-based linked open rules, and linked open vocabularies for IoT. Evaluations show the framework helps developers build semantic applications and interprets data efficiently while reusing interoperable domain knowledge. The framework has potential applications in domains like health, tourism and transportation.
Modeling and Provisioning IoT Cloud Systems for Testing Uncertainties (Hong-Linh Truong)
The document discusses modeling and provisioning IoT cloud systems to enable testing of uncertainties. It proposes a tool pipeline that involves modeling uncertainties and system components, generating test configurations, deploying the system under test, and executing tests. A prototype models a base transceiver station system and its uncertainties. The tooling extracts models, generates deployment configurations, and allows for elastic testing by changing configurations at runtime. The overall approach aims to help address the challenges of testing uncertainties in complex IoT and cloud systems.
Precaution for Covid-19 based on Mask detection and sensor (IRJET Journal)
This document describes a system that uses computer vision and sensors to detect if a person is wearing a face mask and monitor their temperature and oxygen levels. The system uses a Raspberry Pi, camera, and sensors. It applies CNN algorithms to detect faces and determine if a mask is present. It also monitors temperature using a temperature sensor and oxygen levels using a pulse sensor. The goal is to help enforce mask-wearing and identify potential COVID-19 cases by their symptoms. It aims to provide an educational platform for learning different machine learning modules in one place and comparing modified user modules to existing ones.
Cyber Physical Systems – Collaborating Systems of Systems (Joachim Schlosser)
This document discusses computational semantics in complex cyber-physical systems. It begins by noting the increasing connectivity between embedded microprocessors, sensors, actuators and networks. This merging of the physical and virtual worlds highlights the importance of computation. The document then discusses modeling heterogeneous systems and the challenges of computational semantics across different domains like physics, information, electronics and networks. It emphasizes simulating systems early and often to validate designs and gain insights. Finally, it outlines best practices like creating high-level system models during specification, using multidomain simulation from the start, creating virtual test suites to stress systems, and reusing models and tests as a reference.
The document describes the development of a traffic sign recognition system using machine learning techniques. It involves building a convolutional neural network (CNN) model to classify images of traffic signs into different categories. The front-end will utilize libraries like Pandas, NumPy, Matplotlib and OpenCV for data processing and visualization. Tkinter will be used for the graphical user interface. The back-end will use TensorFlow and Keras deep learning frameworks to develop the CNN model for traffic sign classification. The system aims to accurately detect and recognize traffic signs to help with autonomous driving.
Model-Based Risk Assessment in Multi-Disciplinary Systems Engineering (Emanuel Mätzler)
This document proposes a model-based approach for risk assessment in multi-disciplinary engineering projects. It involves defining metamodels for production system models, link models between artifacts, and metrics. Metrics are defined using the Structured Metrics Metamodel and calculated by executing queries on the system models. Measurement results are stored in the metrics model. The approach aims to support risk assessment across distributed, versioned engineering artifacts represented in AutomationML. Future work includes expanding the metrics, integrating dynamic aspects, and visualizing results.
IRJET - Content based Image Classification (IRJET Journal)
The document discusses content based image classification, which involves grouping large numbers of digital images uploaded daily into categories based on their visual content. It describes how content based image classification systems work by extracting features from images like shape, color, and texture to classify them. The document also outlines some challenges in content based image classification and potential areas of future research like using deep learning approaches.
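One of the features named above, color, is often summarized as a coarse histogram over RGB space. A small sketch with a hand-made "image" (real systems would read pixels from an image file):

```python
# Sketch of a color-histogram feature: bucket raw RGB pixels into a
# bins**3-entry normalized vector usable for classification.
def color_histogram(pixels, bins=4):
    step = 256 // bins
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

# Tiny made-up image: 80% red-ish pixels, 20% blue-ish pixels.
red_image = [(250, 10, 10)] * 8 + [(10, 10, 250)] * 2
hist = color_histogram(red_image)
print(max(hist))  # -> 0.8 (the bin holding the dominant red pixels)
```

Vectors like this (often concatenated with shape and texture descriptors) are what a content-based classifier compares across images.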
IRJET-Testing Uncertainty of Cyber-Physical Systems in IoT Cloud Infrastructu... (IRJET Journal)
This document discusses testing uncertainties in cyber-physical systems (CPS) that span Internet of Things (IoT) and cloud infrastructures. It proposes combining model-driven engineering and elastic execution techniques to dynamically provision both the CPS under test and testing utilities across various IoT and cloud infrastructures. Specifically, it suggests using software-defined IoT units and cloud-based elastic services that can be composed, controlled via APIs, and provisioned elastically to enable testing CPS configurations and behaviors across heterogeneous environments.
Real time Traffic Signs Recognition using Deep Learning (IRJET Journal)
This document discusses a deep learning model for real-time traffic sign recognition using convolutional neural networks. Specifically:
- The model uses a CNN architecture based on LeNet to classify images of traffic signs in real-time with a webcam.
- The model was trained on a dataset containing over 22,000 images across 43 traffic sign classes. It achieved 95% accuracy on the test set.
- The model consists of convolutional layers to extract features from images, max pooling layers, dropout layers, and dense layers to perform classification.
- Once trained, the model can continuously classify traffic signs from a webcam feed in real time, displaying the predicted class and probability. This system has applications for autonomous vehicle navigation.
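The two feature-extraction steps listed above, convolution and max pooling, can be shown in a few lines of pure Python; this is only a didactic sketch, since the paper's actual model is a LeNet-style CNN built with a deep learning framework:

```python
# Minimal sketch of convolution followed by 2x2 max pooling, the
# feature-extraction front end of a CNN like the one described above.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def maxpool2x2(fmap):
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[0, 0, 1, 1]] * 4      # toy 4x4 "image": right half bright
edge = [[-1, 1], [-1, 1]]       # vertical edge detector
fmap = conv2d(image, edge)      # 3x3 feature map
print(maxpool2x2(fmap))         # -> [[2]]
```

The convolution responds strongly at the brightness edge, and pooling keeps the strongest response; dense layers then classify the pooled features.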
Model-Driven Architecture for Cloud Applications Development, A survey (Editor IJCATR)
Model-Driven Architecture and cloud computing are among the most important paradigms in software service engineering nowadays. As cloud computing continues to gain traction, its dynamic usage introduces more issues and challenges for many systems. The Model-Driven Architecture (MDA) approach to development and maintenance therefore becomes an evident choice for ensuring software solutions that are robust, flexible, and agile.
This paper aims to survey and analyze the research issues and challenges that have been emerging in cloud computing applications with a focus on using Model Driven architecture (MDA) development. We discuss the open research issues and highlight future research problems.
DevOps and Model Driven Engineering (MDE) provide differently skilled IT stakeholders with methodologies and tools for organizing and automating continuous software engineering activities and using models as key engineering artifacts.
JSON is a popular data format, and JSON Schema provides a general-purpose schema language for JSON.
This paper presents our work in progress on blended modeling and scenario simulation of continuous delivery pipelines as executable JSON-based models. For this purpose, we show a case study based on Keptn, an open-source tool for DevOps automation of cloud-native applications, and its language, Shipyard, a JSON-based process language for continuous delivery pipeline specification.
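Since the pipelines above are executable JSON-based models, scenario simulation can amount to walking the parsed document. The JSON below is a hypothetical Shipyard-like file, with field names invented for illustration rather than taken from the actual Shipyard specification:

```python
import json

# Hypothetical Shipyard-like pipeline model (field names are assumptions):
# stages of a continuous delivery pipeline, each with ordered tasks.
SHIPYARD = json.loads("""
{
  "stages": [
    {"name": "dev",        "tasks": ["deploy", "test"]},
    {"name": "staging",    "tasks": ["deploy", "test", "evaluate"]},
    {"name": "production", "tasks": ["deploy", "release"]}
  ]
}
""")

def simulate(model):
    """Replay the pipeline model stage by stage, yielding (stage, task) steps."""
    for stage in model["stages"]:
        for task in stage["tasks"]:
            yield (stage["name"], task)

steps = list(simulate(SHIPYARD))
print(steps[0], steps[-1])  # -> ('dev', 'deploy') ('production', 'release')
```

Treating the pipeline as data is what makes both blended modeling (editing the JSON textually or graphically) and dry-run simulation cheap.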
Combining fUML and profiles for non-functional analysis based on model execut... (Luca Berardinelli)
When developing software systems, it is crucial to consider non-functional properties at an early development stage in order to guarantee that the system will satisfy its non-functional requirements. Following the model-based engineering paradigm facilitates an early analysis of the non-functional properties of the system being developed, based on the elaborated design models. Although UML is widely used in model-based engineering, it is not directly suitable for model-based analysis due to its lack of formal semantics. Thus, current model-based analysis approaches transform UML models into formal languages dedicated to analysis, which may introduce the accidental complexity of implementing the required model transformations.
Similar to Uncertainty-wise Engineering of IoT Cloud Systems
Real time Traffic Signs Recognition using Deep LearningIRJET Journal
This document discusses a deep learning model for real-time traffic sign recognition using convolutional neural networks. Specifically:
- The model uses a CNN architecture based on LeNet to classify images of traffic signs in real-time with a webcam.
- The model was trained on a dataset containing over 22,000 images across 43 traffic sign classes. It achieved 95% accuracy on the test set.
- The model consists of convolutional layers to extract features from images, max pooling layers, dropout layers, and dense layers to perform classification.
- Once trained, the model can continuously classify traffic signs from a webcam feed in real-time, displaying the predicted class and probability. This system has applications for autonomous vehicle navigation
Model-Driven Architecture for Cloud Applications Development, A survey Editor IJCATR
Model Driven Architecture and Cloud computing are among the most important paradigms in software service engineering now a days. As cloud computing continues to gain more activities, more issues and challenges for many systems with its dynamic usage are introduced. Model Driven Architecture (MDA) approach for development and maintenance becomes an evident choice for ensuring software solutions that are robust, flexible and agile for developing applications.
This paper aims to survey and analyze the research issues and challenges that have been emerging in cloud computing applications with a focus on using Model Driven architecture (MDA) development. We discuss the open research issues and highlight future research problems.
1. Uncertainty-wise Engineering of IoT Cloud Systems: From System Models to Non-Functional Analyses, Deployment, and Testing
Luca Berardinelli, Hong Linh Truong
Distributed Systems Group, TU Wien
https://www.researchgate.net/profile/Luca_Berardinelli
https://www.linkedin.com/in/lucaberardinelli
MDE4IoT, Linz, 22/10/2017
2. Outline
1. Introduction
2. IoT Cloud CPS, Uncertainty, and Elasticity
3. Design of IoT Cloud CPS and Uncertainty
4. Deployment of IoT Cloud CPS
5. Testing of IoT Cloud CPS
6. Conclusion and Future Work
3. Who We Are
Model-Driven Engineering/Analysis
Service Engineering Analytics
Research & Development
https://rdsea.github.io/
4. What are IoT Cloud CPS?
Our Cyber-Physical Systems (CPS):
– have IoT elements and cloud services in datacenters, connected via communication networks
– are also called IoT Cloud CPS
Highly elastic:
– cloud services can be provisioned and de-provisioned
– IoT devices can be activated and de-activated
– communication can be changed by provisioning and de-provisioning resources in an autonomic manner
Feedback loop
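The autonomic feedback loop mentioned above can be sketched as a simple monitor-and-adapt step. This is a minimal illustration, not the authors' tooling; the thresholds and the "one unit at a time" policy are assumptions made purely for the example.

```python
# Illustrative sketch of an autonomic elasticity loop for an IoT Cloud CPS.
# Thresholds and the scaling policy are assumptions, not part of the deck.

def elasticity_step(active_units: int, load: float,
                    scale_out_at: float = 0.8, scale_in_at: float = 0.3) -> int:
    """One iteration of the feedback loop: monitor per-unit load, then
    provision or de-provision one unit (a cloud service or IoT device)."""
    utilization = load / max(active_units, 1)
    if utilization > scale_out_at:
        return active_units + 1      # provision an extra unit
    if utilization < scale_in_at and active_units > 1:
        return active_units - 1      # de-provision an idle unit
    return active_units              # steady state
```

In a real deployment this decision would be taken by the cloud provisioning layer (e.g. SALSA, mentioned later in the deck) rather than by application code.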
5. Key problems
Deal with uncertainties:
– data delivery (functional/dependability uncertainty), affecting communication resources
– data quality (functional/dependability uncertainty), e.g. insufficient sampling rate from sensors
– actuation (functional/dependability uncertainty), affecting mechanisms related to routing, buffering, delivering, and ordering of actuation requests
Deal with elastic execution:
– elastic tests, mapping uncertainties to elastic execution
6. Uncertainty Concepts for CPS from H2020 U-Test
By uncertainty we mean here the lack of certainty (i.e., knowledge) about
– the timing and nature of inputs,
– the state of a system,
– a future outcome,
– as well as other relevant factors.
WP1: Uncertainty Taxonomy, Use Cases, and Evaluation Plans
Understanding Uncertainty in Cyber-Physical Systems (D1.2)
www.u-test.eu
In an MDE setting, these concepts map onto modeling artifacts:
– BeliefStatement → ModelElement
– BeliefAgent → MDE tools, …
– IndeterminacySource → ModelElement, annotations, …
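To make the mapping of U-Test concepts onto model elements concrete, here is a minimal sketch in Python. The class and attribute names follow the slide's vocabulary, but their exact structure is an assumption for illustration, not the official U-Test metamodel.

```python
from dataclasses import dataclass, field

# Minimal sketch: U-Test uncertainty concepts represented as model
# elements. Attribute names/structure are illustrative assumptions.

@dataclass
class ModelElement:
    name: str

@dataclass
class BeliefStatement(ModelElement):
    # What the belief agent lacks certainty about (input timing, state, ...)
    subject: str = ""

@dataclass
class BeliefAgent(ModelElement):
    # e.g. an MDE tool that holds beliefs about the system
    beliefs: list = field(default_factory=list)

@dataclass
class IndeterminacySource(ModelElement):
    # The model element or annotation from which the uncertainty originates
    origin: ModelElement = None

# Example: a tool recording a belief about a sensor's sampling rate
agent = BeliefAgent("T4UME")
agent.beliefs.append(BeliefStatement("b1", subject="sensor sampling rate"))
```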
7. Design: Uncertainty Modeling and Evaluation (UME)
UME:
Modeling and detecting uncertainty @ design time
Model refactoring to support next MDE activities (e.g., MBT)
Tool for UME (T4UME):
Wizards for modeling, Uncertainty Detection Rules (UDR), UML2JSON
Reference: Hong-Linh Truong, Luca Berardinelli, Ivan Pavkovic and Georgiana Copil. Modeling and Provisioning IoT Cloud Systems for Testing Uncertainties (MobiQuitous 2017).
12. T4UME: Wizards (via Epsilon Wizard Language)
• Contextual menu entries invoke wizards on model diagram elements.
14. T4UME: UDR (via Epsilon Validation Language)
T4UME provides UDRs for uncertainty detection (U-Detection) on IoT Cloud elements:
– a distinct UDR for each stereotype of the applied profiles
– warnings for missing property value(s) causing potential uncertainties
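The actual UDRs are written in the Epsilon Validation Language; purely to illustrate the idea behind U-Detection ("warn when an applied stereotype is missing a property value"), the same check can be sketched in plain Python. The stereotype and property names below are made up for the example.

```python
# Sketch of a U-Detection rule: warn when an applied stereotype is
# missing required property values. Stereotype/property names are
# illustrative; the real rules are EVL constraints, not Python.

REQUIRED = {
    "IoTUnit": ["samplingRate", "protocol"],   # assumed properties
    "CloudService": ["apiEndpoint"],
}

def detect_uncertainty(element: dict) -> list:
    """Return one warning per missing required stereotype property."""
    warnings = []
    for stereotype in element.get("stereotypes", []):
        for prop in REQUIRED.get(stereotype, []):
            if element.get("properties", {}).get(prop) in (None, ""):
                warnings.append(
                    f"{element['name']}: missing <<{stereotype}>> "
                    f"property '{prop}' (potential uncertainty)")
    return warnings

# A sensor whose sampling rate was never specified triggers a warning
sensor = {"name": "TempSensor", "stereotypes": ["IoTUnit"],
          "properties": {"protocol": "MQTT"}}
```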
20. T4UME: Wizard and UDR Generation (via Epsilon Generation Language)
UME adapts to different domains (modeled as UML profiles)
T4UME automatically generates wizards and UDRs from the applied profiles
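The generation step can be sketched as template instantiation: one rule stub per stereotype property of the applied profile. T4UME does this with the Epsilon Generation Language; the Python version below, including the profile content and the EVL-like template text it emits, is an illustrative assumption.

```python
# Sketch of rule generation from a profile, mirroring what T4UME does
# with the Epsilon Generation Language. Profile content and the emitted
# EVL-like constraint text are illustrative assumptions.

PROFILE = {"IoTUnit": ["samplingRate"], "CloudService": ["apiEndpoint"]}

RULE_TEMPLATE = """context Element {{
  constraint Has{prop} {{
    check: self.{prop}.isDefined()
    message: "Missing <<{st}>> property '{prop}'"
  }}
}}"""

def generate_rules(profile: dict) -> list:
    """One detection-rule stub per (stereotype, property) pair."""
    return [RULE_TEMPLATE.format(st=st, prop=prop)
            for st, props in profile.items() for prop in props]
```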
23. Deployment: Workflow
Reuse well-known tools for deployment (e.g. SALSA)
The extracted JSON is adapted for many tools; UML2JSON is flexible
Uncertainty info has to be propagated (ongoing work)
http://tuwiendsg.github.io/SALSA/
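The idea of UML2JSON (the deck later notes it is implemented via Java objects and GSON) is to serialize model content into JSON that deployment tools can consume. A minimal Python sketch of that idea follows; the element structure and key names are assumptions, not the actual export format.

```python
import json

# Sketch of the UML2JSON idea: serialize model content into JSON that
# deployment tools such as SALSA can consume. The key names and element
# structure are illustrative assumptions, not the real format.

def model_to_json(elements: list) -> str:
    """Flatten model elements into a JSON deployment configuration."""
    return json.dumps({"units": [
        {"name": e["name"],
         "type": e.get("type", "unknown"),
         "properties": e.get("properties", {})}
        for e in elements]}, indent=2)

deck = [{"name": "Gateway", "type": "IoTUnit",
         "properties": {"protocol": "MQTT"}}]
config = model_to_json(deck)
```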
24. Deployment: Example of artifacts
Reference: Hong-Linh Truong, Luca Berardinelli, Ivan Pavkovic and Georgiana Copil. Modeling and Provisioning IoT Cloud Systems for Testing Uncertainties. 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous 2017), November 7–10, 2017, Melbourne, Australia. To appear.
25. Testing: Workflow
Problem:
– MBT approaches do not consider the IoT Cloud infrastructures underlying the CPS
– Static SUT deployment
Solution:
– An MBT process that deals with dynamic configuration and elastic execution of cloud and IoT resources
[Figure: MBT workflow over classes, instances, and state machines, with dynamic SUT deployment]
Figure source: Mark Utting and Bruno Legeard. 2006. Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
27. Conclusions and Future Work
Conclusions:
We are devising a methodology (UME) and a tool (T4UME) for uncertainty modeling and evaluation at design time
– Wizards apply and instantiate IoT Cloud architectural elements
– U-Detection detects uncertainty caused by missing property values of stereotypes
– U-Refactoring actions implemented ad hoc to support MBT (test case generation from state machines)
– UML2JSON exports UML model content into JSON via Java objects and GSON
28. Conclusions and Future Work
Future work:
Customization of UME/T4UME for different MDE tasks
– Integration of OMG standard profiles (MARTE, SysML) thanks to the wizard and UDR generation capability (ongoing)
– Performance uncertainty caused by detected performance antipatterns (ongoing), via customized U-Detection and U-Refactoring steps
– UDR composition algorithms
– Mappings of stereotype properties to uncertainty families (e.g., no MARTE::exec_time for operations implies potential performance uncertainty)
– Customization for different application domains
– Extension to non-UML-based approaches (e.g., UDR from metaclasses)
29. Thank you! Q&A
Model-Driven Engineering/Analysis
Service Engineering Analytics
Research & Development
https://rdsea.github.io/