The VTREV IZERTIS solution proposes a new training system by means of Virtual Reality. With this solution, personnel are trained in an interactive and progressive way that can be assessed and analyzed.
MoLe aims to facilitate the development and use of digital twins for smart factories, so factory stakeholders can enjoy all the benefits of digital twin technologies with little effort.
This document describes a proposed solution from IGIT & CMBIT to improve the pipe bending manufacturing process using Industry 4.0 technologies. The current process uses an industrial robot, band saw, and bending machine, but has limitations in traceability, repeatability, and producing lot-size-one orders. The proposed solution adds a robotic arm equipped with a 3D camera and microcomputer to recognize pipe materials using neural networks, grasp pipes in the optimal position using computer vision, and optimize waste reduction using reinforcement learning. The solution integrates components from the Apache software stack to acquire data, transmit messages, store data, and develop the system. Key performance indicators include accuracy improvements and reductions in errors, waste, and processing time.
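To make the reinforcement-learning step more concrete, here is a minimal, purely hypothetical sketch of the idea: an agent learns which stock pipe length to cut each order from so that offcut waste shrinks over time. The stock lengths, order mix, reward definition, and all names below are illustrative assumptions, not details from the proposal.

```python
# Illustrative one-step Q-learning/bandit loop for offcut-waste reduction.
# All numbers and names are assumptions, not from the IGIT & CMBIT proposal.
import random

STOCK_LENGTHS = [3000, 4000, 6000]           # raw pipe lengths in mm (assumed)
ORDER_LENGTHS = [700, 1100, 1900, 2600]      # typical order lengths in mm (assumed)
ALPHA, EPSILON = 0.1, 0.1
q = {}                                       # Q[(order, stock)] = expected reward

def waste(order, stock):
    """Offcut left after cutting as many `order`-length pieces as fit into `stock`."""
    return stock % order

def feasible(order):
    return [s for s in STOCK_LENGTHS if s >= order]

random.seed(1)
for _ in range(5000):
    order = random.choice(ORDER_LENGTHS)
    if random.random() < EPSILON:            # explore
        stock = random.choice(feasible(order))
    else:                                    # exploit the current estimate
        stock = max(feasible(order), key=lambda s: q.get((order, s), 0.0))
    reward = -waste(order, stock)            # less offcut -> higher reward
    old = q.get((order, stock), 0.0)
    q[(order, stock)] = old + ALPHA * (reward - old)

for order in ORDER_LENGTHS:
    best = max(feasible(order), key=lambda s: q.get((order, s), float("-inf")))
    print(f"order {order} mm -> cut from {best} mm stock "
          f"(learned offcut ~ {-q.get((order, best), 0.0):.0f} mm)")
```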
Duratag ar moonstruck midih-presentation_oc2 (MIDIH_EU)
The main objective of the project is to extend our Product Passport software platform by building an Augmented Reality module that delivers generic, indicative concept explanations and training using an AR backend creator.
Despite significant scientific research, systematic performance engineering techniques are still hardly used in industry, as many practitioners rely on ad-hoc performance firefighting. It is still not well understood where more sophisticated performance modeling approaches are appropriate, and the maturity of the existing tools and processes still needs to improve. While there have been several industrial case studies on performance modeling in the last few years, more experience is needed to better understand the constraints in practice and to optimize existing tool-chains.
I gave a talk summarizing six years of performance modeling at ABB. In three projects, different approaches to performance modeling were taken, and experiences on the capabilities and limitations of existing tools were gathered. The talk reports on several lessons learned from these projects, for example the need for more efficient performance modeling and the integration of measurement and modeling tools.
Towards the Automation Cloud: Architectural Challenges for a Novel Smart Ecos... (Heiko Koziolek)
Future industrial automation systems will execute a number of control and monitoring functions in central data centers. The cloud computing paradigm will reduce IT costs and enable small companies to flexibly automate production processes. Centralized control and monitoring across companies and domains will facilitate a novel smart ecosystem for industrial automation connecting both embedded devices and information systems. To realize this vision, a number of technical, economic, and social challenges need to be solved. This talk focuses on software architecture challenges for cloud-connected automation systems. It points out the architectural impact of critical non-functional properties, such as latency, security, and multi-tenancy.
P. Dasu is an Industrial Automation Engineer seeking a challenging position in the field. He has over 2 years and 6 months of experience in automation. He is proficient in PLC programming, SCADA development, and site maintenance. He has expertise in ABB PLCs and SCADA Vantage software. Currently he is working as a Site Engineer for GGIPL on an ONGC project involving monitoring of production and drilling data across various sites in India.
This document provides an overview of a 6-hour hands-on introduction to LabVIEW course. The course goals are to make students comfortable with the LabVIEW environment and data flow model, and teach how to acquire, save, load, and analyze data using LabVIEW. The course covers the LabVIEW interface, creating programs, and using DAQ devices or a sound card for input. It does not cover programming theory, every LabVIEW function, or analog concepts. The document includes setup instructions for different hardware tracks: a DAQ device, simulated DAQ, or sound card.
SIEWIRE - Tool To Create DCS Wiring Diagrams (Disha Bedi)
SIEWIRE is a tool developed to create DCS wiring diagrams more efficiently. It aims to save time and costs over existing methods like Tec4fde and MS Visio by allowing users to generate all wiring diagrams for a project at once from a centralized C2 loading table. The tool was tested on a project in Tata Trombay and showed significant time and cost savings compared to previous methods. Feedback was also positive regarding the tool's ease of use and accuracy. Future work will focus on enhancements like improved error handling and generating diagrams with a single click.
Embedded World 2015: Internet of Things Changes the Definition of What a Prod... (Intland Software GmbH)
The Internet of Things is bringing about a change that some claim is a new industrial revolution. Connectivity doesn't simply let companies add new features to their products – rather, it's fundamentally changing what we think of when referring to 'product', as these additional services are increasingly becoming the substance of products. Managing the development and maintenance of these services adds new lifecycles, posing a challenge to companies that were previously simply manufacturing physical products.
Reverse engineering is the process of analyzing a product or system to understand its design, functionality, and operation. It involves taking something apart and studying how it works. Reverse engineering can be used to retrieve lost source code, study how a program performs operations, improve performance, fix bugs, or identify malicious content. It is commonly used for security research, software development, product analysis, and understanding legacy software when documentation is lost. The key steps of reverse engineering involve collecting information, examining the structure and functionality, and documenting the recovered design. Common tools used include disassemblers, debuggers, and decompilers. While useful, the legality of reverse engineering varies depending on jurisdiction and software licenses.
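As a small, concrete taste of one of the tool classes named above (disassemblers), the Python standard library's dis module can show the bytecode a function compiles to, much as a native disassembler shows the machine instructions of a compiled binary. The licence-check function below is an invented example, not taken from the document.

```python
# Disassemble a function's bytecode with the standard-library `dis` module.
import dis

def check_license(key: str) -> bool:
    return len(key) == 16 and key.startswith("AB")

dis.dis(check_license)
# The output lists instructions such as LOAD_FAST and COMPARE_OP, from which
# the checking logic can be reconstructed even without the source code.
```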
This webinar is going to cover what a digital twin is and how all stakeholders can benefit from its functionality. You will learn how model-based systems engineering enables digital engineering. Your host will discuss use cases, a realistic look at digital engineering and digital twins, and how you can use Innoslate to get started.
The Agenda
Here's what we're covering.
What is a Digital Twin
Benefits of Digital Twin
The Digital Engineering Path Enabled by MBSE
AR + MBSE Software
A More Realistic Digital Twin
Getting You Started with Digital Twins
Question Answer Session
Experiences and outcomes of the internship done at VI Solutions. The presentation contains a brief introduction to LabVIEW, the tasks fulfilled at the workplace, conclusions, and references.
The document discusses LabVIEW, a graphical programming language developed by National Instruments. It was originally created in 1988 to interface with scientific instruments. LabVIEW uses icons instead of text to create programs. It is widely used in engineering and science due to its ease of use, speed of development, and ability to interface with instruments. The document promotes LabVIEW certification training offered by National Instruments and provides examples of LabVIEW projects for testing and measurement.
The document proposes adapting the Roofline Model performance analysis tool for FPGAs. It aims to allow application designers to evaluate performance before acceleration, compare performance across platforms, and enable automatic HLS optimization. The adaptation is non-trivial as the compute bound performance limit on FPGAs depends on both the algorithm and hardware resources. An optimization flow and set of tools are under development to implement the Roofline Model approach for FPGAs.
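For orientation, the classical Roofline bound that such an adaptation starts from can be written as:

\[
P_{\text{attainable}} = \min\left(P_{\text{peak}},\; I \cdot B_{\text{peak}}\right)
\]

where \(I\) is the arithmetic (operational) intensity of the algorithm in operations per byte and \(B_{\text{peak}}\) is the peak memory bandwidth. In the FPGA adaptation described here, \(P_{\text{peak}}\) is not a fixed device constant but depends on how many hardware resources (typically DSPs, LUTs, and BRAM) the synthesized design can use, which is what makes the adaptation non-trivial.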
by Jayashree Purushothaman, Advisory Technical Services Specialist & Jesanraj Balasubramanian, System Engineer, IBM at STeP-IN SUMMIT 2018 15th International Conference on Software Testing on August 31, 2018 at Taj, MG Road, Bengaluru
Rapid Performance Modeling by transforming Use Case Maps to Palladio Componen... (Heiko Koziolek)
This document presents the UCM2PCM tool, which rapidly transforms use case maps (UCM) models into Palladio component models (PCM) to enable early performance modeling. It aims to address challenges in modeling complex control/data flows and allocating global response time budgets. The tool was evaluated on three systems and found to produce PCM models with performance results within 15% of manual models. A user survey also found the tool made performance modeling more comprehensible and faster for non-experts. Future work includes improving the tool's input assistance and enabling reverse transformation from PCM back to UCM.
Lily Craps, responsible for the Mainframe outsourcing project at SDWorx, explains how moving their mainframe to a shared environment at NRB enabled ‘economies of scale’ on infrastructure costs for hardware and software. She describes the process, from starting the outsourcing study, through the RFI/RFP process, the selection of the provider, the contract negotiations and the migration project, as well as the criteria for choosing NRB and an Infrastructure as a Service cloud model.
Technology is evolving and changing at a very rapid pace, and it is more important than ever to ensure that mission-critical back-end mainframe applications can exploit these new and disruptive technologies to transform digitally and deliver real value to the business and to customers. DevOps on z Systems is a key enabler for the API economy and hybrid cloud. In this session we will discuss how DevOps can transform application delivery on z Systems, mitigate risk, and elevate the ability to respond quickly to customer expectations through continuous improvement.
Domain Specific Languages: An introduction (DSLs) (Pedro Silva)
Domain Specific Languages (DSLs) are special-purpose programming languages developed for a specific domain.
Some of their most interesting benefits include (a small illustrative sketch follows below):
● increasing productivity
○ by reducing
■ the lines of code that have to be written manually
■ the number of coding errors
● (due to automatic domain restrictions)
● test generation
● formal verification
(Check my books at https://beacons.ai/tagido)
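To illustrate the idea (this is a generic sketch, not an example from the slides), a DSL can be as small as a fluent Python API whose vocabulary and built-in restrictions mirror the domain, so that many coding errors become impossible by construction:

```python
# A tiny *internal* DSL embedded in Python (hypothetical domain and names).
class Recipe:
    """Minimal domain-specific 'language' for describing a machining recipe."""
    def __init__(self, name):
        self.name = name
        self.steps = []

    def cut(self, length_mm):
        self.steps.append(("cut", length_mm))
        return self                          # returning self enables chaining

    def bend(self, angle_deg):
        if not 0 < angle_deg <= 180:         # domain restriction enforced by the DSL
            raise ValueError("bend angle must be in (0, 180] degrees")
        self.steps.append(("bend", angle_deg))
        return self

    def describe(self):
        return f"{self.name}: " + " -> ".join(f"{op}({arg})" for op, arg in self.steps)

# Usage reads almost like the domain vocabulary instead of general-purpose code.
recipe = Recipe("bracket").cut(120).bend(90).cut(40)
print(recipe.describe())                     # bracket: cut(120) -> bend(90) -> cut(40)
```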
Vivek Rana is seeking a role that allows him to utilize his skills and contribute to organizational growth. He has 2 years of experience as a NETEZZA ELT developer and Informatica ETL developer. He is skilled in SQL, PL/SQL, Oracle, and data integration tools like Informatica and custom frameworks. Currently he works on the Customer Centric Database project at Accenture, developing ELT scripts to extract, transform and load data from various source systems into a Netezza data warehouse. He was awarded Employee of the Quarter for proactively automating processes and creating reusable tools.
The document summarizes a team's project to create a power meter that monitors energy usage through a graphical interface. It discusses the mechanical design, project management approach, resources used, code repository structure, code reviews, IP design, hardware-software interface, functional verification, cost estimate, lessons learned, roadblocks encountered, and conclusions. The team successfully interfaced their design with an evaluation board and displayed measured data on an LCD.
An industry-leading analyst discusses how you can take control of application performance and provide superior end-user experiences. Then, you’ll hear how a major US healthcare provider eliminated sporadic performance outages that affected its public-facing website, and prevented revenue loss and many hundreds of hours in support costs. To learn more, watch the webcast replay: http://rvbd.ly/1JGz1ke
Or to learn more about AppInternals, visit: http://rvbd.ly/1IsjC5t
The document introduces a new digital app analyzer tool that can analyze apps much faster than manual reverse engineering. It produces analysis reports in minutes by analyzing an app's structure, functionality, and dependencies using an AST approach. The analyzer is the fastest and most accurate on the market and can analyze apps without interrupting their operations. The reports it generates are valuable for app reengineering to new technologies.
GenerationRFID Test & Embedded Electronics Technology Company (Àngels Pinyol Escala)
This document provides an overview of an electronics company that specializes in embedded electronics development, interim personnel outsourcing, and EOL tester solutions. The company has 30 engineers across hardware prototyping, embedded software, sales, and EOL testing divisions. Recent income has declined from over €1 million in 2011-2014 to €80,000 in 2018. The company develops embedded electronics across several markets including automotive, IoT, and power electronics. It offers hardware and software development services following an ISO9001 certified process. The company also provides interim personnel outsourcing and designs EOL test solutions including test scheduling software, test fixtures, and automated optical inspection systems.
John Bishop Resume - Controls Engineer (6-11-15) (John Bishop)
John Bishop has over 30 years of experience in controls engineering. He has extensive experience designing control systems using PLCs, HMIs, and SCADA systems for applications in industries such as power generation, water treatment, manufacturing, and oil & gas. He is proficient in programming PLCs from manufacturers including Allen-Bradley, Siemens, and Mitsubishi.
This document provides information about a company called PMT that offers engineering services including 3D laser scanning, dimensional control surveys, underground utility detection, 2D and 3D modeling, and engineering data management. The company was established in 2005 and has grown to 81 employees offering project-based and manpower services. It aims to become a preferred provider of engineering design and database management in oil, gas, and related industries through integrated solutions and intelligent tools.
Platforming the Major Analytic Use Cases for Modern Engineering (DATAVERSITY)
We’ll describe several examples from the broad range of modern use cases that need a platform, and the popular technology stacks that enterprises use to accomplish them: customer churn, predictive analytics, fraud detection, and supply chain management.
In many industries, to achieve top-line growth, it is imperative that companies get the most out of existing customer relationships. Customer churn use cases are about generating high levels of profitable customer satisfaction through the use of knowledge generated from corporate and external data to help drive a more positive customer experience (CX).
Many organizations are turning to predictive analytics to increase their bottom line and efficiency and, therefore, competitive advantage. It can make the difference between business success or failure.
Fraudulent activity detection is exponentially more effective when risk actions are taken immediately (i.e., stop the fraudulent transaction), instead of after the fact. Fast digestion of a wide network of risk exposures across the network is required in order to minimize adverse outcomes.
Supply chain leaders are under constant pressure to reduce overall supply chain management (SCM) costs while maintaining a flexible and diverse supplier ecosystem. They will leverage IoT, sensors, cameras, and blockchain. Major investments in advanced analytics, warehouse relocation, and automation, both in distribution centers and stores, will be essential for survival.
Big Data Berlin v8.0 Stream Processing with Apache Apex (Apache Apex)
This document discusses Apache Apex, an open source stream processing framework. It provides an overview of stream data processing and common use cases. It then describes key Apache Apex capabilities like in-memory distributed processing, scalability, fault tolerance, and state management. The document also highlights several customer use cases from companies like PubMatic, GE, and Silver Spring Networks that use Apache Apex for real-time analytics on data from sources like IoT sensors, ad networks, and smart grids.
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder, DataTorrent - ... (Dataconomy Media)
Thomas Weise, Apache Apex PMC Member and Architect/Co-Founder of DataTorrent presented "Streaming Analytics with Apache Apex" as part of the Big Data, Berlin v 8.0 meetup organised on the 14th of July 2016 at the WeWork headquarters.
On the Application of AI for Failure Management: Problems, Solutions and Algo... (Jorge Cardoso)
Artificial Intelligence for IT Operations (AIOps) is a class of software which targets the automation of operational tasks through machine learning technologies. ML algorithms are typically used to support tasks such as anomaly detection, root-cause analysis, failure prevention, failure prediction, and system remediation. AIOps is gaining increasing interest from the industry due to the exponential growth of IT operations and the complexity of new technology. Modern applications are assembled from hundreds of dependent microservices distributed across many cloud platforms, leading to extremely complex software systems. Studies show that cloud environments are now too complex to be managed solely by humans. This talk discusses various AIOps problems we have addressed over the years and gives a sketch of the solutions and algorithms we have implemented. Interesting problems include hypervisor anomaly detection, root-cause analysis of software service failures using application logs, multi-modal anomaly detection, root-cause analysis using distributed traces, and verification of virtual private cloud networks.
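As a minimal illustration of one of these building blocks (not the talk's actual implementation), anomaly detection on a log-derived metric can be as simple as a rolling z-score over per-minute error counts; all numbers below are synthetic.

```python
# Rolling z-score anomaly detection on a log-derived metric (synthetic data).
from collections import deque
import math

def rolling_zscore_alerts(error_counts, window=30, threshold=3.0):
    """Yield (index, value, zscore) for points that deviate strongly
    from the recent history of the series."""
    history = deque(maxlen=window)
    for i, x in enumerate(error_counts):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((v - mean) ** 2 for v in history) / window
            std = math.sqrt(var) or 1.0          # avoid division by zero
            z = (x - mean) / std
            if abs(z) >= threshold:
                yield i, x, z
        history.append(x)

# Example: a mostly flat error rate with one burst.
series = [2, 3, 2, 1, 2, 3, 2, 2, 1, 2] * 4 + [40] + [2, 3, 2]
for idx, value, z in rolling_zscore_alerts(series, window=20):
    print(f"minute {idx}: {value} errors (z = {z:.1f}) -> potential incident")
```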
This document is a resume for Jessie P. Semana summarizing her qualifications. She has almost 9 years of experience in front-end and back-end development, Windows and web application development, and supporting and enhancing legacy applications. Her professional experience includes various roles as a systems analyst, developer, and programmer for companies like Nestle Philippines, Universal Robina Corporation, and Wyeth Philippines. She has skills in technologies like .NET, C#, Java, Android, and databases like SQL Server and MySQL.
OEP allows harvesting of real time business insights from edge devices in the Internet of Things. It combines data from multiple sources to identify complex events and enable faster decision making and actions. This reduces latency and improves responsiveness. OEP Embedded is optimized for edge devices like sensors and gateways. It features a continuous query language, event processing network, and supports modular development. Use cases include smart grids, industrial automation, building security, and vehicle telematics.
Slides from the talk given by Carmine Spagnuolo (Postdoctoral Research Fellow, Università degli Studi di Salerno / ACT OR), titled "Technology insights: Decision Science Platform", at the Decision Science Forum 2019, the most important Italian event on Decision Science.
The client needed a solution to monitor IT operations using artificial intelligence. The project involved building a data processing architecture using Kafka to collect high-volume event data via REST API. Rules would be defined and applied to the data using a rule engine to automatically identify, prioritize, and resolve issues through machine learning algorithms. The implemented solution involved building this data pipeline and rule engine on a Dataramp platform using Docker containers to provide automated, scalable event monitoring for the client's IT operations.
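A hedged sketch of the kind of pipeline described above (the topic name, broker address, and rule contents are illustrative assumptions, not details from the case study): consume high-volume events from Kafka and apply simple declarative rules to prioritize them, using the kafka-python client.

```python
# Consume events from Kafka and apply simple prioritization rules (illustrative).
import json
from kafka import KafkaConsumer            # pip install kafka-python

RULES = [
    # (predicate over the event dict, priority label)
    (lambda e: e.get("severity") == "critical", "P1"),
    (lambda e: e.get("latency_ms", 0) > 2000,   "P2"),
    (lambda e: e.get("status", 200) >= 500,     "P2"),
]

def classify(event):
    """Return the highest-priority label whose rule matches, else 'P4'."""
    for predicate, priority in RULES:
        if predicate(event):
            return priority
    return "P4"

consumer = KafkaConsumer(
    "it-operations-events",                 # assumed topic name
    bootstrap_servers=["localhost:9092"],   # assumed broker address
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    event = message.value
    print(classify(event), event.get("source"), event.get("message"))
```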
Industrial production is becoming increasingly interlinked with modern information and communication technology. Building on intelligent, digitally networked systems, largely self-organized production will become possible. In Industrie 4.0, people, machinery, plants, logistics and products will communicate and cooperate directly. To connect these different strands, a unified, flexible, high-performance system is needed to provide a company-wide, real-time information flow.
To target these issues, we developed enterprise:inmation.
It securely and efficiently gathers data from manufacturing, process control and IT systems all around the globe, contextualizes it and transforms it into actionable information, which is presented to every decision-maker on any device, anytime, at any location.
Software made by industrial system integration pros, in close cooperation with industry leaders. Business performance in real time, anytime, anywhere, for all decision-makers: that is enterprise:inmation.
The document provides an overview of traditional ERP systems, cloud ERP systems, and a comparison of the two. It discusses the types and reasons for implementing ERP systems. It then covers the benefits and drawbacks of traditional on-premise ERP systems and cloud ERP systems. The document compares traditional and cloud ERP systems based on factors like deployment, pricing, expenditure, customization, and provides a battle card comparing the key parameters of both. It also discusses intelligent, modern, and hybrid ERP systems.
Businesses are changing the way they used to work. They are adopting Enterprise Resource Planning (ERP) systems to make their business processes integrated and streamlined. To begin with, on-premise ERP and cloud ERP are the two options available to them. Both have their own pros and cons. However, the discussion of which is best suited comes down to what your business requirements are. Go through the presentation below and learn about them in detail:
Visualizing Your Network Health - Know your Network (DellNMS)
An old adage states that you cannot manage what you don’t know. Do you know what devices are on your network, where they are located, how they are configured, what they are connected to, and how they are affected by changes and failures?
Today’s network infrastructure is becoming more and more complex, while demands on the Network Administrator to ensure network availability and performance are higher than ever. Business critical systems depend upon you managing your entire network infrastructure and delivering high-quality service 24/7, 365 days a year. So how do you keep the pace?
Learn how real-time visibility into your entire network infrastructure provides the power to manage your assets with greater control.
The document is a resume for Deepit Chaturvedi. It summarizes his professional experience in software testing and quality assurance over 6 years. It details his work with clients like UPS SCS, Melbourne IT, Telecom New Zealand, and Panduit. It also lists his skills in testing Java applications, automation testing using tools like QTP, and testing Oracle, Siebel, and other applications. His education credentials include a Bachelor's degree in Computer Technology.
New usage model for real-time analytics by Dr. WILLIAM L. BAIN at Big Data S... (Big Data Spain)
Operational systems manage our finances, shopping, devices and much more. Adding real-time analytics to these systems enables them to instantly respond to changing conditions and provide immediate, targeted feedback. This use of analytics is called “operational intelligence,” and the need for it is widespread.
The designed SCADA software system ensured remote monitoring of the positions and advanced system health conditions of all the solar tracking systems to provide data analytics and reporting. This SCADA solution was designed and developed to co-exist in a remote system that will continuously monitor multiple fields consisting of several masters and their respective slave trackers.
Architecting and Tuning IIB/eXtreme Scale for Maximum Performance and Reliabi... (Prolifics)
Abstract: Recent projects have stressed the "need for speed" while handling large amounts of data, with near zero downtime. An analysis of multiple environments has identified optimizations and architectures that improve both performance and reliability. The session covers data gathering and analysis, discussing everything from the network (multiple NICs, nearby catalogs, high speed Ethernet), to the latest features of extreme scale. Performance analysis helps pinpoint where time is spent (bottlenecks) and we discuss optimization techniques (MQ tuning, IIB performance best practices) as well as helpful IBM support pacs. Log Analysis pinpoints system stress points (e.g. CPU starvation) and steps on the path to near zero downtime.
- Application Performance Management (APM) solutions manage the performance, capacity and availability of dynamic applications from the Cloud or a traditional data center.
- APM aims to diagnose application performance issues to ensure that an expected level of service is maintained.
- As part of this monitoring, two specific sets of parameters are closely tracked (a small sketch after this list illustrates both).
- The first is performance metrics that define the end-user experience for an application; the second is metrics for the computational resources used by the application under a specific load.
- APM solutions not only monitor and analyze logs but also diagnose problems and assist in pro-active performance management.
- APM is most commonly used for web applications where its components can also be individually monitored to pinpoint reasons for possible delays in the system.
- Neev has partnered with APM solutions like AppDynamics and Splunk to offer them to our customers.
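The sketch below is illustrative only (not any vendor's APM agent) and shows, for a single function call, the two kinds of metrics listed above: an end-user-facing response time and the computational resources (CPU time and memory) consumed under that load; the function and metric names are assumptions.

```python
# Capture response time and resource usage for one call (illustrative only).
import functools
import os
import time

import psutil   # pip install psutil

def apm_instrumented(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        proc = psutil.Process(os.getpid())
        cpu_before = proc.cpu_times()
        rss_before = proc.memory_info().rss
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            cpu_after = proc.cpu_times()
            cpu_s = ((cpu_after.user - cpu_before.user)
                     + (cpu_after.system - cpu_before.system))
            rss_delta_kib = (proc.memory_info().rss - rss_before) / 1024
            print(f"{func.__name__}: response {elapsed_ms:.1f} ms, "
                  f"CPU {cpu_s:.3f} s, memory delta {rss_delta_kib:.0f} KiB")
    return wrapper

@apm_instrumented
def render_report(n):
    return sum(i * i for i in range(n))

render_report(1_000_000)
```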
This document discusses enabling real-time analytics using Hadoop MapReduce on an in-memory data grid (IMDG). It describes implementing MapReduce using parallel method invocation on an IMDG to eliminate batch scheduling overhead and analyze live data. Sample use cases are presented for applications in financial services, ecommerce, and other industries that require real-time analysis of large, changing datasets.
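For readers unfamiliar with the programming model, the following generic Python sketch shows the map -> shuffle -> reduce shape using ordinary parallel method invocation; it is not the IMDG vendor's API, only the structure the document refers to.

```python
# Generic MapReduce shape (word count) using parallel method invocation.
from collections import defaultdict
from multiprocessing import Pool

def map_phase(line):
    """Map: emit (word, 1) pairs for one input record."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(item):
    """Reduce: sum all counts for one key."""
    word, counts = item
    return word, sum(counts)

if __name__ == "__main__":
    lines = ["real time analytics on live data",
             "live data changes in real time"]

    with Pool() as pool:
        mapped = pool.map(map_phase, lines)                 # parallel map

        shuffled = defaultdict(list)                        # shuffle/group by key
        for pairs in mapped:
            for word, count in pairs:
                shuffled[word].append(count)

        reduced = pool.map(reduce_phase, shuffled.items())  # parallel reduce

    print(dict(reduced))   # e.g. {'real': 2, 'time': 2, 'live': 2, ...}
```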
Building a Real-Time Security Application Using Log Data and Machine Learning... (Sri Ambati)
Building a Real-Time Security Application Using Log Data and Machine Learning- Karthik Aaravabhoomi
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
The document discusses cloud-native application architectures and how they enable speed, safety, and scale through approaches like twelve-factor applications and microservices. It outlines the cloud-native stack and where governance is needed to secure different components like code, orchestration tools, containers, services, and infrastructure. The document argues that while cloud-native approaches are well-suited for technology companies, traditional enterprises face challenges in fully adopting these architectures due to differences in priorities, skills, and scale.
GARE du MIDIH the DIHIWARE collaboration platform for mastering your digita... (MIDIH_EU)
The document summarizes the MIDIH Final Event - Session 3. It discusses the DIHIWARE Collaboration Platform, which is an online platform that can open opportunities for Digital Innovation Hubs (DIHs) and small-and-medium enterprises (SMEs) adopting cyber-physical systems and other Industry 4.0 technologies through collaboration tools. It provides an overview of the DIHIWARE platform's capabilities and services, including knowledge management, catalog management, and collaboration and innovation features. Success stories are also presented on the adoption of DIHIWARE by organizations in Italy. The presentation concludes with the vision for a DIH4Industry platform that would create a network of DIHs in the manufacturing domain.
GARE du MIDIH Open Digital Platforms the adoption of a standards-based open... (MIDIH_EU)
The importance of the Open Digital Platform in the Industrial Digital Transformation. Three MIDIH use cases: Lighthouse pilots and how they have benefited from the MIDIH Open Source platform.
GARE du MIDIH MIDIH, towards a flexible, modular and open source reference ... (MIDIH_EU)
The MIDIH approach for defining and implementing a data-driven, open source and standards-based I4.0 reference architecture for pan-European DIHs, allowing manufacturing companies to stay on the wave of industry digitization and providing flexibility and agility for developers and systems integrators
GARE du MIDIH Digital Manufacturing Platforms in H2020 and in future Digita... (MIDIH_EU)
This document summarizes a presentation about digital industrial platforms in Europe. It discusses how platforms can strengthen European leadership in manufacturing by unlocking data, integrating technologies, and facilitating applications and services. It outlines the roles of platforms, provides examples, and maps platforms to the Reference Architecture Model Industry 4.0. The presentation emphasizes the need for Europe to shape the business-to-business platform landscape, address fragmentation, and set interoperability frameworks. It also reviews the European Commission's strategy and funding for digital manufacturing platforms and calls for collaboration to increase impact.
The document discusses the MIDIH collaboration model for digital innovation hubs. It addresses the challenges of collaborating between heterogeneous partners like consistency in governance and business models. The model defines a common service portfolio across categories like awareness, consulting and funding. It utilizes a catalog and online platform to facilitate cost/revenue calculation and service promotion at a network level. The document also examines MIDIH governance aspects like legal structures, financial resources, and lessons learned around competition, connecting governance to business models and revenue streams, and defining partners' roles and liabilities.
GARE du MIDIH Methods and Tools to enhance DIHs Digital Transformation powe... (MIDIH_EU)
Learn how networks of DIHs could structure a convincing value proposition for their SME ecosystem, how to perform a Service Portfolio analysis aimed at creating flexible, personalized Customer Journeys, and how to exploit manufacturing SMEs' Digital Transformation best practices. Sergio Gusmeroli, Research Coordinator, Politecnico di Milano.
Gare du MIDIH the EC focus on the DIHs network, eDIHs in Digital Europe Prog... (MIDIH_EU)
The document discusses the European Commission's focus on supporting a network of Digital Innovation Hubs (DIHs) across Europe. It outlines the European Digital Innovation Hubs (EDIHs) that will be part of the Digital Europe Programme and funded to help businesses and public institutions adopt digital technologies. The EDIHs will work with existing DIHs and form a network of up to 200 hubs to provide expertise on technologies like AI, HPC, and cybersecurity. The network will be coordinated by a Digital Transformation Accelerator to facilitate collaboration, training, and sharing of best practices among the DIHs.
The MIDIH project is a 36-month, €7.9 million effort coordinated by EIT Digital to support manufacturing SMEs' digital transformation through a network of digital innovation hubs, competency centers, and didactic factories across 12 European countries. The hubs provide services, technologies, skills training and an open source reference architecture to help SMEs implement digital platforms and pilots. MIDIH also developed a 6P migration methodology to help SMEs adopt new skills and jobs and established a data sovereignty framework to facilitate cross-border collaboration on its platforms.
The document summarizes a project between Linz Center of Mechatronics (LCM) and CEMTEC to integrate a cement plant pilot ball mill circuit with the FIWARE platform. The project involves setting up the FIWARE platform and integrating the pilot plant to acquire process data over several months. Data will then be analyzed and used to develop a theoretical and data-based process model in the form of an expert system. Key performance indicators and next steps are outlined. The overall goals are to demonstrate information exchange between plant sensors, PLCs and FIWARE, integrate plant data in the cloud, and visualize the data to help CEMTEC and LCM in process optimization and control.
The PGplant solution provides the Industrial Internet with an attractive facade based on the advanced Digital Twin concept. Process Genius' unique multi-layered 3D user interface product, called PGplant, integrates the data from various sources and thus replaces the multiple UIs of current IT systems.
The I3D product represents the digitalization of processes in an industrial company; it combines virtual reality (VR) and augmented reality (AR) to guide a worker through the process of training and the process of work execution.
MAMOC is a machine learning application that uses motion capture to provide digital process data and optimization for manual production stations. It uses an Intel RealSense camera to record depth video and identify objects, hand poses, and actions. The application was tested by assembling a speaker kit and showed potential to integrate human processes with digital workflows, optimize throughput and quality, and achieve an accuracy level of TRL 3-4. Further development is needed to improve action detection accuracy and integration could provide benefits for industrial engineering optimization projects.
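A hedged sketch of the camera side only, using the official pyrealsense2 bindings: it reads a depth frame and queries a distance, which is the raw input on top of which object, hand-pose, and action recognition such as MAMOC's would be built (that recognition layer is not shown here).

```python
# Read one depth frame from an Intel RealSense camera (illustrative sketch).
import pyrealsense2 as rs   # pip install pyrealsense2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance (in meters) to whatever is at the center of the image,
    # e.g. the workpiece or the operator's hand at a manual station.
    cx, cy = depth.get_width() // 2, depth.get_height() // 2
    print(f"distance at image center: {depth.get_distance(cx, cy):.3f} m")
finally:
    pipeline.stop()
```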
Best route beck et al-midih presentation oc2 (MIDIH_EU)
This presentation summarizes a project to optimize the manual parts picking process for tractor assembly at a Deutz-Fahr factory. The project developed a smartphone app to dynamically sort optimized picklists and provide real-time information to reduce picking times from 40 to 20 minutes, and errors from 2-4 to 1 per month. It was built using the MIDIH architecture in the cloud and implemented on-premise. Lessons from the successful experiment include observing users early, letting IT support business needs, and reducing on-site time through remote collaboration. The app will be implemented fully across the factory in the next quarters and may be expanded to other sites.
Smart Poly is a solution to connect your factory, to transform process data into high-value information, and to improve efficiency, quality consistency, and control of your process.
The experiment aimed to monitor energy and gas consumption on an aircraft parts production line. Sensors were installed to measure consumption of autoclaves and machines. Apache Flink processed streaming data and Orion stored it. Knowage visualized trends and provided indicators. The system identified optimization rules, applying a new recipe reducing energy 11.12%, gases 12.92%, and production time 8%, meeting KPI targets. COVID-19 impacted sensor installation but rules still provided decision support.
AllbeSmart - E robotic midih-presentation-oc2_demo_day (MIDIH_EU)
Design, develop and validate an Augmented Reality (AR) application for training-on-the-job and maintenance assist operations- An AllbeSmart experiment based on MIDIH Open Platform Architecture
MIDIH (Manufacturing Industry Digital Innovation Hubs) Modern Open-Source Approaches to Software for Automation Systems: Eclipse Arrowhead & Eclipse 4DIAC
This presentation by Yong Lim, Professor of Economic Law at Seoul National University School of Law, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
Career goals.pptx and their importance in real life (artemacademy2)
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
XP 2024 presentation: A New Look to Leadership (samililja)
Presentation slides from the XP2024 conference, Bolzano, IT. The slides describe a new view of leadership and combine it with anthro-complexity (aka Cynefin).
This presentation by OECD, OECD Secretariat, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
This presentation by Juraj Čorba, Chair of OECD Working Party on Artificial Intelligence Governance (AIGO), was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
Collapsing Narratives: Exploring Non-Linearity • a micro report by Rosie Wells (Rosie Wells)
Insight: In a landscape where traditional narrative structures are giving way to fragmented and non-linear forms of storytelling, there lies immense potential for creativity and exploration.
'Collapsing Narratives: Exploring Non-Linearity' is a micro report from Rosie Wells.
Rosie Wells is an Arts & Cultural Strategist uniquely positioned at the intersection of grassroots and mainstream storytelling.
Their work is focused on developing meaningful and lasting connections that can drive social change.
Please download this presentation to enjoy the hyperlinks!
This presentation by Professor Alex Robson, Deputy Chair of Australia’s Productivity Commission, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend... (Suzanne Lagerweij)
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
This presentation by Nathaniel Lane, Associate Professor in Economics at Oxford University, was made during the discussion “Pro-competitive Industrial Policy” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/pcip.
This presentation was uploaded with the author’s consent.
This presentation by Thibault Schrepel, Associate Professor of Law at Vrije Universiteit Amsterdam, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
2. About Us
• Established in 2011, 5 employees
• Development of Level 3 software for the manufacturing industry
• Transitioning from a service-based integrator subcontracting for market leaders to a knowledge-based product supplier
• Main focus on discrete manufacturing solutions, ranging from train assembly (China South Railway, Qingdao, China) through plastic moulding (Bianor, Bialystok, Poland) to timing belts (Gates, Legnica, Poland)
4. Challenge
For large-scale production companies, minimizing scrap and improving yield are critical to maintaining operational margins and keeping customers satisfied.
Existing predictive maintenance solutions have several disadvantages:
• Companies often face hard-to-detect failure conditions, while existing solutions offer only classical value-based alerting or supervised AI learning
• Adapting a solution to a unique customer installation, operating conditions, and individual device configurations results in long-running and expensive projects
• PLCs need to be reprogrammed
• Infrastructure investments (e.g. network) are required
5. Solution
For manufacturing companies that want to introduce step-wise improvements to reduce the number of unpredicted failures of assets based on electric motors, APEMAN is a predictive-maintenance system that detects potential failures of electric motors well in advance.
Unlike competing products, APEMAN:
• is capable of predicting failures that have never been observed, either in the customer installation or in the device category
• operates without customer-specific setup
• does not require any infrastructure investment or PLC programming to start operating
• has a scalable number of monitoring sensors, providing broad replication of the environment for the AI model
7. Architecture
• The solution is based on the Apache toolchain.
• The data acquisition & persistence module, based on Node-RED and InfluxDB, is responsible for collecting and storing sensor measurements.
• The data & trained-model bi-directional synchronization module, using Flask and NGINX, is responsible for sending measurement data to the server and retrieving trained neural networks (see the sketch after this list).
• The failure detection module compares the incoming measurements with pre-trained failure models, allowing imminent failures to be predicted.
• The client app is responsible for preparing the time-series data, alerting, and visualization.
• On the server side, the data & model synchronization module is responsible for collecting data and retraining the model periodically.
• The failure detection training module persists the training data and re-trains the neural networks once new data is available.
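As an illustration of the data & model synchronization module described above, the following is a minimal Flask sketch. The endpoint paths, file locations, and in-memory buffer are assumptions made for illustration, not the actual APEMAN API.

# Minimal sketch (assumed endpoint names and paths, not the actual APEMAN API)
# of the Flask side of the data & trained-model synchronization module.
from pathlib import Path
from flask import Flask, jsonify, request, send_file

app = Flask(__name__)
MODEL_PATH = Path("models/latest_autoencoder.h5")  # assumed model location
MEASUREMENT_BUFFER = []  # stand-in for the InfluxDB/Cassandra persistence layer

@app.route("/measurements", methods=["POST"])
def upload_measurements():
    # The edge device (ISAB) pushes a batch of sensor readings to the server.
    batch = request.get_json(force=True)
    MEASUREMENT_BUFFER.extend(batch)
    return jsonify({"accepted": len(batch)})

@app.route("/model", methods=["GET"])
def download_model():
    # The edge device pulls the most recently retrained neural network.
    return send_file(MODEL_PATH, as_attachment=True)

if __name__ == "__main__":
    # In deployment, NGINX would sit in front of this app as a reverse proxy.
    app.run(host="0.0.0.0", port=5000)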
8. ISAB
• The Integrated Sensing and Analysis Box (ISAB) is the edge device. It is responsible for collecting and processing measurements from sensors attached to individual machines, as well as for reasoning with the pre-trained deep neural network.
• Developing a single edge device of this kind ensures portability and allows it to be installed on different machines, collect data, retrain the model offline, and monitor machines as needed.
• The box is based on a Raspberry Pi 4 with specialized sensors attached: current and voltage, inverter frequency, temperature, and vibration.
• Additional sensors measuring different modalities can easily be attached, further enhancing the capabilities of the device.
9. APEMAN MIDIH components
Component name – Role
• Apache Kafka (+ ZooKeeper) – Redundant message broker for transferring sensor data from multiple edge devices to the server (sketch below)
• Apache Spark – Data pre-processing and AI learning management
• Apache Zeppelin – AI notebook visualization
• Cassandra – Training data storage
• Grafana – Server and edge-device monitoring visualization
• TensorFlow (+ Keras) – Neural network for unsupervised learning
• NGINX – Reverse proxy; hosting of the ISAB and server web applications
• Logstash – Log gathering
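To show how sensor data could reach the server-side Kafka broker listed above, here is a hedged producer sketch. The broker address, topic name ("apeman.sensor-data"), device identifier, and field names are illustrative assumptions, not the project's actual configuration.

# Illustrative edge-side Kafka producer (kafka-python); names are assumptions.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.local:9092",  # placeholder broker address
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

reading = {
    "device_id": "isab-01",     # hypothetical edge-device identifier
    "timestamp": time.time(),
    "current_a": 12.4,          # example measurement values only
    "voltage_v": 398.7,
    "inverter_freq_hz": 49.9,
    "temperature_c": 41.2,
    "vibration_rms": 0.031,
}
producer.send("apeman.sensor-data", value=reading)
producer.flush()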
10. Other APEMAN components
Component name – Role
• Flask – REST-based data workflow automation & delivery (e.g. Node-RED <-> InfluxDB, sensor data delivery to the web app); offline functionality; other minor business logic
• Redis Queue + Worker – Queueing TensorFlow scoring jobs and processing them in the background with workers; required to keep ISAB resources free for sensor data gathering and to prevent parallel TensorFlow scoring calculations (sketch below)
• Kapacitor – Data processing for creating alerts and detecting anomalies based on TensorFlow calculations
• Telegraf – Agent for collecting, processing, aggregating, and writing metrics; used for collecting non-sensor data (e.g. HTTP pings, CPU readings, memory readings)
• Node-RED – Sensor data gathering
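The Redis Queue entry above can be illustrated with a short sketch of how scoring jobs might be handed to a background worker. The queue name and the scoring module/function (scoring.score_window) are hypothetical, not the actual APEMAN code.

# Sketch of queueing TensorFlow scoring jobs with Redis Queue (RQ) so the main
# process stays free for sensor data gathering; names are assumptions.
from redis import Redis
from rq import Queue

from scoring import score_window  # hypothetical module wrapping the TF model

queue = Queue("scoring", connection=Redis())

def enqueue_scoring(window):
    # Hand a window of measurements to a background worker for scoring.
    return queue.enqueue(score_window, window)

# A worker process would be started separately, e.g.:
#   rq worker scoring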
13. Autoencoders for anomaly detection
• An autoencoder copies input values to output values and ignores "noise"
• The important part is the hidden core layer in the middle, which extracts the essential information
• Encoding and decoding are both part of the network (a minimal model sketch follows below)
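To make the idea concrete, here is a minimal Keras autoencoder sketch. The layer sizes, feature count, and training data are illustrative assumptions, not the actual APEMAN model.

# Minimal Keras autoencoder for sensor measurements (illustrative sizes only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 5  # e.g. current, voltage, inverter frequency, temperature, vibration

inputs = keras.Input(shape=(n_features,))
encoded = layers.Dense(8, activation="relu")(inputs)
core = layers.Dense(2, activation="relu")(encoded)      # compressed "core" layer
decoded = layers.Dense(8, activation="relu")(core)
outputs = layers.Dense(n_features, activation="linear")(decoded)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train only on data recorded during normal operation.
normal_data = np.random.rand(1000, n_features)  # placeholder for real measurements
autoencoder.fit(normal_data, normal_data, epochs=20, batch_size=32, verbose=0)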
14. Dimensionality reduction to find outliers
• An early application of autoencoders was dimensionality reduction
• They can perform better than PCA because they can learn non-linear transformations
• Reducing dimensionality identifies the main patterns and reveals outliers
• Outlier detection is thus a by-product of dimensionality reduction
15. How to detect outliers
• The number of input variables equals the number of output variables
• When trying to reproduce the input, MSE is used as the loss function
• During training the model learns "normal" data and compresses it inside the core layer
• When an anomaly is sent through the model, the model fails to reproduce it
• It is necessary to find the right threshold to differentiate between valid input and anomalies (see the scoring sketch below)
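Building on the bullets above, here is a hedged sketch of threshold-based scoring. Using a high percentile of reconstruction errors on normal data as the threshold is one common heuristic, and the file paths are placeholders rather than the project's actual choices.

# Threshold-based anomaly detection on autoencoder reconstruction error.
import numpy as np
from tensorflow import keras

# Assumed artifacts (placeholder paths): a trained autoencoder and measurements
# recorded during normal operation.
autoencoder = keras.models.load_model("models/latest_autoencoder.h5")
normal_data = np.load("data/normal_measurements.npy")

def reconstruction_errors(model, data):
    # Per-sample mean squared error between input and reconstruction.
    reconstructed = model.predict(data, verbose=0)
    return np.mean((data - reconstructed) ** 2, axis=1)

# Calibrate the threshold on data known to be normal.
threshold = np.percentile(reconstruction_errors(autoencoder, normal_data), 99)

def is_anomaly(model, window):
    # Flag measurements the model fails to reproduce within the threshold.
    return reconstruction_errors(model, window) > threshold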
17. Achieved results
• The experiment ended in success despite the unforeseen difficulties caused by the outbreak of the COVID-19 pandemic.
• The experiment proved the usability of the MIDIH Reference Architecture for developing AI-capable systems that operate both in real time and offline.
• The involved companies are committed to further development of the system, as they see an attractive and untapped market niche for it.
18. KPIs
• All the technical KPIs were delivered (80 h of device operating time, data gathered from 2 machines, 95% of failures identified).
• The business KPIs were achieved as well: 8 companies took part in hybrid (remote/onsite) workshops, and 3 companies signed letters of interest confirming their willingness to become early adopters and rent the next iteration for test trials in their facilities.
• The TRL of the presented solution was too low to attract more letters of interest, since there was no immediately available product.
19. BUSINESS IMPACT
• The experiment allowed us to investigate the capabilities of the Apache stack. The well-thought-through MIDIH architecture saved us effort in designing the system and allowed us to focus on testing the software components.
• The experiment uncovered many potential market opportunities and allowed us to identify the technical means to address them.
• MASTA sees the system as a breakthrough product that will support the intended transition from a service-based integrator to a knowledge-based product provider, and is willing to continue investing in its further development.
• During the workshops, potential customers pointed out that identifying the potential causes of imminent failures creates significant added value. In the next development iteration, supervised learning will be used, with data annotations containing information on the causes of failures.