The document discusses using digital twins and Industry 4.0 concepts to synchronize maintenance and operations decisions in real time. It outlines how digital twins of maintenance systems and production processes could interact to determine the optimal time for maintenance based on current degradation levels, future operational loads, and costs. While the models presented are theoretical, they demonstrate the need for digital replicas that are updated continuously from sensor data and can provide real-time feedback on scenarios. The goal is to minimize total cost by optimizing the tradeoff between maintenance costs, failure costs from downtime, and the costs of relaxing production schedules.
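The cost tradeoff described above can be sketched with a toy planner; the cost functions, parameters, and degradation model below are hypothetical illustrations, not taken from the document:

```python
# Toy digital-twin maintenance planner (hypothetical cost model, for
# illustration only): pick the maintenance day t that minimizes the
# tradeoff between maintaining too early (wasted remaining life) and
# too late (rising expected failure/downtime cost).

def total_cost(t, horizon, failure_prob,
               early_cost_per_day=12.0, failure_cost=1000.0):
    """Expected cost of performing maintenance on day t."""
    wasted_life = early_cost_per_day * (horizon - t)   # maintaining early wastes useful life
    expected_failure = failure_prob(t) * failure_cost  # waiting risks failure and downtime
    return wasted_life + expected_failure

def best_maintenance_day(horizon, failure_prob):
    """Scan the planning horizon and return the cheapest day."""
    return min(range(1, horizon + 1),
               key=lambda t: total_cost(t, horizon, failure_prob))

if __name__ == "__main__":
    prob = lambda t: min(1.0, 0.002 * t * t)  # degradation grows with time
    day = best_maintenance_day(30, prob)
    print(day, total_cost(day, 30, prob))
```

Maintaining early wastes remaining useful life while waiting raises the expected failure cost, so the minimum lands at an interior day; a continuously updated digital twin would supply the degradation curve `failure_prob` from sensor data.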
Making Model-Driven Verification Practical and Scalable: Experiences and Less... (Lionel Briand)
The document discusses experiences and lessons learned from making model-driven verification practical and scalable. It describes several projects collaborating with industry partners to develop model-based solutions for verification. Key challenges addressed include achieving applicability for engineers, scalability to large systems, and developing solutions informed by real-world problems. Lessons learned emphasize the importance of collaborative applied research, defining problems in context, and validating solutions realistically.
Conference: 42nd Annual Industrial Electronics Conference (IECON2016), Florence, Italy – October 24-27, 2016
Paper title: "A solution for processing supply chain events within ontology-based descriptions"
Authors: Borja Ramis Ferrer, Wael M. Mohammed, José L. Martinez Lastra
Show and Tell - Data and Digitalisation, Digital Twins.pdf (SIF Ofgem)
The document summarizes several projects presented at a webinar on the Strategic Innovation Fund's "Data & Digitalisation" challenge.
- The EN-twin-e project aims to develop a digital twin of the electricity distribution network to provide greater visibility of distributed energy resources. This will help the ESO make more effective balancing decisions.
- The Digi-GIFT project seeks to build an integrated cybersecurity system and shared data infrastructure. This will help manage data quality, integrity and security while supporting applications like digital twins.
- Cost-benefit analyses were conducted for a shared data infrastructure, an integrated cyber intrusion defense system, and quantifying flexibility services. The analyses found savings from data sharing and
Overview of the Inria Bordeaux - Sud-Ouest research centre (Inria)
The document summarizes research activities at Inria Bordeaux – Sud-Ouest Research Centre in Bordeaux, France. It discusses the centre's 173 project teams, 1,300 doctoral students, 120 startups created, and research focus areas including modeling, high-performance computing, uncertainty management and optimization, modeling and simulation for health and biology, and human-computer interaction and visualization. It also outlines the centre's partnerships with academic and industrial organizations and its role in transferring research to companies.
Towards CIM Compliant Model-Based Cyber-Physical Power System Design and Simu... (Francisco José Gómez López)
The document describes a presentation given at the 7th International Conference on Real-Time Simulation Technologies in Montreal, June 9-12, 2014. The presentation concerns using the Modelica modeling language to develop cyber-physical power system models compliant with the Common Information Model (CIM) standard, in order to enable model-based design and simulation of power systems that incorporate information and communication technologies. It outlines key concepts of hybrid and cyber-physical systems, modeling approaches using the Unified Modeling Language (UML) and CIM, and the development of power system component models in Modelica.
Towards CIM-Compliant Model-Based Cyber-Physical Power System Design and Simu... (Luigi Vanfretti)
Compliance with grid data exchange standards (i.e., CIM) can allow for sustainable software development in power systems if open, equation-based modeling languages and simulation standards are exploited. Together with my PhD student Francisco José Gómez López, we will be presenting our vision and recent work, carried out together with Svein Olsen, at RT-2014: "Towards CIM-Compliant Model-Based Cyber-Physical Power System Design and Simulation using Modelica".
RECAP at ETSI Experiential Network Intelligence (ENI) Meeting (RECAP Project)
This presentation was delivered by Johan Forsman (Tieto), Jörg Domaschka (UULM) and Paolo Casari (IMDEA Networks) at the ETSI Experiential Network Intelligence (ENI) Meeting in Warsaw, Poland, on April 12th, 2019. The ETSI Experiential Networked Intelligence Industry Specification Group (ENI ISG) works on defining a Cognitive Network Management architecture that uses Artificial Intelligence (AI) techniques and context-aware policies to adjust offered services based on changes in user needs, environmental conditions, and business goals. The intention is that the use of AI techniques in the network management system should solve some of the problems of future network deployment and operations. For more information, see https://www.etsi.org/technologies/experiential-networked-intelligence.
This document provides an overview of MEP 382: Design of Applied Measurement Systems course at the Faculty of Engineering. The course objectives are to enable students to specify, build, and use basic data acquisition systems to acquire and process laboratory or field data using LabVIEW. The course covers topics like signal conditioning, transduction, data acquisition, sensors, and instrumentation standards. It is taught by Drs. Maged Ghoneima and Mostafa Soliman with assistance from Eng. Ahmed Allam and Eng. Yehia Zakaria. Student performance will be evaluated through assignments, exams, projects, and class participation. The course aims to provide hands-on experience with measurement systems and encourage further study of underlying principles.
This is the presentation of the paper about the integration of artificial intelligence and the systems engineering lifecycle.
You can find more information in the following link: https://event.conflr.com/IS2019/sessiondetail_395325
This presentation is a keynote at the AI4SE International Workshop, exploring the challenges and opportunities of bringing Systems Engineering to the development of AI/ML functions for safety-critical systems.
Early diagnosis of infectious diseases and monitoring of community health are essential for delivering cost-effective healthcare solutions. Healthcare delivery in developing countries faces many challenges: resources are insufficient to meet healthcare needs, and communities lack testing lab infrastructure. A solution must therefore be developed to safeguard the impacted communities. Point-Of-Care (POC) testing has been shown to be one way to resolve this crisis in healthcare delivery. The term 'POC testing' has been used in many contexts, for example a use case where a passenger in an autonomous vehicle is driven to the nearest hospital for emergency treatment after an alert from the car's health monitoring system. The POC that this research explores is for a remote patient who does not have access to lab testing facilities because of the social and economic conditions in that community.
POC testing is a mission-critical process in which the patient conducts tests in a home environment, as opposed to a laboratory facility, and it needs a supporting communication architecture, which the research refers to as the POCT system. The research examined the requirements and made recommendations concerning critical aspects: requirements gathering for the POCT system, design and development methods for a secure communication architecture, a suitable strategy for testing the system, and project management of the end-to-end process.
Data security is another critical factor for healthcare data. The research produced a secure data communication architecture and processes for the system. Secure data storage for the test results is needed, and the research developed an architecture for cloud storage. Deployment and expansion of the secure communication system are essential for practical use cases based on community needs, and a scalable communication architecture was developed to support this objective.
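One element of such a secure communication architecture, integrity protection of results in transit, can be sketched as follows; the payload fields, key handling, and function names are hypothetical illustrations, not the architecture from the research:

```python
import hashlib
import hmac
import json

# Hypothetical sketch: integrity-protect a POC test result before it is
# sent to cloud storage, using an HMAC-SHA256 tag over the serialized
# payload. Key provisioning and encryption are out of scope here.

SHARED_KEY = b"demo-key-not-for-production"

def sign_result(result: dict, key: bytes = SHARED_KEY) -> dict:
    """Serialize a test result and attach an HMAC-SHA256 tag."""
    body = json.dumps(result, sort_keys=True).encode("utf-8")
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": body.decode("utf-8"), "tag": tag}

def verify_result(message: dict, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag server-side and compare in constant time."""
    expected = hmac.new(key, message["body"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

if __name__ == "__main__":
    msg = sign_result({"patient_id": "anon-42", "test": "glucose", "value": 5.4})
    print(verify_result(msg))
```

Any tampering with the stored or transmitted body invalidates the tag, which is one building block a cloud-storage architecture for test results would need alongside encryption and access control.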
The RaPId Toolbox for Parameter Identification and Model Validation: How Mode... (Luigi Vanfretti)
RaPId is a recursive acronym for Rapid Parameter Identification. The toolbox was built within WP3 of the FP7 iTesla project. It uses Modelica models compiled in FMUs compliant with the FMI standard, which are imported into Simulink using the FMI Toolbox for Matlab/Simulink from Modelon. Within the Matlab environment, we have developed a plug-in architecture that lets the user choose many different (or even their own) optimization solvers for parameter calibration. Not to mention, you can choose any simulation solver available in Simulink (not just trapezoidal integration!)
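The plug-in solver idea can be illustrated with a minimal sketch: the calibration loop only needs a callable optimizer, so users can swap in their own. The toy model, data, and grid-search solver below are invented for illustration and are not part of RaPId:

```python
# Minimal sketch of a plug-in optimizer architecture for parameter
# identification, in the spirit of RaPId: the calibration loop is
# agnostic to which solver is used. Model and data are invented.
import math

def simulate(params, times):
    """Toy 'model': exponential decay y = a * exp(-b * t)."""
    a, b = params
    return [a * math.exp(-b * t) for t in times]

def sse(params, times, measured):
    """Sum of squared errors between simulation and measurements."""
    return sum((s - m) ** 2
               for s, m in zip(simulate(params, times), measured))

def grid_search_solver(objective, bounds, steps=50):
    """One example plug-in solver: brute-force grid search over bounds."""
    (a_lo, a_hi), (b_lo, b_hi) = bounds
    best, best_cost = None, float("inf")
    for i in range(steps + 1):
        a = a_lo + (a_hi - a_lo) * i / steps
        for j in range(steps + 1):
            b = b_lo + (b_hi - b_lo) * j / steps
            cost = objective((a, b))
            if cost < best_cost:
                best, best_cost = (a, b), cost
    return best

def calibrate(times, measured, solver, bounds):
    """Calibration loop: any solver with this signature can plug in."""
    return solver(lambda p: sse(p, times, measured), bounds)

if __name__ == "__main__":
    times = [0.0, 0.5, 1.0, 1.5, 2.0]
    measured = [2.0 * math.exp(-0.5 * t) for t in times]  # true params (2.0, 0.5)
    print(calibrate(times, measured, grid_search_solver, [(1.0, 3.0), (0.1, 1.0)]))
```

In RaPId itself the "simulate" step runs an FMU inside Simulink rather than a closed-form function, but the separation between the objective and the interchangeable solver is the same design idea.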
The document describes opportunities for Honours projects in 2007 at the Computer Sciences Lab and NICTA. It discusses four research groups focused on areas like artificial intelligence, machine learning, logic and computation, and computer vision. Each group has several potential project topics listed, such as planning under uncertainty, constraint satisfaction, document analysis, and verified microkernel development. Contact information is provided for over 15 specific projects that students could get involved in.
Design and Experiment Platform for Industrial Wireless Systems (Ryan)
Cite This Work: Peng Hu. "Design and Experiment Platform for Industrial Wireless Systems", The 10th Annual UNENE I&C Workshop, Toronto, Canada, Oct. 24th, 2014.
This presentation introduces an experiment platform for industrial wireless systems done by CMC in collaboration with Western University.
Modeling and Simulation of Electrical Power Systems using OpenIPSL.org and Gr... (Luigi Vanfretti)
Title:
Modeling and Simulation of Electrical Power Systems using OpenIPSL.org and GridDyn
Presenters:
Luigi Vanfretti (RPI) & Philip Top (LLNL)
luigi.vanfretti@gmail.com, top1@llnl.gov
Abstract:
The Modelica language, being standardized and equation-based, has proven valuable for model exchange, simulation, and even model validation applications in actual power systems. These important features have now been recognized by the European Network of Transmission System Operators, which has adopted the Modelica language for dynamic model exchange in the Common Grid Model Exchange Standard (v2.5, Annex F).
Following previous FP7 project results, within the ITEA 3 openCPS project, the presenters have continued the efforts of using the Modelica language for power system modeling and simulation, by developing and maintaining the OpenIPSL library: https://github.com/SmarTS-Lab/OpenIPSL
This seminar first gives an overview of the origins of OpenIPSL and its models, contrasts it with typical power system tools, and gives an introduction to the OpenIPSL library. The new project features that help with OpenIPSL maintenance (use of continuous integration, regression testing, documentation, etc.) are also described.
Finally, the seminar will present current work at LLNL that exploits OpenIPSL in coordination with other tools, including ongoing work integrating OpenIPSL models into GridDyn, an open-source power system simulation tool, as well as demos of the use of the OpenIPSL library in GridDyn.
Bios:
Luigi Vanfretti (SMIEEE’14) obtained the M.Sc. and Ph.D. degrees in electric power engineering at Rensselaer Polytechnic Institute, Troy, NY, USA, in 2007 and 2009, respectively.
He was with KTH Royal Institute of Technology, Stockholm, Sweden, as Assistant Professor (2010-2013), and as Associate Professor (Tenured) and Docent (2013-August 2017), where he led the SmarTS Lab and research group. He also worked at Statnett SF, the Norwegian electric power transmission system operator, as a consultant (2011-2012) and as Special Advisor in R&D (2013-2016).
He joined Rensselaer Polytechnic Institute in August 2017, to continue to develop his research at ALSETLab: http://alsetlab.com
His research interests are in the area of synchrophasor technology applications; and cyber-physical power system modeling, simulation, stability and control.
Philip Top (Lawrence Livermore National Laboratory)
PhD 2007, Purdue University. Currently a Research Engineer at Lawrence Livermore National Laboratory in Livermore, CA. Philip has been involved in several projects connected with the DOE effort on Grid Modernization, including projects on modeling and simulation, co-simulation, and smart grid data analytics. He is the principal developer of the open-source power system simulation tool GridDyn and a key contributor to the HELICS open-source co-simulation framework.
Scalable and Cost-Effective Model-Based Software Verification and Testing (Lionel Briand)
This document describes research on using model-based techniques to generate stress test cases for embedded software. A constraint programming approach is used to model the software system, hardware platform, and performance requirements. The model includes properties of threads, activities, and the scheduling policy. The approach searches for values of tunable parameters, such as delays, that maximize CPU usage while satisfying constraints, in order to evaluate the system under worst-case conditions and help verify that it meets safety standards. The generated test cases effectively stress the system by selecting parameter values that guide the execution towards maximum resource consumption.
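A toy version of this search-based stress-testing idea can be sketched as follows; the task set, delay constraints, and utilization measure are invented for illustration (the research itself uses constraint programming over a detailed scheduling model):

```python
import random

# Toy sketch of search-based stress test generation: find arrival delays
# for a set of tasks that maximize simulated CPU demand inside one time
# window, subject to a simple per-task constraint on each delay.

TASKS = [  # (duration, allowed delay range)
    (3, (0, 5)),
    (2, (0, 5)),
    (4, (0, 5)),
]
WINDOW = (4, 8)  # the time window we want the tasks to pile up in

def overlap(start, duration, window):
    """Time units of a task's execution that fall inside the window."""
    lo, hi = window
    return max(0, min(start + duration, hi) - max(start, lo))

def cpu_demand(delays):
    """Total task execution time landing in the stress window."""
    return sum(overlap(d, dur, WINDOW)
               for d, (dur, _) in zip(delays, TASKS))

def random_search(iterations=2000, seed=0):
    """Simple search strategy: sample feasible delays, keep the best."""
    rng = random.Random(seed)
    best, best_demand = None, -1
    for _ in range(iterations):
        delays = [rng.randint(lo, hi) for _, (lo, hi) in TASKS]
        demand = cpu_demand(delays)
        if demand > best_demand:
            best, best_demand = delays, demand
    return best, best_demand

if __name__ == "__main__":
    delays, demand = random_search()
    print(delays, demand)
```

The search returns the delay assignment that pushes the most execution time into the window, which is the worst-case condition a stress test case would then exercise on the real system.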
Marta de Mesa and Jesus Gironda, from Telvent, present the possibilities of applying Big Data beyond the private sector, for example in forecasting and planning internal resources at universities.
This presentation was given at TSIUC'14, held at the Universitat Autònoma de Barcelona on December 2, 2014, under the title "Reptes en Big Data a la universitat i la Recerca" (Challenges in Big Data at universities and in research).
This document summarizes a method for fast modeling of industrial objects using stereo vision and projectors to enable 6D pose estimation without requiring CAD models. The method segments objects from a reconstructed scene using supervoxel segmentation and local convexity. Initial alignment of models from different views is achieved through PCA or SAC-IA. Experiments show the reconstructed models are 52-87% complete with 1-4.8mm accuracy, enabling object pose estimation with a recognition rate of 62% using the reconstructed models compared to 54% using ground truth models. The method provides a feasible way to model unknown industrial objects for robot grasping without CAD models.
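The PCA-based initial alignment step can be illustrated in 2D with a pure-Python sketch on invented points (the actual method aligns 3D reconstructions):

```python
import math

# 2D sketch of PCA-based initial alignment: rotate a point set so its
# principal axis lies along x. For a 2D covariance matrix the principal
# direction has the closed form 0.5 * atan2(2*cxy, cxx - cyy).

def principal_angle(points):
    """Angle of the principal axis of a 2D point cloud, in radians."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)

def align(points):
    """Rotate points by -principal_angle so the long axis is horizontal."""
    a = -principal_angle(points)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

if __name__ == "__main__":
    # An elongated cluster along the 45-degree diagonal.
    pts = [(t, t + 0.1 * ((-1) ** i)) for i, t in enumerate(range(10))]
    print(round(math.degrees(principal_angle(pts)), 1))
```

Aligning two partial reconstructions to their respective principal axes gives a rough common frame, which finer registration (as in the method summarized above) can then refine.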
The objective of this presentation is to present some challenges and opportunities in the integration of Systems Engineering and the Artificial Intelligence/Machine Learning model lifecycle.
This document provides a general introduction to the Module I on Artificial Intelligence in Smart Grids taught jointly by Kaunas University of Technology and Dresden University of Technology. The module introduces key machine learning methods and their applications in smart grids, including load and renewable energy forecasting, consumer behavior analysis, and predictive maintenance. It aims to equip students with the skills to create and validate AI models to solve practical problems in areas like power quality monitoring, energy management, and grid stability. The universities contribute expertise in electrical engineering, applied AI, and research experience with smart grid projects.
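As a minimal illustration of the load-forecasting theme, a linear-trend fit by ordinary least squares; the hourly load series below is invented, not course material:

```python
# Toy load forecast: fit a linear trend y = a + b*t to historical hourly
# load via the closed-form least-squares solution, then extrapolate one
# step ahead. Real smart-grid forecasters use far richer ML models.

def fit_linear(ts, ys):
    """Closed-form ordinary least squares for y = a + b*t."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys)) \
        / sum((t - mean_t) ** 2 for t in ts)
    a = mean_y - b * mean_t
    return a, b

def forecast(ts, ys, t_next):
    """Extrapolate the fitted trend to time t_next."""
    a, b = fit_linear(ts, ys)
    return a + b * t_next

if __name__ == "__main__":
    hours = [0, 1, 2, 3, 4]
    load_mw = [100.0, 103.0, 105.9, 109.1, 112.0]  # gently rising demand
    print(round(forecast(hours, load_mw, 5), 2))
```

The same fit-then-predict structure underlies the forecasting applications the module covers; swapping the linear model for a neural network or gradient-boosted trees changes the fit step, not the workflow.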
New Innovative Additive Manufacturing processes (KTN)
The document discusses several new additive manufacturing processes and projects. It begins with Smartdrop, a non-contact patterned coating technology that jets glue to apply functional fluids without waste. It then discusses the UK EB Additive Manufacturing Platform project to develop complex geometries and advanced materials using electron beam wire additive manufacturing. Finally, it summarizes the RoboWAAM project which involves developing a large scale robotic production platform for industrial wire arc additive manufacturing applications with a build size of meters.
This document outlines the syllabus for a 15-week cloud computing course. The course covers topics such as virtual machines, virtual private clouds, cloud services, elastic compute service, auto scaling, object storage, relational data service, cloud security, Kubernetes, and cloud platforms for AI. Students will complete assignments, a midterm exam, final exam, and capstone project. Assessment is based on attendance, midterm exam, assignments/project, and final exam.
REAL-TIME SIMULATION TECHNOLOGIES FOR POWER SYSTEMS DESIGN, TESTING, AND ANAL... (Jithin T)
This is the ppt containing the key elements of the IEEE research journal article "REAL-TIME SIMULATION TECHNOLOGIES FOR POWER SYSTEMS DESIGN, TESTING, AND ANALYSIS".
Tulasi has experience in technical program management, system architecture, software development, and test and measurement instruments. She has worked on projects involving mobile communication systems, wireless networks, transit automation, aerospace systems, medical devices, and oil exploration equipment. Tulasi led the development of several products and systems, including a medical centrifuge, pipeline communication system, and environmental controls for an F-22 aircraft. Her PhD research involved developing a secure cloud-based framework for point-of-care medical testing systems.
www.ivpower.com
IVPower is a software package dedicated to real-time monitoring of transmission and distribution grids.
IVPower provides essential information about power system disturbances for control and maintenance purposes.
Using mainly disturbance records and event logs from protection devices, IVPower offers innovative and easy-to-use web-based applications dedicated to operators in control centers, post-mortem analysis experts, and asset managers.
Learn SQL from Basic Queries to Advanced Queries (manishkhaire30)
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
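The progression from basic to advanced queries can be sketched with a small self-contained example; the `sales` table and its rows are invented for illustration:

```python
import sqlite3

# Self-contained sketch: a basic filtered aggregation, then a more
# advanced query with a subquery, against an invented in-memory table.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('north', 'widget', 120.0),
        ('north', 'gadget',  80.0),
        ('south', 'widget', 200.0),
        ('south', 'gadget',  50.0);
""")

# Basic: filter + aggregate.
total_north = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'north'"
).fetchone()[0]

# Advanced: regions whose total exceeds the average regional total.
above_avg = [row[0] for row in conn.execute("""
    SELECT region
    FROM sales
    GROUP BY region
    HAVING SUM(amount) > (
        SELECT AVG(region_total) FROM (
            SELECT SUM(amount) AS region_total
            FROM sales GROUP BY region
        )
    )
""")]

print(total_north, above_avg)
```

The first query is the "foundations" level (filtering and aggregation); the second nests a derived table inside a `HAVING` clause, the kind of composition the advanced-queries section builds toward.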
Global Situational Awareness of A.I. and Where It's Headed (vikram sood)
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Scalable and Cost-Effective Model-Based Software Verification and TestingLionel Briand
This document describes research on using model-based techniques to generate stress test cases for embedded software. A constraint programming approach is used to model the software system, hardware platform, and performance requirements. The model includes properties of threads, activities, and the scheduling policy. The approach searches for values of tunable parameters, such as delays, that maximize CPU usage while satisfying constraints, in order to evaluate the system under worst-case conditions and help verify that it meets safety standards. The generated test cases effectively stress the system by selecting parameter values that guide the execution towards maximum resource consumption.
Marta de Mesa i Jesus Gironda, de Telvent, presenten les possibilitats d'aplicar el Big Data més enllà del sector privat. Per exemple, en la previsió i planificació de recursos interns a les universitats.
Aquesta presentació ha tingut lloc a la TSIUC'14, celebrada a la Universitat Autònoma de Barcelona el passat 2 de desembre de 2014, sota el títol "Reptes en Big Data a la universitat i la Recerca".
This document summarizes a method for fast modeling of industrial objects using stereo vision and projectors to enable 6D pose estimation without requiring CAD models. The method segments objects from a reconstructed scene using supervoxel segmentation and local convexity. Initial alignment of models from different views is achieved through PCA or SAC-IA. Experiments show the reconstructed models are 52-87% complete with 1-4.8mm accuracy, enabling object pose estimation with a recognition rate of 62% using the reconstructed models compared to 54% using ground truth models. The method provides a feasible way to model unknown industrial objects for robot grasping without CAD models.
The objective of this presentation to present some challenges and opportunities in the integration of Systems Engineering and the Artificial Intelligence/Machine Learning model lifecycle.
This document provides a general introduction to the Module I on Artificial Intelligence in Smart Grids taught jointly by Kaunas University of Technology and Dresden University of Technology. The module introduces key machine learning methods and their applications in smart grids, including load and renewable energy forecasting, consumer behavior analysis, and predictive maintenance. It aims to equip students with the skills to create and validate AI models to solve practical problems in areas like power quality monitoring, energy management, and grid stability. The universities contribute expertise in electrical engineering, applied AI, and research experience with smart grid projects.
New Innovative Additive Manufacturing processes KTN
The document discusses several new additive manufacturing processes and projects. It begins with Smartdrop, a non-contact patterned coating technology that jets glue to apply functional fluids without waste. It then discusses the UK EB Additive Manufacturing Platform project to develop complex geometries and advanced materials using electron beam wire additive manufacturing. Finally, it summarizes the RoboWAAM project which involves developing a large scale robotic production platform for industrial wire arc additive manufacturing applications with a build size of meters.
This document outlines the syllabus for a 15-week cloud computing course. The course covers topics such as virtual machines, virtual private clouds, cloud services, elastic compute service, auto scaling, object storage, relational data service, cloud security, Kubernetes, and cloud platforms for AI. Students will complete assignments, a midterm exam, final exam, and capstone project. Assessment is based on attendance, midterm exam, assignments/project, and final exam.
REAL-TIME SIMULATION TECHNOLOGIES FOR POWER SYSTEMS DESIGN, TESTING, AND ANAL...Jithin T
This is the ppt that contains effective elementsof the IEEE research journel "REAL-TIME SIMULATION TECHNOLOGIES FOR POWER
SYSTEMS DESIGN, TESTING, AND ANALYSIS"
Tulasi has experience in technical program management, system architecture, software development, and test and measurement instruments. She has worked on projects involving mobile communication systems, wireless networks, transit automation, aerospace systems, medical devices, and oil exploration equipment. Tulasi led the development of several products and systems, including a medical centrifuge, pipeline communication system, and environmental controls for an F-22 aircraft. Her PhD research involved developing a secure cloud-based framework for point-of-care medical testing systems.
www.ivpower.com
IVPower is a software dedicated to real-time monitoring of transmission and distribution grids.
IVPower provides essential information about power system disturbances for control and maintenance purpose.
Using mainly disturbance records and event logs from protection devices, IVPower offers innovative and eay-to-use web-based applications dedicated to operators in control centers, post mortem analysis experts and asset managers.
Learn SQL from basic queries to Advance queriesmanishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
Global Situational Awareness of A.I. and where its headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be un-leashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data LakeWalaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will come present about related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs.This meetup was formerly Milvus Meetup, and is sponsored by Zilliz maintainers of Milvus.
End-to-end pipeline agility - Berlin Buzzwords 2024Lars Albertsson
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long time does it take for all downstream pipelines to be adapted to an upstream change," the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
Natural Language Processing (NLP), RAG and its applications .pptxfkyes25
1. In the realm of Natural Language Processing (NLP), knowledge-intensive tasks such as question answering, fact verification, and open-domain dialogue generation require the integration of vast and up-to-date information. Traditional neural models, though powerful, struggle with encoding all necessary knowledge within their parameters, leading to limitations in generalization and scalability. The paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" introduces RAG (Retrieval-Augmented Generation), a novel framework that synergizes retrieval mechanisms with generative models, enhancing performance by dynamically incorporating external knowledge during inference.
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You...Aggregage
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...
jVatnIndustry4_0.pptx
1. Norwegian University of Science and Technology
Industry 4.0 and real-time synchronization of operation and maintenance
Jørn Vatn
Department of Mechanical and Industrial Engineering
NTNU - Norwegian University of Science and Technology, Norway
2. Background
• Maintenance decisions need to take into account:
– The current state of the component, system, etc. (“real-time”)
– The cost of maintenance and failures, taking the current operational context into account (“real-time”)
– Future loads affecting the probability of failure
• These loads may be influenced by changing operational profiles
Can we use Industry 4.0 concepts to approach these challenges?
3. Objectives
• Elaborate on basic elements of Industry 4.0
– Digital twin, stochastic digital twin
– Real-time
– Interacting digital twins
• Present elements of a case study: towards a set of interacting digital twins
4. Industry 4.0 (Fourth Industrial Revolution)
• Industry 4.0 is a collective term, used particularly in manufacturing, to emphasize technologies and concepts of value-chain organization
• Related terms:
– Cyber-Physical Systems
– The Internet of Things
– Cloud computing
– Digital Twin
• Although the term originates from the manufacturing industry, the elements of Industry 4.0 are relevant for most businesses (Maintenance 4.0, Safety 4.0, Ship 4.0, …)
• The current usage of the term Industry 4.0 has been criticized as essentially meaningless
Focus on the elements rather than on the term Industry 4.0 as such!
5. IoT - Internet of Things
• The Internet of Things (IoT) is the network of items embedded with electronics, software, sensors, actuators, and network connectivity
• This enables the objects to connect and exchange data
IoT is what we need to connect
6. Cloud computing
• Cloud computing is an information technology paradigm that enables access to shared pools of configurable system resources
• Some presentations use the term Internet of Services (IoS) rather than cloud computing
With cloud computing we do not need to think about platforms, how to connect, etc.
7. Digital twin
• The digital twin refers to a digital replica of physical assets, processes and systems that can be used in real-time for control and decision purposes
– Computerized mathematical model (what we have done for years)
– Real-time, thanks to IoT
• In contrast to a physical asset, the digital twin can immediately respond to what-if inquiries
8. Gartner Top 10 (2019)
• The notion of a digital representation of real-world entities or systems is not new. Its heritage goes back to computer-aided design representations of physical assets or profiles of individual customers
• What differs in the latest iteration of digital twins is:
– The robustness of the models, with a focus on how they support specific business outcomes such as high reliability and efficient maintenance
– Digital twins’ link to the real world, potentially in real-time, for monitoring and control
– The application of advanced big-data analytics and AI/ML to drive new business opportunities
– The ability to interact with them and evaluate “what-if” scenarios
9. Draft DNVGL-RP-A204 Qualification and assurance of digital twins
• The capability of DTs can be ranked on a scale from 0 to 5:
0 - standalone
1 - descriptive
2 - diagnostic
3 - predictive
4 - prescriptive
5 - autonomy
10. Off-line digital models
• Remaining useful lifetime (RUL) models have long been available:
– First-principle approaches (white box)
– Probabilistic models (gray box)
– Data-driven models / ML / AI (black box)
• These models exist, but are mainly used as off-line models
11. The need for on-line models
• For predictive maintenance:
1. Anomaly detection
2. Diagnostics
3. Prognostics / RUL
12. The need for on-line models
• For predictive maintenance:
1. Anomaly detection
– Machine learning
– First principles
– Signal processing (FFT)
2. Diagnostics
– Signal processing (FFT)
– Machine learning
3. Prognostics / RUL
– Not much
13. The need for decision support
• For anomaly detection
– False positives and false negatives, e.g. due to:
• Sensor drift
• Long-term drift in the process that is not related to physical degradation
• Diagnostics
– Is treatment required?
• Prognostics
– What and when to do “hard maintenance”
– Scheduling, taking opportunity windows into account
– How will changes in operational loads affect RUL, and what to do?
14. Prognostics models: RUL
[Figure: deterioration level Y(t) versus time t, showing the current time and current state, the mean RUL, and the failure limit L]
15. Decisions: m = Maintenance level
[Figure: deterioration level Y(t) versus time t, with failure limit L, maintenance level m, and lead time TL]
16. Decisions requiring DT-inquiries
[Figure: deterioration level Y(t) versus time t, with failure limit L, maintenance level m, and lead time TL]
• What is the current production demand?
• Will there be a maintenance opportunity in the near future?
• Can we relax production to reduce the degradation rate?
• Is there a weather window?
• How to group maintenance activities; remote operation?
18. Stochastic digital twin
• A stochastic digital twin is a computerized model of the stochastic behavior of a system where the model is updated in real-time
– based on sensor information and other information
– accessed via the internet and the use of cloud-computing resources
• What-if inquiries result in pdfs rather than single values
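The point that what-if inquiries return pdfs rather than single values can be illustrated with a small Monte Carlo sketch. Everything here is invented for the illustration (a clipped Gaussian random-walk degradation model and arbitrary parameter values); a real stochastic twin would take the current level y0 and the load scenario from live sensor data:

```python
import random

random.seed(42)  # for reproducibility of this illustration

def rul_distribution(y0, limit, drift, sigma, n_paths=10000, dt=1.0):
    """Empirical RUL distribution from simulated degradation paths.

    y0    -- current degradation level (from sensors, in a real twin)
    limit -- failure limit L
    drift -- mean degradation increment per time step (load-dependent)
    sigma -- spread of the degradation increments
    """
    ruls = []
    for _ in range(n_paths):
        y, t = y0, 0.0
        while y < limit:
            # Degradation is kept monotone: negative increments are clipped.
            y += max(0.0, random.gauss(drift, sigma)) * dt
            t += dt
        ruls.append(t)
    return ruls

# What-if inquiry: how does the RUL pdf change if we relax operation
# (lower load -> lower drift)? All numbers are illustrative.
normal = rul_distribution(y0=2.0, limit=10.0, drift=0.5, sigma=0.2)
relaxed = rul_distribution(y0=2.0, limit=10.0, drift=0.3, sigma=0.2)
print(sum(normal) / len(normal), sum(relaxed) / len(relaxed))
```

The inquiry returns a whole empirical distribution, so a decision rule can use quantiles (e.g. a 10% fractile of RUL) instead of a single mean value.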
19. Real-time model vs test model
“Google Maps - Traffic” vs “Google Maps - Bicycle-friendly routes”
20. Real-time model
• A real-time model is a model where it is possible to obtain values of system performance and system states in real-time
• By real-time we mean that data referring to a system is analysed and updated at the rate at which it is received
21. Test model
• A test model is a mathematical model describing relations between future and current values of the variables of interest, but where we are not able to monitor system performance and system states in real-time
• Such a model is often referred to as an off-line model or a sandbox model
• A test model is still valid for establishing decision rules to be used in real-time
Claim: 99.9% of all models presented at ESREL since 1991 are test models, because they have no ambition to connect in real time
22. Anomaly detection
• Highlights from:
– A2.1 MonitorX framework
– Review of analytics methods supporting anomaly detection and Condition-Based Maintenance
• First principles
• Machine learning
• Objective: Separate anomalies from noise
24. First principles
• The physical equations describing the top-oil and hot-spot temperature dynamics are the following:
• The physical model parameters may be estimated from data during normal operation
• From the model we can predict (estimate) the temperature and compare it with the actual temperature:
26. Machine learning
• The first-principle model requires a physical understanding of the relation between input (current and ambient temperature) and output (hot-spot temperature)
• If the relation is complex, we can instead use huge training data sets to estimate the relation between input and output:
27. Big data analytics and data-driven models
• There are several techniques for these data-driven models:
– Classical multivariate regression analysis
– Artificial neural networks
– Deep learning
– Decision tree learning
– Support vector machines
• The hot-spot temperature example:
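As a toy version of the hot-spot temperature example, the sketch below fits a multivariate linear regression (the first technique in the list) on synthetic "normal operation" data and uses the prediction residual for anomaly detection. The linear relation, the coefficients, and the threshold are all assumptions made up for this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: hot-spot temperature as a function of
# load current and ambient temperature (a linear relation is assumed here
# purely for illustration).
n = 500
current = rng.uniform(100, 400, n)   # load current [A]
ambient = rng.uniform(-10, 30, n)    # ambient temperature [C]
hotspot = 20 + 0.15 * current + 0.8 * ambient + rng.normal(0, 1.0, n)

# Estimate the input -> output relation from the training data.
X = np.column_stack([np.ones(n), current, ambient])
coef, *_ = np.linalg.lstsq(X, hotspot, rcond=None)

def predict(cur, amb):
    return coef[0] + coef[1] * cur + coef[2] * amb

# Anomaly detection: compare predicted and actual temperature.
actual_ok = 20 + 0.15 * 250 + 0.8 * 10   # consistent with normal operation
actual_bad = actual_ok + 15              # unexplained overheating
threshold = 5.0
print(abs(predict(250, 10) - actual_ok) > threshold)
print(abs(predict(250, 10) - actual_bad) > threshold)
```

In practice the model class (neural network, SVM, …) and the alarm threshold would of course be chosen from real training data rather than assumed.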
30. Case study – Turnout monitoring
• Turnouts (switches) are important components in the railway infrastructure; failure of a turnout will usually cause large problems for train circulation, and delays are expected
• A range of condition monitoring techniques exist
• BaneNOR is running a test project in Norway where the electric current of the turnout motor is monitored; the current curve represents a signature that can warn of a coming failure
31. Signature (Normal operation)
[Figure: motor current versus time (0-5 seconds) during normal operation]
32. Indication of a potential failure
[Figure: signature curve s and actual current curve a, 0-5 seconds]
Test: Σᵢ (aᵢ - sᵢ)² > Threshold?
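The threshold test on this slide (sum the squared deviations between the measured current curve a and the normal-operation signature s, and compare against a threshold) can be written in a few lines. The curves and the threshold below are made-up numbers, not BaneNOR data:

```python
def anomaly_score(actual, signature):
    """Sum of squared deviations between the measured current curve and
    the normal-operation signature: sum over i of (a_i - s_i)^2."""
    return sum((a - s) ** 2 for a, s in zip(actual, signature))

# Illustrative signature and measurements (arbitrary current samples).
signature = [0.0, 2.0, 3.0, 3.0, 2.0, 0.0]
normal    = [0.1, 2.1, 2.9, 3.0, 2.0, 0.1]
degraded  = [0.1, 2.1, 4.5, 4.8, 2.0, 0.1]

THRESHOLD = 1.0
print(anomaly_score(normal, signature) > THRESHOLD)
print(anomaly_score(degraded, signature) > THRESHOLD)
```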
33. Digital twins
• A maintenance twin
• A production twin (including punctuality)
34. Digital twin for maintenance (degradation)
• The focus is on “when to act” upon a potential failure, rather than the classical inspection-interval approach
• A PF-model is used, with failure rate z(t) = f(t)/R(t) for TPF:
z(t | y, x(t)) = z0(t) · exp(βy·y + βx·x(t))
• Where:
– z0(t) = baseline failure rate function for TPF
– y = degradation level at the point of warning
– x(t) = future load in the near future (t is typically minutes and hours)
– βy and βx = regression coefficients in the Cox proportional hazards model
[Figure: P-F curve showing failure progression over time, from potential failure (P) through critical failure progression to failure (F); the PF-interval]
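A minimal sketch of the Cox-type failure rate above, with an assumed increasing baseline z0(t) and invented regression coefficients; in practice both would be estimated from failure data:

```python
import math

def hazard(t, y, x_t, z0, beta_y=0.8, beta_x=0.05):
    """z(t | y, x(t)) = z0(t) * exp(beta_y * y + beta_x * x(t)).

    z0       -- baseline failure rate function for TPF
    y        -- degradation level at the point of warning
    x_t      -- load in the near future
    beta_*   -- regression coefficients (illustrative, not fitted)
    """
    return z0(t) * math.exp(beta_y * y + beta_x * x_t)

# Illustrative increasing baseline rate (linear in t, Weibull-like shape 2).
z0 = lambda t: 0.01 * t

# Higher degradation at warning, or higher future load, scales the rate up.
low = hazard(t=5.0, y=0.5, x_t=10, z0=z0)
high = hazard(t=5.0, y=1.5, x_t=30, z0=z0)
print(low, high)
```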
35. Real-time?
• Not really; the case study is only illustrative
• But:
– y = the current degradation level is in principle accessible in real-time
– x(t) = future loads
• This is something the digital twin for production can in principle respond to, i.e., how many trains are scheduled to pass in the coming hours and how many shifting operations are required?
• In a “what-if” analysis, we may also investigate what happens if we relax operation, i.e., if we move a crossing to another station to avoid operating the switch (and reduce the likelihood of a failure)
36. For example, y is obtained in real-time by
[Figure: signature curve s and actual current curve a, 0-5 seconds]
y = Σᵢ (aᵢ - sᵢ)²
38. Stochastic digital twin?
• Yes:
F(t | y, x(t)) = 1 - exp(-∫₀ᵗ z(u | y, x(t)) du)
• The cumulative distribution function, F(), is essentially what we need for the optimization:
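The cdf follows from the hazard by numerical integration. The sketch below uses a simple midpoint rule with the same Cox form as on the previous slides; the baseline rate and the coefficients are again illustrative assumptions:

```python
import math

def hazard(u, y, x, beta_y=0.8, beta_x=0.05):
    """Cox-type rate z(u | y, x) with an assumed baseline z0(u) = 0.01*u."""
    return 0.01 * u * math.exp(beta_y * y + beta_x * x)

def failure_cdf(t, y, x, steps=1000):
    """F(t | y, x) = 1 - exp(-integral_0^t z(u | y, x) du), midpoint rule."""
    du = t / steps
    integral = sum(hazard((i + 0.5) * du, y, x) * du for i in range(steps))
    return 1.0 - math.exp(-integral)

# Failure probability within t hours grows with the horizon t, the
# degradation level y at warning, and the expected load x.
print(failure_cdf(t=5.0, y=1.0, x=20))
print(failure_cdf(t=9.0, y=1.0, x=20))
```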
39. The objective function to minimize
C(t, x) = cPM(t) + cU · F(t | y, x) + cR(x)
– t = the time at which to act upon the potential failure (the decision variable)
– x = how many times we operate the degraded switch
– cPM(t) = cost of the preventive action; decreases as a function of t; to be obtained from the production digital twin
– cU = punctuality (unavailability) cost upon a failure (production digital twin, or punctuality model)
– cR(x) = relaxing cost, i.e., a function of how many times the switch is operated (production digital twin)
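Minimizing C(t, x) over a discrete set of maintenance opportunities and load-relaxation options can be sketched as follows. All cost figures, the baseline hazard, and the relaxation model are invented for the illustration; in the intended setup, cPM(t), cU and cR(x) would be inquiries to the production digital twin:

```python
import math

def failure_cdf(t, y, x, beta_y=0.8, beta_x=0.05):
    # Closed form for assumed baseline z0(u) = 0.01*u:
    # integral_0^t z0(u) du = 0.005 * t^2.
    return 1.0 - math.exp(-0.005 * t ** 2 * math.exp(beta_y * y + beta_x * x))

def total_cost(t, x, y, c_u=100.0):
    c_pm = 50.0 / t              # preventive cost, decreasing in t (illustrative)
    c_r = 2.0 * max(0, 10 - x)   # relaxing cost: penalty per avoided switch operation
    return c_pm + c_u * failure_cdf(t, y, x) + c_r

# Maintenance opportunities at 3, 5, 7 and 9 hours; operate the degraded
# switch either as planned (x=10) or in a relaxed mode (x=5).
candidates = [(t, x) for t in (3, 5, 7, 9) for x in (5, 10)]
best = min(candidates, key=lambda tx: total_cost(*tx, y=1.0))
print(best)
```

With these invented numbers the earliest opportunity without relaxation wins; with a cheaper late opportunity or a steeper cPM(t), the trade-off shifts, which is exactly the interaction between the maintenance twin and the production twin.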
40. Results
• It is assumed that maintenance opportunities exist at times 3, 5, 7 and 9 hours, at various “costs”
• Calculation examples are shown in the paper
• Result (optimal intervention time, t):
41. Conclusions
• Straightforward mathematical models are presented
• We have demonstrated the need to synchronize maintenance and operation
• We have indicated what is required, i.e., a maintenance model and a production model that are updated in real-time (the digital twins)
• We have demonstrated how these twins interact
• There is still a long way to go to get the models actually running in real-time