Multi-infrastructure workflow execution for medical simulation in the Virtual Imaging Platform
Presentation held at the HealthGrid Conference 2011 - Bristol, England
Abstract. This paper presents the architecture of the Virtual Imaging Platform, which supports the execution of medical image simulation workflows on multiple computing infrastructures. The system relies on the MOTEUR engine for workflow execution and on the DIRAC pilot-job system for workload management. The jGASW code wrapper is extended to describe applications running on multiple infrastructures, and a DIRAC cluster agent is proposed that can securely enlist personal cluster resources without administrator intervention. Grid data management is complemented with local storage used as a failover in case of file transfer errors. Between November 2010 and April 2011, the platform was used by 10 users to run 484 workflow instances representing 10.8 CPU years. Tests show that a small personal cluster can contribute significantly to a simulation running on EGI, and that the improved data manager can decrease the job failure rate from 7.7% to 1.5%.
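The failover strategy mentioned in the abstract (grid storage first, local storage as a fallback on transfer errors) can be sketched roughly as follows. This is a minimal illustration, not VIP's actual implementation: the function and parameter names are hypothetical, and the retry count is an assumption.

```python
def transfer_with_failover(filename, grid_upload, local_upload, retries=3):
    """Try to store a file on grid storage; on repeated failure,
    fall back to local storage so the job does not fail outright.

    grid_upload / local_upload are caller-supplied transfer functions
    (hypothetical stand-ins for the platform's data-management calls).
    Returns which storage ended up holding the file.
    """
    for _ in range(retries):
        try:
            grid_upload(filename)
            return "grid"        # transfer succeeded on grid storage
        except IOError:
            continue             # transient grid error: retry
    # all grid attempts failed: use the local failover storage
    local_upload(filename)
    return "local"
```

In the paper's terms, routing failed transfers to local storage instead of aborting the job is what lowers the observed job failure rate.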
More information: www.rafaelsilva.com
This document discusses the oSCJ project for developing safety-critical applications in Java. It describes safety-critical systems and the challenges of developing such systems. It then provides an overview of the Safety-Critical Java (SCJ) specification, the oSCJ implementation including its virtual machine, libraries, and tools. It also presents benchmark results and discusses future work.
Festival navideño 4º,5º y 6º. PEREDA_LEGANÉSevax14
Los alumnos de 4o, 5o y 6o curso de primaria continúan desarrollando las habilidades básicas de lectura, escritura y matemáticas, al tiempo que comienzan a especializarse en asignaturas como ciencias naturales, geografía e historia.
Los cursos 5o y 6o se enfocan en las materias básicas como matemáticas, lengua, ciencias naturales e historia. Los alumnos continúan desarrollando habilidades académicas y empiezan a explorar sus intereses a través de asignaturas opcionales como música, arte o tecnología. El objetivo es preparar a los estudiantes para la educación secundaria ayudándolos a madurar personal y académicamente.
Este documento describe las actividades navideñas que tendrán lugar en el C.E.I.P. José María de Pereda entre el 15 y el 19 de diciembre. Incluye un belén viviente el lunes 15, una visita de Papá Noel el martes 16, un concierto navideño el jueves 18, y un festival navideño y fiesta de fin de trimestre el viernes 19. También menciona otras decoraciones y actividades navideñas como un concurso y mural organizados por el AMPA.
Las tradiciones navideñas en España incluyen preparaciones como escribir tarjetas de Navidad y adornar los pueblos antes de Navidad, el sorteo de la Lotería de Navidad el 22 de diciembre que marca el inicio de las fiestas, celebrar la Nochebuena el 24 y el Día de Navidad el 25, los Santos Inocentes el 28, la Nochevieja el 31 de diciembre y Año Nuevo, la Cabalgata de los Reyes Magos el 5 de enero y recibir regalos el Día de los Reyes el
This document discusses the oSCJ project for developing safety-critical applications in Java. It describes safety-critical systems and the challenges of developing such systems. It then provides an overview of the Safety-Critical Java (SCJ) specification, the oSCJ implementation including its virtual machine, libraries, and tools. It also presents benchmark results and discusses future work.
Festival navideño 4º,5º y 6º. PEREDA_LEGANÉSevax14
Los alumnos de 4o, 5o y 6o curso de primaria continúan desarrollando las habilidades básicas de lectura, escritura y matemáticas, al tiempo que comienzan a especializarse en asignaturas como ciencias naturales, geografía e historia.
Los cursos 5o y 6o se enfocan en las materias básicas como matemáticas, lengua, ciencias naturales e historia. Los alumnos continúan desarrollando habilidades académicas y empiezan a explorar sus intereses a través de asignaturas opcionales como música, arte o tecnología. El objetivo es preparar a los estudiantes para la educación secundaria ayudándolos a madurar personal y académicamente.
Este documento describe las actividades navideñas que tendrán lugar en el C.E.I.P. José María de Pereda entre el 15 y el 19 de diciembre. Incluye un belén viviente el lunes 15, una visita de Papá Noel el martes 16, un concierto navideño el jueves 18, y un festival navideño y fiesta de fin de trimestre el viernes 19. También menciona otras decoraciones y actividades navideñas como un concurso y mural organizados por el AMPA.
Las tradiciones navideñas en España incluyen preparaciones como escribir tarjetas de Navidad y adornar los pueblos antes de Navidad, el sorteo de la Lotería de Navidad el 22 de diciembre que marca el inicio de las fiestas, celebrar la Nochebuena el 24 y el Día de Navidad el 25, los Santos Inocentes el 28, la Nochevieja el 31 de diciembre y Año Nuevo, la Cabalgata de los Reyes Magos el 5 de enero y recibir regalos el Día de los Reyes el
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2016-member-meeting-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Mark Bünger, Vice President of Research at Lux Research, delivers the presentation "Imaging + AI: Opportunities Inside the Car and Beyond" at the December 2016 Embedded Vision Alliance Member Meeting. Bünger presents his firm’s perspective on how embedded vision will upend the automotive industry.
IRJET - Biometric Identification using Gait Analyis by Deep LearningIRJET Journal
1. The document discusses using gait analysis and deep learning for biometric identification. Gait analysis examines a person's walking pattern or "gait cycle" as a biometric for identification.
2. The proposed system would use deep learning models trained on video frames of a person's gait cycle to generate identification vectors, which would then be used to authenticate individuals based on a match to their gait pattern.
3. If a sample video's gait cycle matches the trained identification vectors, the person would be authenticated. This could eliminate manual entry and allow automatic identification via security cameras based on a person's walking pattern.
IRJET - Analysis of Virtual Machine in Digital ForensicsIRJET Journal
This document discusses analyzing virtual machines for digital forensics purposes. It proposes a methodology for acquiring and analyzing files from VMware and Oracle VirtualBox virtual machines. The methodology has three phases: detection and acquisition of virtual machine files from the host system, analysis of virtual disk images and log files, and reporting the conclusions. The analysis phase examines virtual disk files in detail, looking at the file structure and metadata that could provide evidence. The system is implemented using Python scripts to perform the virtual machine analysis.
1. The document describes a proposed object detection bot that uses a Raspberry Pi, camera modules, and cloud services from AWS to accurately detect objects like plastic bottles, metal cans, and abandoned baggage.
2. The system uses a hybrid approach with both edge computing on the Raspberry Pi using SSD object detection models, as well as AWS cloud services for storage, analytics, and notifications. Video feeds are ingested into AWS Kinesis and objects detected on the edge are sent to AWS SNS for notifications.
3. An evaluation showed the system could reliably detect objects and send notifications within seconds, demonstrating the feasibility of combining local edge computing and cloud services for object detection on resource-constrained devices.
Smart Surveillance Bot with Low Power MCUIRJET Journal
This document describes the design of a low-power, wirelessly controlled surveillance robot. The robot uses a CC3200 SimpleLink WiFi microcontroller and integrates an onboard video camera, ultrasonic sensor, Bluetooth module, GPS, and other components. The robot is intended to transmit real-time video data to a controller to monitor hard to access areas. The design aims to create an affordable and efficient robotic system for surveillance applications using low power consumption. Future extensions could include sensors for mine detection or fire detection.
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...IRJET Journal
The document proposes a system to detect suspicious human activity in crowdsourced video captured by surveillance cameras. The system uses Advanced Motion Detection (AMD) to detect moving objects and generate a reliable background model for analysis. A camera connected to a monitoring room would produce alert messages for any detected suspicious activity based on height, time, and body movement constraints. The system aims to automate real-time video processing for security applications like detecting unauthorized access. It extracts human objects from frames and identifies suspicious behavior using the AMD algorithm before sending alerts.
This document provides a summary of Amit Prabhudesai's work portfolio. It outlines his educational background and work experience in image processing and computer vision. It then describes several projects he has worked on, including human detection using Adaboost for surveillance video, optimizing a Lane Departure Warning system for a Texas Instruments DSP, developing video analytics software for retail store customer counting and queue detection, and an Automatic Fingerprint Identification System. It also lists some relevant trainings and mentorship activities.
IRJET- Smart Authentication System for AirportIRJET Journal
1. The document describes a smart authentication system for airports that uses a PIR sensor, camera, computer system, and image processing to detect intruders.
2. When a PIR sensor detects motion, it signals the computer system to activate the camera to capture an image. The image is then compared to images in a database using LBPH and HOG algorithms for face recognition.
3. If the image matches one in the database, information about that person is sent to the owner by email. If it doesn't match, only the image and timestamp are sent. The system was tested on 20 images in a database and achieved an average efficiency of 97.8%.
Tactical Virtual Assistance (TVA) With Jubal Biggs | Current 2022HostedbyConfluent
Causing Pain:
The traditional edge intelligence computing model doesn’t work for edge Military wearable and IOBT devices. Transmission of large amounts of sensor data over potentially unreliable communications channels puts Battlefield soldiers at risk.
Problem Statement: How DOD can manage the military battlefield assets to include integrate signals from a diverse and dynamic set of sensors, including static ground sensors and soldiers worn sensors to provide predictive and operational analytics?
SAIC’s Virtual Tactical Assistance provides the DOD the IOT infrastructure to manage military battlefield assets to include integrating signals and communications protocols from a diverse and dynamic set of sensors, including stat ground sensors and soldier worn sensors to provide Distributed compute resources and predictive and operational analytics. Combatant personnel require the ability to use data from many systems and communicate peer to peer without access to significant IT infrastructure. The technology seeks to enhance asset management, supply and maintenance operations, rapid target acquisition, and the sharing of targeting/ballistic solutions. Sensors and other weapon mountable accessories/enablers are designed to capture logistic, targeting, and telemetry data from individual weapon systems and share it across the squad, platoon, and higher echelons via a decentralized network. Confluent/Kafka provides processing of IOBT data streams, such as device and sensor data, which provides insights into asset state by monitoring use, providing indicators, tracking patterns and optimizing asset use for proactive and predictive analytics. More importantly, Confluent kafka provides each TVA to function as a data hub and a peer in a “data mesh”, which can share messages, location updates, and other data with other TVA devices in an ad-hoc manner independent of any wider network.
Advanced Open IoT Platform for Prevention and Early Detection of Forest FiresIvo Andreev
The session was about open architecture using IoT Edge, Azure Cognitive Services, Mosquitto MQTT, Influx DB and GraphQL web services to develop advanced architecture for early detection of forest fires that integrates sensor networks and mobile (drone) technologies for data collection and processing. Unmanned air vehicles (UAVs) will allow coverage of larger areas to raise the percentage of forest fires detections, monitor areas with high fire weather index and such already affected by forest fires. All information is forwarded and stored in cloud computing platform where near real-time processing and alerting is performed.
Design and Development of Multipurpose Automatic Material Handling Robot with...IRJET Journal
The document describes the design and development of a multipurpose automatic material handling robot. It aims to overcome logistics problems in warehouses and workplaces using a system-thinking approach. A scissor lift design is implemented to lift the robot's gripper assembly. Digital tools like factory flow simulation and virtual reality are used to simulate the robot's functioning in a virtual warehouse environment. Calculations are shown for dimensions of the scissor lift and robot. The robot is designed to efficiently handle materials in warehouses and improve inventory tracking and management.
IRJET - Display for Crew Station of Next Generation Main Battle TankIRJET Journal
This document describes a proposed display system for the crew station of a next generation main battle tank. The system aims to make battlefield surveillance unmanned by installing multiple cameras on the tank's turret and transmitting the video to a display unit in the hull. A Raspberry Pi is used to capture video from the cameras and transmit the data to a Smart Display Unit via Ethernet. This allows surveillance of the battlefield without crew in the turret, reducing risks. The single Smart Display Unit replaces multiple displays, addressing space constraints in the tank. Experimental results found that transmitting compressed video from the Raspberry Pi to the display unit in real-time via Ethernet is feasible for unmanned battlefield surveillance.
SERENE 2014 School: Measurement-Driven Resilience Design of Cloud-Based Cyber...SERENEWorkshop
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Measurement-Driven Resilience Design of Cloud-Based Cyber-Physical Systems, by Imre Kocsis
This document presents a project titled "Fingerprint Recognition for Security" by Bisangabagabo Alphonse. The project aims to use fingerprint recognition to improve student identification for security at KIST by replacing the current system of using ID cards. The system will utilize fingerprint scanning, matching, and identification algorithms built with C programming language. It seeks to address issues like unauthorized access and students missing classes when cards are lost or forgotten by implementing an accurate biometric authentication solution based on individuals' unique fingerprint data.
Dell NVIDIA AI Roadshow - South Western OntarioBill Wong
- Artificial intelligence (AI) is mimicking human intelligence through machine algorithms like those used for chess and facial recognition. Machine learning (ML) is a subset of AI that uses algorithms to parse data, learn from data, and make predictions. Deep learning (DL) uses artificial neural networks to develop relationships in data and is used for applications like driverless cars and cybersecurity.
- AI technologies are enabling digital transformation and require infrastructure like edge computing, GPUs, FPGAs, deep learning accelerators, and specialized hardware to power applications of AI, ML, and DL. Dell Technologies provides platforms and solutions to accelerate AI workloads and support digital transformation.
Towards an Infrastructure for Enabling Systematic Development and Research of...Rafael Ferreira da Silva
Presentation held at the 17th IEEE eScience Conference
Scientific workflows have been used almost universally across scientific domains, and have underpinned some of the most significant discoveries of the past several decades. Many of these workflows have high computational, storage, and/or communication demands, and thus must execute on a wide range of large-scale platforms, from large clouds to upcoming exascale high-performance computing (HPC) platforms. These executions must be managed using some software infrastructure. Due to the popularity of workflows, workflow management systems (WMSs) have been developed to provide abstractions for creating and executing workflows conveniently, efficiently, and portably. While these efforts are all worthwhile, there are now hundreds of independent WMSs, many of which are moribund. As a result, the WMS landscape is segmented and presents significant barriers to entry due to the hundreds of seemingly comparable, yet incompatible, systems that exist. As a result, many teams, small and large, still elect to build their own custom workflow solution rather than adopt, or build upon, existing WMSs. This current state of the WMS landscape negatively impacts workflow users, developers, and researchers. In this talk, I will provide a view of the state of the art and some of my previous research and technical contributions, and identify crucial research challenges in the workflow community.
Modeling and Simulation of Parallel and Distributed Computing Systems with Si...Rafael Ferreira da Silva
In this talk, I present an overview of three open source tools for enabling research and development of scientific workflow systems and applications:
- SimGrid: https://simgrid.org
- WRENCH: https://wrench-project.org
- WfCommons: https://wfcommons.org
More Related Content
Similar to Multi-infrastructure workflow execution for medical simulation in the Virtual Imaging Platform
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2016-member-meeting-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Mark Bünger, Vice President of Research at Lux Research, delivers the presentation "Imaging + AI: Opportunities Inside the Car and Beyond" at the December 2016 Embedded Vision Alliance Member Meeting. Bünger presents his firm’s perspective on how embedded vision will upend the automotive industry.
IRJET - Biometric Identification using Gait Analyis by Deep LearningIRJET Journal
1. The document discusses using gait analysis and deep learning for biometric identification. Gait analysis examines a person's walking pattern or "gait cycle" as a biometric for identification.
2. The proposed system would use deep learning models trained on video frames of a person's gait cycle to generate identification vectors, which would then be used to authenticate individuals based on a match to their gait pattern.
3. If a sample video's gait cycle matches the trained identification vectors, the person would be authenticated. This could eliminate manual entry and allow automatic identification via security cameras based on a person's walking pattern.
IRJET - Analysis of Virtual Machine in Digital ForensicsIRJET Journal
This document discusses analyzing virtual machines for digital forensics purposes. It proposes a methodology for acquiring and analyzing files from VMware and Oracle VirtualBox virtual machines. The methodology has three phases: detection and acquisition of virtual machine files from the host system, analysis of virtual disk images and log files, and reporting the conclusions. The analysis phase examines virtual disk files in detail, looking at the file structure and metadata that could provide evidence. The system is implemented using Python scripts to perform the virtual machine analysis.
1. The document describes a proposed object detection bot that uses a Raspberry Pi, camera modules, and cloud services from AWS to accurately detect objects like plastic bottles, metal cans, and abandoned baggage.
2. The system uses a hybrid approach with both edge computing on the Raspberry Pi using SSD object detection models, as well as AWS cloud services for storage, analytics, and notifications. Video feeds are ingested into AWS Kinesis and objects detected on the edge are sent to AWS SNS for notifications.
3. An evaluation showed the system could reliably detect objects and send notifications within seconds, demonstrating the feasibility of combining local edge computing and cloud services for object detection on resource-constrained devices.
Smart Surveillance Bot with Low Power MCUIRJET Journal
This document describes the design of a low-power, wirelessly controlled surveillance robot. The robot uses a CC3200 SimpleLink WiFi microcontroller and integrates an onboard video camera, ultrasonic sensor, Bluetooth module, GPS, and other components. The robot is intended to transmit real-time video data to a controller to monitor hard to access areas. The design aims to create an affordable and efficient robotic system for surveillance applications using low power consumption. Future extensions could include sensors for mine detection or fire detection.
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...IRJET Journal
The document proposes a system to detect suspicious human activity in crowdsourced video captured by surveillance cameras. The system uses Advanced Motion Detection (AMD) to detect moving objects and generate a reliable background model for analysis. A camera connected to a monitoring room would produce alert messages for any detected suspicious activity based on height, time, and body movement constraints. The system aims to automate real-time video processing for security applications like detecting unauthorized access. It extracts human objects from frames and identifies suspicious behavior using the AMD algorithm before sending alerts.
This document provides a summary of Amit Prabhudesai's work portfolio. It outlines his educational background and work experience in image processing and computer vision. It then describes several projects he has worked on, including human detection using Adaboost for surveillance video, optimizing a Lane Departure Warning system for a Texas Instruments DSP, developing video analytics software for retail store customer counting and queue detection, and an Automatic Fingerprint Identification System. It also lists some relevant trainings and mentorship activities.
IRJET- Smart Authentication System for AirportIRJET Journal
1. The document describes a smart authentication system for airports that uses a PIR sensor, camera, computer system, and image processing to detect intruders.
2. When a PIR sensor detects motion, it signals the computer system to activate the camera to capture an image. The image is then compared to images in a database using LBPH and HOG algorithms for face recognition.
3. If the image matches one in the database, information about that person is sent to the owner by email. If it doesn't match, only the image and timestamp are sent. The system was tested on 20 images in a database and achieved an average efficiency of 97.8%.
Tactical Virtual Assistance (TVA) With Jubal Biggs | Current 2022HostedbyConfluent
Causing Pain:
The traditional edge intelligence computing model doesn’t work for edge Military wearable and IOBT devices. Transmission of large amounts of sensor data over potentially unreliable communications channels puts Battlefield soldiers at risk.
Problem Statement: How DOD can manage the military battlefield assets to include integrate signals from a diverse and dynamic set of sensors, including static ground sensors and soldiers worn sensors to provide predictive and operational analytics?
SAIC’s Virtual Tactical Assistance provides the DOD the IOT infrastructure to manage military battlefield assets to include integrating signals and communications protocols from a diverse and dynamic set of sensors, including stat ground sensors and soldier worn sensors to provide Distributed compute resources and predictive and operational analytics. Combatant personnel require the ability to use data from many systems and communicate peer to peer without access to significant IT infrastructure. The technology seeks to enhance asset management, supply and maintenance operations, rapid target acquisition, and the sharing of targeting/ballistic solutions. Sensors and other weapon mountable accessories/enablers are designed to capture logistic, targeting, and telemetry data from individual weapon systems and share it across the squad, platoon, and higher echelons via a decentralized network. Confluent/Kafka provides processing of IOBT data streams, such as device and sensor data, which provides insights into asset state by monitoring use, providing indicators, tracking patterns and optimizing asset use for proactive and predictive analytics. More importantly, Confluent kafka provides each TVA to function as a data hub and a peer in a “data mesh”, which can share messages, location updates, and other data with other TVA devices in an ad-hoc manner independent of any wider network.
Advanced Open IoT Platform for Prevention and Early Detection of Forest FiresIvo Andreev
The session was about open architecture using IoT Edge, Azure Cognitive Services, Mosquitto MQTT, Influx DB and GraphQL web services to develop advanced architecture for early detection of forest fires that integrates sensor networks and mobile (drone) technologies for data collection and processing. Unmanned air vehicles (UAVs) will allow coverage of larger areas to raise the percentage of forest fires detections, monitor areas with high fire weather index and such already affected by forest fires. All information is forwarded and stored in cloud computing platform where near real-time processing and alerting is performed.
Design and Development of Multipurpose Automatic Material Handling Robot with...IRJET Journal
The document describes the design and development of a multipurpose automatic material handling robot. It aims to overcome logistics problems in warehouses and workplaces using a system-thinking approach. A scissor lift design is implemented to lift the robot's gripper assembly. Digital tools like factory flow simulation and virtual reality are used to simulate the robot's functioning in a virtual warehouse environment. Calculations are shown for dimensions of the scissor lift and robot. The robot is designed to efficiently handle materials in warehouses and improve inventory tracking and management.
IRJET - Display for Crew Station of Next Generation Main Battle TankIRJET Journal
This document describes a proposed display system for the crew station of a next generation main battle tank. The system aims to make battlefield surveillance unmanned by installing multiple cameras on the tank's turret and transmitting the video to a display unit in the hull. A Raspberry Pi is used to capture video from the cameras and transmit the data to a Smart Display Unit via Ethernet. This allows surveillance of the battlefield without crew in the turret, reducing risks. The single Smart Display Unit replaces multiple displays, addressing space constraints in the tank. Experimental results found that transmitting compressed video from the Raspberry Pi to the display unit in real-time via Ethernet is feasible for unmanned battlefield surveillance.
SERENE 2014 School: Measurement-Driven Resilience Design of Cloud-Based Cyber...SERENEWorkshop
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Measurement-Driven Resilience Design of Cloud-Based Cyber-Physical Systems, by Imre Kocsis
This document presents a project titled "Fingerprint Recognition for Security" by Bisangabagabo Alphonse. The project aims to use fingerprint recognition to improve student identification for security at KIST by replacing the current system of using ID cards. The system will utilize fingerprint scanning, matching, and identification algorithms built with C programming language. It seeks to address issues like unauthorized access and students missing classes when cards are lost or forgotten by implementing an accurate biometric authentication solution based on individuals' unique fingerprint data.
Dell NVIDIA AI Roadshow - South Western OntarioBill Wong
- Artificial intelligence (AI) is mimicking human intelligence through machine algorithms like those used for chess and facial recognition. Machine learning (ML) is a subset of AI that uses algorithms to parse data, learn from data, and make predictions. Deep learning (DL) uses artificial neural networks to develop relationships in data and is used for applications like driverless cars and cybersecurity.
- AI technologies are enabling digital transformation and require infrastructure like edge computing, GPUs, FPGAs, deep learning accelerators, and specialized hardware to power applications of AI, ML, and DL. Dell Technologies provides platforms and solutions to accelerate AI workloads and support digital transformation.
Towards an Infrastructure for Enabling Systematic Development and Research of...Rafael Ferreira da Silva
Presentation held at the 17th IEEE eScience Conference
Scientific workflows have been used almost universally across scientific domains, and have underpinned some of the most significant discoveries of the past several decades. Many of these workflows have high computational, storage, and/or communication demands, and thus must execute on a wide range of large-scale platforms, from large clouds to upcoming exascale high-performance computing (HPC) platforms. These executions must be managed using some software infrastructure. Due to the popularity of workflows, workflow management systems (WMSs) have been developed to provide abstractions for creating and executing workflows conveniently, efficiently, and portably. While these efforts are all worthwhile, there are now hundreds of independent WMSs, many of which are moribund. As a result, the WMS landscape is segmented and presents significant barriers to entry due to the hundreds of seemingly comparable, yet incompatible, systems that exist. As a result, many teams, small and large, still elect to build their own custom workflow solution rather than adopt, or build upon, existing WMSs. This current state of the WMS landscape negatively impacts workflow users, developers, and researchers. In this talk, I will provide a view of the state of the art and some of my previous research and technical contributions, and identify crucial research challenges in the workflow community.
Modeling and Simulation of Parallel and Distributed Computing Systems with Si...Rafael Ferreira da Silva
In this talk, I present an overview of three open source tools for enabling research and development of scientific workflow systems and applications:
- SimGrid: https://simgrid.org
- WRENCH: https://wrench-project.org
- WfCommons: https://wfcommons.org
Good Practices for Developing Scientific Software Frameworks: The WRENCH fram...Rafael Ferreira da Silva
The document provides guidelines for best practices in developing scientific software frameworks. It discusses hosting open source projects on version control platforms and ensuring documentation, testing, continuous integration/delivery, and other development practices are followed. Specific examples mentioned include the WRENCH simulation framework, Pegasus workflow system, and scikit-learn machine learning library. The document emphasizes practices like writing tests, tracking issues, reviewing code quality, and releasing versions in a semantic and citable manner.
WorkflowHub: Community Framework for Enabling Scientific Workflow Research a... (Rafael Ferreira da Silva)
Scientific workflows are a cornerstone of modern scientific computing. They are used to describe complex computational applications that require efficient and robust management of large volumes of data, which are typically stored/processed on heterogeneous, distributed resources. The workflow research and development community has employed a number of methods for the quantitative evaluation of existing and novel workflow algorithms and systems. In particular, a common approach is to simulate workflow executions. In previous work, we have presented a collection of tools that have been used for aiding research and development activities in the Pegasus project, and that have been adopted by others for conducting workflow research. Despite their popularity, there are several shortcomings that prevent easy adoption, maintenance, and consistency with the evolving structures and computational requirements of production workflows. In this work, we present WorkflowHub, a community framework that provides a collection of tools for analyzing workflow execution traces, producing realistic synthetic workflow traces, and simulating workflow executions. We demonstrate the realism of the generated synthetic traces by comparing simulated executions of these traces with actual workflow executions. We also contrast these results with those obtained when using the previously available collection of tools. We find that our framework not only can be used to generate representative synthetic workflow traces (i.e., with workflow structures and task characteristics distributions that resemble those in traces obtained from real-world workflow executions), but can also generate representative workflow traces at larger scales than that of available workflow traces.
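As a rough sketch of the trace-generation idea described above, the toy Python below samples task runtimes from a distribution one would fit to a real trace and emits a simple fork-join workflow description. The function names, JSON-like layout, and distribution parameters are illustrative placeholders, not WorkflowHub's actual format or API:

```python
import random

def generate_synthetic_trace(num_tasks, runtime_sampler, seed=0):
    """Build a toy synthetic workflow trace: a fork-join DAG whose task
    runtimes are drawn from a distribution fitted to real executions."""
    rng = random.Random(seed)
    tasks = []
    for i in range(num_tasks):
        tasks.append({
            "name": f"task_{i:04d}",
            "runtime": round(runtime_sampler(rng), 2),
            # toy structure: every later task depends on the root task
            "parents": [] if i == 0 else ["task_0000"],
        })
    return {"name": "synthetic-workflow", "tasks": tasks}

# Runtimes drawn from a gamma distribution; the shape/scale values stand
# in for parameters one would estimate from a real execution trace.
trace = generate_synthetic_trace(5, lambda rng: rng.gammavariate(2.0, 10.0))
```

Checking that simulated executions of such generated traces match real executions, as the abstract describes, is what establishes whether the fitted distributions are realistic.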
Bridging Concepts and Practice in eScience via Simulation-driven Engineering (Rafael Ferreira da Silva)
The CyberInfrastructure (CI) has been the object of intensive research and development in the last decade, resulting in a rich set of abstractions and interoperable software implementations that are used in production today to support ongoing and breakthrough scientific discoveries. A key challenge is the development of tools and application execution frameworks that are robust in current and emerging CI configurations, and that can anticipate the needs of upcoming CI applications. This paper presents WRENCH, a framework that enables simulation-driven engineering for evaluating and developing CI application execution frameworks. WRENCH provides a set of high-level simulation abstractions that serve as building blocks for developing custom simulators. These abstractions rely on the scalable and accurate simulation models provided by the SimGrid simulation framework. Consequently, WRENCH makes it possible to build, with minimal software development effort, simulators that can accurately and scalably simulate a wide spectrum of large and complex CI scenarios. These simulators can then be used to evaluate and/or compare alternate platform, system, and algorithm designs, so as to drive the development of CI solutions for current and emerging applications.
Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows (Rafael Ferreira da Silva)
While distributed computing infrastructures can provide infrastructure-level techniques for managing energy consumption, application-level energy consumption models have also been developed to support energy-efficient scheduling and resource provisioning algorithms. In this work, we analyze the accuracy of a widely-used application-level model that has been developed and used in the context of scientific workflow executions. To this end, we profile two production scientific workflows on a distributed platform instrumented with power meters. We then conduct an analysis of power and energy consumption measurements. This analysis shows that power consumption is not linearly related to CPU utilization and that I/O operations significantly impact power, and thus energy, consumption. We then propose a power consumption model that accounts for I/O operations, including the impact of waiting for these operations to complete, and for concurrent task executions on multi-socket, multi-core compute nodes. We implement our proposed model as part of a simulator that allows us to draw direct comparisons between real-world and modeled power and energy consumption. We find that our model has high accuracy when compared to real-world executions. Furthermore, our model improves accuracy by about two orders of magnitude when compared to the traditional models used in the energy-efficient workflow scheduling literature.
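The abstract's two key observations — power is nonlinear in CPU utilization, and I/O contributes its own share — can be captured by a toy model of the following shape. The functional form, the sub-linear exponent, and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
def node_power(p_idle, p_cpu_max, cpu_util, p_io, io_active_fraction):
    """Toy node power model: idle baseline + a CPU term + an I/O term.
    The sub-linear exponent illustrates the observation that power is
    not linearly related to CPU utilization."""
    cpu_term = p_cpu_max * (cpu_util ** 0.7)   # sub-linear in utilization
    io_term = p_io * io_active_fraction        # fraction of time doing I/O
    return p_idle + cpu_term + io_term

def task_energy(power_watts, duration_s):
    """Energy is power integrated over the task's duration (joules)."""
    return power_watts * duration_s
```

In such a model, time spent waiting for I/O to complete still costs energy at roughly the idle-plus-I/O power level, which is exactly the effect the abstract says traditional CPU-only models miss.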
Running Accurate, Scalable, and Reproducible Simulations of Distributed Syste... (Rafael Ferreira da Silva)
Scientific workflows are used routinely in numerous scientific domains, and Workflow Management Systems (WMSs) have been developed to orchestrate and optimize workflow executions on distributed platforms. WMSs are complex software systems that interact with complex software infrastructures. Most WMS research and development activities rely on empirical experiments conducted with full-fledged software stacks on actual hardware platforms. Such experiments, however, are limited to hardware and software infrastructures at hand and can be labor- and/or time-intensive. As a result, relying solely on real-world experiments impedes WMS research and development. An alternative is to conduct experiments in simulation.
In this work we present WRENCH, a WMS simulation framework, whose objectives are (i) accurate and scalable simulations; and (ii) easy simulation software development. WRENCH achieves its first objective by building on the SimGrid framework. While SimGrid is recognized for the accuracy and scalability of its simulation models, it only provides low-level simulation abstractions, and thus large software development efforts are required when implementing simulators of complex systems. WRENCH thus achieves its second objective by providing high-level and directly reusable simulation abstractions on top of SimGrid. After describing and giving rationales for WRENCH’s software architecture and APIs, we present a case study in which we apply WRENCH to simulate the Pegasus production WMS. We report on ease of implementation, simulation accuracy, and simulation scalability so as to determine to which extent WRENCH achieves its two above objectives. We also draw both qualitative and quantitative comparisons with a previously proposed workflow simulator.
WRENCH enables novel avenues for scientific workflow use, research, development, and education. WRENCH capitalizes on recent and critical advances in the state of the art of distributed platform/application simulation. WRENCH builds on top of the open-source SimGrid simulation framework. SimGrid enables the simulation of large-scale distributed applications in a way that is accurate (via validated simulation models), scalable (low ratio of simulation time to simulated time, ability to run large simulations on a single computer with low compute, memory, and energy footprints), and expressive (ability to simulate arbitrary platform, application, and execution scenarios). WRENCH provides directly usable high-level simulation abstractions using SimGrid as a foundation. More information on https://wrench-project.org
In a nutshell, WRENCH makes it possible to:
- Prototype implementations of Workflow Management System (WMS) components and underlying algorithms;
- Quickly, scalably, and accurately simulate arbitrary workflow and platform scenarios for a simulated WMS implementation; and
- Run extensive experimental campaigns to conclusively compare workflow executions, platform architectures, and WMS algorithms and designs.
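To make the "simulate arbitrary workflow and platform scenarios" point concrete, here is a deliberately minimal list-scheduling simulator in plain Python — identical hosts, zero-cost data movement, and none of WRENCH's actual abstractions — that computes the one number most such simulations report, the makespan:

```python
def simulate_makespan(tasks, deps, num_hosts):
    """Minimal list-scheduling simulator: run a DAG of tasks
    (name -> runtime) on identical hosts, starting each task as soon as
    its parents have finished and a host is free. `deps` maps each task
    to the list of its parent tasks. Returns the simulated makespan."""
    children = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for child, parents in deps.items():
        for p in parents:
            children[p].append(child)
            indeg[child] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    hosts = [0.0] * num_hosts          # time at which each host is free
    finish = {}
    while ready:
        task = ready.pop(0)            # FIFO among ready tasks
        earliest = max([finish[p] for p in deps.get(task, [])], default=0.0)
        h = min(range(num_hosts), key=lambda i: hosts[i])
        start = max(hosts[h], earliest)
        finish[task] = start + tasks[task]
        hosts[h] = finish[task]
        for c in children[task]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return max(finish.values())
```

A real WRENCH simulator would instead model compute, storage, and network services on a described platform; this sketch only shows the kind of question such a simulator answers.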
This talk will examine issues of workflow execution, in particular using the Pegasus Workflow Management System, on distributed resources and how these resources can be provisioned ahead of the workflow execution. Pegasus was designed, implemented and supported to provide abstractions that enable scientists to focus on structuring their computations without worrying about the details of the target cyberinfrastructure. To support these workflow abstractions Pegasus provides automation capabilities that seamlessly map workflows onto target resources, sparing scientists the overhead of managing the data flow, job scheduling, fault recovery and adaptation of their applications. In some cases, it is beneficial to provision the resources ahead of the workflow execution, enabling the re-use of resources across workflow tasks. The talk will examine the benefits of resource provisioning for workflow execution.
On the Use of Burst Buffers for Accelerating Data-Intensive Scientific Workflows (Rafael Ferreira da Silva)
The document discusses using burst buffers to accelerate I/O performance for data-intensive scientific workflows. It finds that burst buffers improved write performance by 9x and read performance by 15x for a cybersecurity workflow. However, performance decreased slightly with more than 64 nodes due to potential I/O bottlenecks. While burst buffers helped, other approaches like in-situ processing may also be needed to meet all application requirements. Future work includes investigating combined in-situ and in-transit analysis and developing a production workflow management system with burst buffer support.
Using Simple PID Controllers to Prevent and Mitigate Faults in Scientific Wor... (Rafael Ferreira da Silva)
Presentation held at the 11th Workflows in Support of Large-Scale Science, October 14, 2016.
Abstract - Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute their applications seamlessly on a range of distributed platforms. In spite of many success stories, a key challenge for running workflows in distributed systems is failure prediction, detection, and recovery. In this paper, we propose an approach that uses control theory developed as part of autonomic computing to predict failures before they happen, and to mitigate them when possible. The proposed approach applies the proportional-integral-derivative (PID) controller control loop mechanism, which is widely used in industrial control systems, to mitigate faults by adjusting the inputs of the controller. The PID controller aims at detecting the possibility of a fault far enough in advance so that an action can be performed to prevent it from happening. To demonstrate the feasibility of the approach, we tackle two common execution faults of the Big Data era: data storage overload and memory overflow. We define, implement, and evaluate simple PID controllers to autonomously manage the data and memory usage of a bioinformatics workflow that consumes/produces over 4.4TB of data and requires over 24TB of memory to run all tasks concurrently. Experimental results indicate that workflow executions may significantly benefit from PID controllers, in particular under online and unknown conditions. Simulation results show that nearly-optimal executions (slowdown of 1.01) can be attained with our proposed method, with faults detected and mitigated far in advance of their occurrence.
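The control mechanism named in the abstract is the textbook discrete PID loop. A minimal sketch follows; the gains, setpoint, and disk-usage scenario are illustrative choices, not the paper's tuned values:

```python
class PID:
    """Textbook discrete PID controller: output is a weighted sum of the
    current error, its accumulated integral, and its rate of change."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold disk usage near 70% by adjusting some actuator (e.g., a
# data cleanup rate). Usage above the setpoint yields a negative output,
# i.e., a corrective push in the opposite direction.
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=70.0)
correction = pid.update(measurement=85.0)
```

In the workflow setting, the measurement would be disk or memory usage sampled during execution, and the output would throttle task submissions or trigger data cleanup before the resource overflows.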
Automating Environmental Computing Applications with Scientific Workflows (Rafael Ferreira da Silva)
Presentation held at the Environmental Computing Workshop on October 23, 2016
Abstract - Computational environmental science applications have evolved and become more complex over the last decade. In order to cope with the needs of such applications, computational methods and technologies have emerged to support the execution of these applications on heterogeneous, distributed systems. Among them are workflow management systems such as Pegasus. Pegasus is being used by researchers to model seismic wave propagation, to discover new celestial objects, to study RNA critical to human brain development, and to investigate other important research questions. This paper provides an introduction to scientific workflows and describes Pegasus and its main features. The paper highlights how the environmental science community has used Pegasus to automate their scientific workflow executions on high performance and high throughput computing systems by presenting three use cases: two Earth science workflows, and a climate science workflow.
Presentation held at the USC Information Sciences Institute on July 27, 2016
Abstract - Understanding user behavior is a crucial factor when evaluating scheduling and allocation performance in high performance computing environments. Since workload traces implicitly include interaction processes, they are often used for conducting performance evaluations. Nevertheless, realistic performance evaluations need to take into account the dynamic user reaction to different levels of system performance, as recorded data reflects only one instantiation of an interactive process. To further understand this process, we perform a comprehensive analysis of user behavior in recorded data in the form of delays in subsequent job submission behavior. To this end, we characterize a workload trace from the Mira supercomputer at the ALCF (Argonne Leadership Computing Facility) covering one year of job submissions. We perform an in-depth analysis of correlations between job characteristics, system performance metrics, and subsequent user behavior. The results show that user behavior is significantly influenced by long waiting times, and that complex jobs (in terms of number of nodes and CPU hours) lead to longer delays in subsequent job submissions. We also find that a notification mechanism informing users upon job completion does not influence subsequent submission behavior. Furthermore, we extend the analysis from HPC to HTC job submission. We consider HTC job submission behavior in terms of parallel batch-wise submissions, as well as delays and pauses in job submission. We compare differences in batch characteristics by classifying batches using a popular model. Our findings show that modeling HTC job submission behavior requires knowledge of the underlying bags of tasks, which is often unavailable. Additionally, we find evidence that subsequent job submission behavior is not influenced by the different complexities and requirements of HPC and HTC jobs.
Performance Analysis of an I/O-Intensive Workflow executing on Google Cloud a... (Rafael Ferreira da Silva)
Presentation held at the 18th Workshop on Advances in Parallel and Distributed Computational Models - 2015
Abstract - Scientific workflows have become mainstream for conducting large-scale scientific research. Meanwhile, cloud computing has emerged as an alternative computing paradigm. In this paper, we analyze the performance of an I/O-intensive real scientific workflow on cloud environments using makespan (the turnaround time for a workflow to complete its execution) as the key performance metric. In particular, we assess the impact of varying storage configurations on workflow performance when executing on Google Cloud and Amazon Web Services, aiming to understand the performance bottlenecks of these popular cloud-based execution environments. Experimental results show significant differences in application performance across configurations. They also reveal that Amazon Web Services outperforms Google Cloud with equivalent application and system configurations. We then investigate the root cause of these results using provenance data and by benchmarking disk and network I/O on both infrastructures. Lastly, we suggest modifications to the standard cloud storage APIs that would reduce the makespan for I/O-intensive workflows.
More information: www.rafaelsilva.com
Pegasus is a workflow management system that automates complex computational experiments. It handles tasks such as automating multi-step processing pipelines, enabling parallel distributed computations, automatically transferring data, and handling failures to provide reliability. Pegasus records provenance data to keep track of how results were produced and allows experimental workflows to be reproduced. It generates executable workflows from abstract descriptions to run experiments on distributed computing resources including clusters, grids, and clouds.
Task Resource Consumption Prediction for Scientific Applications and Workflows (Rafael Ferreira da Silva)
Presentation held at the Algorithms and Scheduling Techniques to Manage Resilience and Power Consumption in Distributed Systems 2015 Seminar - Dagstuhl
Estimates of task runtime, disk space usage, and memory consumption are commonly used by scheduling and resource provisioning algorithms to support efficient and reliable scientific application executions. Such algorithms often assume that accurate estimates are available, but such estimates are difficult to generate in practice. In this work, we first profile real scientific applications and workflows, collecting fine-grained information such as process I/O, runtime, memory usage, and CPU utilization. We then propose a method to automatically characterize task requirements based on these profiles. Our method estimates task runtime, disk space, and peak memory consumption. It looks for correlations between the parameters of a dataset; if no correlation is found, the dataset is divided into smaller subsets using the statistical recursive partitioning method and conditional inference trees to identify patterns that characterize particular behaviors of the workload. We then propose an estimation process to predict task characteristics of scientific applications based on the collected data. For scientific workflows, we propose an online estimation process based on the MAPE-K loop, where task executions are monitored and estimates are updated as more information becomes available. Experimental results show that our online estimation process yields much more accurate predictions than an offline approach, where all task requirements are estimated prior to workflow execution.
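The online idea — start from an offline estimate and refine it as monitored task executions complete — can be sketched with a simple exponentially weighted update standing in for the full MAPE-K loop described in the work (the class name and the weight `alpha` are illustrative assumptions):

```python
class OnlineEstimator:
    """Sketch of online estimate refinement: begin with an offline
    (pre-execution) estimate and blend in each completed task's
    measured runtime with weight alpha."""
    def __init__(self, initial_estimate, alpha=0.3):
        self.estimate = float(initial_estimate)
        self.alpha = alpha  # weight given to each new observation

    def observe(self, measured_runtime):
        self.estimate = ((1 - self.alpha) * self.estimate
                         + self.alpha * measured_runtime)
        return self.estimate

# Offline estimate of 100s; three monitored completions pull it upward.
est = OnlineEstimator(initial_estimate=100.0)
for runtime in [120.0, 130.0, 125.0]:
    est.observe(runtime)
```

The actual method in the abstract partitions the profile data with conditional inference trees rather than using a single scalar update, but the monitor-then-update cycle is the same.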
Characterizing a High Throughput Computing Workload: The Compact Muon Solenoi... (Rafael Ferreira da Silva)
Presentation held at ICCS 2015 Conference - Reykjavik, Iceland
High throughput computing (HTC) has aided the scientific community in the analysis of vast amounts of data and computational jobs in distributed environments. To manage these large workloads, several systems have been developed to efficiently allocate and provide access to distributed resources. Many of these systems rely on estimates of job characteristics (e.g., job runtime) to characterize workload behavior, which in practice are hard to obtain. In this work, we perform an exploratory analysis of the CMS experiment workload using the statistical recursive partitioning method and conditional inference trees to identify patterns that characterize particular behaviors of the workload. We then propose an estimation process to predict job characteristics based on the collected data. Experimental results show that our process estimates job runtime with 75% accuracy on average, and produces nearly optimal predictions for disk and memory consumption.
Experiments with Complex Scientific Applications on Hybrid Cloud Infrastructures (Rafael Ferreira da Silva)
Presentation held at NSFCloud Workshop - Arlington, USA
DICE Team at Department of Computer Science and Academic Computer Center CYFRONET of AGH collaborates with researchers at the University of Southern California and the Center for Research Computing at the University of Notre Dame. In the scope of this collaboration, we develop methods and tools supporting programming and execution of complex scientific applications on heterogeneous computing infrastructures.
A Unified Approach for Modeling and Optimization of Energy, Makespan and Reli... (Rafael Ferreira da Silva)
Presentation held at MODSIM 2014 workshop - Seattle, USA
Abstract - Scientific workflows are a useful representation for managing the execution of large-scale computations on high performance computing (HPC) and high throughput computing (HTC) platforms. In scientific workflow applications, resource provisioning and utilization optimizations have been investigated to reduce energy consumption on Cloud infrastructures. However, existing research is largely limited to measuring energy usage according to resource utilization when running a program on an execution node. Furthermore, most existing optimization techniques for workflows are limited to a single objective (e.g., makespan), and some can deal with only two objectives. No existing approach deals with an arbitrary number of objectives, and no scheduling technique has explored tradeoffs among makespan, energy consumption, and reliability. In this work, we propose an energy consumption model for analyzing and profiling energy usage that addresses real large-scale infrastructure conditions (e.g., heterogeneity, resource unavailability, external loads); the validation of the model on a fully instrumented platform able to measure the actual temperature and energy consumed by computing, networking, and storage systems; and a multi-objective optimization approach to explore tradeoffs among makespan, energy consumption, and reliability for workflow scheduling.
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Multi-infrastructure workflow execution for medical simulation in the Virtual Imaging Platform
1. VIP
Virtual Imaging Platform
Multi-infrastructure workflow execution for medical simulation in the Virtual Imaging Platform
Rafael FERREIRA DA SILVA1, Sorina CAMARASU-POP1, Baptiste GRENIER3
Vanessa HAMAR2, David MANSET3, Johan MONTAGNAT4, Jérôme REVILLARD3
Javier ROJAS BALDERRAMA4, Andrei TSAREGORODTSEV2, Tristan GLATARD1
1 Université de Lyon, CNRS, INSERM, CREATIS
2 Centre de Physique des Particules de Marseille
3 maatG France
4 CNRS/UNS, I3S lab, MODALIS team
Bristol, HealthGrid 2011
VIP ANR-09-COSI-013-02 www.creatis.insa-lyon.fr/vip 1
2. Introduction
Simulation
Computation: 16 h
Example: 2D+t ultrasound simulation [O. Bernard]
Example: Prostate treatment in proton therapy [L. Grevillot, D. Sarrut]
Computation: 2 months
8.5 CPU days
US, MRI, CT, PET
3. Introduction
European Grid Infrastructure (EGI)
High throughput for computation and data transfers
Challenges
High latencies
Low reliability
4. Our Goal
Multi-infrastructure workflow execution
Grid resources
Personal clusters (non-grid resources)
Improve data transfer reliability
Local reliable storage
6. Workflows
Virtual Imaging Platform (VIP)
Workflow
Activities
Data sources
Data sinks
Simulated US image of the heart
Core Simulation Workflows
MRI, US, CT and PET
7. Workflow Wrapping
Simulations are described as workflows
Workflows are interpreted and executed by MOTEUR
Core engine workflow interpreter
Data flow evaluation
Production of computational tasks
Workflow activities are described using jGASW
Code wrapper for distributed platforms and local executions
Grid jobs (bash scripts) are generated from processed data
Each jGASW descriptor bundles a unique executable
http://modalis.i3s.unice.fr/softwares/moteur/start
http://modalis.i3s.unice.fr/jgasw
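The wrapping step above (a jGASW descriptor bundling one executable, from which a bash grid job is generated) can be sketched as follows. This is a hypothetical illustration only: the descriptor fields, the `transfer_in`/`transfer_out` commands, and the file names are stand-ins, not the actual jGASW schema or VIP's generated scripts.

```python
# Hypothetical sketch: turning a jGASW-like descriptor plus resolved
# input/output locations into a self-contained bash job script.
# Field and command names are illustrative, not the real jGASW schema.

def generate_job_script(descriptor, input_urls, output_url):
    """Produce one bash grid job for one computational task."""
    lines = ["#!/bin/bash", "set -e"]
    # Stage in each resolved input file (placeholder transfer command).
    for i, url in enumerate(input_urls):
        lines.append(f"transfer_in '{url}' input_{i}")
    args = " ".join(f"input_{i}" for i in range(len(input_urls)))
    # Run the wrapped executable on the staged inputs.
    lines.append(f"./{descriptor['executable']} {args} -o result.out")
    # Stage out the result to its grid location.
    lines.append(f"transfer_out result.out '{output_url}'")
    return "\n".join(lines) + "\n"

script = generate_job_script(
    {"executable": "simulator"},
    ["lfn:/grid/biomed/params.mhd", "lfn:/grid/biomed/model.raw"],
    "lfn:/grid/biomed/result.out",
)
print(script)
```

The point of the bundling is that each generated script is self-contained: it carries its executable name and all transfer instructions, so it can run unchanged on any worker node.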
8. Pilot Jobs
Pull mode
Execution environment verification
Worker node reservation
[Diagram: DIRAC architecture. MOTEUR + jGASW produce user jobs; the pilot-job system submits pilots through the gLite WMS to worker nodes, which pull the user jobs.]
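The pull mode described above can be sketched as a loop: a pilot reserves a worker node, verifies the execution environment, and only then repeatedly fetches user jobs from the central queue. This is a schematic sketch, not DIRAC's actual matching API; the queue and environment check are stand-ins.

```python
from collections import deque

def run_pilot(task_queue, environment_ok):
    """Schematic pilot job: verify the worker node, then pull and run
    user jobs until the queue is empty. Stand-in for DIRAC's real logic."""
    if not environment_ok():
        return []            # unusable node: pilot exits, no user job is lost
    executed = []
    while task_queue:
        job = task_queue.popleft()   # pull the next waiting user job
        executed.append(job())       # run the payload on this reserved node
    return executed

queue = deque([lambda: "job-1 done", lambda: "job-2 done"])
results = run_pilot(queue, environment_ok=lambda: True)
```

Because a broken node fails the pilot rather than a user job, pull mode shields the workflow from many infrastructure errors.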
9. Multi-infrastructure Execution
Design constraints
No intervention from the cluster administrator
No shared or generic user accounts
Minimal technical assumptions on the cluster architecture
Setup
The agent is launched by users on their own cluster account(s)
Download a tar.gz bundle and run the setup script
Supports PBS, BQS and SLURM
Execution
Queries the WMS for the user's waiting tasks
Submits pilots to the local cluster queue
Embedded data transfer client (VLET)
http://vip.creatis.insa-lyon.fr:9002/projects/dirac-cluster-agent/wiki/Wiki
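The agent's execution cycle can be sketched as: poll the workload management system for the user's waiting tasks, then submit one pilot per task to the local batch queue under the user's own account. The batch submission commands (`qsub`, `sbatch`) are real, but `agent_cycle` and its arguments are illustrative stand-ins for the actual DIRAC cluster-agent code.

```python
# Hypothetical sketch of one cluster-agent polling cycle. The mapping
# from batch system to submission command reflects the systems named on
# the slide; the WMS query and submit callbacks are stand-ins.

SUBMIT_CMD = {"PBS": "qsub", "BQS": "qsub", "SLURM": "sbatch"}

def agent_cycle(batch_system, waiting_tasks, submit):
    """Submit one pilot per waiting task to the local batch queue."""
    cmd = SUBMIT_CMD[batch_system]
    submitted = []
    for task in waiting_tasks:
        # Each pilot is an ordinary batch job running under the user's
        # own account, so no administrator intervention is required.
        submitted.append(submit(cmd, "pilot.sh", task))
    return submitted

calls = []
agent_cycle(
    "SLURM", ["t1", "t2"],
    submit=lambda cmd, script, task: calls.append((cmd, script, task)) or task,
)
```

Submitting pilots as regular user jobs is what satisfies the design constraints above: the cluster sees only ordinary batch jobs on a personal account.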
10. Multi-infrastructure Execution
Conditions
EGI
134-node cluster limited to 67 pilots running concurrently
3 executions of a PET workflow
Results
Conclusion
Involving a small personal cluster in a simulation has a significant impact on the execution
11. Reliable Data Management
EGI three-tier data management
Challenge: data availability between 80% and 95%
Storage Elements (SE): DPM, dCache, StoRM, Castor
LCG File Catalogue (LFC): single index space
Current Data Management in VIP
Critical input files are replicated
Files are cached by pilot jobs
Output files are stored on the site SE
Job error rate: 5-10%
Local Data Manager
Failover storage
Available to users and grid jobs
Overlay of a DPM SE
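The failover policy can be sketched simply: attempt the grid SE first, and on error fall back to the reliable local data manager so the job still completes. Function names here are illustrative, not VIP's actual transfer API.

```python
def transfer_with_failover(filename, grid_upload, local_upload):
    """Try the grid SE first; on failure, store the file on the local
    failover SE. Returns where the file ended up. Illustrative names,
    not VIP's real API."""
    try:
        grid_upload(filename)
        return "grid-se"
    except OSError:
        # The grid transfer failed: keep the file on reliable local
        # storage so the job can still succeed; the file can be moved
        # back to a grid SE later.
        local_upload(filename)
        return "local-failover"

def broken_grid(_):
    raise OSError("SE unavailable")

stored = []
where = transfer_with_failover("result.out", broken_grid, stored.append)
```

The same try-then-failover pattern applies symmetrically to input downloads, with the local copy used when no grid replica is reachable.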
12. Reliable Data Management
Data Management use case
Input download
Output upload
13. Impact of the Data Manager on Job Reliability
Conditions
EGI, biomed VO (production infrastructure)
Ultrasound simulation comprising 128 jobs
Each job has 5 input files + 1 output file
Failure rate: 1%
Results
Conclusion
The job data-transfer failure rate can be significantly decreased in the presence of a failover storage
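As a back-of-envelope check only (this is a simple independence model, not the measured experiment): with 6 transfers per job (5 inputs + 1 output) and a 1% failure probability per transfer, a job fails with probability 1 - 0.99^6, while a single failover attempt squares the per-transfer failure probability.

```python
# Back-of-envelope model: independent per-transfer failures,
# 5 inputs + 1 output = 6 transfers per job, p = 1% per transfer.
p, transfers = 0.01, 6

job_failure_no_failover = 1 - (1 - p) ** transfers
# With failover, a transfer fails only if both the grid SE and the
# failover storage fail: probability p * p per transfer.
job_failure_with_failover = 1 - (1 - p * p) ** transfers

print(round(job_failure_no_failover, 4))    # ≈ 0.0585
print(round(job_failure_with_failover, 6))  # ≈ 0.0006
```

The model does not reproduce the end-to-end measurements (which include other error sources), but it illustrates why a second, reliable copy sharply reduces transfer-induced job failures.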
14. Web Portal
Workflow submission and management
Workflow execution and monitoring
15. Web Portal
Detailed performance information
16. Web Portal
Authentication based on X.509 certificates
17. Web Portal
File transfer operations
http://vip.creatis.insa-lyon.fr
19. Conclusions
VIP can execute workflows on multiple infrastructures
Extension of the jGASW application description
Extension of DIRAC to support personal clusters with no administrator intervention, while respecting common security rules
Results show that a small personal cluster can significantly contribute to a simulation running on EGI
VIP Data Manager
A first test shows that the job failure rate decreased from 7.7% to 1.5%
Try it!
Requires a certificate registered in the biomed VO
http://vip.creatis.insa-lyon.fr
20. Future Work
Workflow Execution
Scalability and Reliability
Memory shortage
Workflow checkpointing
Provenance of simulated data
21. Future Work
Simulators and Models
Biological object model sharing for image simulation
See poster FORESTIER et al. at CBMS 2011
Semantic catalog of biological models
3D visualization
22. Future Work
Simulators and Models
Multi-Modality Simulation Workflow
See poster MARION et al. at CBMS 2011
SIMRI (MRI) Sindbad (CT)
23. Multi-infrastructure workflow execution for medical simulation in the Virtual Imaging Platform
Questions?
http://www.creatis.insa-lyon.fr/vip
24. References
Fabrizio Gagliardi, Bob Jones, François Grey, Marc-Elian Bégin, and Matti Heikkurinen. Building an infrastructure for scientific grid computing: status and goals of the EGEE project. Phil. Trans. R. Soc. A, 363(1833):1729–1742, August 2005.
T. Li, S. Camarasu-Pop, T. Glatard, T. Grenier, and H. Benoit-Cattin. Optimization of mean-shift scale parameters on the EGEE grid. In Studies in Health Technology and Informatics, Proceedings of HealthGrid 2010, volume 159, pages 203–214, 2010.
A. Marion, G. Forestier, H. Benoit-Cattin, S. Camarasu-Pop, P. Clarysse, R. Ferreira da Silva, B. Gibaud, T. Glatard, P. Hugonnard, C. Lartizien, H. Liebgott, J. Tabary, S. Valette, and D. Friboulet. Multi-modality medical image simulation of biological models with the Virtual Imaging Platform (VIP). In IEEE CBMS 2011, Bristol, UK, 2011. Submitted.
Tristan Glatard, Johan Montagnat, Diane Lingrand, and Xavier Pennec. Flexible and efficient workflow deployment of data-intensive applications on grids with MOTEUR. International Journal of High Performance Computing Applications (IJHPCA), 22(3):347–360, August 2008.
Javier Rojas Balderrama, Johan Montagnat, and Diane Lingrand. jGASW: A Service-Oriented Framework Supporting High Throughput Computing and Non-functional Concerns. In IEEE International Conference on Web Services, ICWS 2010, Miami (FL), USA, July 2010. IEEE Computer Society.
A. Tsaregorodtsev, N. Brook, A. Casajus Ramo, Ph. Charpentier, J. Closier, G. Cowan, R. Graciani Diaz, E. Lanciotti, Z. Mathe, R. Nandakumar, S. Paterson, V. Romanovsky, R. Santinelli, M. Sapunov, A. C. Smith, M. Seco Miguelez, and A. Zhelezov. DIRAC3: The New Generation of the LHCb Grid Software. Journal of Physics: Conference Series, 219(6):062029, 2009.
L. Grevillot, T. Frisson, D. Maneval, N. Zahra, J. N. Badel, and D. Sarrut. Simulation of a 6 MV Elekta Precise linac photon beam using GATE/GEANT4. Phys Med Biol, 56(4), 2011.
25. References
Silvia Olabarriaga, T. Glatard, and P. T. de Boer. A virtual laboratory for medical image analysis. IEEE T Inf Technol B, 14(4):979–985, 2010.
Vladimir V. Korkhov, Jakub T. Moscicki, and Valeria V. Krzhizhanovskaya. Dynamic workload balancing of parallel applications with user-level scheduling on the grid. Future Generation Computer Systems, 25(1):28–34, 2009.
Ivo D. Dinov, John D. Van Horn, Kamen M. Lozev, Rico Magsipoc, Petros Petrosyan, Zhizhong Liu, Allan MacKenzie-Graham, Paul Eggert, Douglas S. Parker, and Arthur W. Toga. Efficient, Distributed and Interactive Neuroimaging Data Analysis Using the LONI Pipeline. Frontiers in Neuroinformatics, 3(22):1–10, 2009.
Thierry Delaitre, Tamas Kiss, Ariel Goyeneche, Gabor Terstyanszky, Stephen Winter, and Péter Kacsuk. GEMLCA: Running Legacy Code Applications as Grid Services. Journal of Grid Computing, 3(1):75–90, 2005.
Kyle Chard, Wei Tan, Joshua Boverhof, Ravi Madduri, and Ian Foster. Wrap Scientific Applications as WSRF Grid Services Using gRAVI. In International Conference on Web Services, ICWS'09, Los Angeles (CA), USA, July 2009.
Martin Senger, Peter Rice, Alan Bleasby, Tom Oinn, and Mahmut Uludag. Soaplab2: More Reliable Sesame Door to Bioinformatics Programs. In Bioinformatics Open Source Conference, BOSC'08, Toronto, ON, Canada, July 2008.
Douglas Thain, Todd Tannenbaum, and Miron Livny. Condor and the Grid, pages 299–335. John Wiley & Sons, Ltd, 2003.
Vinod Kasam, Jean Salzemann, Marli Botha, Ana Dacosta, Gianluca Degliesposti, Raul Isea, Doman Kim, Astrid Maass, Colin Kenyon, Giulio Rastelli, Martin Hofmann-Apitius, and Vincent Breton. WISDOM-II: Screening against multiple targets implicated in malaria using computational grid infrastructures. Malaria Journal, 8(1):88, 2009.