SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 1: Design of Resilient Systems
Paper 3: "Automatic Generation of Description Files for Highly Available Systems"
This document introduces PMIx, an extended Process Management Interface for exascale systems. PMIx is a collaborative open source effort led by Intel, Mellanox Technologies, and Adaptive Computing to address limitations of current PMI standards at extreme scale. Version 1.0 focuses on reducing memory footprint and communication overhead. Version 2.0 adds performance optimizations like distributed databases and collective operations. A SLURM plugin implements PMIx support and will be included in the next major SLURM release. The project timeline outlines development through 2017 to integrate with resource managers and update MPI implementations.
HPC control systems are evolving into the future. This presentation looks at where this evolution may lead, and describes how the control system of the future might be constructed.
This document discusses Process Management Interface for Exascale (PMIx). It provides an overview and objectives of PMIx, which aims to establish an independent and open community effort to develop scalable client/server libraries for job launch and management. The document discusses performance status showing improvements over PMI2, integration status in Open MPI and SLURM, and roadmap for continued development including supporting evolving application needs through flexible resource allocation and fault tolerance. It also discusses different types of malleable and adaptive jobs that PMIx aims to support.
Recent Advances in Machine Learning: Bringing a New Level of Intelligence to ... - Brocade
Presentation by Brocade Chief Scientist and Fellow, David Meyer, given at Orange Gardens July 2016. What is Machine Learning and what is all the excitement about?
An associated blog is available here: http://community.brocade.com/t5/CTO-Corner/Networking-Meets-Artificial-Intelligence-A-Glimpse-into-the-Very/ba-p/88196
The document discusses improvements to Apache NiFi and NiFi Registry for software development lifecycles. Key improvements include:
1) Parameterized flows in NiFi that allow sensitive values to be parameterized and referenced securely.
2) Version control improvements like forcing commits and tracking component enable/disable state.
3) Granular proxy permissions and public buckets in NiFi Registry for access control.
4) Versioning of extension bundles alongside flows in NiFi Registry.
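As a rough illustration of the parameter-reference idea behind point 1 (NiFi resolves `#{name}` references against a Parameter Context internally; the resolver below is a plain-Python stand-in for that mechanism, not NiFi code):

```python
import re

# Illustrative only: NiFi resolves #{name} references against a Parameter
# Context internally; this sketch mimics the reference syntax in plain Python.
PARAM_REF = re.compile(r"#\{([A-Za-z0-9._-]+)\}")

def resolve(value: str, context: dict) -> str:
    """Replace #{name} references with values from a parameter context."""
    return PARAM_REF.sub(lambda m: context[m.group(1)], value)

params = {"db.host": "db.example.com", "db.password": "s3cret"}
url = resolve("jdbc:postgresql://#{db.host}:5432/app", params)
```

Sensitive parameters are referenced the same way at the point of use; the difference lies in how NiFi stores and masks their values, not in the reference syntax.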
The document discusses TOSCA concepts and their application to service orchestration modeling in ONAP. It proposes using TOSCA to model ONAP's various APIs as components while hiding internal details from designers. Base types and workflows could abstract interactions with AAI, MultiVIM, APPC, and other ONAP components. The approach aims to allow correct orchestrations while avoiding exposing ONAP internals to designers.
A key tenet of moving NFV from a proof of concept (PoC) to deployment is testing. NFV solutions that pull from open source projects such as OPNFV, OpenStack, OpenDaylight, and others must be integrated and tested in an environment that fully supports the performance and availability requirements of service provider networks. Testing criteria and solutions are also required to ensure NFV interoperability between the hardware and software systems that comprise NFV. In this tutorial, you'll learn best practices for open source NFV testing, including: methodology; mapping to ETSI NFV use cases; open source project integration; testing dashboards; continuous integration and continuous deployment (CI/CD); and testing acceleration.
This document summarizes a presentation on developing with Apache NiFi. It discusses NiFi's REST API for programmatic access, the NiFi developer guide for building custom processors, and tips for contributing to the NiFi project through the GitHub pull request process. Key aspects of the NiFi architecture like its repositories and FlowFile lifecycle are also overviewed.
Data ingestion and distribution with Apache NiFi - Lev Brailovskiy
In this session, we will cover our experience working with Apache NiFi, an easy to use, powerful, and reliable system to process and distribute a large volume of data. The first part of the session will be an introduction to Apache NiFi. We will go over NiFi main components and building blocks and functionality.
In the second part of the session, we will show our use case for Apache NiFi and how it's being used inside our Data Processing infrastructure.
Registry is a central metadata repository that allows users to collaboratively use schema definitions for stream processing.
Stream Analytics Manager provides a framework to build streaming applications faster and more easily.
Staying Close to Experts with Executable Specifications - Vagif Abilov
The document discusses using executable specifications to capture expert knowledge for the NRK media player project. Specifications were written using Gherkin and the SpecFlow framework to describe requirements. This allowed developers to work closely with domain experts and validate requirements through automated tests. Lessons learned include starting with acceptance criteria before end-to-end testing and using specifications as a communication tool between technical teams.
Best practices and lessons learnt from Running Apache NiFi at Renault - DataWorks Summit
No real-time insight without real-time data ingestion. No real-time data ingestion without NiFi! Apache NiFi is an integrated platform for data flow management at the enterprise level, enabling companies to securely acquire, process, and analyze disparate sources of information (sensors, logs, files, etc.) in real time. NiFi helps data engineers accelerate the development of data flows thanks to its UI and a large number of powerful off-the-shelf processors. However, with great power comes great responsibility: behind the simplicity of NiFi, best practices must absolutely be respected in order to scale data flows in production and prevent sneaky situations. In this joint presentation, Hortonworks and Renault, a French car manufacturer, will present lessons learnt from real-world projects using Apache NiFi. We will present NiFi design patterns for achieving high performance and reliability at scale, as well as the processes to put in place around the technology for data flow governance. We will also show how these best practices can be implemented in practical use cases and scenarios.
Speakers
Kamelia Benchekroun, Data Lake Squad Lead, Renault Group
Abdelkrim Hadjidj, Solution Engineer, Hortonworks
Denovolab (www.denovolab.com) is an extremely high-performance SIP switching solution, suitable for call centers, wholesale termination, and carrier services.
The document discusses rapid prototyping of applications using Grails and SAP's HANA Cloud Platform (HCP). It provides an overview of HCP and Grails, then demonstrates building a simple web application for managing tech events using Grails on HCP. Key steps include generating a Grails domain class and controllers, modifying configuration for the HCP deployment, building and deploying the WAR file locally and to HCP, and accessing the application. Resources for further information on HCP, Grails, Groovy and the sample app are also listed.
Flink Forward San Francisco 2018: Dave Torok & Sameer Wadkar - "Embedding Fl..." - Flink Forward
This document discusses using Apache Flink to operationalize a streaming machine learning lifecycle. It describes Comcast's need to improve customer experiences through predictive analytics over streaming data. Flink is used to orchestrate feature engineering, model training/evaluation, and real-time predictions. Key aspects of the solution include a metadata-driven pipeline, automated deployments, consistent feature stores for training and prediction, and monitoring of multiple models. The document outlines the various components of the ML lifecycle and pipeline implemented on Flink and discusses next steps around UI/UX, continuous monitoring, and supporting multiple feature stores.
The document provides a summary of Tarun Verma's contact information, work experience as a senior technical lead, skills and qualifications including 11+ years of experience in software development and architecture, expertise in content migration systems like OpenText and SharePoint, and programming abilities in languages like C# and Java. It also lists some of his projects involving systems migration and development.
This document provides an agenda and overview of an Apache Kafka integration meetup with Mulesoft 4.3. The meetup will include introductions, an overview of Kafka basics and components, a demonstration of the Mulesoft Kafka connector, and a networking session. Kafka is introduced as a distributed publish-subscribe messaging system that provides reliability, scalability, durability and high performance. Key Kafka concepts that will be covered include topics, partitions, producers, consumers, brokers and the commit log architecture. The Mulesoft Kafka connector operations for consuming, publishing and seeking messages will also be demonstrated.
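To make the commit-log concepts above concrete, here is a minimal in-memory sketch of topics, partitions, and offsets. It is an illustration only; real producers and consumers go through a Kafka client library and a broker, and all names here are invented:

```python
# Minimal in-memory sketch of Kafka's partitioned commit log (illustration
# only; real clients use a library such as kafka-python or the Java client).
class Topic:
    def __init__(self, name, partitions=2):
        self.name = name
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key, value):
        # Key-based partitioning keeps all records for one key in order.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1   # (partition, offset)

    def consume(self, partition, offset):
        # Consumers track their own offsets; the log itself is append-only.
        return self.partitions[partition][offset:]

t = Topic("clicks")
p, off = t.produce("user-1", {"page": "/home"})
t.produce("user-1", {"page": "/cart"})
```

The two properties worth noticing are the ones the meetup covers: records for the same key land in the same partition (preserving per-key order), and consuming is just reading forward from an offset you remember yourself.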
BYOP: Custom Processor Development with Apache NiFi - DataWorks Summit
Apache NiFi, a robust, scalable, and secure tool for data flow management, ships with over 212 processors to ingest, route, manipulate, and exfiltrate data from a variety of sources and consumers. But many users turn to NiFi to meet unusual requirements — from proprietary protocol parsing, to running inside connected cars, to offloading massive hardware metrics from oil rigs in the most remote environments. Rather than posting a community request for custom development or offloading unusual demands to unnecessary external systems, there's an answer in NiFi. Learn how NiFi allows you to quickly prototype custom processors in the scripting language of your choice against live production data without affecting your existing flows. Easily translate prototypes to full-fledged processors to optimize performance and leverage the full provenance reporting infrastructure. Discover how the framework provides conventions to streamline your development and minimize common boilerplate code, and how the robust testing framework makes testing easy and, dare we say, fun.
Expected prior knowledge / intended audience: developers and data flow managers should have passing knowledge of Apache NiFi as a platform for routing, transforming, and delivering data through systems (a brief overview will be provided). The intended audience will have experience with programming in Groovy, Ruby, Jython, ECMAScript/JavaScript, or Lua.
Takeaways: Attendees will gain an understanding in writing custom processors for Apache NiFi, including the component lifecycle, unit and integration testing, quick prototyping using a scripting language of their choice, and the artifact publishing and deployment process.
Running Apache NiFi with Apache Spark: Integration Options - Timothy Spann
A walk-through of various options for integrating Apache Spark and Apache NiFi in one smooth dataflow. There are now several options for interfacing between Apache NiFi and Apache Spark using Apache Kafka and Apache Livy.
Ildikó Váncsa, Chris Price, and Carsten Rossenhövel's presentation at the 2017 Open Networking Summit.
Communications service providers (CSPs) have a wide range of options when building virtualized services from the ground up including multiple choices for each functional block in the ETSI NFV reference architecture. CSPs prefer heterogeneous systems with building blocks from different vendors including open source software; for such deployments interoperability becomes a crucial requirement.
OpenStack, as the NFVI and VIM, serves as a widely used cloud platform for telecom and NFV use cases. As a common base, OpenStack offers vendors and other open source projects the means to ease the interoperability challenge by providing a set of open APIs while focusing on upgradeability and backward compatibility.
However, when it comes to productization, interoperability testing often falls short and is sometimes left to the carrier as shown by the testing programs actively run by no fewer than 10 organizations today.
Join Carsten Rossenhövel from the European Advanced Networking Test Center (EANTC) and the rapporteur (editor) of ETSI's NFV interoperability standards, Ildikó Váncsa from the OpenStack Foundation, and Chris Price, Ericsson and OpenStack board director, to learn more about:
The ETSI NFV Release 2 interoperability testing activities - standardization and recently completed ETSI PlugTest. Over 40 commercial and open source implementations were tested for interoperability, including 20 virtual network functions, 10 management and orchestration solutions and 10 NFV platforms.
The New IP Agency (NIA) interoperability testing campaigns of commercial NFV implementations executed by EANTC, focusing on results, lessons learned and recommendations.
How vendors and open source projects are stepping up to the challenge, realizing they must work together.
How to stay up-to-date with OpenStack releases and the community.
How to get involved to ensure you are aware of the latest developments and contribute what you need to OpenStack.
What will I learn from attending this session?
CSPs, open source projects and vendors alike will learn more about the recent ETSI PlugTest and NIA-commissioned interoperability testing, their results, and how to architect full NFV solutions that will work together. Interoperability API tests and associated marks from OpenStack will be covered, as well as features to help stay current on OpenStack releases. Attendees will also hear from Ericsson about a vendor's point of view, and how other projects such as OPNFV are evolving and expanding in scope to address this challenge.
This document provides an overview and user guide for Apache NiFi. It discusses what NiFi is, its architecture and data flow, common terms, and how to operate and design data flows in NiFi. The guide explains how to debug NiFi and test data flows. It also provides tips on processor utilization, routing strategies, and using the NiFi Expression Language. Examples of existing NiFi processors and learning steps are outlined.
MiNiFi is a recently started sub-project of Apache NiFi: a complementary data collection approach that supplements the core tenets of NiFi in dataflow management, focusing on the collection of data at the source of its creation. Simply put, MiNiFi agents take the guiding principles of NiFi and push them to the edge in a purpose-built design-and-deploy manner. This talk will focus on MiNiFi's features, go over recent developments and prospective plans, and give a live demo of MiNiFi.
The config.yml is available here: https://gist.github.com/JPercivall/f337b8abdc9019cab5ff06cb7f6ff09a
NiFi processors allow data to be processed as it flows through the system. This document discusses how to create a custom NiFi processor by using the nifi-processor-bundle-archetype Maven archetype to generate the project structure. It also covers deploying the custom processor by building a NAR file with Maven and placing it in the NiFi installation directory so that the new processor becomes available. Key methods and lifecycle hooks for customizing processor behavior, such as init, onScheduled, and onTrigger, are also outlined.
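The generated processor is a Java class whose onTrigger method follows a read-transform-transfer pattern against a ProcessSession. As a language-neutral sketch of that pattern only (the names below are illustrative analogies, not the NiFi API):

```python
# Illustrative pseudo-session mirroring the shape of NiFi's onTrigger
# (the real API is Java: ProcessSession.get(), write(), transfer()).
class FlowFile:
    def __init__(self, content, attributes=None):
        self.content = content
        self.attributes = dict(attributes or {})

def on_trigger(queue, success, failure):
    """Pull one flow file, transform it, route it to a relationship."""
    if not queue:
        return                      # nothing to do this scheduling cycle
    flow_file = queue.pop(0)        # analogous to session.get()
    try:
        flow_file.content = flow_file.content.upper()   # session.write()
        flow_file.attributes["transformed"] = "true"
        success.append(flow_file)   # session.transfer(ff, REL_SUCCESS)
    except Exception:
        failure.append(flow_file)   # session.transfer(ff, REL_FAILURE)

queue, success, failure = [FlowFile("hello")], [], []
on_trigger(queue, success, failure)
```

The point of the sketch is the routing contract: every flow file pulled from the incoming queue must leave through exactly one relationship, which is what the NAR-packaged Java processor enforces through the session.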
KPN ETL Factory (KETL) - Automated Code generation using Metadata to build Da... - DataWorks Summit
Being one of the biggest and oldest telecom providers in the Netherlands, with multiple acquisitions over the last few decades, left KPN with 1500+ data sources and more than 25 teams working with different tools such as Teradata, Informatica, Oracle, OBIEE, Hadoop, etc. This resulted in a lot of technical debt, as well as duplicated data on various systems with complex data relationships, leading to data quality issues and long processing times to get meaningful insights into the business.
This created the need for a unified way of ingesting, storing, and transforming data to be consumed and processed at multiple stages, and the KETL framework was born.
Our journey so far:
Instead of developing mappings and workflows or handcrafting SQL code, business teams started writing metadata about their sources and the dependencies between them, using macro-based Excel files or a Django-based YAML file generator. These files are used by the KETL framework to generate the appropriate Hive/Spark/Informatica/Oracle/Teradata code, along with an Airflow scheduler DAG carrying the schedule and dependencies as code.
Additionally, all environments, configuration, and access rights are managed via YAML files with Ansible, enabling us to view each change as code. This made teams self-sufficient: they can build their own dev/test environment to validate their metadata and target model structure before deploying to production.
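A metadata-to-code step of this kind can be sketched in a few lines. The example below is hypothetical: the real framework uses YAML metadata and Jinja2 templates, while here a plain dict and Python's stdlib string.Template stand in for both:

```python
from string import Template

# Illustrative sketch of metadata-driven code generation in the spirit of
# KETL (names are hypothetical; the real framework renders Jinja2 templates
# from YAML metadata to emit Hive/Spark code and Airflow DAGs).
HIVE_TEMPLATE = Template(
    "CREATE TABLE ${target} AS\n"
    "SELECT ${columns}\n"
    "FROM ${source};"
)

def generate_hive(metadata: dict) -> str:
    """Render a Hive statement from one source-table metadata record."""
    return HIVE_TEMPLATE.substitute(
        target=metadata["target"],
        source=metadata["source"],
        columns=", ".join(metadata["columns"]),
    )

meta = {"source": "raw.orders", "target": "dwh.orders", "columns": ["id", "amount"]}
sql = generate_hive(meta)
```

Because the generator is deterministic, checking the metadata files into Git effectively versions the generated code as well, which is what gives the "everything as code" visibility described above.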
Benefits:
- On-boarding new sources and integrating with existing data store takes less than a week
- Everything is maintained in Git, giving full visibility of changes along with their timelines
- Minimising technical debt and allowing business teams to focus on data instead of tooling
- Easier adoption of newer tools and a path of least resistance for decommissioning the legacy stack
- KISS Architecture, easier to maintain and scale
- Reducing bureaucratic processes and design for transparency
What keeps us busy:
- Adding a Test Framework to enable users with BDD tests using the same metadata
- Adding functionality to generate complex code structures
- Using advanced CI/CD processes like Jenkins pipeline for faster deployments
- Integration with new tools and technologies, both enterprise and open source
Tools/Technologies used:
- Hortonworks HDP - HDFS, Yarn, Hive, Spark, LLAP, Tez
- User and Access Management - Ranger, Knox, Kerberos, LDAP, SSSD, Linux ACL's
- ETL & DWH Tools - Informatica, Informatica BDM, Teradata, Querygrid, Aster etc.
- Reporting - OBIEE, Tableau, Zeppelin Notebooks
- Monitoring - Grafana, Zabbix & ELK
- Scheduler - Airflow
- Orchestration - Ansible
- Code - Python, YAML, Jinja2
- CI/CD - Git, Artifactory, Jenkins
Speaker
Gerhard Messelink, Teacher, KPN
Netflix uses Conductor, an open source microservices orchestrator, to manage complex content processing workflows involving ingestion, encoding, localization, and delivery. Conductor provides visibility, control, and reuse of tasks through a task queuing system and workflow definitions. It has scaled to process millions of workflow executions across Netflix's content platform using a stateless architecture with Dynomite for storage and Dyno-Queues for task distribution.
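To illustrate the definition-driven orchestration idea (hypothetical names throughout; real Conductor uses JSON workflow definitions, a task queuing system, and distributed workers rather than in-process function calls):

```python
# Illustrative sketch of definition-driven orchestration in the spirit of
# Conductor: a workflow definition names tasks, and an executor runs them
# in order, threading a context through. All names here are invented.
TASKS = {
    "ingest":   lambda ctx: {**ctx, "ingested": True},
    "encode":   lambda ctx: {**ctx, "codec": "h264"},
    "localize": lambda ctx: {**ctx, "langs": ["en", "fr"]},
}

WORKFLOW = {"name": "content_pipeline", "tasks": ["ingest", "encode", "localize"]}

def run(workflow, ctx):
    """Execute each named task in definition order against the context."""
    for task_name in workflow["tasks"]:
        ctx = TASKS[task_name](ctx)
    return ctx

result = run(WORKFLOW, {"title": "Example Show"})
```

Separating the definition (data) from the workers (code) is what gives Conductor its visibility and reuse: the same task can appear in many workflows, and the orchestrator can report exactly which step each execution has reached.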
The document discusses migrating from WebSphere Application Server v5.1 to v6.1. It reviews the architecture changes between the versions, the migration roadmap, and changes to topology such as profiles and nodes. It also discusses migrating settings, components, and APIs, and new concepts in v6.1 such as integrated third-party vendors and feature packs.
Kubernetes Connectivity to Cloud Native Kafka | Christina Lin and Evan Shorti... - HostedbyConfluent
If you want to build an ecosystem of streaming data to your Kafka platform, you will need a much easier way for your developers to quickly move what's on the source to your cluster. Better yet, make the connector serverless so it does not waste any resources while idle, and have a trusted partner manage your Kafka infrastructure for you.
In this session, we will show you how easy we have made streaming data, with a great user experience and flexible resource management, using our new secret weapon in the Apache Camel project: Kamelet. We'll also demonstrate how Red Hat OpenShift Streams for Apache Kafka simplifies the provisioning of Kafka deployments in a public cloud, managing the cluster and topics, and configuring secure access to the Kafka cluster for your developers.
A Software Factory Integrating Rational & WebSphere Tools - ghodgkinson
The document discusses how a large automotive retailer integrated Rational Software Architect, WebSphere Message Broker, and Rational Team Concert into a software factory to develop an integration layer between a new point of sale system and SAP backend. Key challenges included a multi-vendor global team and parallel development of UI, integration, and backend layers. The software factory employed model-driven development, continuous integration, and practices like architectural modeling in UML, automated WSDL generation, tracking work items and impediments, and collaborative configuration management to help coordinate distributed development and integrate results.
Data ingestion and distribution with apache NiFiLev Brailovskiy
In this session, we will cover our experience working with Apache NiFi, an easy to use, powerful, and reliable system to process and distribute a large volume of data. The first part of the session will be an introduction to Apache NiFi. We will go over NiFi main components and building blocks and functionality.
In the second part of the session, we will show our use case for Apache NiFi and how it's being used inside our Data Processing infrastructure.
Registry is a central metadata repository that allows users to collaboratively use Schema definitions for stream processing.
Stream Analytics Manager, provides a framework to build Streaming applications faster, easier.
Staying Close to Experts with Executable SpecificationsVagif Abilov
The document discusses using executable specifications to capture expert knowledge for the NRK media player project. Specifications were written using Gherkin and the SpecFlow framework to describe requirements. This allowed developers to work closely with domain experts and validate requirements through automated tests. Lessons learned include starting with acceptance criteria before end-to-end testing and using specifications as a communication tool between technical teams.
Best practices and lessons learnt from Running Apache NiFi at RenaultDataWorks Summit
No real-time insight without real-time data ingestion. No real-time data ingestion without NiFi ! Apache NiFi is an integrated platform for data flow management at entreprise level, enabling companies to securely acquire, process and analyze disparate sources of information (sensors, logs, files, etc) in real-time. NiFi helps data engineers accelerate the development of data flows thanks to its UI and a large number of powerful off-the-shelf processors. However, with great power comes great responsibilities. Behind the simplicity of NiFi, best practices must absolutely be respected in order to scale data flows in production & prevent sneaky situations. In this joint presentation, Hortonworks and Renault, a French car manufacturer, will present lessons learnt from real world projects using Apache NiFi. We will present NiFi design patterns to achieve high level performance and reliability at scale as well as the process to put in place around the technology for data flow governance. We will also show how these best practices can be implemented in practical use cases and scenarios.
Speakers
Kamelia Benchekroun, Data Lake Squad Lead, Renault Group
Abdelkrim Hadjidj, Solution Engineer, Hortonworks
Denovolab ( www.denovolab.com ) is a SIP switching solution that is extremely high performance. Suitable for call center, wholesales termination, carrier services.
The document discusses rapid prototyping of applications using Grails and SAP's HANA Cloud Platform (HCP). It provides an overview of HCP and Grails, then demonstrates building a simple web application for managing tech events using Grails on HCP. Key steps include generating a Grails domain class and controllers, modifying configuration for the HCP deployment, building and deploying the WAR file locally and to HCP, and accessing the application. Resources for further information on HCP, Grails, Groovy and the sample app are also listed.
Flink Forward San Francisco 2018: Dave Torok & Sameer Wadkar - "Embedding Fl...Flink Forward
This document discusses using Apache Flink to operationalize a streaming machine learning lifecycle. It describes Comcast's need to improve customer experiences through predictive analytics over streaming data. Flink is used to orchestrate feature engineering, model training/evaluation, and real-time predictions. Key aspects of the solution include a metadata-driven pipeline, automated deployments, consistent feature stores for training and prediction, and monitoring of multiple models. The document outlines the various components of the ML lifecycle and pipeline implemented on Flink and discusses next steps around UI/UX, continuous monitoring, and supporting multiple feature stores.
The document provides a summary of Tarun Verma's contact information, work experience as a senior technical lead, skills and qualifications including 11+ years of experience in software development and architecture, expertise in content migration systems like OpenText and SharePoint, and programming abilities in languages like C# and Java. It also lists some of his projects involving systems migration and development.
This document provides an agenda and overview of an Apache Kafka integration meetup with Mulesoft 4.3. The meetup will include introductions, an overview of Kafka basics and components, a demonstration of the Mulesoft Kafka connector, and a networking session. Kafka is introduced as a distributed publish-subscribe messaging system that provides reliability, scalability, durability and high performance. Key Kafka concepts that will be covered include topics, partitions, producers, consumers, brokers and the commit log architecture. The Mulesoft Kafka connector operations for consuming, publishing and seeking messages will also be demonstrated.
BYOP: Custom Processor Development with Apache NiFiDataWorks Summit
Apache NiFi, a robust, scalable, and secure tool for data flow management, ships with over 212 processors to ingest, route, manipulate, and exfil data from a variety of sources and consumers. But many users turn to NiFi to meet unusual requirements — from proprietary protocol parsing, to running inside connected cars, to offloading massive hardware metrics from oil rigs in the most remote environments. Rather than posting a community request for custom development or offloading unusual demands to unnecessary external systems, there’s an answer in NiFi. Learn how NiFi allows you to quickly prototype custom processors in the scripting language of your choice against live production data without affecting your existing flows. Easily translate prototypes to full-fledged processors to optimize performance and leverage the full provenance reporting infrastructure. Discover how the framework provides conventions to streamline your development and minimize common boilerplate code, and the robust testing framework to make testing easy, and dare we say, fun.
Expected prior knowledge / intended audience: developers and data flow managers should have passing knowledge of Apache NiFi as a platform for routing, transforming, and delivering data through systems (a brief overview will be provided). The intended audience will have experience with programming in Groovy, Ruby, Jython, ECMAScript/Javascript, or Lua.
Takeaways: Attendees will gain an understanding in writing custom processors for Apache NiFi, including the component lifecycle, unit and integration testing, quick prototyping using a scripting language of their choice, and the artifact publishing and deployment process.
Running Apache NiFi with Apache Spark : Integration OptionsTimothy Spann
A walk-through of various options in integration Apache Spark and Apache NiFi in one smooth dataflow. There are now several options in interfacing between Apache NiFi and Apache Spark with Apache Kafka and Apache Livy.
Ildikó Váncsa, Chris Price, and Carsten Rossenhövel's presentation at the 2017 Open Networking Summit.
Communications service providers (CSPs) have a wide range of options when building virtualized services from the ground up including multiple choices for each functional block in the ETSI NFV reference architecture. CSPs prefer heterogeneous systems with building blocks from different vendors including open source software; for such deployments interoperability becomes a crucial requirement.
OpenStack, as the NFVI and VIM, serves as a widely used cloud platform for telecom and NFV use cases. As a common base, OpenStack offers the means for vendors and other open source projects to ease the interoperability challenge by providing a set of open APIs while focusing on upgradeability and backward compatibility.
However, when it comes to productization, interoperability testing often falls short and is sometimes left to the carrier as shown by the testing programs actively run by no fewer than 10 organizations today.
Join Carsten Rossenhövel from the European Advanced Networking Test Center (EANTC) and the rapporteur (editor) of ETSI’s NFV interoperability standards, Ildikó Váncsa from the OpenStack Foundation, and Chris Price of Ericsson, an OpenStack board director, to learn more about
The ETSI NFV Release 2 interoperability testing activities - standardization and recently completed ETSI PlugTest. Over 40 commercial and open source implementations were tested for interoperability, including 20 virtual network functions, 10 management and orchestration solutions and 10 NFV platforms.
The New IP Agency (NIA) interoperability testing campaigns of commercial NFV implementations executed by EANTC, focusing on results, lessons learned and recommendations.
How vendors and open source projects are stepping up to the challenge, realizing they must work together.
How to stay up-to-date with OpenStack releases and the community.
How to get involved to ensure you are aware of the latest developments and contribute what you need to OpenStack.
What will I learn from attending this session?
CSPs, open source projects and vendors alike will learn more about the recent ETSI PlugTest and NIA-commissioned interoperability testing, their results and how to architect full NFV solutions that will work together. Interoperability API tests and associated marks from OpenStack will be covered, as well as features to help stay current on OpenStack releases. Attendees will also hear from Ericsson about a vendor’s point of view, and how other projects such as OPNFV are evolving and expanding in scope to address this challenge.
This document provides an overview and user guide for Apache NiFi. It discusses what NiFi is, its architecture and data flow, common terms, and how to operate and design data flows in NiFi. The guide explains how to debug NiFi and test data flows. It also provides tips on processor utilization, routing strategies, and using the NiFi Expression Language. Examples of existing NiFi processors and learning steps are outlined.
MiNiFi is a recently started sub-project of Apache NiFi. It is a complementary data collection approach that supplements the core tenets of NiFi in dataflow management, focusing on the collection of data at the source of its creation. Simply put, MiNiFi agents take the guiding principles of NiFi and push them to the edge in a purpose-built design-and-deploy manner. This talk will focus on MiNiFi's features, go over recent developments and prospective plans, and give a live demo of MiNiFi.
The config.yml is available here: https://gist.github.com/JPercivall/f337b8abdc9019cab5ff06cb7f6ff09a
NiFi processors allow data to be processed as it flows through the system. This document discusses how to create a custom NiFi processor by using the nifi-processor-bundle-archetype Maven archetype to generate the project structure. It also covers deploying the custom processor by building a NAR file with Maven and placing it in the NiFi installation directory so that the new processor will be available. Key methods for customizing processor behavior like init, onSchedule, and onTrigger are also outlined.
KPN ETL Factory (KETL) - Automated Code generation using Metadata to build Da... (DataWorks Summit)
Being one of the biggest and oldest telecom providers in the Netherlands, with multiple acquisitions over the last few decades, KPN was left with 1500+ data sources and more than 25 teams working with different tools like Teradata, Informatica, Oracle, OBIEE, and Hadoop. This resulted in a lot of technical debt and duplicated data across various systems with complex data relationships, leading in turn to data quality issues and long processing times before the business could get meaningful insights.
This created the need for a unified way of ingesting, storing, and transforming data to be consumed and processed at multiple stages, out of which the KETL framework was born.
Our journey so far:
Instead of developing mappings and workflows or handcrafting SQL code, business teams started writing metadata about their sources and the dependencies between them, using macro-based Excel files or a Django-based YAML file generator. These files are used by the KETL framework to generate the appropriate Hive/Spark/Informatica/Oracle/Teradata code, along with an Airflow scheduler DAG whose schedule and dependencies are expressed as code.
Additionally, all environments, configuration, and access rights are managed via YAML files with Ansible, enabling us to view each change as code. This made teams self-sufficient: they can build their own dev/test environment to validate their metadata and target model structure before deploying to production.
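As a sketch of the idea, a metadata file of this kind might look as follows. The field names are invented for illustration; the actual KETL schema is not described in this abstract.

```yaml
# Hypothetical KETL-style source metadata (field names invented for illustration).
source:
  name: crm_customers
  system: oracle                # one of the many source systems
  schedule: "0 2 * * *"         # rendered into the generated Airflow DAG
  depends_on:
    - crm_accounts              # becomes a dependency edge in the DAG
  target:
    store: hive
    table: staging.crm_customers
```

From such a file the framework can render the target code and the Airflow DAG via templates (KPN lists Jinja2 among its tools), so the metadata stays the single source of truth.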
Benefits:
- On-boarding new sources and integrating with existing data store takes less than a week
- Everything is maintained in Git, giving full visibility of changes along with their timelines
- Minimising technical debt and allowing business teams to focus on data instead of tooling
- Easier adoption of newer tools and a path of least resistance for decommissioning the legacy stack
- KISS Architecture, easier to maintain and scale
- Reducing bureaucratic processes and designing for transparency
What keeps us busy:
- Adding a Test Framework to enable users with BDD tests using the same metadata
- Adding functionality to generate complex code structures
- Using advanced CI/CD processes like Jenkins pipeline for faster deployments
- Integration with new tools and technologies, both enterprise and open source
Tools/Technologies used:
- Hortonworks HDP - HDFS, Yarn, Hive, Spark, LLAP, Tez
- User and Access Management - Ranger, Knox, Kerberos, LDAP, SSSD, Linux ACLs
- ETL & DWH Tools - Informatica, Informatica BDM, Teradata, Querygrid, Aster etc.
- Reporting - OBIEE, Tableau, Zeppelin Notebooks
- Monitoring - Grafana, Zabbix & ELK
- Scheduler - Airflow
- Orchestration - Ansible
- Code - Python, YAML, Jinja2
- CI/CD - Git, Artifactory, Jenkins
Speaker
Gerhard Messelink, Teacher, KPN
Netflix uses Conductor, an open source microservices orchestrator, to manage complex content processing workflows involving ingestion, encoding, localization, and delivery. Conductor provides visibility, control, and reuse of tasks through a task queuing system and workflow definitions. It has scaled to process millions of workflow executions across Netflix's content platform using a stateless architecture with Dynomite for storage and Dyno-Queues for task distribution.
The document discusses migrating from WebSphere Application Server v5.1 to v6.1. It covers the architecture changes between the versions, the migration roadmap, and changes to topology such as profiles and nodes. It also discusses migrating settings, components, and APIs, and new concepts in v6.1 like integrated third-party vendors and feature packs.
Kubernetes Connectivity to Cloud Native Kafka | Christina Lin and Evan Shorti... (HostedbyConfluent)
If you want to build an ecosystem of streaming data on your Kafka platform, you will need a much easier way for your developers to quickly move what’s on the source to your cluster. Better yet, make the connector serverless so it does not waste resources while idle, and have a trusted partner manage your Kafka infrastructure for you.
In this session, we will show you how easy we have made streaming data, with a great user experience and flexible resource management, using our new secret weapon in the Apache Camel project: the Kamelet. We’ll also demonstrate how Red Hat OpenShift Streams for Apache Kafka simplifies provisioning Kafka deployments in a public cloud, managing the cluster and topics, and configuring secure access to the Kafka cluster for your developers.
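To illustrate how a Kamelet removes connector boilerplate, a Camel K `KameletBinding` that streams data from a source Kamelet into a Kafka topic looks roughly like this (names and properties are placeholders; consult the Camel Kamelet catalog for the exact configuration of each Kamelet):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: s3-to-kafka              # placeholder name
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source        # a source Kamelet from the catalog
    properties:
      bucketNameOrArn: my-bucket # placeholder property
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta2
      name: my-topic             # topic on the managed Kafka cluster
```

The binding is declarative, so the platform can scale the connector to zero when idle instead of keeping a dedicated Connect worker running.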
A Software Factory Integrating Rational & WebSphere Tools (ghodgkinson)
The document discusses how a large automotive retailer integrated Rational Software Architect, WebSphere Message Broker, and Rational Team Concert into a software factory to develop an integration layer between a new point of sale system and SAP backend. Key challenges included a multi-vendor global team and parallel development of UI, integration, and backend layers. The software factory employed model-driven development, continuous integration, and practices like architectural modeling in UML, automated WSDL generation, tracking work items and impediments, and collaborative configuration management to help coordinate distributed development and integrate results.
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A... (confluent)
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Pivotal Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value.
Apache Kafka® is providing developers a critically important component as they build and modernize applications to cloud-native architecture.
This talk will explore:
• Why cloud-native platforms and why run Apache Kafka on Kubernetes?
• What kind of workloads are best suited for this combination?
• Tips to determine the path forward for legacy monoliths in your application portfolio
• Demo: Running Apache Kafka as a Streaming Platform on Kubernetes
Presenting the newest version of Cloudify, 4.6, including an orchestrated SD-WAN demo from MEF18 where Cloudify is used as the orchestration platform for uCPE based on containers.
Morphis provides the most comprehensive solutions for cloud-enabling legacy systems. In partnership with leading global enterprises, software vendors and system integrators, Morphis upgrades outdated systems to increase IT innovation, unleashing the agility of the cloud for the customers, partners, suppliers, regulators and employees connected to every enterprise.
Radisys, along with Orange and Strategy Analytics presented this webinar entitled: Radisys Makes ONAP Real for High Performance Services. The presenter team, Sue Rudd of SA, Al Balasco and Adnan Saleem of Radisys and Morgan Richomme of Orange covered topics such as: NFV and ONAP, Media Server 'readiness', Tier 1 challenges and finish up with some real-world use cases. For more on ONAP and how Radisys can get you ready, please contact us at: sales@radisys.com
This document summarizes new features in .NET Framework 4.5, including improvements to WeakReferences, streams, ReadOnlyDictionary, compression, and large objects. It describes enhancements to server GC, asynchronous programming, the Task Parallel Library, ASP.NET, Entity Framework, WCF, WPF, and more. The .NET 4.5 update focuses on performance improvements, support for asynchronous code and parallel operations, and enabling modern app development patterns.
Beyond the brokers: a tour of the Kafka environment | Florent Ramière (confluent)
During the Confluent Streaming event in Paris, Florent Ramière, Technical Account Manager at Confluent, goes beyond brokers, introducing a whole new ecosystem with Kafka Streams, KSQL, Kafka Connect, Rest proxy, Schema Registry, MirrorMaker, etc.
MMS2012 - HP VirtualSystem: The Ideal Foundation for a Microsoft Private Cloud (Harold Sriver)
- The document discusses HP's VirtualSystem solutions for Microsoft virtualization and private cloud environments, including the VS1, VS2, and VS3 solutions.
- It provides an overview of the hardware and software components that make up each VirtualSystem solution, including servers, storage, networking, management tools, and Microsoft software.
- HP's VirtualSystem solutions are presented as integrated virtualization platforms optimized for Microsoft environments that can help businesses deploy private clouds.
ONAP - Open Network Automation Platform (Atul Pandey)
The document provides an overview of the Open Network Automation Platform (ONAP), which is an open source platform for automating virtual network functions (VNFs). ONAP was derived from AT&T's ECOMP platform and can design, create, orchestrate, monitor, and manage the lifecycles of VNFs, SDNs containing VNFs, and higher-level services combining these components. It also discusses network function virtualization (NFV) basics, global traffic trends, proprietary equipment issues, declining revenues, and the call for more agile and flexible software-based networks. Finally, it summarizes ONAP's architecture, including its design-time and run-time frameworks, and provides a use case.
Confluent Partner Tech Talk with Synthesis (confluent)
A discussion on the arduous planning process, and deep dive into the design/architectural decisions.
Learn more about the networking, RBAC strategies, the automation, and the deployment plan.
Marcelo Perazolo, Lead Software Architect, IBM Corporation - Monitoring a Pow... (Nagios)
In this session, Marcelo will describe how Nagios can be integrated and extended for the monitoring of a typical Power-based converged infrastructure, and how it interfaces with existing element managers to provide a single point of integration for passive and active monitoring purposes.
Building high performance microservices in finance with Apache Thrift (RX-M Enterprises LLC)
Apache Roadshow Chicago Talk on May 14, 2019
In this talk we’ll look at the ways Apache Thrift can solve performance problems commonly facing next generation applications deployed in performance sensitive capital markets and banking environments. The talk will include practical examples illustrating the construction, performance and resource utilization benefits of Apache Thrift. Apache Thrift is a high-performance cross platform RPC and serialization framework designed to make it possible for organizations to specify interfaces and application wide data structures suitable for serialization and transport over a wide variety of schemes. Due to the unparalleled set of languages supported by Apache Thrift, these interfaces and structs have similar interoperability to REST type services with an order of magnitude improvement in performance. Apache Thrift services are also a perfect fit for container technology, using considerably fewer resources than traditional application server style deployments. Decomposing applications into microservices, packaging them into containers and orchestrating them on systems like Kubernetes can bring great value to an organization; however, it can also take a very fast monolithic application and turn it into a high latency web of slow, resource hungry services. Apache Thrift is a perfect solution to the performance and resource ills of many microservice based endeavors.
HPC Web overview - Mobyle Workshop - September 28, 2012 (Hervé Ménager)
The document provides an overview of the HPC Web project developed by the National Institute of Allergy and Infectious Diseases Bioinformatics and Computational Biosciences Branch. HPC Web is a web interface that allows non-command line users to access high performance computing resources. The document discusses the goals of the project, an overview of the design using the Mobyle framework, video demonstrations of the interface, and upcoming next steps including continued application development and integration with other frameworks.
Software Factories in the Real World: How an IBM WebSphere Integration Factor... (ghodgkinson)
This document discusses how an automotive retailer set up an efficient software factory using IBM tools like Rational Software Architect and WebSphere Message Broker to integrate a new point of sale system with their SAP backend. The software factory employed techniques like model-driven development and continuous integration to help scale development and keep customers satisfied. Key practices that helped succeed included tighter architectural control using Rational Software Architect models and service definitions, and keeping the distributed team coordinated using Rational Team Concert for planning, source control, and tracking progress across locations. The integrated approach and tools helped the retailer successfully complete the large integration project.
Modernizing Testing as Apps Re-Architect (DevOps.com)
Applications are moving to cloud and containers to boost reliability and speed delivery to production. However, if we use the same old approaches to testing, we'll fail to achieve the benefits of cloud. But what do we really need to change? We know we need to automate tests, but how do we keep our automation assets from becoming obsolete? Automatically provisioning test environments seems close, but some parts of our applications are hard to move to cloud.
[QCon London 2020] The Future of Cloud Native API Gateways - Richard Li (Ambassador Labs)
The introduction of microservices, Kubernetes, and cloud technology has provided many benefits for developers. However, the age-old problem of getting user traffic routed correctly to the API of your backend applications can still be an issue, and may be complicated with the adoption of cloud native approaches: applications are now composed of multiple (micro)services that are built and released by independent teams; the underlying infrastructure is dynamically changing; services support multiple protocols, from HTTP/JSON to WebSockets and gRPC, and more; and many API endpoints require custom configuration of cross-cutting concerns, such as authn/z, rate limiting, and retry policies.
A cloud native API gateway is on the critical path of all requests, and also on the critical path for the workflow of any developer that is releasing functionality. Join this session to learn about the underlying technology and the required changes in engineering workflows. Key takeaways will include:
A brief overview of the evolution of API gateways over the past ten years, and how the original problems being solved have shifted in relation to cloud native technologies and workflow
Two important challenges when using an API gateway within Kubernetes: scaling the developer workflow; and supporting multiple architecture styles and protocols
Strategies for exposing Kubernetes services and APIs at the edge of your system
Insight into the (potential) future of cloud native API gateways
https://qconlondon.com/london2020/presentation/future-cloud-native-api-gateways
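For the strategy of exposing Kubernetes services and APIs at the edge, a declarative routing resource is the usual mechanism in cloud native gateways. A minimal sketch in the style of Ambassador's `Mapping` resource is shown below; the names, prefix, and service are placeholders, and the exact fields depend on the gateway and API version in use.

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: backend-mapping          # placeholder name
spec:
  prefix: /backend/              # edge route exposed to external traffic
  service: backend-service:8080  # in-cluster Kubernetes service and port
```

Because the route lives next to the service it exposes, each team can ship its own routing configuration, which is the "scaling the developer workflow" challenge the talk highlights.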
ACM NOSSDAV'21 - ES-HAS: An Edge- and SDN-Assisted Framework for HTTP Adaptive ... (Reza Farahani)
The document presents ES-HAS, an edge- and SDN-assisted framework for HTTP adaptive video streaming. ES-HAS leverages SDN and NFV paradigms to provide network assistance for video streaming. It introduces virtual reverse proxy servers at the network edge that employ a novel server/segment selection policy. An evaluation on a large-scale cloud testbed with 60 clients shows that ES-HAS outperforms state-of-the-art approaches in terms of playback bitrate and number of stalls by at least 70% and 40% respectively. Future work directions include extending edge caching and collaboration as well as improving the proposed optimization model.
.NET Cloud-Native Bootcamp - Los Angeles (VMware Tanzu)
This document outlines an agenda for a .NET cloud-native bootcamp. The bootcamp will introduce practices, platforms and tools for building modern .NET applications, including microservices, Cloud Foundry, and cloud-native .NET technologies and patterns. The agenda includes sessions on microservices, Cloud Foundry, hands-on exercises, and a wrap up. Break times are scheduled between sessions.
This document provides an agenda and overview for an event-driven architecture workshop. The workshop will cover topics including event-driven architecture, change data capture, cloud-native integration, how cloud architectures have evolved, modern application elements like APIs, events and data, container-based application development, and integration patterns. Hands-on labs will allow attendees to work with technologies like Apache Kafka and Red Hat AMQ Streams.
Similar to SERENE 2014 Workshop: Paper "Automatic Generation of Description Files for Highly Available Systems" (20)
Hot Stand-By Disaster Recovery Solutions for Ensuring the Resilience of Railw... (SERENEWorkshop)
Specifications of modern railway control systems often include resilience requirements in order to quickly and safely recover from disasters (e.g. system-level failures). To that aim, spatial redundancy is required, with main and backup systems installed in fully isolated buildings, together with very short switchover times from main to backup systems in case of disasters. In order to fulfil those requirements, Ansaldo STS has developed a system-level hot stand-by solution that allows quick and smooth switching from the main system to the back-up one, ensuring the necessary continuity of service and transparency to train supervisors and other operators. The functional architecture of this solution keeps aligned the safety-critical nuclei, typically based on N-modular redundancy (i.e. ‘KooM’ voting), of the main and back-up systems. Such coherent alignment must be kept in terms of both interfaced field devices (e.g. interlocking signals, track circuits, switch points, etc.) on the ‘bottom’ level, and control room Human Machine Interfaces (HMI) on the ‘top’ level. The solution is based on heterogeneous and redundant network links (copper/fiber Ethernet/HyperRing) at different levels of the system architecture. In this talk, the reference architecture and the fault-tolerance functionalities for disaster recovery are presented, considering the requirements of real railway and mass-transit installations.
Considering Execution Environment Resilience: A White-Box Approach (SERENEWorkshop)
The document discusses an approach called semi-purification for automatically generating unit test cases from source code. Semi-purification replaces dependencies like global variables and database calls in the source code with function parameters. This allows existing automated test case generation tools to be used by treating the semi-purified code as if it were pure. Challenges discussed include handling shared subroutines, loops, and concurrency. The goal is to increase test coverage for complex, distributed systems with frequent changes like those used at CERN.
Serene 2015
Davide Scaramuzza
Abstract: With drones becoming more and more popular, safety is a big concern. A critical situation occurs when a drone temporarily loses its GPS position information, which might lead it to crash. This can happen, for instance, when flying close to buildings where the GPS signal is lost. In such situations, it is desirable that the drone can rely on fall-back systems and regain stable flight as soon as possible. In this talk, I will present novel methods to automatically recover and stabilize a quadrotor from any initial condition or execute an emergency landing. On the one hand, this new technology will allow quadrotors to be launched by simply tossing them in the air, like a baseball. On the other hand, it will allow them to recover back into stable flight or land on a safe area after a system failure. Since this technology does not rely on any external infrastructure, such as GPS, it enables the safe use of drones in both indoor and outdoor environments. Thus, it can become relevant for commercial use of drones, such as parcel delivery.
Recent videos:
Automatic failure recovery without GPS: https://youtu.be/pGU1s6Y55JI
Autonomous Landing-site detection and landing: https://youtu.be/phaBKFwfcJ4
Engineering Cross-Layer Fault Tolerance in Many-Core Systems (SERENEWorkshop)
1) The document discusses engineering cross-layer fault tolerance in many-core systems. It proposes a cross-layer approach where fault tolerance is distributed across system layers rather than handled solely within a single layer.
2) A motivating example of cross-layer fault tolerance is discussed using TCP/IP, where errors can be detected and recovered across multiple layers for improved efficiency.
3) The challenges of ensuring cross-layer fault tolerance for many-core systems containing tens to thousands of cores are discussed to improve reliability, performance and energy efficiency.
4) The plan is to implement a case study of a car number plate recognition application to gain experience with cross-layer fault tolerance, and to apply order graphs to model performance.
This document presents research on assessing risk to determine the optimal level of redundancy needed when moving critical applications to the cloud. It develops fault tree models based on the physical structure of clouds to calculate failure frequency, factoring in varying resource quality and the costs of downtime and VMs. The results show that deploying between 4 and 6 redundant VMs provides significant availability gains and reduces total costs by lowering risk compared to basic redundancy approaches. This meets the aim of leveraging cloud features in the modeling to support high-value, mission-critical applications on public clouds.
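The cost trade-off behind this result can be sketched with a toy model. The parameters below are illustrative only, not the paper's data: assuming independent replica failures, the availability of n redundant VMs is 1 - (1 - a)^n, and the expected total cost balances VM spend against the downtime penalty.

```python
# Toy availability/cost model with independent VM failures.
# All parameter values are illustrative, not taken from the paper.

def system_availability(a: float, n: int) -> float:
    """P(at least one of n independent replicas with availability a is up)."""
    return 1.0 - (1.0 - a) ** n

def total_cost(a: float, n: int, vm_cost: float, downtime_cost: float) -> float:
    """Expected total cost: VM spend plus expected downtime penalty."""
    return n * vm_cost + (1.0 - system_availability(a, n)) * downtime_cost

# With 95%-available VMs and downtime far costlier than an extra VM,
# the optimum lands in the small-n range rather than at n = 1.
best_n = min(range(1, 11), key=lambda n: total_cost(0.95, n, 10.0, 1e8))
```

Each extra VM multiplies the residual failure probability by (1 - a), so risk falls geometrically while VM cost grows only linearly, which is why a modest amount of redundancy dominates both under- and over-provisioning.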
Biological Immunity and Software Resilience: Two Faces of the Same Coin? (SERENEWorkshop)
The document discusses the similarities between biological immunity and software resilience. It proposes that biological systems are resilient, with the immune system being a prime example due to its ability to adapt, make decisions through distributed agents, and defend the body through learning. An actor-based model is presented as a way to engineer resilience into software by drawing inspiration from immune system principles like replication, containment, and delegation. A bio-inspired architecture is described that uses supervisor actors to detect changes and spawn helper/killer actors to address issues while maintaining system function. Future work areas are identified like automatic failure recognition, dynamic learning, and multi-layer management of failures.
This document summarizes a presentation on system-level concurrent error detection. It discusses specifying reliability constraints in system specifications, design methodologies that provide error detection capabilities through redundancy, and a two-level hardware/software partitioning approach that first considers traditional costs and then analyzes reliability constraints. The goal is to adopt design for reliability approaches earlier in the system design process to significantly impact costs like timing, energy and area.
SERENE 2014 School: Measurement-Driven Resilience Design of Cloud-Based Cyber... (SERENEWorkshop)
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Measurement-Driven Resilience Design of Cloud-Based Cyber-Physical Systems, by Imre Kocsis
SERENE 2014 School: Resilience in Cyber-Physical Systems: Challenges and Oppo... (SERENEWorkshop)
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Resilience in Cyber-Physical Systems: Challenges and Opportunities, by Gabor Karsai
SERENE 2014 Workshop: Panel on "Views on Runtime Resilience Assessment of Dyn... (SERENEWorkshop)
The document summarizes a panel discussion on views of runtime resilience assessment of dynamic software systems held at SERENE 2014 in Budapest, Hungary. The panelists represented different domains related to resilience assessment, software engineering, dynamic systems design, and dependable computing. They discussed key challenges around metrics for characterizing resilience, defining dynamic workloads and changeloads, monitoring unbounded and dynamic systems, maintaining accurate runtime models, and standardizing resilience assessment techniques. The panelists emphasized the need for predictive monitoring and adaptation, rather than just detection, to ensure resilience in increasingly complex and evolving software systems.
SERENE 2014 Workshop: Paper "Combined Error Propagation Analysis and Runtime ... (SERENEWorkshop)
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 4: Monitoring
Paper 3: Combined Error Propagation Analysis and Runtime Event Detection in Process-driven Systems
SERENE 2014 Workshop: Paper "Simulation Testing and Model Checking: A Case St... (SERENEWorkshop)
Session 3: Verification and Validation
Paper 2: Simulation Testing and Model Checking: A Case Study Comparing these Approaches
SERENE 2014 Workshop: Paper "Adaptive Domain-Specific Service Monitoring" (SERENEWorkshop)
- An adaptive service monitoring approach that considers domain-specific errors, such as codec errors for streamed media, in addition to generic errors.
- The approach adapts the monitoring frequency for a particular service and error type based on the historical error rate to reduce monitoring costs.
- An evaluation using real-world data from Smart TV services found that the adaptive approach reduced monitoring costs by 30% with negligible impact on error detection quality.
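The rate-adaptive idea above can be sketched as follows. The linear policy, the smoothing scheme, and all names are my own illustration, not the paper's actual algorithm.

```python
# Illustrative sketch of rate-adaptive monitoring: poll error-prone services
# often, quiet ones rarely. The paper's real adaptation policy may differ.

class ErrorRateTracker:
    """Exponentially weighted historical error rate per (service, error type)."""

    def __init__(self, alpha: float = 0.5) -> None:
        self.alpha = alpha   # weight given to the newest observation
        self.rate = 0.0      # smoothed error rate in [0, 1]

    def observe(self, failed: bool) -> float:
        sample = 1.0 if failed else 0.0
        self.rate = self.alpha * sample + (1.0 - self.alpha) * self.rate
        return self.rate

def adaptive_interval(error_rate: float,
                      min_interval: float = 5.0,
                      max_interval: float = 600.0) -> float:
    """Map the historical error rate to a polling interval in seconds."""
    error_rate = min(1.0, max(0.0, error_rate))
    return min_interval + (max_interval - min_interval) * (1.0 - error_rate)
```

A healthy service drifts toward the ceiling interval and is rarely polled, while a service whose domain-specific errors (e.g. codec errors) spike is checked every few seconds; this is how monitoring cost can drop while detection quality stays roughly constant.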
SERENE 2014 Workshop: Paper "Verification and Validation of a Pressure Contro... (SERENEWorkshop)
Session 3: Verification and Validation
Paper 1: Verification and Validation of a Pressure Control Unit for Hydraulic Systems
SERENE 2014 Workshop: Paper "Using Instrumentation for Quality Assessment of ... (SERENEWorkshop)
Session 4: Monitoring
Paper 1: Using Instrumentation for Quality Assessment of Resilient Software in Embedded Systems
SERENE 2014 Workshop: Paper "Advanced Modelling, Simulation and Verification ... (SERENEWorkshop)
Session 3: Verification and Validation
Paper 3: Advanced Modelling, Simulation and Verification for Future Traffic Regulation Optimisation
SERENE 2014 Workshop: Paper "Formal Fault Tolerance Analysis of Algorithms fo... (SERENEWorkshop)
Session 2: Analysis of Resilience
Paper: Formal Fault Tolerance Analysis of Algorithms for Redundant Systems in Early Design Stages
The debris of the ‘last major merger’ is dynamically young (Sérgio Sacani)
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
The cost of acquiring information by natural selectionCarl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
The cost of acquiring information by natural selection
SERENE 2014 Workshop: Paper "Automatic Generation of Description Files for Highly Available Systems"
1. Automatic Generation of Description Files for Highly Available Services
Maxime Turenne, Ali Kanso, Abdelouahed Gherbi
6th International Workshop on Software Engineering for Resilient Systems, 15th October 2014, Budapest
2. Outline
• Introduction
– What is High Availability (HA)
– Current practice for achieving HA
• Background
– The previous approach for generating middleware HA configuration
• A novel approach
– New domain-specific modeling language
– Our methodology for generating middleware HA configuration
• Prototype implementation
• Conclusion
2
3. HA definition
Availability | Downtime per year
90% | 36.5 days
99% | 3.7 days
99.9% | 8.8 hours
99.99% | 52.6 min
99.999% | 5.3 min
99.9999% | 31.5 sec
Service Availability (SA): the percentage of time the system/service is available throughout a period of time t.
High Availability (HA): at least 99.999% (a.k.a. "five nines")
3
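The downtime figures in the table above follow directly from the availability percentage; a quick sketch of the arithmetic, assuming a year of 365.25 days:

```python
# Downtime budget implied by an availability percentage.
# Assumes a 365.25-day year, matching the figures on the slide.

YEAR_SECONDS = 365.25 * 24 * 3600  # 31,557,600 s

def downtime_per_year(availability_percent: float) -> float:
    """Return the allowed downtime per year, in seconds."""
    unavailability = 1.0 - availability_percent / 100.0
    return unavailability * YEAR_SECONDS

for a in (90.0, 99.0, 99.9, 99.99, 99.999, 99.9999):
    minutes = downtime_per_year(a) / 60
    print(f"{a}% available -> {minutes:,.1f} min of downtime per year")
```

Five nines (99.999%) yields roughly 5.3 minutes of allowed downtime per year, matching the table.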
4. Demand on HA
• More than 40% of companies want 99.99% availability, i.e. less than one hour of outage per year.
Information Technology and Intelligence Corp. survey
4
5. Downtime cost
• 59% of Fortune 500 companies experience a minimum of 1.6 hours of downtime per week (Gartner, 2011)
– $46,000,000 of loss per year (employees' salaries only)
• A Ponemon Institute study shows that in 2012 and 2013, 91% of data centers endured unplanned outages
• Average losses range from $90,000 per hour in the media sector to about $6.48 million per hour for large online brokerages
IWGCR short report, 2012
5
6. Middleware-based HA solutions
[Figure: three nodes (Node U, Node V, Node W) connected by a redundant communication network. Each node runs an operating system and an HA middleware; Component X runs as Process A, with a replica of Component X running as another Process A on a second node, alongside other components (Process C).]
E.g.: OpenSAF is an open source implementation of an HA middleware (www.OpenSAF.org), with contributions from world-leading telecom and computing companies.
6
7. SAForum (Service Availability Forum)
• Consortium of industry-leading IT and telecom companies.
• Defines open standards for HA systems:
– Application Programming Interfaces
– Guidelines for HA systems
– Specifications for an HA middleware
7
8. HA MW configuration
• The HA management is performed based on a complex XML configuration file:
[Figure: example AMF configuration spanning Node 1 and Node 2. A Service Group (SG) contains two Service Units (SU 1 on Node 1, SU 2 on Node 2); SU 1 holds components C1 and C2, SU 2 holds C3 and C4. The SUs provide two Service Instances (SI 1, SI 2), each composed of two Component Service Instances (CSI 1–CSI 4) assigned to the components.]
8
9. HA MW configuration
• The configuration structure is described using a standardized UML class diagram:
[Figure: the middleware XML configuration is based on the configuration model.]
9
10. Complex domain details
• The two main categories in the Availability Management Framework (AMF) configuration:
– The service provider
– The service
[Figure: a Web SI carrying HTTP and DB workloads, provided by an SG of two Web SUs on Node 1 and Node 2, each containing an Apache and a MySql component.]
10
11. Service provider categories
• SaAware
• Container/Contained
• Proxy/Proxied
• NonSaAwareNonProxied
[Figure: an SaAware component interacts with the HA middleware directly through the SAF API; a contained component runs inside an execution environment that mediates the SAF API; a proxied component is managed through a proxy component that implements the SAF API on its behalf.]
11
12. Hierarchical composition
• This HA MW supports the notion of multiple inter-dependent components collaborating to provide a higher level of service:
[Figure: on a node, an Apache component provides the web service and acts as a proxy for a MySQL component.]
12
13. Outline
Introduction
– What is High Availability (HA)
– Current practice for achieving HA
• Background
– The previous approach for generating middleware HA configuration
• A novel approach
– New domain-specific modeling language
– Our methodology for generating middleware HA configuration
• Prototype implementation
• Conclusion
13
14. Previous approach
• Previous automatic configuration approach:
[Figure: the ETF file and the user HA requirements feed a configuration generator, which produces the configuration file; an upgrade-campaign generator then produces the upgrade-campaign file.]
*A. Kanso, A. Hamou-Lhadj, M. Toeroe, and F. Khendek, “Generating AMF Configurations from Software Vendor Constraints and User Requirements”, in Proc. of the Fourth International Conference on Availability, Reliability and Security, Fukuoka, Japan, 2009, pp. 454-461.
14
15. Entity Type File (ETF)
• Software vendor description for:
– Software capabilities
– Dependencies
– Limitations
• Standardized by an XML schema
• With constraints derived from:
– the XML schema,
– the Software Management Framework specification,
– the Availability Management Framework specification
15
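As a purely hypothetical illustration of the kind of information an ETF carries (the real file is XML conforming to the standardized schema; every field name below is invented for this sketch):

```python
# Hypothetical illustration of the vendor information an ETF describes:
# software capabilities, dependencies, and limitations. The field names
# below are invented for this sketch, not the standardized ETF schema.

etf = {
    "component_types": [
        {
            "name": "Apache",
            "capability": "x_active_and_y_standby",  # capability
            "max_active_csis": 10,                   # limitation
            "depends_on": ["MySql"],                 # dependency
        },
        {
            "name": "MySql",
            "capability": "one_active_or_one_standby",
            "max_active_csis": 1,
            "depends_on": [],
        },
    ]
}

# A configuration generator would read these vendor constraints together
# with the user's HA requirements to derive a valid AMF configuration.
apache = etf["component_types"][0]
print(apache["depends_on"])  # -> ['MySql']
```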
16. Challenges of defining an ETF file
• The user needs to write the XML file manually,
• Domain constraints are informally described in thousands of specification pages,
• Therefore, the user needs deep domain knowledge.
16
17. Outline
Introduction
– What is High Availability (HA)
– Current practice for achieving HA
Background
– The previous approach for generating middleware HA configuration
• A novel approach
– New domain-specific modeling language
– Our methodology for generating middleware HA configuration
• Prototype implementation
• Conclusion
17
18. Abstracting the domain
• We designed a high-level modeling language that is:
– Graphical
– Intuitive
– Expressive
– Standards-based
• We decided to extend the UML component diagram:
18
26. Research process
• Design the concrete syntax
• Design the model
• Extract the OCL from the specifications
• Annotate our model with the extracted OCL
• Design the model transformation
• Prototype implementation
26
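The "extract the OCL" and "annotate our model" steps revolve around machine-checkable domain constraints. As an illustration only, the kind of rule such a constraint encodes might look like this in Python (the configuration shape and both rules are invented for the sketch, not taken from the specifications):

```python
# Illustrative sketch: domain constraints of the kind the paper extracts
# from the specifications as OCL, here hand-coded as Python checks over a
# hypothetical, simplified configuration dictionary.

def validate(config: dict) -> list:
    """Return a list of constraint violations for a toy HA configuration."""
    errors = []
    # Rule 1: a redundant service group needs at least two service units.
    if len(config.get("service_units", [])) < 2:
        errors.append("SG must contain at least two SUs for redundancy")
    # Rule 2: every service unit must contain at least one component.
    for su in config.get("service_units", []):
        if not su.get("components"):
            errors.append(f"SU '{su['name']}' has no components")
    return errors

good = {"service_units": [
    {"name": "SU 1", "components": ["Apache", "MySql"]},
    {"name": "SU 2", "components": ["Apache", "MySql"]},
]}
bad = {"service_units": [{"name": "SU 1", "components": []}]}

print(validate(good))       # -> []
print(len(validate(bad)))   # -> 2
```

In the actual approach the constraints annotate the model as OCL expressions, so generated configurations are validated automatically rather than by hand-written checks like these.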
27. Outline
Introduction
– What is High Availability (HA)
– Current practice for achieving HA
Background
– The previous approach for generating middleware HA configuration
A novel approach
– New domain-specific modeling language
– Our methodology for generating middleware HA configuration
• Prototype implementation
• Conclusion
27
30. Conclusion
• Reduce the design complexity of configurations
– Using an intuitive language that saves time and effort.
• Reduce the configuration errors
– By automatically validating the generated configurations against domain constraints.
• No need for the developer to manually manipulate heavy and complex XML files.
• Abstraction of the domain complexity.
30
31. Future work
• Integrate the specification of HA and non-functional requirements in our model and design language.
[Figure: HA configuration requirements.]
31