This document presents a reference architecture for distributed software deployment. It discusses the challenges of traditional software deployment approaches and how newer technologies such as containers and immutable infrastructure help address them. It also describes the Nix package manager and NixOS, which aim to make software deployment predictable, reliable, and reproducible by treating system configurations as code. However, additional capabilities are needed to deploy distributed, service-oriented systems at scale. The document proposes Disnix, an extension of Nix that captures deployment specifications in models and performs the complete deployment process from those models while meeting requirements around security, performance, resilience, and licensing.
The document discusses reference architectures, including what they are, how they are used, and benefits. Some key points:
- A reference architecture provides standardized guidelines and patterns to reduce project setup time and costs while increasing quality.
- An example project at AstraZeneca saw a 5x return on investment in the reference architecture by reducing rework and discussions.
- Both external and internal reference architectures are described. The external defines overall structure while the internal specifies subsystems, layers, patterns, and tools.
- Reference architectures guide various roles in analyzing, designing, and implementing applications according to the standardized approach. This cuts time spent on architectural discussions and infrastructure issues.
- Multiple internal reference architectures may
A Software Factory Integrating Rational & WebSphere Tools (ghodgkinson)
The document discusses how a large automotive retailer integrated Rational Software Architect, WebSphere Message Broker, and Rational Team Concert into a software factory to develop an integration layer between a new point of sale system and SAP backend. Key challenges included a multi-vendor global team and parallel development of UI, integration, and backend layers. The software factory employed model-driven development, continuous integration, and practices like architectural modeling in UML, automated WSDL generation, tracking work items and impediments, and collaborative configuration management to help coordinate distributed development and integrate results.
The document discusses modeling and the benefits of modeling complex systems. It notes that modeling helps visualize, specify, guide construction of, and document systems that would otherwise be too vast to comprehend. The importance of modeling increases as systems increase in scale and complexity. Modeling allows for simulating "what if" scenarios to help with early verification and validation. The document discusses how modeling enables the development of things as complex as software systems with millions of lines of code and global deployments.
The document discusses developing an Enterprise Reference Architecture (ERA) to bridge the gap between enterprise architecture and project-level architecture. An ERA is a blueprint that embodies enterprise principles and standards in a form that is easily applicable to business solutions. The document proposes a classification scheme to characterize reference architectures based on their coverage and level of abstraction. It argues that an ERA, situated in the right portion of the scheme, can help shape business solutions while advancing enterprise capabilities.
Transforming Software Architecture for the 21st Century, September 2009 (Dion Hinchcliffe)
Evolving an important theme I've been working on and presenting all year, this new deck summarizes how enterprise architecture and large scale technology-based business solutions must transform to be more effective in the 21st century.
Contains material on a hypothesis for what's wrong with today's EA as well as potential solutions of merit such as emergent architecture, WOA, enterprise REST, open supply chains (APIs), mashups, and other models.
Presented this week in Oslo, Norway, to Bouvet's enterprise architecture council.
This document discusses IBM Rational Rhapsody, a model-driven development tool for complex systems and software. It provides capabilities for specifying, designing, developing, validating, and verifying systems using modeling and simulation. The document outlines Rhapsody's key features and benefits, including building quality applications through collaboration and eliminating defects through continual testing. It also describes Rhapsody's model execution, requirements visualization, and team collaboration technologies. Several usage scenarios are presented, such as visualizing legacy code, transitioning to model-driven development, and integrating external code.
Discover DoDAF problems early in the lifecycle with model execution (Graham Bleakley)
How to develop DoDAF architectures that are executable, providing verification of understanding of architecture requirements and validating architecture interfaces.
Modelling is based upon the UPDM 2.1 profile as implemented in IBM's Rhapsody tool.
This document provides an overview of the technical architecture for a cloud platform. It discusses various components including source control, continuous integration/build services, artifact storage, deployment services, infrastructure as code, orchestration, configuration/vaults, logging, monitoring, service discovery, load balancing, and platform services. For each component, it outlines relevant features, example solutions, and standards. The overall goal is to provide guidance on architecting a cloud platform that can build, deploy, host, run, and monitor application services.
The document introduces the artITecture Architecture Method for documenting solution level architecture. It describes the method's primary and secondary deliverables for describing different aspects of the architecture. The primary deliverables are software, infrastructure, integration, and data architectures. Architectural thinking considers all phases of the system lifecycle and links to project management. Principles of the method include considering all lifecycle phases and project management implications.
A pattern-based approach to the development of UPDM architectures (Graham Bleakley)
A conference paper presented at the International Enterprise Architecture Conference 2014 on the similarities between MODAF and DoDAF. The paper discusses the common threads that run through these frameworks and how they have been implemented in UPDM. It also discusses a development workflow for Enterprise Architecture development based upon Harmony SE.
Oracle OpenWorld 2009 AIA Best Practices (Rajesh Raheja)
Oracle OpenWorld 2009 Session S311197
Jedi Masters Reveal
Oracle Application Integration Architecture (AIA) Foundation Pack Best Practices
Building Process Integrations
Factors to consider when starting a brand-new requirements management project... (IBM Rational software)
The document discusses factors to consider when starting a new requirements management project in IBM Rational DOORS Next Generation. It recommends understanding project goals, environment and constraints to optimize the requirements process. Key questions to address include which artifacts define scope, how artifacts will be organized and tracked, what relationships are important, and which development methodology is being followed. The document also discusses configuring artifact types, attributes, link types and modules to structure requirements information in the project.
Architecting and Designing Enterprise Applications (Gem WeBlog)
The document discusses the architecture, views, and viewpoints of enterprise applications. It describes how architecture serves as a blueprint and is organized into views and viewpoints representing stakeholder perspectives. The architecture description consists of aspects depicted by models using languages like UML. Enterprise architecture helps conceptualize and analyze business scenarios considering applications, users, and data. Popular frameworks like TOGAF define architecture domains including business, applications, data, and technology. The blueprint of an application includes logical, technical, data, and infrastructure architectures. Logical architecture defines layers like business, data access, and presentation. Technical architecture identifies frameworks, patterns, and integration. Infrastructure services provide common functions.
Rhapsody and MATLAB/Simulink have several integration points that allow design and simulation of cyber-physical systems. This includes generating Simulink models from Rhapsody, creating S-functions for use in Simulink, and evaluating parametric constraints using MATLAB. Bringing Simulink models into the Rhapsody Design Manager enables traceability and collaboration across the system design lifecycle.
Rhapsody and mechatronics, multi-domain simulation (Graham Bleakley)
This document discusses mechatronics and its application with Rational Rhapsody Design Manager. Mechatronics involves the integration of mechanical, electrical, and software engineering, requiring a systems engineering approach. Mechatronic modeling requires mathematical modeling tools that can be integrated into logical behavior models. Rhapsody provides a way to work with mathematical modeling tools like Simulink and Modelica to model both logical and physical behavior.
The document discusses Telelogic Rhapsody, a model-driven development tool for designing technical and embedded systems. It addresses key challenges in systems development such as effective collaboration, managing requirements changes, and testing. Rhapsody uses model-driven development approaches like UML/SysML modeling, requirements traceability, model-driven testing, and automatic code generation to help developers meet schedules, reduce errors, and facilitate team collaboration.
The document provides an overview of the Department of Defense Architecture Framework (DODAF). DODAF defines a common approach for describing and comparing enterprise architectures across the DoD. It facilitates the use of common principles, assumptions, and terminology. DODAF consists of 26 products organized into four views - All Views, Operational View, Systems View, and Technical Standards View - to comprehensively document architectures. Future evolution areas include defining a DODAF object model and ontology to facilitate tool interoperability and sharing of architecture data.
The document discusses software architecture documentation. It provides goals for architecture documentation, including presenting common views, defining stakeholders, identifying their concerns, and defining what and how to document. It also discusses the scope of the documentation. Finally, it discusses different approaches to software architecture documentation, including the Rational Unified Process (RUP) and Software Engineering Institute (SEI) methods. The document aims to provide guidance on effective software architecture documentation.
Enterprise Architecture supporting the change, by Vladimir Calmic, Endava (Moldova ICT Summit)
The document discusses enterprise architecture (EA) and its benefits. It defines EA as a logical view of an organization's key capabilities and how they are linked together. EA involves defining changes from both a top-down and bottom-up approach using business, application, and technology building blocks. Adopting EA can improve efficiency, reduce costs, increase manageability and consistency within an organization. The document recommends starting to think about EA, using architecture development methods, embracing service-oriented architectures and cloud technologies.
[2015/2016] Collaborative software development with Git (Ivano Malavolta)
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
MADES Seminar @ Laboratory of Model-Driven Engineering Applied to Embedded Sy... (Alessandra Bagnato)
-------
Location: room 1073 (Nano-innov – Bat. 862)
Date: 24 September 2012
Time: 14:00 – 15:00
Speaker: Alessandra Bagnato
Title: UML, SysML and MARTE in Use, a High Level Methodology for Real-time and Embedded Systems
-------
Abstract
Rapid evolution of real-time and embedded systems (RTES) continues at an increasing pace, and new methodologies and design tools are needed to reduce design complexity while decreasing development costs and integrating aspects such as verification and validation. Model-Driven Engineering offers an interesting solution to these challenges and is widely used in industrial and academic research projects.
The seminar presents the development context and needs that fostered the creation of a methodology and a set of UML, SysML and MARTE model-based diagrams within the research and development work carried out in the EU-funded MADES project [http://www.mades-project.org/], which aims to develop novel model-driven techniques to improve existing practices in the development of RTES for the avionics and surveillance embedded systems industries.
The seminar highlights current practice and needs in real avionics development case studies, drawing in particular on the vision of an avionics system integrator and the differing needs of its customers within the avionics industry, which were taken as the basis for the methodology and the set of diagrams.
The MADES Project is expected to deliver important improvements in each phase of embedded systems development lifecycle by providing new tools and technologies that support design, validation, simulation, and code generation, while providing better support for component reuse.
MADES technologies are expected to reduce development costs of complex embedded systems for the Aerospace, Defence and other key European industries, while enabling a next generation of highly complex embedded systems to be developed that are more reliable, yet costing less to maintain and evolve as industry needs change and hardware capabilities increase.
After you complete this module, you should be able to explain these concepts:
- How requirements fit in the development process
- Key principles of requirements definition and management
- How you can manage requirements by using IBM Rational requirements management tools
Define and Manage Requirements with IBM Rational Requirements Composer (Alan Kan)
The document provides an overview of a hands-on lab session on IBM Rational Requirements Composer (RRC). The lab aims to demonstrate how RRC can help teams collaborate to define, manage and trace requirements across the software development lifecycle. The lab covers topics like importing and linking requirements, modeling business processes and use cases, conducting reviews, and generating work items and test cases from requirements. Known issues encountered in the labs are also documented.
This document discusses the NixOS project and declarative deployment approaches. It begins with definitions of declarative and imperative programming. It then discusses how NixOS uses a declarative approach by specifying the desired system configuration rather than imperative steps. Users can install packages and switch between user environments atomically. The Nix package manager builds a complete isolated system configuration from declarative expressions and stores it in a way that allows safe upgrades and rollbacks. This approach can also be extended to distributed systems by specifying configurations for different nodes like storage, database, and web servers.
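To illustrate the declarative style described above, here is a minimal NixOS configuration sketch (the chosen services and package names are illustrative, not taken from the presentation):

```nix
# /etc/nixos/configuration.nix -- declares the desired system state;
# `nixos-rebuild switch` realises it and records a rollback generation.
{ config, pkgs, ... }:

{
  # Declare which services should run, not the steps to install them.
  services.openssh.enable = true;

  # System-wide packages; Nix builds each in an isolated store path.
  environment.systemPackages = [ pkgs.git pkgs.htop ];
}
```

Because the entire configuration is an expression, switching back to a previous generation (`nixos-rebuild switch --rollback`) restores the exact prior system atomically.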
Techniques and lessons for improvement of deployment processes (Sander van der Burg)
The document discusses techniques for improving software deployment processes using the Nix package manager. It describes how Nix stores packages in isolation to ensure reliability and reproducibility. It recommends decomposing systems into scriptable components with configurable dependencies to facilitate composition and improve build times. Explicitly defining component interfaces, dependencies and compositions can help scale these techniques to large codebases.
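The isolation described above comes from expressing each package as a function of its declared inputs. A sketch of such a Nix expression (the package name, URL, and hash are placeholders, not a real package):

```nix
# A package is a pure function of its dependencies; the build result
# lands in /nix/store/<hash>-example-1.0, where <hash> covers all inputs.
{ stdenv, fetchurl, zlib }:

stdenv.mkDerivation {
  pname = "example";
  version = "1.0";

  # Placeholder source; a real expression pins an exact URL and hash.
  src = fetchurl {
    url = "https://example.org/example-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  # Dependencies are explicit; nothing is taken from the global system.
  buildInputs = [ zlib ];
}
```

Because the store path is derived from every input, components with conflicting dependencies can coexist, and rebuilding with the same inputs reproduces the same result.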
This document discusses deploying .NET applications using the Nix package manager. It describes how Nix provides build and runtime support for .NET, including implementing functions to build Visual Studio solutions and reference dependent assemblies. While Nix allows building and running .NET software, some caveats exist as the .NET framework and tools are not fully managed by Nix.
The document discusses model-driven distributed software deployment. It introduces the Nix deployment system and proposes Disnix, an extension that allows distributed deployment. Disnix uses three models - services, infrastructure, and distribution - to model a distributed system. It employs a two-phase commit algorithm to allow distributed and atomic upgrades. The document also describes adapting an existing distributed system called SDS2 to be deployable with Disnix, including modifying dependencies and implementing a lookup service. The adaptation demonstrated automatic deployment of SDS2 across multiple machines.
The document discusses the Nix project, which aims to provide reliable and reproducible software deployment. It describes how Nix achieves immutable and isolated package management through its functional approach. It also explains how NixOS leverages Nix to manage an entire Linux system configuration and deployment. Finally, it outlines several applications of Nix including distributed service deployment, virtual machine deployment, testing, and license analysis.
The document discusses using NixOS, a Linux distribution that uses the Nix package manager, for declarative deployment and testing. It describes how NixOS allows systems to be configured and deployed declaratively via Nix expressions. This includes features for deploying single machines, distributed environments, and virtual machine networks in an efficient and reliable manner. It also outlines how NixOS enables integrated testing of distributed systems through the use of virtual machine instances.
A Generic Approach for Deploying and Upgrading Mutable Software Components (Sander van der Burg)
The document discusses an approach for deploying and upgrading mutable software components using the Nix package manager and related tools. It introduces challenges with deploying software and the benefits of the Nix project for reproducible deployments. The approach captures snapshots of mutable components' states, such as databases, and uses these snapshots to reliably recreate the component's state during upgrades or on other machines. It has been used successfully with various systems and component types, but capturing large datasets remains inefficient.
The document describes the Nix package manager and NixOS Linux distribution. Key points:
- Nix stores all packages in isolation using cryptographic hashes to ensure reproducibility. It allows installing multiple versions of packages and switching between them.
- NixOS uses Nix to manage the entire operating system configuration, including packages and configuration files. Updates are atomic and previous configurations are kept to roll back changes.
- Advanced features include building virtual machines that mount the host Nix store for efficiency, and tools like Disnix for deploying applications across distributed infrastructures.
- The document provides an example of using Nix to deploy the Trac project management system, together with a Subversion server, across virtual machines.
Docker - Demo on PHP Application deployment (Arun Prasath)
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application, which is present in an external folder, using custom configuration files.
This project report describes the development of an application on the DaVinci platform under the guidance of Prof. TK Dan. Akash Sahoo and Abhijit Tripathy, 7th semester B.Tech students, developed an application to take advantage of the DaVinci's integrated ARM and TMS320C64x+ DSP cores. They ported MontaVista Linux and DSP/BIOS to the DaVinci evaluation module board to enable the application and provide OS support across the hybrid processor system.
This document provides information about Linux containers and Docker. It discusses:
1) The evolution of IT from client-server models to thin apps running on any infrastructure and the challenges of ensuring consistent service interactions and deployments across environments.
2) Virtual machines, which offer full isolation at the cost of large disk usage, and Vagrant, which allows VMs to be packaged and provisioned via files.
3) Docker and how it uses Linux containers powered by namespaces and cgroups to deploy applications in lightweight portable containers that are more efficient than VMs. Examples of using Docker are provided.
Dataverse can be deployed using Docker containers to improve maintainability and portability. The document discusses how Docker can isolate applications and their dependencies into portable containers. It provides an example of deploying Dataverse as a set of microservices within Docker containers. Instructions are included on building Docker images, running containers, and managing the containers and images through commands and tools like Docker Desktop, Docker Hub, and Docker Compose.
A Reference Architecture for Distributed Software Deployment (Sander van der Burg)
This document proposes a reference architecture for distributed software deployment. It discusses the challenges of software deployment, including that it is time-consuming, error-prone, and can involve destructive upgrades. The document reviews the history of software deployment from early high-level languages and operating systems to modern component-based engineering and service-oriented systems on the internet. It notes that existing deployment systems like Nix are not sufficient for deploying modern service-oriented systems due to non-functional requirements around privacy, performance, resilience and other factors. The document proposes developing a new reference architecture to address deployment of distributed, service-oriented systems.
nix-processmgmt: An experimental Nix-based process manager-agnostic framework (Sander van der Burg)
NixCon 2020 talk about an experimental framework that integrates the Nix package manager with all kinds of process managers, such as sysvinit, systemd, launchd, and even Docker.
Docker moves very fast, with an edge channel released every month and a stable release every 3 months. Patrick will talk about how Docker introduced Docker EE and a certification program for containers and plugins with Docker CE and EE 17.03 (from March), the announcements from DockerCon (April), and the many new features planned for Docker CE 17.05 in May.
This talk will be about what's new in Docker and what's next on the roadmap
The DevOps paradigm - the evolution of IT professionals and opensource toolkit (Marco Ferrigno)
This document discusses the DevOps paradigm and tools. It begins by defining DevOps as focusing on communication and cooperation between development and operations teams. It then discusses concepts like continuous integration, delivery and deployment. It provides examples of tools used in DevOps like Docker, Kubernetes, Ansible, and monitoring tools. It discusses how infrastructure has evolved to be defined through code. Finally, it discusses challenges of security in DevOps and how DevOps works aligns with open source principles like meritocracy, metrics, and continuous improvement.
Similar to A Reference Architecture for Distributed Software Deployment (20)
Explains how Docker and Nix work as deployment solutions, in what ways they are similar and different, and how they can be combined to achieve interesting results.
Dysnomia: complementing Nix deployments with state deployment (Sander van der Burg)
This talk covers Dysnomia, a state deployment tool that complements various tools in the Nix project, such as NixOS and Disnix, with state management facilities.
The document discusses Disnix, a toolset for deploying service-oriented systems in a distributed environment. Disnix uses Nix, a package manager, to build components in isolation, track dependencies, and allow reliable upgrades and rollbacks. The document describes how Disnix models components, infrastructure, and distributions, and uses these models to build, transfer, activate, and upgrade components across machines in a safe and atomic manner. It also discusses extending Disnix to support deployment of .NET services on Windows by implementing build support for Visual Studio solutions and resolving .NET runtime dependencies.
A Self-Adaptive Deployment Framework for Service-Oriented Systems (Sander van der Burg)
This document presents a self-adaptive deployment framework for service-oriented systems. The framework extends the Disnix distributed service deployment tool to dynamically redeploy systems in response to events like machine crashes or additions. It uses a quality of service model to generate new deployment distributions and filters to map services to machines. An evaluation with several case studies shows initial deployments take longer than redeployments, and the framework allows quick recovery from events with minimal downtime. Future work includes supporting more complex networks and stateful services.
The document discusses pull deployment of services in hospital environments. Currently, services are bound to dedicated devices, which leads to overcapacity, inflexibility, and complicated deployment processes. The authors propose a service-oriented architecture that allows services to be deployed on any device using a tool called Disnix. Disnix can automatically and reliably install distributed systems across networks of machines using models of the system and infrastructure. This more flexible approach to service development and deployment could benefit domains using service-oriented systems like CRM, web services, and web applications. Future work includes handling dynamic infrastructures and testing services.
Disnix is a toolset for automatically deploying distributed systems across multiple machines. It addresses challenges like reliable and efficient deployment as well as atomic upgrades and rollbacks. Disnix uses a modular architecture where individual tools perform separate deployment tasks like building, transferring, and activating in a composable way. It leverages the Nix package manager and community to support development, testing, and maintenance.
Automated Deployment of a Heterogeneous Service-Oriented System (Sander van der Burg)
The document describes Disnix, a tool for automated deployment of heterogeneous service-oriented systems. Disnix uses deployment models to capture specifications of services, infrastructure, and distribution. It derives a deployment process from these models to build, transfer, and activate services and their dependencies in the right order across machines. This ensures complete dependencies and allows atomic upgrades and rollbacks. The document evaluates Disnix by using it to deploy the SDS2 asset tracking system across 8 machines, achieving faster, more reliable deployment and upgrading compared to manual processes.
Pull Deployment of Services: Introduction, Progress and Challenges (Sander van der Burg)
1) The document discusses pull deployment of services (PDS) in hospital environments as a way to flexibly deploy services to devices based on need rather than having services fixed to devices.
2) A key tool discussed is Disnix, a distributed software deployment tool built on Nix that allows modeling and automated deployment of specified components across a network.
3) While progress has been made in developing the PDS architecture and testing Disnix on an SDS2 case study, challenges remain in applying the techniques to larger real-world systems like Philips' PII platform and addressing issues around porting, integration, and case studies.
The document discusses software deployment in hospital environments. Currently, hospitals use a device-oriented IT infrastructure that has disadvantages like overcapacity and inflexibility. The authors propose a service-oriented approach where users can access services from any device. They describe using the Nix deployment system and Disnix extension to automatically deploy service components across a hospital's heterogeneous infrastructure, modeled as a cloud. Their goal is to design services and the deployment process to support dynamic distribution of components based on required capabilities and connectivity.
This document discusses model-driven distributed software deployment. It introduces the SDS2 healthcare system, Nix package manager, and Disnix which extends Nix to allow deployment of distributed systems. Disnix introduces models for services, infrastructure, and distribution and uses a two-phase commit algorithm to allow atomic deployment across machines. The presentation demonstrates deploying the distributed SDS2 system across multiple machines using Disnix. Future work discussed includes supporting dynamic distribution, heterogeneous environments, and other distributed system types and protocols.
A Reference Architecture for Distributed Software Deployment
1. A Reference Architecture for Distributed Software Deployment
Sander van der Burg
Delft University of Technology, EEMCS,
Department of Software Technology
August 2, 2013
Sander van der Burg A Reference Architecture for Distributed Software Deployment
5. Software deployment
Software deployment
All of the activities that make a software system available for use.
7. Challenges
Software deployment is:
Time-consuming
Error-prone
Prone to destructive upgrades
A source of downtime
8. Some history: Early history
9. Some history: High-level languages and operating systems
Software components
Require a compiler or interpreter and a compatible operating system.
11. Some history: Component-based software engineering
Software components
Components increase programmer productivity
Components increase quality of software
13. Some history: Component-based software engineering
Disadvantages:
14. Nowadays: Services on the Internet
Challenges:
Software components
Software deployment has become increasingly complicated.
17. Earlier research: Nix and NixOS
A GNU/Linux distribution using the Nix package manager
18. Nix store
Main idea: store all packages in isolation from each other:

/nix/store/rpdqxnilb0cg...-firefox-3.5.4

Paths contain a 160-bit cryptographic hash of all inputs used to build the package: sources, libraries, compilers, build scripts, . . .

/nix/store
  l9w6773m1msy...-openssh-4.6p1
    bin
      ssh
    sbin
      sshd
  smkabrbibqv7...-openssl-0.9.8e
    lib
      libssl.so.0.9.8
  c6jbqm2mc0a7...-zlib-1.2.3
    lib
      libz.so.1.2.3
  im276akmsrhv...-glibc-2.5
    lib
      libc.so.6
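The isolation scheme above can be sketched in a few lines. The following is an illustrative Python approximation, not Nix's actual derivation hashing (Nix hashes serialized derivations and uses a base-32 encoding); the helper name store_path and the 12-character prefix are assumptions for readability:

```python
import hashlib

def store_path(name, version, inputs):
    # Hash every build input (sources, libraries, compilers, build
    # scripts); any change to any input yields a different store path,
    # so variants live side by side instead of overwriting each other.
    h = hashlib.sha1()  # a 160-bit hash, as on the slide
    for key in sorted(inputs):
        h.update(f"{key}={inputs[key]};".encode())
    return f"/nix/store/{h.hexdigest()[:12]}-{name}-{version}"

p1 = store_path("openssh", "4.6p1",
                {"src": "openssh-4.6p1.tar.gz", "openssl": "0.9.8e"})
p2 = store_path("openssh", "4.6p1",
                {"src": "openssh-4.6p1.tar.gz", "openssl": "0.9.8f"})
assert p1 != p2  # a new OpenSSL produces a new, co-existing OpenSSH path
```

Because the path depends on all inputs, rebuilding against a different library never clobbers an existing package: both paths simply coexist in the store.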
19. Nix expressions
openssh.nix

{ stdenv, fetchurl, openssl, zlib }:

stdenv.mkDerivation {
  name = "openssh-4.6p1";
  src = fetchurl {
    url = http://.../openssh-4.6p1.tar.gz;
    sha256 = "0fpjlr3bfind0y94bk442x2p...";
  };
  buildCommand = ''
    tar xzf $src
    ./configure --prefix=$out --with-openssl=${openssl}
    make; make install
  '';
}
20. Nix expressions
all-packages.nix

openssh = import ../tools/networking/openssh {
  inherit fetchurl stdenv openssl zlib;
};
openssl = import ../development/libraries/openssl {
  inherit fetchurl stdenv perl;
};
stdenv = ...;
fetchurl = ...;
zlib = ...;
perl = ...;

nix-env -f all-packages.nix -iA openssh

produces a /nix/store/l9w6773m1msy...-openssh-4.6p1 package in the Nix store.
21. User environments
◮ Users can have
different sets of
installed applications.
PATH
/nix/.../profiles
current
42
/nix/store
pp56i0a01si5...-user-env
bin
firefox
ssh
l9w6773m1msy...-openssh-4.6p1
bin
ssh
rpdqxnilb0cg...-firefox-3.5.4
bin
firefox
Sander van der Burg A Reference Architecture for Distributed Software Deployment
22. User environments
◮ Users can have
different sets of
installed applications.
◮ nix-env operations
create new user
environments in the
store.
PATH
/nix/.../profiles
current
42
/nix/store
pp56i0a01si5...-user-env
bin
firefox
ssh
l9w6773m1msy...-openssh-4.6p1
bin
ssh
rpdqxnilb0cg...-firefox-3.5.4
bin
firefox
aqn3wygq9jzk...-openssh-5.2p1
bin
ssh
(nix-env -u openssh)
Sander van der Burg A Reference Architecture for Distributed Software Deployment
23. User environments
◮ Users can have
different sets of
installed applications.
◮ nix-env operations
create new user
environments in the
store.
PATH
/nix/.../profiles
current
42
/nix/store
pp56i0a01si5...-user-env
bin
firefox
ssh
l9w6773m1msy...-openssh-4.6p1
bin
ssh
rpdqxnilb0cg...-firefox-3.5.4
bin
firefox
aqn3wygq9jzk...-openssh-5.2p1
bin
ssh
i3d9vh6d8ip1...-user-env
bin
ssh
firefox
(nix-env -u openssh)
Sander van der Burg A Reference Architecture for Distributed Software Deployment
24. User environments
◮ Users can have
different sets of
installed applications.
◮ nix-env operations
create new user
environments in the
store.
PATH
/nix/.../profiles
current
42
43
/nix/store
pp56i0a01si5...-user-env
bin
firefox
ssh
l9w6773m1msy...-openssh-4.6p1
bin
ssh
rpdqxnilb0cg...-firefox-3.5.4
bin
firefox
aqn3wygq9jzk...-openssh-5.2p1
bin
ssh
i3d9vh6d8ip1...-user-env
bin
ssh
firefox
(nix-env -u openssh)
Sander van der Burg A Reference Architecture for Distributed Software Deployment
25. User environments
◮ Users can have
different sets of
installed applications.
◮ nix-env operations
create new user
environments in the
store.
◮ We can atomically
switch between them.
PATH
/nix/.../profiles
current
42
43
/nix/store
pp56i0a01si5...-user-env
bin
firefox
ssh
l9w6773m1msy...-openssh-4.6p1
bin
ssh
rpdqxnilb0cg...-firefox-3.5.4
bin
firefox
aqn3wygq9jzk...-openssh-5.2p1
bin
ssh
i3d9vh6d8ip1...-user-env
bin
ssh
firefox
(nix-env -u openssh)
Sander van der Burg A Reference Architecture for Distributed Software Deployment
26. User environments
◮ Users can have
different sets of
installed applications.
◮ nix-env operations
create new user
environments in the
store.
◮ We can atomically
switch between them.
◮ These are roots of the
garbage collector.
PATH
/nix/.../profiles
current
43
/nix/store
pp56i0a01si5...-user-env
bin
firefox
ssh
l9w6773m1msy...-openssh-4.6p1
bin
ssh
rpdqxnilb0cg...-firefox-3.5.4
bin
firefox
aqn3wygq9jzk...-openssh-5.2p1
bin
ssh
i3d9vh6d8ip1...-user-env
bin
ssh
firefox
(nix-env --remove-generations old)
Sander van der Burg A Reference Architecture for Distributed Software Deployment
27. User environments
◮ Users can have
different sets of
installed applications.
◮ nix-env operations
create new user
environments in the
store.
◮ We can atomically
switch between them.
◮ These are roots of the
garbage collector.
[Diagram: nix-collect-garbage has removed the store paths no longer reachable from any root; only firefox-3.5.4, openssh-5.2p1, and the live user-env remain]
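The atomic switch between generations in the slides above relies on the fact that replacing a symlink via rename() is atomic on POSIX. A minimal Python sketch of the idea (all paths are throwaway stand-ins for store paths, not Nix's actual implementation):

```python
import os, tempfile

def switch_profile(profile_link, new_generation):
    """Atomically repoint a profile symlink (e.g. 'current') to a new
    generation directory, the way Nix switches user environments."""
    tmp = profile_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_generation, tmp)   # build the new link next to the old one
    os.rename(tmp, profile_link)      # rename() replaces the old link atomically

# Demo with temporary directories standing in for user environments.
base = tempfile.mkdtemp()
gen42 = os.path.join(base, "user-env-42"); os.mkdir(gen42)
gen43 = os.path.join(base, "user-env-43"); os.mkdir(gen43)
current = os.path.join(base, "current")

switch_profile(current, gen42)
switch_profile(current, gen43)  # upgrade: one atomic rename, no broken window
print(os.readlink(current))
```

Because the switch is a single rename, a user never observes a half-upgraded environment: PATH resolves either entirely to the old generation or entirely to the new one.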
28. NixOS
In NixOS, all packages including the Linux kernel and
configuration files are managed by Nix.
NixOS does not have directories such as /lib and /usr
NixOS has a minimal /bin and /etc
But NixOS is more than just a distribution managed by Nix
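Store paths like rpdqxnilb0cg...-firefox-3.5.4 encode a hash of everything that influences the build, which is what lets components sit safely next to each other. A simplified Python sketch of this idea (not Nix's actual algorithm, which hashes derivations and uses a truncated base-32 encoding):

```python
import hashlib

def store_path(name, version, inputs):
    """Derive a content-addressed directory name from everything that
    can influence the build: name, version, and input store paths."""
    h = hashlib.sha256()
    for part in [name, version] + sorted(inputs):
        h.update(part.encode() + b"\0")
    return "/nix/store/%s-%s-%s" % (h.hexdigest()[:12], name, version)

# The same package built against different inputs gets a different path,
# so both variants can coexist without overwriting each other.
p1 = store_path("firefox", "3.5.4", ["/nix/store/abc-glibc-2.9"])
p2 = store_path("firefox", "3.5.4", ["/nix/store/def-glibc-2.10"])
print(p1)
print(p1 != p2)
```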
30. NixOS configuration
nixos-rebuild switch
Nix package manager builds a complete system configuration
Includes all packages and generates all configuration files, e.g. the OpenSSH configuration
Upgrades are (almost) atomic
Components are stored safely next to each other, due to their hashes
No files are automatically removed or overwritten
Users can switch to older generations of the system configuration that have not been garbage collected yet
32. Deploying service-oriented systems
Nix and NixOS are not sufficient for deploying service-oriented
systems:
Non-functional requirements
Is privacy-sensitive data secured?
Do the analysis components perform well?
Is the system resilient to machine crashes?
Are the software licenses governing the off-the-shelf
components properly obeyed?
35. A Reference Architecture for Distributed Software
Deployment
37. Disnix
Distributed deployment extension for the Nix package manager
Captures deployment specifications in models
Performs the complete deployment process from those models
Guarantees that dependencies are complete
Component agnostic
Supports atomic upgrades and rollbacks
39. Disnix
$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix
40. Disnix expressions
MELogService.nix
{javaenv, config, SDS2Util}:
{mobileeventlogs}:

javaenv.createTomcatWebApplication rec {
  name = "MELogService";
  contextXML = ''
    <Context>
      <Resource auth="Container" type="javax.sql.DataSource"
        url="jdbc:mysql://${mobileeventlogs.target.hostname}..." />
    </Context>
  '';
  webapp = javaenv.buildWebService {
    inherit name;
    src = ../../../WebServices/MELogService;
    wsdlFile = "MELogService.wsdl";
    libs = [ config SDS2Util ];
  };
}
41. Service model
{distribution, system}:

let
  pkgs = import ../top-level/all-packages.nix {
    inherit distribution system;
  };
in
rec {
  mobileeventlogs = {
    name = "mobileeventlogs";
    pkg = pkgs.mobileeventlogs;
    type = "mysql-database";
  };

  MELogService = {
    name = "MELogService";
    pkg = pkgs.MELogService;
    dependsOn = { inherit mobileeventlogs; };
    type = "tomcat-webapplication";
  };

  SDS2AssetTracker = {
    name = "SDS2AssetTracker";
    pkg = pkgs.SDS2AssetTracker;
    dependsOn = { inherit MELogService ...; };
    type = "tomcat-webapplication";
  };

  ...
}
42. Infrastructure model
{
  test1 = {
    hostname = "test1.net";
    tomcatPort = 8080;
    mysqlUser = "user";
    mysqlPassword = "secret";
    mysqlPort = 3306;
    targetEPR = "http://test1.net/.../DisnixService";
    system = "i686-linux";
  };

  test2 = {
    hostname = "test2.net";
    tomcatPort = 8080;
    ...
    targetEPR = "http://test2.net/.../DisnixService";
    system = "x86_64-linux";
  };
}
Captures machines in the network and their relevant properties and
capabilities.
43. Distribution model
{infrastructure}:
{
  mobileeventlogs = [ infrastructure.test1 ];
  MELogService = [ infrastructure.test2 ];
  SDS2AssetTracker = [ infrastructure.test1 infrastructure.test2 ];
  ...
}
Maps services to machines
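As a rough illustration of what such a mapping gives a deployment tool to check, here is a hypothetical Python sketch that validates a distribution model against an infrastructure model (the machine and service names are taken from the slides; the validation rule itself is illustrative, not Disnix's):

```python
infrastructure = {
    "test1": {"hostname": "test1.net", "system": "i686-linux"},
    "test2": {"hostname": "test2.net", "system": "x86_64-linux"},
}

distribution = {
    "mobileeventlogs": ["test1"],
    "MELogService": ["test2"],
    "SDS2AssetTracker": ["test1", "test2"],
}

def validate(distribution, infrastructure):
    """Every service must be mapped to at least one known machine."""
    errors = []
    for service, targets in distribution.items():
        if not targets:
            errors.append("%s is mapped to no machine" % service)
        for target in targets:
            if target not in infrastructure:
                errors.append("%s refers to unknown machine %s" % (service, target))
    return errors

print(validate(distribution, infrastructure))  # an empty list means consistent
```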
44. Deployment process
Specifications are used to derive deployment process:
Building services from source code
Transferring services to target machines
Deactivating obsolete services and activating new services
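The activation step above is essentially a set difference between the old and the new configuration. A simplified Python sketch of the idea (the real Disnix additionally orders these steps by inter-dependencies so the system stays consistent during the transition):

```python
def upgrade_plan(old, new):
    """old/new map service name -> set of target machines.
    Returns which (service, machine) pairs to deactivate and activate."""
    old_pairs = {(s, m) for s, ms in old.items() for m in ms}
    new_pairs = {(s, m) for s, ms in new.items() for m in ms}
    return sorted(old_pairs - new_pairs), sorted(new_pairs - old_pairs)

# Moving a service from one machine to another (names are illustrative).
old = {"MELogService": {"test1"}}
new = {"MELogService": {"test2"}}
deactivate, activate = upgrade_plan(old, new)
print(deactivate, activate)
```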
45. Dynamic Disnix
Various events may occur in a network of machines:
Crashing machines
Adding a new machine
Change of a capability (e.g. increase of RAM)
Dynamic Disnix generates infrastructure and distribution
models and redeploys a system
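One hypothetical strategy such a generator could apply when a machine disappears is to reassign its services over the machines that are still alive. Dynamic Disnix supports pluggable distribution policies; the round-robin rule below is purely illustrative:

```python
from itertools import cycle

def redistribute(distribution, alive):
    """Drop dead machines from each service's target list; services left
    without any target are reassigned round-robin over the live machines."""
    spare = cycle(sorted(alive))
    result = {}
    for service, targets in distribution.items():
        kept = [t for t in targets if t in alive]
        result[service] = kept if kept else [next(spare)]
    return result

# Machine test1 crashes; its service moves to the surviving machine.
dist = {"mobileeventlogs": ["test1"], "MELogService": ["test2"]}
print(redistribute(dist, alive={"test2"}))
```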
48. Virtualization
nixos-build-vms network.nix; ./result/bin/nixos-run-vms
Builds a network of QEMU-KVM virtual machines closely
resembling the network of NixOS configurations
We don’t create disk images
Each VM mounts the Nix store of the host system using SMB/CIFS
52. License analysis
We can also trace all files and processes involved in a build
process
From the licenses of the original source files involved, we can then say something about the licensing of the result
[Diagram: traced build of patchelf — patchelf.cc is compiled by g++ into patchelf.o, linked by g++ into patchelf, and installed as /usr/bin/patchelf]
53. Conclusion
We have shown a reference architecture for distributed
software deployment
The reference architecture is based on Nix, a purely functional package manager, and NixOS, a Linux distribution built around Nix
We have shown tools to automate the deployment of
distributed systems
They provide fully automatic, reliable, reproducible, and
efficient deployment for the latest generation of systems
Components of the reference architecture can be used to
construct a domain-specific deployment tool
54. A Reference Architecture for Distributed Software
Deployment
More information:
55. References
NixOS website: http://nixos.org
Nix. A purely functional package manager
Nixpkgs. Nix packages collection
NixOS. Nix based GNU/Linux distribution
Hydra. Nix based continuous build and integration server
Disnix. Nix based distributed service deployment
NixOps. NixOS-based multi-cloud deployment tool
Software available under free and open-source licenses
(LGPL/X11)