Presentation from the Nov 2009 Eclipse DemoCamp in Stuttgart. This presentation discusses how DITA can be used for efficient management of Eclipse help documentation with the support of DITAworks. It also shows how DITAworks IDE tooling can help optimize the management of Eclipse help context IDs and communication between development and documentation teams.
The document discusses several trends in development including:
1. Moving from traditional planning-focused development to approaches like design thinking, lean startup, agile development and continuous delivery which emphasize validation, iteration and rapid learning.
2. Adopting modern architectures like microservices and containers to improve scalability, robustness and deployment speed.
3. Embracing DevOps practices and technologies like Docker to enable continuous integration and deployment allowing code to be deployed several times a day.
4. The evolution of integration approaches from traditional ESB to API management, service mesh and serverless architectures in the cloud.
5. Emerging technologies like low-code platforms, event-driven programming and application platforms as a service.
Most modern software systems are subject to variation or come in many variants. Web browsers like Firefox or Chrome are available on different operating systems and in different languages, while users can configure 2000+ preferences or install numerous third-party extensions (or plugins). Web servers like Apache, operating systems like the Linux kernel, or a video encoder like x264 are other examples of software systems that are highly configurable at compile time or at run time to deliver the expected functionality and meet the various desires of users. Variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") is therefore a crucial property of software systems. Organizations capable of mastering variability can deliver high-quality variants (or products) in a short amount of time and thus attract numerous customers, new use cases or usage contexts. A hard problem for end users and software developers is mastering the combinatorial explosion induced by variability: hundreds of configuration options can be combined, each potentially with distinct functionality and effects on execution time, memory footprint, quality of the result, etc. The first part of this course will introduce variability-intensive systems, their applications and challenges, in various software contexts. We will use intuitive examples (like a generator of LaTeX paper variants) and real-world systems (like the Linux kernel). A second objective of this course is to show the relevance of Artificial Intelligence (AI) techniques for exploring and taming such enormous variability spaces. In particular, we will introduce (1) how satisfiability and constraint programming solvers can be used to properly model and reason about variability; and (2) how machine learning can be used to discover constraints and predict the variability behavior of configurable systems or software product lines.
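To make point (1) concrete, the solver-style reasoning over a feature model can be sketched on a toy example. The feature names and constraints below are invented for illustration and are not taken from the course material; a tiny model lets brute-force enumeration stand in for a SAT solver.

```python
from itertools import product

# Toy feature model for a hypothetical text-editor product line.
features = ["Base", "Spell", "Cloud", "Offline"]

# Constraints as Boolean predicates over a configuration dict:
#   - Base is mandatory
#   - Cloud and Offline are mutually exclusive
#   - Spell requires Base
constraints = [
    lambda c: c["Base"],
    lambda c: not (c["Cloud"] and c["Offline"]),
    lambda c: (not c["Spell"]) or c["Base"],
]

def valid_configurations():
    """Enumerate every assignment that satisfies all constraints."""
    for bits in product([False, True], repeat=len(features)):
        config = dict(zip(features, bits))
        if all(rule(config) for rule in constraints):
            yield config

configs = list(valid_configurations())
print(len(configs))  # 6 valid variants out of 2**4 = 16 assignments
```

Brute-force enumeration works here only because the model has four features; real systems such as the Linux kernel, with thousands of options, require dedicated SAT or constraint solvers of the kind the course introduces.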
http://ejcp2019.icube.unistra.fr/
Biology, medicine, physics, astrophysics, chemistry: all these scientific domains need to process large amounts of data with increasingly complex software systems. Achieving reproducible science raises several challenges involving multidisciplinary collaboration and socio-technical innovation, with software at the center of the problem. Despite the availability of data and code, several studies report that the same data analyzed with different software can lead to different results. I see this problem as a manifestation of deep software variability: many factors (operating system, third-party libraries, versions, workloads, compile-time options and flags, etc.), themselves subject to variability, can alter the results, up to the point that they can dramatically change the conclusions of some scientific studies. In this keynote, I argue that deep software variability is both a threat and an opportunity for reproducible science. I first outline some works on (deep) software variability, reporting preliminary evidence of complex interactions between variability layers. I then connect ongoing work on variability modelling with deep software variability in the quest for reproducible science.
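As a minimal, generic illustration of this phenomenon (not an example taken from the keynote), even the accumulation strategy used to sum the very same numbers can change a result, which is exactly the kind of below-the-surface variability that differs across libraries and compiler flags:

```python
import math

# The same data, two summation strategies. The large values cancel,
# but naive left-to-right accumulation silently absorbs the 1.0 terms.
data = [1e16, 1.0, -1e16] * 1000

naive = sum(data)        # left-to-right: each 1.0 is lost to rounding
exact = math.fsum(data)  # exactly rounded summation

print(naive, exact)  # 0.0 1000.0
```

Two software stacks that differ only in how an intermediate reduction is ordered can thus report different numbers for identical input data, which is why controlling the full software environment matters for reproducibility.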
10265: Developing Data Access Solutions with Microsoft Visual Studio 2010 (bestip)
This document provides a course summary for "Developing Data Access Solutions with Microsoft Visual Studio 2010". The course is intended to teach experienced developers to optimize data access designs using technologies like ADO.NET Entity Framework, LINQ, WCF Data Services and the Sync Framework. Over the five-day course, students will learn to design entity data models, query data, handle concurrency, customize entities and build n-tier solutions using these technologies. The target audience is professional .NET developers familiar with data access and Visual Studio 2010 who want to improve productivity and application quality.
An Extensible Virtual Digital Libraries Generator @ ECDL 2008 (Leonardo Candela)
The document describes a VDL Generator Framework that was developed to support the definition and operation of Virtual Digital Libraries (VDLs) on e-Infrastructures. The framework uses a modular approach with logical and physical components to represent and generate logical and deployment plans for VDLs. It employs a search strategy using dynamic programming to optimize the generation of deployment plans from logical plans in a domain-agnostic manner. The framework was implemented within the gCube system to serve e-Science scenarios by making the difficult task of defining and deploying VDLs more user-friendly.
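The dynamic-programming search the summary mentions can be sketched generically: pick, for each component of a logical plan, a hosting node so that deployment plus inter-node transfer costs are minimized. The components, nodes, and all costs below are invented for illustration and are not gCube's actual model.

```python
from functools import lru_cache

# Hypothetical logical plan: a pipeline of components, each deployable
# on one of several nodes at a given cost (all numbers invented).
components = ["index", "search", "ui"]
node_cost = {  # node_cost[component][node] = deployment cost
    "index":  {"A": 4, "B": 2},
    "search": {"A": 1, "B": 3},
    "ui":     {"A": 2, "B": 2},
}
TRANSFER = 2  # extra cost when adjacent components sit on different nodes

@lru_cache(maxsize=None)
def best(i, prev_node):
    """Minimal cost of deploying components[i:], given where i-1 was placed."""
    if i == len(components):
        return 0
    comp = components[i]
    return min(
        node_cost[comp][n]
        + (TRANSFER if prev_node is not None and n != prev_node else 0)
        + best(i + 1, n)
        for n in node_cost[comp]
    )

print(best(0, None))  # 7
```

Memoization keeps the search linear in the number of (component, node) pairs rather than exponential in the plan length, which is the point of using dynamic programming for plan generation.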
#1 The diversity of terminology shows the large spectrum of shapes DSLs can take.
#2 As syntax and development environment matter, DSLs should allow the user to choose the right shape according to their usage or task.
#3 A metamorphic DSL vision is proposed where DSLs can adapt to the most appropriate shape, including transitioning between shapes based on usage or task.
The document discusses Google's ML Kit, which is a mobile SDK that brings machine learning capabilities to Android and iOS apps. It allows developers to easily implement functionality like text recognition, face detection, and custom model integration with just a few lines of code, without requiring deep knowledge of machine learning or model optimization. The document provides steps to integrate ML Kit into an Android app, including adding the necessary dependencies, configuring models for on-device or cloud-based APIs, processing images with the models, and extracting results. It also discusses using ML Kit with custom TensorFlow Lite models and converting models to the TFLite format required for mobile.
This document classifies programming languages into four main categories (machine languages, low-level, mid-level and high-level) according to their level of abstraction and programming paradigm. It explains that machine languages are directly readable by the machine, low-level languages are close to how it operates, and mid-level languages are regarded by some as intermediate.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
Producing documentation for Eclipse RCP applications using single source principles (wild_wild_leha)
This document discusses using single source principles to produce documentation for Eclipse RCP applications. It outlines typical documentation deliverables like printed manuals, help files, and online documentation. Traditional documentation tools have challenges like redundant content and complex workflows. The document proposes using DITA and the DITAworks toolset as a solution. DITAworks allows authoring once and generating documentation in multiple formats from a single source. It also facilitates content reuse and integration with development processes.
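The conditional, single-source publishing that DITA enables can be sketched with stdlib XML filtering. The topic content and audience values below are invented, and a real toolchain (e.g. the DITA Open Toolkit with DITAVAL files) is far richer; this only illustrates the idea of one source yielding several audience-specific outputs.

```python
import xml.etree.ElementTree as ET

# A minimal DITA-style topic with conditional "audience" attributes
# (content invented for illustration).
TOPIC = """
<topic id="install">
  <title>Installing the product</title>
  <body>
    <p>Download the installer.</p>
    <p audience="administrator">Configure the server connection.</p>
    <p audience="end-user">Double-click the desktop icon.</p>
  </body>
</topic>
"""

def filter_topic(xml_text, audience):
    """Keep only elements whose audience attribute is absent or matches."""
    root = ET.fromstring(xml_text)
    for parent in list(root.iter()):          # snapshot before mutating
        for child in list(parent):
            if child.get("audience") not in (None, audience):
                parent.remove(child)
    return root

admin_view = filter_topic(TOPIC, "administrator")
print([p.text for p in admin_view.iter("p")])
```

Running the same function with a different audience value produces the end-user variant from the identical source, which is the reuse payoff the document describes.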
Shameel Ahamad M A is an SAP SD consultant with over 4 years of experience implementing various SAP SD business processes. He has extensive experience with master data setup, sales order processing, consignment processes, IDOC processing, interfaces, and the order-to-cash cycle. His clients include British American Tobacco and Apple, along with internal projects for Wipro. He has expertise in SAP SD, CRM, and basic ABAP skills.
Using DITAworks for Eclipse Help publishing (wild_wild_leha)
This presentation shows how DITAworks adds value to the process of single-source documentation management. It also focuses on specific support for Eclipse Help as one of the target publishing formats.
Programming languages can be classified by their level of abstraction as machine, low-level, mid-level or high-level languages. They can also be classified by programming paradigm or form of implementation, whether directly readable by the machine or closer to how it operates. Some languages are considered mid-level.
Raymond J. Brunoni has talents in communication, organization, writing and creativity. His motivations include achieving targets and goals, satisfaction from progress, and thinking creatively and strategically. His passions are challenges, group dynamics, goals and accomplishments. As a career highlight, he was a project manager for creating a 50,000 square foot office complex and relocating a company, and held a dual customer relations and facility manager role at another company. References praised his professionalism, focus on going above and beyond, and being a valuable asset and recommendation for management roles requiring initiative.
Textivity is a service that allows coaches to easily broadcast information to entire sports teams and parents using a cell phone. It saves time by allowing coaches to speak once and have the information sent directly to players and parents, ensuring everyone receives the same vital details. For a low cost of $30 per 3 person family per season, it provides a convenient way to communicate and eliminate confusion while also helping to raise funds that go back into the team.
TexTivity is a service that allows coaches to easily broadcast information to entire teams and parents using text messages from their cell phone. It saves coaches time by allowing them to communicate updates once instead of repeating themselves. Parents and players receive the same information directly through text messages instead of email, making communications more convenient and eliminating confusion. The service costs $30 per family each season, but $15 per family is fundraised each season to support the team.
This document introduces Backstage, an open platform for building developer portals created by Spotify. Backstage unifies tooling, services, apps, data and docs behind a single consistent UI to make sense of a company's entire software ecosystem, providing speed, control over chaos, and scalability. It lets developers easily create and manage software, and explore their company's full software ecosystem to enable collaboration.
The document discusses the Total Data Science Process (TDSP) which aims to integrate DevOps practices into the data science workflow to improve collaboration, quality, and productivity. The TDSP provides standardized components like a data science lifecycle, project templates and roles, reusable utilities, and shared infrastructure to help address common challenges around organization, collaboration, quality control, and knowledge sharing for data science teams. It describes the various TDSP components that standardize the data science process and ease challenges around the data science solutions development lifecycle.
Worried about the learning curve to introduce Deep Learning in your organization? Don’t be. The DEEP-HybridDataCloud project offers a framework for all users, including non-experts, enabling the transparent training, sharing and serving of Deep Learning models both locally and on hybrid cloud systems. In this webinar we will show a set of use cases, from different research areas, integrated within the DEEP infrastructure.
The DEEP solution is based on Docker containers that already package all the tools needed to deploy and run Deep Learning models in the most transparent way. There is no need to worry about compatibility problems: everything has already been tested and encapsulated so that the user has a fully working model in just a few minutes. To make things even easier, we have developed an API that lets the user interact with the model directly from the web browser.
Pat Farrell, Migrating Legacy Documentation to XML and DITA (farrelldoc)
Pat Farrell is a technical information developer who has created a variety of custom solutions to increase productivity. This presentation is an overview of Pat's technical innovations, followed by more detail on a conversion project he managed: migrating documentation to XML and DITA. Learn what you need to begin such a conversion project: workflow, considerations, and the benefits and drawbacks of using in-house or external resources for your XML or DITA conversion project.
This document discusses improving documentation for open source projects by making it more modular, reusable, and using open standards like DITA. It introduces single sourcing of documentation and DITA, and describes how Drupal and DITA can be combined to allow documentation to become a modular unit that is collaboratively developed and adapted for different contexts. Future visions include better integration of DITA metadata and structures into Drupal to improve technical authoring and documentation workflows.
This document outlines various artifact sets produced during the software engineering process, including requirement, design, implementation, deployment, test, and management artifacts. It discusses the artifacts in each set and how they evolve over the software lifecycle. The key artifact sets are the requirement set containing the engineering context, the design set representing different abstraction levels, the implementation set with source code, and the deployment set for delivering the software to users. Test artifacts must also be developed concurrently and documented similarly. Management artifacts include documents for planning, tracking status and releases, and defining the development environment.
This slide deck gives an overview of the Nuxeo Platform and what makes it the best solution for document management and content management initiatives.
The document discusses the Nuxeo Platform, a content application platform that improves productivity for application developers and deliverers. It enables them to provide successful content-driven applications by providing a comprehensive ecosystem across application development, delivery, and management. This includes tools for content modeling, configuration, deployment, and runtime execution of custom content applications.
Modular Documentation Joe Gelb Techshoret 2009Suite Solutions
Designing, building and maintaining a coherent content model is critical to proper planning, creation, management and delivery of documentation and training content. This is especially true when implementing a modular or topic-based XML standard such as DITA, SCORM and S1000D, and is essential for successfully facilitating content reuse, multi-purpose conditional publishing and user-driven content.
During this presentation we will review basic concepts and methods for implementing information architecture. We will then introduce an innovative, comprehensive methodology for information modeling and content development that employs recognized XML standards for representation and interchange of knowledge, such as Topic Maps and SKOS. In this way, semantic technologies designed for taxonomy and ontology development can be brought to bear for creating and managing technical documentation and training content, and ultimately impacting the usability and findability of technical information.
Developer Experience (DX) for UX ProfessionalsIan Jennings
Ian Jennings presents at the Austin UXPA meetup on November 12, 2019 at Visa.
Developer Experience (DX) is the equivalent to User Experience (UX) when the user of the software or system is a developer. Sure, the science is the same, but this talk will teach you why developer experience is gaining traction as a new field. Between APIs, SDKs, code, documentation, demos, CLIs, tutorials, and developer portals, DX is a whole new beast. Learn about the emergence of Developer Experience, the similarities and differences between UX an DX, and the tools you need to apply your UX experience toward the field of DX.
Speaker Bio:
Ian Jennings is the founder of Haxor, a developer experience testing platform based in Austin TX. Haxor tests and measures APIs, SDKs, and developer products with on-demand feedback from real developers. Previously Ian co-founded developer meetup platform Hacker League (acquired by Mashery and Intel) before spending 6 years at PubNub establishing their developer experience strategy. He also operates DevPort, a developer portfolio site populated by thousands of developers.
The document discusses DevOps practices for AI projects. It outlines some common problems with current approaches that treat models as "piles of scripts" without governance or reproducibility. The Team Data Science Process (TDSP) framework is presented as a solution to implement traceability, validation, automation, and observability. The Azure Machine Learning service is highlighted as a tool that can help easily implement the AI/ML lifecycle and integrate with DevOps practices like continuous integration/delivery (CI/CD) pipelines. It provides a high-level overview of the service's capabilities and components.
Continuous Integration for Oracle Database DevelopmentVladimir Bakhov
The document provides information about continuous integration (CI) for database development projects. It discusses how version control, automated testing, and continuous deployment can be applied to database code and artifacts. Key points include:
- Storing database scripts, structures, and data migrations in version control to allow for automated deployment and rollbacks.
- Maintaining a "trunk" version that serves as the single source of truth for all changes.
- Taking nightly backups of a production-like environment and deploying changes since the last build to test integration.
- Generating deployment scripts by comparing the trunk to the current production version.
- Running automated tests after each deployment to catch errors early.
Microsoft Ignite 2018 BRK3192 Container DevOps on AzureJessica Deen
This document provides an overview of DevOps concepts and tools. It discusses containers and container orchestration with Kubernetes. It also mentions Azure DevOps and Azure Kubernetes Service (AKS) as tools that can help with DevOps practices like continuous integration/delivery (CI/CD). Helm charts are presented as a way to define and manage complex Kubernetes applications and services. Some best practices for Kubernetes are also listed.
ODSC East 2020 Accelerate ML Lifecycle with Kubernetes and Containerized Da...Abhinav Joshi
This deck provide an overview of containers and Kubernetes, and how these technologies can help solve the challenges faced by data scientists, ML engineers, and application developers. Next, it showcases the key capabilities required in a containers and kubernetes platform to help data scientists easily use technologies like Jupyter Notebooks, ML frameworks, programming languages to innovate faster. Finally it discusses the available platform options (e.g. KubeFlow, Open Data Hub, etc.), and some examples of how data scientists are accelerating their ML initiatives with containers and kubernetes platform.
Simon Brown: Software Architecture as Code at I T.A.K.E. Unconference 2015Mozaic Works
This document discusses software architecture and how it relates to code. It suggests that software architecture should be more accessible to developers and embodied in the code through architecturally evident coding styles. Components can be extracted from code if naming conventions, packaging, and other patterns are used. Both diagrams and code should reflect the architectural abstractions. Software architecture models can be maintained as code to keep them in sync with implementation changes.
Scaling AI/ML with Containers and Kubernetes Tushar Katarki
Scaling AI and machine learning projects poses challenges around collaboration, data access, and deploying models into production. Containers and Kubernetes can help address these challenges by providing a self-service platform for data scientists to access tools, frameworks, and compute resources. This allows for rapid iteration and sharing of work. Kubernetes provides resource management and workload scheduling across hybrid cloud environments. OpenShift is a distribution of Kubernetes optimized for AI/ML workloads. It incorporates additional services for continuous integration/delivery and automation. Open Data Hub is an open source community project and reference architecture for building AI platforms on OpenShift and Kubernetes.
Build and automate your machine learning application with docker and jenkinsKnoldus Inc.
This document discusses using Docker and Jenkins to automate machine learning application deployment. It introduces Docker as a tool to create isolated environments for applications using containers. Benefits of Docker include consistent environments, portability, and easy scaling. The document demonstrates building a ML application Docker image. It then introduces Jenkins as an open-source automation server that facilitates continuous integration and delivery. Jenkins allows automating build, test and deployment tasks through jobs and pipelines. Feedback is welcomed.
Dataverse can be deployed using Docker containers to improve maintainability and portability. The document discusses how Docker can isolate applications and their dependencies into portable containers. It provides an example of deploying Dataverse as a set of microservices within Docker containers. Instructions are included on building Docker images, running containers, and managing the containers and images through commands and tools like Docker Desktop, Docker Hub, and Docker Compose.
The Cloud Deployment Toolkit (CDTK) project is a proposed open source project under the Eclipse Technology Project.
This proposal is in the Project Proposal Phase (as defined in the Eclipse Development Process) and is written to declare its intent and scope.
We solicit additional participation and input from the Eclipse community. Please send all feedback to the CDTK forum.
The document provides tips for documentation design in software projects. It discusses why documentation is important for project success and ensuring all team members understand goals and responsibilities. It also covers reasons why documentation is often not written or read, such as being time-consuming or not matching the delivered system. The document recommends developing documentation concurrently with the system to ensure it always matches and using tools to simplify documentation generation and navigation.
1. Experts in Information Management Solutions and Services
Optimized integration of documentation with Eclipse/RCP applications
Alexej Spas,
instinctools GmbH
November 2009
4. Typical SW documentation deliverables:
Printed documentation materials (manuals, references etc.)
Application help
Context sensitive help
Documentation materials that should be published online
(Online help)
Training materials
Reference documentation (API docs and so on)
... other documents
Most of these documents have quite a high potential for partial
content reuse.
6. Challenges we are facing in this scenario:
Dealing with different source formats and redundant content
Increasing Complexity of Documentation
Globalization & Localization
Shortening of Development Cycles
High Quality Expectations
Different Target Media
Need for Integration
Increasing Demand for Documentation Variants
Conclusion: Without a consistent documentation methodology and
appropriate tool support, there is little chance of managing all
required deliverables efficiently
7. Solution
Methodology: Single source publishing allows:
same content to be used in different documents or in various
formats.
the labor-intensive and expensive work of editing to be carried out
only once, on one source document.
further transformations to be performed mechanically, by
automated tools.
Implementation: XML/DITA:
DITA divides content into small, self-contained topics
DITA Topics can be reused in different deliverables.
Tools:
DITAworks as Authoring platform
DITAworks IDE tooling to enable efficient collaboration between
development and documentation teams
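As a sketch of what such small, self-contained and reusable DITA topics look like (the file names, IDs and wording below are illustrative, not taken from the presentation):

```xml
<!-- install_intro.dita: a small, self-contained concept topic -->
<concept id="install_intro">
  <title>Installing the application</title>
  <conbody>
    <p id="sysreq">Java 5 or later is required.</p>
  </conbody>
</concept>

<!-- upgrade.dita: reuses the paragraph above by reference (conref),
     so the system requirement is maintained in exactly one place -->
<concept id="upgrade">
  <title>Upgrading the application</title>
  <conbody>
    <p conref="install_intro.dita#install_intro/sysreq"/>
  </conbody>
</concept>
```

Because the shared paragraph lives in one source topic, every deliverable that pulls it in (help, PDF, online docs) stays consistent, and a change is edited once and republished mechanically.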
9. Advantages of DITAworks in This Scenario
Single-source publishing approach
Comfortable WYSIWYG editing
Generate different formats from single source
Minimize efforts spent on managing documentation variants
Increase content reuse and minimize amount of managed content
Minimize translation costs
Increase quality and consistency of documentation
Automatically build product documentation as part of product build
process
Content can be pulled from 3rd party systems
Content can be published to 3rd party systems
11. Additional Challenges in Eclipse RCP Scenario:
Organizing efficient collaboration between Dev team and Doc team
Continuous development: detecting new, undocumented places in
source code
Support of all Eclipse Help features in a single-source environment
(live actions, cheat sheets and so on)
Two alternative ways of Help Context ID mapping
Componentization of documentation: Cross-plugin links in
documentation
Link validation
12. Direct Context Mapping Approach
The UI plug-in registers a Help Context ID for a control; the Eclipse
Help System then presents context-sensitive help for that context ID:
PlatformUI.getWorkbench().getHelpSystem().setHelp(control, help_Context_ID)
For example:
PlatformUI.getWorkbench().getHelpSystem()
    .setHelp(dialog.getShell(),
        IWorkbenchHelpContextIds.NEW_WIZARD);
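The setHelp call only attaches an ID to a widget; the Eclipse Help System resolves that ID through a contexts file contributed via the standard org.eclipse.help.contexts extension point. A minimal sketch, with illustrative IDs, file names and topic paths:

```xml
<!-- contexts.xml in the help plug-in (illustrative content) -->
<contexts>
  <!-- the id must match the Help Context ID registered in code -->
  <context id="new_wizard_context">
    <description>Creates a new resource in the workspace.</description>
    <topic href="html/tasks/new_wizard.html" label="Creating resources"/>
  </context>
</contexts>

<!-- plugin.xml fragment registering the contexts file -->
<extension point="org.eclipse.help.contexts">
  <contexts file="contexts.xml"/>
</extension>
```

Pressing F1 on the widget looks up the context ID in this file and shows the description and related topics in the help view.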
14. Mapping Strategies Compared
Direct context mapping approach
“+”:
Simple
“-”:
Context IDs hardcoded into source code
1:1 relationship between context IDs and contexts
Dynamic context mapping approach
“+”:
Context IDs in source code and in documentation are decoupled
Doc. team can freely assign Context IDs to any context (N:N)
“-”:
Requires more management
15. DITAworks: Extended Eclipse Help support
Highlights:
Specialized DITA types supporting Eclipse help and contexts
Support of live actions and cheat sheets
Cross-plugin links generation and validation
Support of dynamic context ID mapping (DTP approach)
Eclipse help specific validations
Tools for integration with the development process (Context ID
management between development and documentation teams)
Plug-in for Eclipse IDE
ID synchronization wizards
17. Demo details and goals
Based on classical Eclipse “RCP Mail Template” example project
Create documentation for our sample RCP application in the form of:
Eclipse Help
PDF
Assign context help according to Dynamic Context Mapping Strategy
Demonstrate the work environments for Dev and Doc teams
Demonstrate the process
19. Step 1: Set up infrastructure
Two workspaces, one for the Dev team and one for the Doc team
Shared projects (via version control) for content exchange
Different tooling:
Eclipse IDE + DITAworks IDE tools for Dev
DITAworks for Doc teams
20. Work infrastructure
Shared projects in Version Control: /mail.rcp, /mail.doc,
/mail.doc.sources, /mail_Model
The Dev workspace checks out all projects; the Doc workspace checks out
only the documentation projects (/mail.doc, /mail.doc.sources, /mail_Model)
21. Step 2: Assign context IDs in code
Role: Developer
Tool: DITAworks IDE tooling
Find Java UI components
that require Help Context ID
Assign new Context IDs
using the refactoring wizard.
22. Step 3: Export context IDs to shared project
Role: Developer
Tool: DITAworks IDE tooling
Run Export Wizard for
Context IDs
Describe exported IDs
(optional)
Store Context IDs in the
shared help source project
23. Step 4: Assimilate context IDs and document
Role: Doc team
Tool: DITAworks
Open or Import exported
Context IDs
Document: Assign existing
topics. Write new topics.
Assign Context
definitions to help plugin
24. Step 5: Publish Eclipse help documentation
Role: Doc team
Tool: DITAworks
Set up the publishing
configuration for the Eclipse
help plug-in
Run publishing process
Share results.
25. Step 6: Integrate help into application
Role: Developer
Tool: Eclipse IDE
Include the documentation
bundles in the application.
Include org.eclipse.help.*
bundles
Build & Run
Press F1 on one of the
views in sample app
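The integration step above can be sketched as follows. The org.eclipse.help.toc extension point is the standard Eclipse mechanism for contributing help content; the file and plug-in names are illustrative:

```xml
<!-- plugin.xml of the generated documentation plug-in (illustrative names) -->
<plugin>
  <!-- contribute the generated table of contents to the Eclipse Help System -->
  <extension point="org.eclipse.help.toc">
    <toc file="toc.xml" primary="true"/>
  </extension>
</plugin>
```

In addition, the product configuration must include the Eclipse help bundles (for example org.eclipse.help.ui and org.eclipse.help.webapp with their prerequisites); without them, pressing F1 in the running RCP application has no help system to delegate to.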
26. Step 7: Generate other formats
Role: Doc team
Tool: DITAworks
Set up a publishing
configuration for another
target format (PDF)
Run publishing process
27. Summary
DITAworks addresses most of the challenges in the area of RCP
application documentation as a ready out-of-the-box product
DITAworks enables single-source approach to the development of
documentation under Eclipse
DITAworks can be easily integrated with other Eclipse based tools
DITAworks provides IDE tooling to optimize collaboration
DITAworks pays special attention to support of Eclipse help format
DITAworks is also a good starting point for custom solutions dealing
with structured document generation
30. About *instinctools
*instinctools GmbH has delivered Information Management solutions on Java technology
since 2001, and on Eclipse since 2007
Germany (Stuttgart): Eclipse application design, implementation, maintenance and
support; consulting; tools for technical documentation management (single source
strategies); R&D, product development, project and partner management, customer
support and sales
Belarus (Grodno): software development services
Management team in Germany, near-shore software lab in Belarus
Successfully serving premium customers like Daimler, Hubert Burda Media, Garant,
EnBW and SMEs
Proven management processes and reliable project delivery infrastructure
Member of tekom, Eclipse Foundation