The document discusses some of the drawbacks of using DITA from the perspective of a component content management system (CCMS). It notes that DITA's coverage of core CCMS requirements is surprisingly small: DITA addresses only 18% of those requirements, compared with the 67% a typical CCMS covers. It also argues that the DITA standard evolves too slowly for market demands. Finally, it outlines how files proliferate as DITA content is translated and versioned, noting that what a CCMS stores as a single topic accessed in different contexts can become many separate files.
This document summarizes a presentation about optimizing DITA-based content for search engine optimization. The presentation discusses how DITA content is transformed and published on the web, and what search engines like Google prioritize, such as descriptive titles, effective short descriptions, and relationship tables. It emphasizes writing content with users in mind by understanding their needs and scenarios. While techniques like keywords and Dublin Core metadata don't significantly impact rankings, focusing on user experience through topic types such as task and troubleshooting is important as search evolves toward understanding natural-language queries.
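As a sketch of the title-and-short-description advice above: in DITA, the topic title typically becomes the published page title, and the short description often surfaces as the search snippet. The topic id and wording below are invented for illustration:

```xml
<concept id="battery_care">
  <!-- A descriptive title becomes the page title and heading after publishing -->
  <title>Extending laptop battery life</title>
  <!-- The shortdesc commonly feeds the meta description / search snippet -->
  <shortdesc>Adjust power settings and charging habits to extend
    the usable life of your laptop battery.</shortdesc>
  <conbody>
    <p>...</p>
  </conbody>
</concept>
```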
Optimizing your DITA content model for translation - Amber Swope
The document discusses optimizing DITA content for translation by removing ambiguity from the content model. It recommends indicating when to translate content using attributes, using appropriate DITA elements, and avoiding inline content or key references. The speaker has over 20 years of experience in the industry and is the author of papers on information development and DITA.
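To illustrate the attribute-based approach the summary mentions: DITA's `translate` attribute can mark content that should pass through translation untouched. The element choice and UI label below are invented for the example:

```xml
<p>Enter the server address in the
  <!-- uicontrol marks literal UI text; translate="no" tells the
       translation tooling to leave it unchanged -->
  <uicontrol translate="no">Host Name</uicontrol> field.</p>
```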
A taxonomy is a way to define and express relationships between things to facilitate organizing, classifying, and discovering relationships in a collection of information. Without a taxonomy, related items end up with arbitrary or ambiguous names, making them hard to group and retrieve. A taxonomy provides a uniform system, through a hierarchical or polyhierarchical structure, to show how items are similar or different. Developing a taxonomy involves determining its purpose, scope, relationships to represent, and tools before analyzing content to identify common terms and categories.
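In DITA specifically, a hierarchy like this can be encoded in a subject scheme map, where nesting expresses broader/narrower relationships between controlled terms. The key names below are invented for the example:

```xml
<subjectScheme>
  <!-- Each subjectdef defines a controlled term;
       nesting expresses the hierarchy -->
  <subjectdef keys="operating-systems">
    <subjectdef keys="linux"/>
    <subjectdef keys="windows"/>
  </subjectdef>
</subjectScheme>
```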
DITA Quick Start: System Architecture of a Basic DITA Toolset - Suite Solutions
Presenter: Joe Gelb, President, Suite Solutions
Abstract: In this webinar, you will learn about the software, integration and customization which enable you to effectively author, manage, localize, publish and share your DITA XML content. We will review how each tool fits into the content lifecycle and discuss options for an incremental DITA XML implementation using a basic toolset as the starting point.
Keith Schengili-Roberts - DITA Worst Practices - Jack Molisani
While people are interested in hearing about successes, we can actually learn more from failure. Not only do we discover what not to do, but also how to avoid the circumstances that led to it. Presenter Keith Schengili-Roberts has seen a lot of good and bad things happen to DITA implementations over the years, and part of his job at IXIASOFT is to investigate what works, what doesn’t, and why. Listen to his stories on the best (worst) DITA practices!
DITA, Semantics, Content Management, Dynamic Documents, and Linked Data – A M... - Paul Wlodarczyk
DITA was conceived as a model for improving reuse through topic-oriented modularization of content. Instead of creating new content or copying and pasting information which may or may not be current and authoritative, organizations manage a repository of content assets – or DITA topics – that can be centrally managed, maintained and reused across the enterprise. This helps to accelerate the creation and maintenance of documents and other deliverables and to ensure the quality and consistency of the content organizations publish. But the next frontier of DITA adoption is leveraging semantic technologies—taxonomies, ontologies and text analytics—to automate the delivery of targeted content. For example, a service incident from a customer is automatically matched with the appropriate response, which is authored and managed as a DITA topic. Learn how organizations can leverage DITA, semantics, content management, dynamic documents, and linked data to fully utilize the value of their information.
Optimizing Content Reuse with DITA - LavaCon Webinar with Keith Schengili-Rob... - IXIASOFT
Join Keith Schengili-Roberts, IXIASOFT DITA Specialist, and the LavaCon crew, for a free webinar on Thursday, September 8, 2016 to learn more about optimizing content reuse with DITA.
Optimizing Content Reuse with DITA
DITA was designed around the idea of content reuse. Maps, topics, conrefs and keys all provide the means for sharing and reusing content effectively within a documentation team using the standard. But what are the optimal ways of doing this, and what are the common mistakes first-time DITA users make when it comes to content reuse? Did you know that DITA 1.3 offers additional means for reusing content, such as scoped keys? And what good is content reuse if you can’t find the content you are looking for?
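The conref and key mechanisms mentioned above can be sketched as follows. File names, ids, and the product name are invented for the example:

```xml
<!-- In warehouse.dita: the canonical admonition lives in one place -->
<note id="esd_warning">Wear a grounding strap before opening the case.</note>

<!-- In any other topic: pull the note in by reference instead of copying -->
<note conref="warehouse.dita#warehouse/esd_warning"/>

<!-- In the map: keys add indirection, so the map decides what "product" means -->
<keydef keys="product">
  <topicmeta><keywords><keyword>Widget Pro 9000</keyword></keywords></topicmeta>
</keydef>

<!-- In a topic: resolves to the key's text at publish time -->
<ph keyref="product"/>
```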
In this presentation IXIASOFT’s DITA Specialist Keith Schengili-Roberts will examine content reuse best practices, and look at how the idea of content reuse has evolved, changed and been refined since DITA first debuted over ten years ago.
Webinar hosted by LavaCon, Sponsored by IXIASOFT.
Organizations often need to quickly analyze large amounts of data, such as logs generated from a wide variety of sources and formats. However, traditional approaches require a lot of time and effort designing complex data transformation and loading processes; and configuring data warehouses. Using AWS, you can start querying your datasets within minutes. In this session you will learn how you can deploy a managed Presto environment in minutes to interactively query log data using standard ANSI SQL. Presto is a popular open source SQL engine for running interactive analytic queries against data sources of all sizes. We will talk about common use cases and best practices for running Presto on Amazon EMR.
While open-source solutions may have no purchase cost, total costs including configuration, customization, and support can rival those of proprietary solutions. DITA provides benefits like reuse and translation but has limitations in areas like graphics, equations, custom output, and legacy content migration. PDF publishing from DITA is especially challenging due to the complexity of XSL-FO. DITA works best for organizations with significant reuse across contexts and languages, while smaller groups may find its limitations harder to justify overcoming.
ETUG Spring 2013 - Designing for Touch: Not Just for Mobile Anymore - Paul Hibbitts
While student use of tablets and mobile phones continues to experience tremendous growth, touchscreens are destined for even broader use with the release of such products as Windows 8 and the Google Chromebook Pixel. In this session, user experience consultant Paul Hibbitts shares some of his core design techniques and principles for creating touch-friendly websites. Techniques such as user stories and responsive design sketching will be explored, along with touchscreen interaction design principles.
In addition to discussion, participants will undertake several workshop activities. While not required, participants are encouraged to bring a touch-enabled device along with a notebook to the session.
LavaCon 2012 - Gaining Value From Global Content Using A CCMS - brentmurphy1
Suzanne Mescan and Brent Murphy share their experience with the value proposition of implementing and using a CCMS to manage technical documentation for a global audience.
Improving the mobile learning experience using DITA - Mark Poston
This document discusses using DITA to improve mobile learning experiences. It outlines trends in extended DITA use, dynamic delivery of personalized content to multiple systems, and the Tin Can API for tracking learning experiences across devices. The document proposes using DITA learning maps and metadata to organize reusable learning objects into customized courses and deliver them to mobile apps. DITA's native support for learning maps, topics, and metadata can help build combined learning solutions that track learning on and offline.
Is your technical content development organization considering a move to structured authoring and/or DITA (Darwin Information Typing Architecture)? This presentation provides a high-level introduction to what DITA is--and what the benefits of moving to DITA are. DITA is an excellent solution for many--but not all--organizations and projects. This introduction can help you begin to understand why DITA may or may not be a good solution for you.
Improve your Chances for Documentation Success with DITA and a CCMS LavaCon L... - IXIASOFT
This document discusses how adopting DITA and a component content management system (CCMS) can improve documentation success. It outlines key features of DITA, including content reuse. Four main reasons for adopting DITA and a CCMS are discussed: the need for more efficiency, outgrowing current tools, rising localization costs, and the need for content verification. Four things you can do with DITA and a CCMS are also presented: versioning content, implementing workflows, measuring documentation metrics, and improving localization.
Move Our DITA Content to Another CCMS? Seriously? - IXIASOFT User Conference ... - IXIASOFT
Presented by Nancy Howe and gg Heath, Information Architects, Teradata Labs at the IXIASOFT User Conference 2016.
How do you move half a million highly intertwined objects from one CCMS to another, while producing new content at a higher rate than ever? In this presentation we'll share the challenges we discovered, the solutions we developed, and the processes we used to migrate DITA content from a proprietary CCMS to the IXIASOFT DITA CMS. Teradata is a long-time DITA user, supporting approximately 300 deliverables, thousands of pages of content with up to 90% reuse, intense filtering, and translation.
The long and winding road of migrating to IXIASOFT DITA-CMS included overcoming technical challenges like bursting massive conref containers into thousands of referable-content topics, completely changing our approach to versioning content, standardizing filtering, and implementing a new translation workflow.
The document discusses the concept of business environment and its importance. It defines business environment as the total external factors beyond a firm's control that influence its operations and decision making. These factors include economic, social, political, legal, and technological aspects. Understanding the business environment is important for firms to identify opportunities and threats, direct growth, and adapt to changes. The different types of business environment discussed are the economic environment, consisting of economic conditions and policies, and the non-economic environment comprising social, political, legal, technological, and other external aspects.
What’s new in DITA 1.3?
by Yas Etessam, DITA Consultant and Leigh White, DITA Specialist at IXIASOFT
Come and learn about the new proposed DITA 1.3 features. Leigh and Yas will provide an overview of the new architectural features including extensions to the DITA core vocabulary, Online Help vocabularies, scoped keys, branched filtering and enhancements to the Learning and Training, Troubleshooting and Table specializations.
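The scoped keys mentioned above let the same key name resolve differently in different parts of a map (a DITA 1.3 feature). File names and scope names below are invented for the example:

```xml
<map>
  <!-- Same key name, different binding inside each keyscope -->
  <topicgroup keyscope="model-a">
    <keydef keys="install" href="install-model-a.dita"/>
    <!-- quickstart.dita links via keyref="install";
         here it resolves to install-model-a.dita -->
    <topicref href="quickstart.dita"/>
  </topicgroup>
  <topicgroup keyscope="model-b">
    <keydef keys="install" href="install-model-b.dita"/>
    <!-- The same shared topic resolves to install-model-b.dita here -->
    <topicref href="quickstart.dita"/>
  </topicgroup>
</map>
```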
This document provides an overview of XML authoring simplified for all using DITA and FrameMaker. It begins with some housekeeping notes and then covers:
1. Setting up FrameMaker preferences for DITA authoring
2. An overview of DITA topics, maps, and specializations
3. Introduction to core DITA topic types and elements
4. Using maps to organize content
5. Examples of concept, task, and reference topic types
6. Publishing content from DITA maps using the native FrameMaker publisher or DITA Open Toolkit
7. Customizing DITA Open Toolkit output
The document emphasizes making DITA authoring easy for SME contributors.
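As a minimal sketch of the task topic type covered in the outline above (id and wording invented for the example):

```xml
<task id="replace_filter">
  <title>Replacing the filter</title>
  <taskbody>
    <steps>
      <!-- Each step carries exactly one imperative command in <cmd> -->
      <step><cmd>Power off the unit.</cmd></step>
      <step>
        <cmd>Slide the filter out of its housing.</cmd>
        <!-- Supporting detail goes in <info>, not in the command -->
        <info>The filter may be warm.</info>
      </step>
    </steps>
  </taskbody>
</task>
```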
Introduction to XML and Structured Authoring • Overview of DITA • Topics: The Basic Information Types • Maps: Assembling Topics into Deliverables • Common elements and attributes • Metadata • Examples and exercises
Sometimes, a spontaneous road trip can be a lot of fun, as long as you’re willing to take the good with the bad—getting lost, car trouble, unfriendly (or just plain weird) natives, bad diner food. Usually, though, the most successful trips involve planning, roadmaps, and best of all, guidance from people who’ve already been there.
The journey from traditional, deliverable-centric content creation to DITA-based content creation falls into this second category. In this session, we talk about one small publication group’s experience moving to DITA, from the initial discussions to the successful implementation of a FrameMaker-based, end-to-end publication process. Here are some of the high points of the project; we’ll discuss our decision-making process and some of our technical approaches in detail in the session.
This 2-hour tutorial was presented at the tcworld 2011 conference in Wiesbaden. It shows how you do not have to use the DITA Open Toolkit, Ant scripts, native XML editors and XSL-FO or other transformations to use DITA and create output in a variety of formats. DITA for the rest of us. It is NOT a tutorial about DITA - check out my DITA for Dummies to find that type of info.
The DITA Learning and Training Specialization - IXIASOFT
The document discusses the DITA Learning specialization, which provides a standardized structure for organizing learning content. It describes the key elements used to structure learning maps, objects, overviews, plans, content, assessments and summaries. Implementation considerations include content reuse strategies between technical publications and training. The specialization will be further supported in DITA 1.3 through additional map structures and improved interactions.
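A hedged sketch of the learning-map structure described above, using the Learning and Training specialization's reference elements; file names are invented for the example:

```xml
<!-- Inside a learning map: one reusable learning object,
     assembled from overview, content, assessment, and summary topics -->
<learningObject>
  <learningOverviewRef href="lesson1-overview.dita"/>
  <learningContentRef href="lesson1-content.dita"/>
  <learningAssessmentRef href="lesson1-quiz.dita"/>
  <learningSummaryRef href="lesson1-summary.dita"/>
</learningObject>
```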
Gone through articles and presentations on the web and got a half-baked understanding of the Darwin Information Typing Architecture (DITA)?
Refer to my DITA Quick Start presentation for the 2007 STC India Conference to learn to evaluate, plan and start implementing DITA.
In this presentation, you will learn about the following:
o Structured authoring and XML
o Key DITA concepts: topics, maps, specialization
o DITA architecture and content model
o Authoring in topics
o Organizing content using DITA maps
o Creating relationship tables
o Conditional text and reuse in DITA
o Metadata support in DITA
o DITA tools, standards and processes
o Publishing with the DITA Open Toolkit
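The relationship tables item in the outline above can be sketched as follows; a reltable row links topics so that publishing generates related-links in both directions. File names are invented for the example:

```xml
<map>
  <topicref href="overview.dita" type="concept"/>
  <topicref href="installing.dita" type="task"/>
  <reltable>
    <relheader>
      <relcolspec type="concept"/>
      <relcolspec type="task"/>
    </relheader>
    <!-- Topics in the same row are cross-linked in the output -->
    <relrow>
      <relcell><topicref href="overview.dita"/></relcell>
      <relcell><topicref href="installing.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```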
Ahmad Barkati is a Field Service Engineer for National Oilwell Varco based in Indonesia. He has over 13 years of experience in pressure engineering, mud logging, and borehole enlargement projects. His qualifications include certifications in offshore safety and hydraulics software. Barkati has worked on drilling projects in Indonesia, Vietnam, and Malaysia for oil companies like Pertamina, Genting Oil, and Petronas.
The summary describes a statistical study analyzing the number of balance sheets prepared daily by the public accountants interviewed. The table shows the absolute, relative, and cumulative frequency for each number of balance sheets, from 0 to 6, prepared by the accountants.
The document discusses Kaiser Permanente's innovations in telemedicine and virtual care delivery. It describes how telemedicine allows patients to see their primary care physician and a specialist on the same day, and reduces the need for dermatology referrals by 33%. The document also discusses a telemedicine rover used in ICUs that allows remote doctors to interact with patients, access records, and move around the facility. This results in fewer nighttime emergencies and urgent pages. Finally, the document mentions Kaiser Permanente's plans to expand care delivery through retail clinics in Target and Walmart stores, and their recognition for innovative online tools and highest member satisfaction in California.
techcello is a product company providing .NET engineering stacks with multi-tenant architecture. You get the freedom, flexibility and control of custom development without the complexities, risks, cost and time overheads of building and maintaining your own framework. Multi-tenant applications built using techcello's framework can be deployed anywhere: on-premise Windows/SQL boxes, data centres, virtual machines, Amazon RDS/MySQL, and Windows Azure/SQL Azure.
The document summarizes a multi-tenant SaaS framework called celloSaaS that allows developers to build multi-tenant applications on the .NET stack. It has been in development since 2010 and is currently on version 2.3. It saves developers 30-40% of the time and cost of building multi-tenant applications from scratch. It offers features like tenant-level customization, security isolation, and developer productivity tools.
Building An XML Publishing System With DITAScott Abel
Presented at DocTrain East 2007 Conference by Brian Buehling, Dakota Systems -- Since its inception, DITA has rapidly gained acceptance as a standard document structure used in many XML-based content management and publishing systems. DITA is an XML schema developed primarily to support technical documentation for a wide array of applications. This session will cover the commonly used element, attribute and entity constructs that are defined in the schema. More importantly, recommendations concerning how best to implement DITA solutions will be given. Special attention is given to developing practical DITA applications since, in many cases, some DITA elements will have to be extended through a mechanism called specialization to produce a robust XML-based publishing system.
This document provides an overview comparison of the HP TRIM and Objective electronic document and records management (EDRM) systems. It describes the basic functions, flexibility, integration capabilities, ease of use, and limitations of each system. Both systems are seen as suitable for medium to large organizations, with HP TRIM having a larger market share but Objective growing and expanding its capabilities through acquisitions. Key differences include Objective typically being more costly to implement due to consulting requirements for customizations.
This document provides a summary of Gartner's Magic Quadrant report on enterprise content management vendors. It assesses 22 vendors and places them in four categories based on their completeness of vision and ability to execute. The summary analyzes the strengths and cautions of several leading vendors, including Alfresco, EMC, Ever Team, Fabasoft, HP, Hyland, and IBM. It describes their product portfolios, target markets, growth strategies, and areas for improvement.
nTireDMS is one of the Innovative Software Application System with broad components and capacities on taking care of records and procedures, created by SunSmart Global Ltd going for Mid to Large undertakings spread crosswise over Industry verticals. nTireDMS is 100% program based application going along to n-Tire building design for facilitating over Cloud, Internet and Intranet situations. nTireDMS is a 100% online, profoundly versatile, complete answer for overseeing/distributed every one of your archives/booklets/forms electronically. nTireDMS empowers you to rapidly, effectively and safely oversee records of any sort.
Living Multiple Lives: The New Technical CommunicatorScott Abel
This presentation delivered by Noz Urbina at the Documentation and Training West 2008 conference (www.doctrain.com) in Vancouver, BC.
The world is becoming more and more tech-savvy by the picosecond. More savvy means more demanding! Today organizations need to juggle management of customer-generated content, maximize the use of cross-departmental contributions, and still deliver quality technical communication products to their user base. This presentation takes a low-tech, cross-industry look at why strategies are changing and how organizations are adapting (or not!) to these challenges.
Living Multiple Lives: The New Technical CommunicatorScott Abel
Presented by Noz Urbina at Documentation and Training West, May 6-9, 2008 in Vancouver, BC
This presentation is for team leaders, information managers, tech communicators and product managers who care about maximizing efficiency and return on investment in the information-heavy parts of their product cycle.
We will discuss current developments in the field of Technical Communications and how the role of the Technical Communicator has been rapidly and fundamentally evolving. The world is becoming more and more tech-savvy by the picosecond. More savvy means more demanding, and an organization’s ability to balance internal and external management of supporting technical information while delivering quality technical communication products has gone from being a burdensome nuisance, to a central and strategic must for market competitiveness.
This presentation takes a low-tech, cross-industry look at why strategies are changing and how organizations are adapting to these challenges. Best practices for approach, organizing teams, planning for change, DITA/XML, and departmental integration will all be addressed.
Cloud computing is a better way to run your business. Instead of developing, maintaining and running your content management applications yourself, you access everything you need through the web. You just log in, customize it, and start using it. That’s the power of cloud computing.
Canadian Experts Discuss Modern Data Stacks and Cloud Computing for 5 Years o...Daniel Zivkovic
Two #ModernDataStack talks and one DevOps talk: https://youtu.be/4R--iLnjCmU
1. "From Data-driven Business to Business-driven Data: Hands-on #DataModelling exercise" by Jacob Frackson of Montreal Analytics
2. "Trends in the #DataEngineering Consulting Landscape" by Nadji Bessa of Infostrux Solutions
3. "Building Secure #Serverless Delivery Pipelines on #GCP" by Ugo Udokporo of Google Cloud Canada
We ran out of time for the 4th presenter, so the event will CONTINUE in March... stay tuned! Compliments of #ServerlessTO.
XyEnterprise is a software developer and services provider focused on content management and multi-channel publishing solutions. Their solutions include a content management system, an electronic publishing system, and an interactive content delivery platform. They help companies manage structured XML content to reduce costs of authoring, content development, and delivery across multiple formats. Emerging standards like DITA and S1000D encourage component-based rather than document-based authoring and provide opportunities for automation, reuse, and just-in-time publishing across channels. XyEnterprise's role involves all aspects of the publishing process from content creation to dynamic, tailored delivery on multiple channels.
This document describes Cello, a cloud-ready, multi-tenant application development platform for .NET. Cello addresses common pain points in building software-as-a-service applications by providing pre-built modules for tenant management, security, customization, workflows, and more. This allows developers to focus on their core business solutions while leveraging Cello's tested frameworks. Customers can customize applications by configuring features, forms, and business rules at the tenant level. Cello aims to reduce costs, risks, and time-to-market for developing configurable multi-tenant applications.
This document summarizes the key phases and sections of an IT 265 Data Structures course project. The project covered common data structures like lists, stacks, queues, trees, and sorting/searching algorithms. It evaluated recursion and provided examples of insertion sort, bubble sort, and selection sort. The goal was to demonstrate understanding of these fundamental data structures and algorithms through code examples and explanations of their applications and efficiency.
Cloud: a disruptive technlogy that CEO should use to transform their businessBertrand MAES
Cloud:
What cloud really means ?
How it should help CEO transform their business ?
How it should help CEO transform their IT department ?
Prerequisite for a sucessful cloud project
AtomicDB is a proprietary software technology that uses an n-dimensional associative memory system instead of a traditional table-based database. This allows information to be stored and related in a way analogous to human memory. The technology does not require extensive programming and can rapidly build and modify information systems to meet evolving needs. It provides significant cost and performance advantages over traditional databases for managing complex, relational data.
The document provides a summary of Vel Murugan's skills and experience. It includes his educational background, including a B.E. in computer science and a master's in software engineering. It also outlines his work experience at Ramco Systems from 2015 to present, where he has worked on projects involving ERP implementation and customization for clients. His roles have included development, support, testing, and working with an offshore team. The document lists his technical skills such as SQL Server, Crystal Reports, and Ramco tools. It provides details on two projects involving aircraft material management and production/warehousing systems.
- Kevin Smedley discusses how to strategize and implement an efficient CAD and Vault environment for design and manufacturing. He emphasizes the importance of standards, communication, and consistency.
- Some keys to success include creating documentation for executives, assessing software and hardware needs, developing deployment strategies and standards, and formalizing processes to minimize errors and improve efficiency.
- The goal is to build a sustainable CAD environment that allows for improved collaboration and stays on the cutting edge of technology.
AnalytiX DS specializes in the development of ‘agile tools’ for the data integration industry which automate manual data mapping and ETL conversion processes.
Similar to 5 Reasons not to use Dita from a CCMS Perspective (20)
🏎️Tech Transformation: DevOps Insights from the Experts 👩💻campbellclarkson
Connect with fellow Trailblazers, learn from industry experts Glenda Thomson (Salesforce, Principal Technical Architect) and Will Dinn (Judo Bank, Salesforce Development Lead), and discover how to harness DevOps tools with Salesforce.
The Rising Future of CPaaS in the Middle East 2024Yara Milbes
Explore "The Rising Future of CPaaS in the Middle East in 2024" with this comprehensive PPT presentation. Discover how Communication Platforms as a Service (CPaaS) is transforming communication across various sectors in the Middle East.
WMF 2024 - Unlocking the Future of Data Powering Next-Gen AI with Vector Data...Luigi Fugaro
Vector databases are transforming how we handle data, allowing us to search through text, images, and audio by converting them into vectors. Today, we'll dive into the basics of this exciting technology and discuss its potential to revolutionize our next-generation AI applications. We'll examine typical uses for these databases and the essential tools
developers need. Plus, we'll zoom in on the advanced capabilities of vector search and semantic caching in Java, showcasing these through a live demo with Redis libraries. Get ready to see how these powerful tools can change the game!
A Comprehensive Guide on Implementing Real-World Mobile Testing Strategies fo...kalichargn70th171
In today's fiercely competitive mobile app market, the role of the QA team is pivotal for continuous improvement and sustained success. Effective testing strategies are essential to navigate the challenges confidently and precisely. Ensuring the perfection of mobile apps before they reach end-users requires thoughtful decisions in the testing plan.
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Transforming Product Development using OnePlan To Boost Efficiency and Innova...OnePlan Solutions
Ready to overcome challenges and drive innovation in your organization? Join us in our upcoming webinar where we discuss how to combat resource limitations, scope creep, and the difficulties of aligning your projects with strategic goals. Discover how OnePlan can revolutionize your product development processes, helping your team to innovate faster, manage resources more effectively, and deliver exceptional results.
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Superpower Your Apache Kafka Applications Development with Complementary Open...Paul Brebner
Kafka Summit talk (Bangalore, India, May 2, 2024, https://events.bizzabo.com/573863/agenda/session/1300469 )
Many Apache Kafka use cases take advantage of Kafka’s ability to integrate multiple heterogeneous systems for stream processing and real-time machine learning scenarios. But Kafka also exists in a rich ecosystem of related but complementary stream processing technologies and tools, particularly from the open-source community. In this talk, we’ll take you on a tour of a selection of complementary tools that can make Kafka even more powerful. We’ll focus on tools for stream processing and querying, streaming machine learning, stream visibility and observation, stream meta-data, stream visualisation, stream development including testing and the use of Generative AI and LLMs, and stream performance and scalability. By the end you will have a good idea of the types of Kafka “superhero” tools that exist, which are my favourites (and what superpowers they have), and how they combine to save your Kafka applications development universe from swamploads of data stagnation monsters!
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
React.js, a JavaScript library developed by Facebook, has gained immense popularity for building user interfaces, especially for single-page applications. Over the years, React has evolved and expanded its capabilities, becoming a preferred choice for mobile app development. This article will explore why React.js is an excellent choice for the Best Mobile App development company in Noida.
Visit Us For Information: https://www.linkedin.com/pulse/what-makes-reactjs-stand-out-mobile-app-development-rajesh-rai-pihvf/
Manyata Tech Park Bangalore_ Infrastructure, Facilities and Morenarinav14
Located in the bustling city of Bangalore, Manyata Tech Park stands as one of India’s largest and most prominent tech parks, playing a pivotal role in shaping the city’s reputation as the Silicon Valley of India. Established to cater to the burgeoning IT and technology sectors
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Enhanced Screen Flows UI/UX using SLDS with Tom KittPeter Caitens
Join us for an engaging session led by Flow Champion, Tom Kitt. This session will dive into a technique of enhancing the user interfaces and user experiences within Screen Flows using the Salesforce Lightning Design System (SLDS). This technique uses Native functionality, with No Apex Code, No Custom Components and No Managed Packages required.
Alluxio Webinar | 10x Faster Trino Queries on Your Data PlatformAlluxio, Inc.
Alluxio Webinar
June. 18, 2024
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Jianjian Xie (Staff Software Engineer, Alluxio)
As Trino users increasingly rely on cloud object storage for retrieving data, speed and cloud cost have become major challenges. The separation of compute and storage creates latency challenges when querying datasets; scanning data between storage and compute tiers becomes I/O bound. On the other hand, cloud API costs related to GET/LIST operations and cross-region data transfer add up quickly.
The newly introduced Trino file system cache by Alluxio aims to overcome the above challenges. In this session, Jianjian will dive into Trino data caching strategies, the latest test results, and discuss the multi-level caching architecture. This architecture makes Trino 10x faster for data lakes of any scale, from GB to EB.
What you will learn:
- Challenges relating to the speed and costs of running Trino in the cloud
- The new Trino file system cache feature overview, including the latest development status and test results
- A multi-level cache framework for maximized speed, including Trino file system cache and Alluxio distributed cache
- Real-world cases, including a large online payment firm and a top ridesharing company
- The future roadmap of Trino file system cache and Trino-Alluxio integration
SCHEMA Group 2015 – All rights reserved

3. Definitions and Terminology: Marcus Kesseler, SCHEMA & DERCOM

Marcus Kesseler
Computer scientist with a heavy Artificial Intelligence background. One of two founders and managing directors of SCHEMA GmbH.

SCHEMA
A software company based in Nürnberg. SCHEMA is 20 years old and has been making and selling a CCMS from day one.

DERCOM
The Association of German Manufacturers of Authoring and Content Management Systems. Currently 7 companies, with 1,400 customers between them.
4. Definitions and Terminology: CCMS

CCMS: Component Content Management System.

The main difference between a CMS and a CCMS: a CCMS has the ability to aggregate content components into larger documents. A CCMS is able to publish content as “classic” documents, as Web portal content or as app content, all with very high quality.
5. Definitions and Terminology: DITA

DITA: Darwin Information Typing Architecture, an XML- and file-based standard for the representation of componentized and interlinked content. Although there are several DITA-based CCMS implementations, DITA can be used with just an XML editor, the file system and the DITA Open Toolkit.

What we like about DITA is the visibility it brings to the enormous advantages of componentized content. We fully agree with the DITA community that there really is no alternative to working with components (or topics) in large-scale, state-of-the-art technical content authoring, management and distribution.
6. More Terminology: Essential and Incidental Complexity

Essential complexity, also called intrinsic or inherent complexity, is the complexity you cannot hide or get rid of in a software implementation. It is directly derived from the domain you are modelling.

Example: when moving from document-based content authoring to componentized authoring, the number of objects you have to deal with goes up by two or three orders of magnitude. The only way to hide this increase would be to hide the components, which, of course, would defeat the purpose.

Incidental complexity is an extra dose of complexity added on top of the essential complexity by bad choices of architecture, data representation or user experience design.
8. Our Context Is Not the Lone Technical Content Ranger

All arguments in this talk assume that we are talking about the processes and needs of a large technical content department operating at a high level of maturity. We are not talking about the perspective of the Lone Technical Content Ranger.

Russell Ward presented this perspective in his great talk last year here at tekom 2014: Five reasons not to use DITA
[http://conferences.tekom.de/fileadmin/tx_doccon/slides/742_5_Reasons_Not_to_Use_DITA.pdf]
9. Large Technical Content Departments: Some Parameters

So, what is a Large Technical Content Department?
- 5 to several dozen technical writers.
- Publications have to be regularly updated in 5 to 30 (or more) languages.
- Multiple publication formats, including:
  - Paginated formats, like PDF (directly or via InDesign, FrameMaker or Word).
  - Online formats, like HTML, HTML5, EPUB, etc.
  - Custom XML formats.
10. Large Technical Content Departments: Processes & Workflows

The following are defined and enforced:
- Writing standards and terminology
- Translation standards and workflows
- Artwork & media standards and workflows
- Publication workflows
- Release workflows
- Distribution workflows
11. Large Technical Content Departments: Core Challenges

- Layout has to be of the highest quality, strictly adhering to corporate design standards.
- Products are highly modular or organized in product families with common base features, both of which are key requirements for effective and massive content reuse.
- Product innovation is fast and relentless; the technical content team is always under pressure to keep product and information life cycles in sync.

So, just another great day in the wonderful world of technical content publishing. Life is good!
12. Reason 1: Coverage of Component Content Management Requirements in DITA Is Surprisingly Small
13.–14. Requirements Coverage of XML, DITA and CCMS

Scores per process (maximum 10 points each):

 #  Process name & requirements                                        Max  XML  DITA  CCMS
 1  Topics management (classes, workflows, versioning,
    ownership, access control)                                          10    0    3     9
 2  Manage the links between topics (classes, workflows,
    versioning, ownership, referential integrity)                       10    0    3     9
 3  Management of the maps that build the publications out of the
    underlying components (versioning, ownership, referential
    integrity)                                                          10    0    3     9
 4  Manage the metadata on topics, links and maps (classes,
    workflows, versioning, ownership)                                   10    1    2     9
 5  Translation management with automatic flagging of topics
    needing re-translation (ownership, workflow, dataflow)              10    1    1     8
 6  Media assets management (classes, workflows, ownership,
    guidelines, conversion, translation)                                10    1    2     7
 7  Publication formats and layout management (design within
    corporate guidelines, implementation, revisions)                    10    0    4     8
 8  Automatic publication generation and channel-specific
    distribution (workflow, IT systems integration)                     10    0    2     6
 9  Overall content, links and publications quality assurance and
    approval processes (correctness, writing style, terminology,
    translations, links, publication maps, graphics and layout)         10    2    3     8
10  Information model management (conceptual design, classes,
    roles, rights, workflows, evolution)                                10    0    2     9
11  Performance & costs management (financial controlling, key
    performance indicator monitoring, tracking, corrective actions)     10    0    2     4
12  Security (user management, user roles, access control,
    change tracking)                                                    10    0    0     8
13  IT and software infrastructure management (change, updates
    and upgrades)                                                       10    0    0     4
14  Manage the communication with adjacent departments, like
    product management, engineering and marketing
    (responsibilities, workflows)                                       10    0    0     3
15  Team management (skills, training, structure,
    responsibilities, motivation)                                       10    0    0     0

    Coverage [Points]                                                  150    5   27   101
    Coverage [Percent]                                                       3%   18%   67%
    Coverage with CCMS baseline [Percent]                                        27%  100%
15. Requirements Coverage of XML, DITA and CCMS

[Diagram: XML, DITA, CCMS [DITA] and CCMS [DERCOM] compared side by side. In a DITA-based CCMS, the business logic sits in the DITA Open Toolkit; in a DERCOM CCMS, the business logic sits in the database, the workflow system, TMS interfaces, media assets management, etc. Non-DITA CCMSs get a bonus for being on the market for at least 10 years longer.]
16. Drawbacks of a Small Requirements Coverage

Comparing CCMSs based on their level of DITA compliance would not yield much insight, since most requirements are outside of DITA’s scope. All features not within DITA’s scope would not be trivially portable to other DITA-based systems. Some examples:
- Versioning
- Translation states & dataflow
- Release and ongoing workflow states
- Media assets management
- Access rights & user management

Note: even with a DITA-based CCMS, you would incur a significant amount of vendor lock-in!
18. Evolution of DITA Is Too Slow

An update every five years is just not compatible with the demands of an ever-accelerating market (variables? scoped keys?).

Fast evolution of DITA is impeded by the following two inherently conflicting requirements:
- The need to add features that are crucially missing in real-life application scenarios.
- The need to prevent new features that would add even more incidental complexity to the standard.
19. Evolution of DITA Is Too Slow

Scoped keys are a good example:
- Under heavy reuse scenarios you are very, very likely to need them.
- On the other hand, should tech writers really need to be trained in programming-language scoping concepts just to be able to handle reuse variability?
21. How Is a DITA Topic Represented in a File System?

[Diagram: a DITA topic file ("TOP [XML]") combines three layers: file metadata (name, owner, last write date, …), metadata within the XML DITA topic (class, author, target audience, …), and the XML content itself.]
23. … and some versions …

[Diagram: the topic multiplied across languages and versions — TOP EN V1, TOP FR V1, TOP JA V1, TOP PT V1, …; TOP EN V2, TOP FR V2, TOP JA V2, TOP PT V2, …; up to TOP EN Vn, TOP FR Vn, TOP JA Vn, TOP PT Vn.]
24. … and after several years, a single topic may have proliferated into m × n files!

[Diagram: a grid of topic files, one per combination of m languages (EN, FR, JA, PT) and n versions (V1, V2, …, Vn).]
25. How m × n Topics Are Accessed in DITA

In DITA each single translation or version is a unique, individual file and hence a distinct topic. The user has to know exactly what language and version is being referenced. Keys or file names will likely follow some pattern like this:

    Topic_Intro_en_V1
    Topic_Intro_fr_V1
    Topic_Intro_ja_V1
    Topic_Intro_en_V2
    Topic_Intro_fr_V2
    Topic_Intro_ja_V2
26. How m × n Topics Are Accessed in a CCMS

In a CCMS implemented on top of a database, all these m × n topics can be addressed with a single key:

    [ID_Intro, Language, LatestReleasedVersion]

where Language and LatestReleasedVersion are variables that the system will automatically populate as needed.

In Computer Science this is called a composite key, and it was invented over 45 years ago at IBM. Composite keys capture and optimally encode the regularities in the target domain and let the computer do the tedious book-keeping. This is what computers are good at!
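This kind of composite-key addressing can be sketched in a few lines. The store layout, the `resolve` function and the `LatestReleasedVersion` sentinel below are illustrative assumptions, not any particular CCMS's API:

```python
# A minimal sketch of composite-key resolution in a database-backed CCMS.
# All names (store, resolve, LatestReleasedVersion) are illustrative.

# One logical topic, stored once per (language, version):
store = {
    ("ID_Intro", "en", 1): "<topic>Intro v1 (en)</topic>",
    ("ID_Intro", "en", 2): "<topic>Intro v2 (en)</topic>",
    ("ID_Intro", "fr", 1): "<topic>Intro v1 (fr)</topic>",
}

def resolve(topic_id, language, version="LatestReleasedVersion"):
    """Resolve the composite key [topic_id, language, version].

    The variable parts are populated by the system; an author
    only ever has to reference topic_id."""
    if version == "LatestReleasedVersion":
        version = max(v for (t, lang, v) in store
                      if t == topic_id and lang == language)
    return store[(topic_id, language, version)]

# One reference, resolved differently per aggregation context:
resolve("ID_Intro", "en")     # latest released English version
resolve("ID_Intro", "fr", 1)  # French version 1
```

The point of the sketch: the regularity (every topic exists per language and version) lives once, in the key structure, instead of being re-encoded in every file name.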
27. How m × n Topics Are Accessed by the Author in a CCMS

Authors will rarely need to see, insert or handle full CCMS composite topic keys:

    [ID_Intro, Language, LatestReleasedVersion]

Since the composite key structure is universal within the system, there is no need to explicitly represent the variable parts. They are optional and will be implicitly added at document aggregation time. What the author sees and handles is just:

    [ID_Intro]

And, of course, usually even this is hidden by the GUI.
28. Advantages of Composite Keys

DITA would be so much easier if references were defined as composite keys:
- Maps would be directly reusable. No need to create and maintain a map for each language. A change to the map structure in English is automatically available in all other languages.
- New languages (or versions) can be added to your pool without touching the maps at all!
- No need to develop, train and enforce sophisticated file name or key patterns to manually capture and encode these rather trivial domain regularities.
- Authors need only insert a reference to the topic; the system does the tedious and error-prone book-keeping.
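The map-reuse advantage can be illustrated with a toy sketch: one map of bare topic IDs, reused unchanged for every language. The data layout and the `aggregate` function are hypothetical, not drawn from any product:

```python
# Sketch: one map of bare topic IDs, reused across all languages.
# All names and the content store are illustrative.

topic_map = ["ID_Intro", "ID_Install", "ID_Troubleshoot"]  # no language, no version

content = {
    ("ID_Intro", "en"): "Introduction",
    ("ID_Install", "en"): "Installation",
    ("ID_Troubleshoot", "en"): "Troubleshooting",
    ("ID_Intro", "fr"): "Introduction (fr)",
    ("ID_Install", "fr"): "Installation (fr)",
    ("ID_Troubleshoot", "fr"): "Dépannage",
}

def aggregate(topic_map, language):
    """Build a publication: the system fills in the language at
    document aggregation time; the map itself never changes."""
    return [content[(topic_id, language)] for topic_id in topic_map]

aggregate(topic_map, "fr")  # same map, French publication
```

Adding a new language means adding content rows only; `topic_map` is untouched.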
29. Representation of m × n Topics in a CCMS

[Diagram: a single topic container holds one language container per language (EN, FR, JA, PT); each language container holds an XML container with the XML content of every version (V1, V2, …, Vn). Metadata can be attached at each level: to all versions in all languages (topic container), to all versions in one language (language container), and to one version in one language (XML container).]
36. DITA’s XML-first Paradigm vs. a Database-first Paradigm

In DITA, every piece of information or data that is needed to drive business processes has to be inside the XML files, together with the content as such (= DITA’s XML-first paradigm). This goes against quite a few Computer Science information-model design principles.

Any change, however minimal, to a topic can affect content, structure, linking or metadata, and therefore has to be carefully scrutinized to identify what exactly changed and whether any consistency rules were broken. Enforcing the principles of Atomicity, Consistency and Isolation in DITA is quite a challenge (cf. the ACID principles of database design).
37. SCHEMAGroup2015–Allrightsreserved
DITA‘s XML-first vs.
Database-first
Please note that DITA’s XML first is a huge incidental complexity driver
for DITA-based CCMS implementations:
There is pressure to improve metadata handling by keeping them in
the database, but, with XML-first, you also have to keep them in the
DITA files. Now there are two distinct and separate representations.
You’ve lost your single source of truth.
The database value and the DITA XML value can become inconsistent
through update conflicts and may have to be corrected manually by the
users.
Controlling change permissions for individual metadata values in a
file is also a huge challenge. Good XML editors can do it, but users
can still open the XML file in Notepad…
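The drift between the two representations can at least be detected automatically. A minimal sketch, assuming metadata is stored as a database row (here just a dict) and duplicated as simplified `meta` elements inside the topic XML; both shapes are illustrative assumptions, not real CCMS or DITA structures.

```python
import xml.etree.ElementTree as ET

def find_conflicts(db_row, topic_xml):
    """Compare metadata held in the database with the copy embedded in
    the topic XML; return {key: (db_value, xml_value)} for every
    mismatch -- each one is a lost single source of truth."""
    xml_meta = {m.get("name"): m.get("content")
                for m in ET.fromstring(topic_xml).iter("meta")}
    return {k: (db_row.get(k), xml_meta.get(k))
            for k in set(db_row) | set(xml_meta)
            if db_row.get(k) != xml_meta.get(k)}
```

Detection is the easy part; deciding which of the two values is correct still falls to a human, which is the presenter's point.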
39. Trend in CCMS: Content Model Complexity Reduction
In the last 10 years, there has been a very strong trend in the CCMS
market to reduce content-model complexity, i.e., to move away from
highly "semantic" DTDs.
Content departments observed that, in the long term, they never
recouped their investment in the design, implementation, training, and
especially maintenance of their sophisticated, made-to-order content
models.
The trend is simply to move the needed business data out of the XML
content and into the database, where it is much easier to implement,
manage, interface with, retrieve, and use productively.
40. Examples of Content Model Complexity Reduction
Some examples:
Topic types or classes become just metadata in the database;
variability at the XML-editor (DTD) level is reduced to an absolute
minimum.
All metadata assigned to a topic is moved from the XML into the
database.
Fine-grained variability in the content is handled by variables, which
at the XML content level are just very simple references into the
database. The data model for variables in the database is very
powerful and table-oriented (think Excel), so it is easy to maintain
versions, languages, and taxonomic dependencies of variable names and
values without touching the XML content.
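The variable mechanism in the last example can be sketched as follows. The `$name$` reference syntax and the table contents are invented for illustration (a real system would use its own reference markup, and DITA itself would use keyref/conref); the point is that the XML stays constant while the table carries all version and language variation.

```python
import re

# Illustrative variable table: (name, language, version) -> value.
# Versions and languages live in the table, never in the XML content.
VARIABLES = {
    ("product", "en", 2): "AcmeWriter 2.0",
    ("product", "fr", 2): "AcmeWriter 2.0 FR",
}

def expand(content, language, version):
    """Replace simple $name$ references in the content with values
    looked up in the variable table for the given language/version."""
    return re.sub(r"\$(\w+)\$",
                  lambda m: VARIABLES[(m.group(1), language, version)],
                  content)
```

Renaming the product for version 3 is then a row insert in the table; none of the topics referencing `$product$` change.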
41. DITA Specialization
As a Computer Scientist, I think DITA Specialization is a really
impressive and elegant solution for implementing sophisticated content
models.
But again, DITA adds all this sophistication at the XML level, where
it incurs a big cost in incidental complexity.
I think there is a consensus that even the default DITA content model
is already challenging for most technical writers new to
component-based authoring.
42. DITA Specialization
There is a paradox in that, just to trim the content model down to a
more manageable scope, you already need a significant amount of
consulting and configuration.
The OASIS Lightweight DITA initiative, chaired by Michael Priestley
(IBM), is trying to remedy this situation, so that you can start
simple and add more features later, once you understand the principles
and are sure you really need them.
44. Summary of Our 5 Reasons against DITA
1. Coverage of Component Content Management
Requirements in DITA is Surprisingly Small.
2. Evolution of the DITA Standard is too Slow.
3. How DITA Deals with the Explosion in the Number of Files.
4. DITA's XML-first Paradigm.
5. The Default DITA Content Model is too Complex.
45. Conclusion
As long as the DITA standard is based on a non-negotiable XML-first
paradigm, it will always incur a tremendous incidental-complexity cost
on multiple levels:
Initial configuration is significant, even if it is just to trim DITA
back.
Integrating DITA into a CCMS (or database) is fragile and expensive.
Technical writers will need a lot of training, and their motivation
needs close monitoring.
46. Recommendation
Our recommendation would be to decouple the DITA business logic from
the XML-first principle.
In the end, this means the DITA Open Toolkit would no longer be just a
smart topic-aggregation compiler, but would behave much more like an
integrated database application, in short: like a state-of-the-art
CCMS.
Tekom 2015 presents a very convenient opportunity to take a closer
look at these systems!