The document summarizes the experience of implementing a DITA content management system (CMS) at AMD's graphics division over almost four years. Key points include:
1) Productivity increased 2.3-3x through content reuse, automation, and fewer formatting issues in localization. Output increased while staff decreased.
2) Localization costs dropped to less than half of pre-CMS levels due to greater content reuse and streamlined processes.
3) Tracking metadata allows comprehensive measurement of productivity, including topics created/modified, translation auto-matches, and topic reuse rates. This data aligns with product release cycles.
The document summarizes AMD's localization processes and efforts to improve them. Key points include:
- AMD transitioned to using DITA XML and a CMS to streamline localization, reducing costs and delivery times.
- Localization is important for reaching global customers and enabling business worldwide.
- AMD localizes materials for marketing, engineering documentation, and internal use.
- Process improvements included selecting a single company-wide localization vendor through an RFI/RFP process.
DITA Metrics in Production: How, When, Where, and Why (and How Much) Redux - Keith Schengili-Roberts
An update to an earlier presentation on DITA metrics, drawing on my experiences at AMD and looking at production metrics as well as ROI.
10 Million DITA Topics Can't Be Wrong, December 6th, 2016, Webinar by Keith Schengili-Roberts, IXIASOFT DITA Specialist, Hosted by Scott Abel at The Content Wrangler Virtual Summit
Optimizing Content Reuse with DITA - slides from a FREE webinar presented by LavaCon, with Keith Schengili-Roberts, IXIASOFT DITA Specialist
DITA was designed around the idea of content reuse. Maps, topics, conrefs and keys all provide the means for sharing and reusing content effectively within a documentation team using the standard. But what are the optimal ways of doing this, and what are the common mistakes first-time DITA users make when it comes to content reuse? Did you know that DITA 1.3 offers up additional means for reusing content by using such things as scoped keys? And what good is content reuse if you can’t find the content you are looking for?
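To make those mechanisms concrete, here is a minimal sketch of DITA reuse; the file names, IDs, and key names are invented for illustration and are not taken from the webinar:

    <!-- shared.dita: a topic holding a reusable caution note -->
    <topic id="shared">
      <title>Shared content</title>
      <body>
        <note id="esd" type="caution">Ground yourself before handling the board.</note>
      </body>
    </topic>

    <!-- Any other topic can pull that note in by content reference (conref) -->
    <note conref="shared.dita#shared/esd"/>

    <!-- In a map, a key definition lets topics name the product indirectly -->
    <keydef keys="product">
      <topicmeta><keywords><keyword>Example Product 9000</keyword></keywords></topicmeta>
    </keydef>

    <!-- In topic text, <keyword keyref="product"/> then resolves to that name -->

The scoped keys added in DITA 1.3 extend the same keydef idea, letting different branches of one map resolve the same key name to different values.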
In this presentation IXIASOFT’s DITA Specialist Keith Schengili-Roberts will examine content reuse best practices, and look at how the idea of content reuse has evolved, changed and been refined since DITA first debuted over ten years ago. This webinar will be conducted through GoToWebinar, and the link will be sent the day before the event.
Webinar sponsored by IXIASOFT, presented by LavaCon.
DITA and Agile are Made for Each Other by Keith Schengili-Roberts, IXIASOFT DITA Specialist. Presented at CMS/DITA North America 2016 in Reston, Virginia.
Agile software development makes specific demands on documentation teams, whose content creators now need to be more nimble, describe features in a piece-meal fashion, and report on their progress in an effective way. The topic-based structure of DITA is ideally suited to these needs. Keith Schengili-Roberts (also known as “DITAWriter”) focuses on how DITA-based content is the optimal way of working in an Agile environment, enabling content creators to effectively meet the demands of short sprint cycles, measure content output for Scrum meetings, and how to become a pig rather than staying a chicken (yes, seriously). Keith also looks at several case studies of DITA-using documentation groups working within an Agile environment. If you are wondering about what the impacts are of working with Agile or are simply looking to optimize your DITA-based documentation processes, come to this presentation!
What can the audience expect to learn?
Keith expands upon the material that was touched upon during the Best Practices conference on the same subject, including information based on subsequent interviews with clients and other content creators who are using DITA in an Agile environment. He provides information on how others are using DITA in this scenario and emerging best practices within it. Keith has found that many content creators using DITA are looking to move to an Agile environment—particularly if they work for a software firm. The ideas presented here serve as an introduction on what to expect. Even those who do not fit this scenario may find some of the ways and processes used by DITA-using doc groups in an Agile team to be beneficial.
Localization and DITA: What You Need to Know - LocWorld32 - IXIASOFT
The document discusses localization best practices when using DITA (Darwin Information Typing Architecture). It provides an overview of key DITA features like content reuse and separation of form and content. It also looks at current adoption of DITA, with over 650 companies using it worldwide across many sectors. Localization considerations with DITA are examined, including challenges around incomplete translation packages, content reuse with conrefs and conditions, and ensuring proper context for translation. Best practices are suggested for localization teams and LSPs (language service providers) working with DITA content.
Troubleshooting: The Two Laws - IXIASOFT User Conference 2016 - IXIASOFT
Presented by Alex Kozaris, IXIASOFT IT Specialist at the IXIASOFT User Conference 2016.
Murphy’s Law says that if something can go wrong, it will. But don’t let Murphy tell you what to do; instead, come to this presentation, where Alex will take you through effective troubleshooting procedures for the issues he most commonly sees, drawing on his extensive experience solving problems with the IXIASOFT DITA CMS.
Agile Content Development and the IXIASOFT DITA CMS - IXIASOFT
Keith Schengili-Roberts, IXIASOFT DITA Information Architect, reviews the benefits of working with agile content development and the IXIASOFT DITA CMS.
Presented at the IXIASOFT User Conference 2015. Kristen James Eberlein and Keith Schengili-Roberts discuss the way the DITA standard has evolved over the last 10 years, where it's at today, and what can be expected in the future.
This document summarizes the additions to DITA standards over multiple versions and discusses the complexity of the DITA standard and tagset. It notes that:
- DITA 1.1-1.3 added many new elements and features over time like bookmaps, keys, and specialized domains.
- However, smaller companies with non-technical writers may struggle to implement many new DITA 1.3 features that require specialized resources.
- The document proposes a "Standard DITA" subset for beginners and an "Advanced DITA" full feature set, arguing that constraints do not truly simplify the standard. It notes that DITA 2.0 aims to address accessibility and simplification.
Improve your Chances for Documentation Success with DITA and a CCMS LavaCon L... - IXIASOFT
This document discusses how adopting DITA and a content management system (CCMS) can improve documentation success. It outlines key features of DITA including content reuse. Four main reasons for adopting DITA and a CCMS are discussed: needing more efficiency, outgrowing current tools, rising localization costs, and needing content verification. Four things that can be done with DITA and a CCMS are also presented: versioning content, implementing workflows, measuring documentation metrics, and improving localization. The presenter is then available for questions.
Keith Schengili-Roberts: Improve Your Chances for Documentation Success with ... - Jack Molisani
Keith Schengili-Roberts presented on how moving to DITA and a content management system (CCMS) can improve chances of documentation success. He outlined key features of DITA like content reuse and separation of form from content. Four chief reasons for wanting to move included needing more efficiency, outgrowing current tools, rising localization costs, and content verification needs. A CCMS allows for versioning, workflow, metrics on production and reuse, and improved localization. It was argued the benefits outweigh upfront costs over time through opportunities for process improvement.
Short Descriptions Shouldn't Be a Tall Order: Writing Effective Short Descrip... - IXIASOFT
Short Descriptions Shouldn't Be a Tall Order: Writing Effective Short Descriptions, Webinar by Keith Schengili-Roberts, IXIASOFT and Joe Storbeck, Jana - Hosted by CIDM
The document describes the OnBase product which centralizes business content in one secure location and drives it through processes quickly. It works with other applications to deliver information whenever and wherever needed. OnBase improves customer service, reduces operating costs, and minimizes risks. It can be deployed on-premises or in the cloud with flexibility to migrate between the two. OnBase is comprised of modular components that can be combined to create customized solutions for specific needs.
How to Optimize Your Metadata and Taxonomy - IXIASOFT
1. The document discusses how to optimize metadata and taxonomy by creating a content strategy plan, determining key metrics, applying metadata to content, and communicating results to stakeholders.
2. It outlines the key steps: create a content strategy plan, determine metrics to measure goals, apply metadata to content using a metadata schema, and communicate results using reports and queries.
3. Applying metadata according to the strategy helps users find content and measures strategy success, while communicating results builds trust and credibility with stakeholders. (A minimal sketch of such metadata follows below.)
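As a concrete illustration of step 3, a DITA topic's prolog can carry the schema-driven metadata that reports and queries run against; the element values here are hypothetical:

    <topic id="install-driver">
      <title>Installing the driver</title>
      <prolog>
        <metadata>
          <audience type="administrator"/>
          <prodinfo>
            <prodname>Example Product</prodname>
            <vrmlist><vrm version="1.0"/></vrmlist>
          </prodinfo>
          <othermeta name="content-owner" content="docs-team"/>
        </metadata>
      </prolog>
      <body><p>Topic content goes here.</p></body>
    </topic>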
While open-source solutions may have no purchase cost, total costs including configuration, customization and support can equal proprietary solutions. DITA provides benefits like reuse and translation but has limitations in areas like graphics, equations, custom output and legacy content migration. PDF publishing from DITA is especially challenging due to the complexity of XSL-FO. DITA works best for organizations with significant reuse across contexts and languages, while smaller groups may find its limitations easier to overcome.
Painless XML Authoring?: How DITA Simplifies XML - Scott Abel
Presented at DocTrain East 2007 by Bob Doyle, DITA Users -- This introduction to XML Authoring will acquaint you with over fifty tools aimed at structuring content with DITA. They are not just DITA-compliant authoring tools (editors) for writers. They also include content management systems (CMS), translation management systems (TMS), and dynamic publishing engines that fully support DITA. You will also need to know about tools that convert legacy documents to DITA and help to design stylesheets for DITA deliverables. The best DITA tools for technical communicators implement the DITA standard while hiding all the complexity of the underlying XML (eXtensible Markup Language).
As a tech writer and not a tech, you should be able to forget about XML - except to know that you are using it (DITA is XML) and that it consists of named content elements (or components) with attributes. You need to know enough about the content elements so you can reference (conref) them for reuse. You need to know about their attributes so you can filter on them for conditional processing. And you should appreciate that because components are uniquely identifiable they lend themselves perfectly to automated dynamic assembly using a publishing engine.
We will describe how you can get started with structured writing without knowing XML or installing anything.
The promise of topic-based structured authoring is not simply better documentation. It is the creation of mission-critical information for your organization, written with a deep understanding of your most important audiences, that can be repurposed to multiple delivery channels and localized for multilingual global markets. You are not just writing content, you are preparing the information deliverables that enhance the value of your organization in all its markets.
To do that well, you must understand the latest tools in structured writing that are revolutionizing corporate information systems - today in documentation but tomorrow throughout the enterprise, from external marketing to internal human resources. Whether you are trying to push a new product into a new market or are “onboarding” a new employee, the need for high quality information to educate the customer or train the new salesperson is a challenge for technical communicators. You need to think outside the docs!
The key idea behind Darwin Information Typing Architecture is to create content in small chunks or modules called topics. A topic is the right size when it can stand alone as meaningful information. Topics are then assembled into documents using DITA maps, which are hierarchical lists of pointers or links to topics. The pointers are called “topicrefs” (for topic references).
Think of documents as assembled from single-source component parts. Assembly can be conditional, dependent on properties or metadata “tags” you attach to a topic. For example, the “audience” property might be “beginner” or “advanced.”
At a still finer level of granularity, individual elements of a topic can also be assigned property tags for conditional assembly. More importantly, a topic element can be assigned a unique ID that makes it a content component reusable in other topics.
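A minimal sketch of how those pieces fit together; the topic files and audience values below are invented for illustration:

    <!-- guide.ditamap: assembles standalone topics through topicrefs -->
    <map>
      <title>User Guide</title>
      <topicref href="overview.dita"/>
      <topicref href="setup-basic.dita" audience="beginner"/>
      <topicref href="setup-advanced.dita" audience="advanced"/>
    </map>

    <!-- beginner.ditaval: a filter file that excludes the advanced material -->
    <val>
      <prop att="audience" val="advanced" action="exclude"/>
    </val>

Running the same map against a different .ditaval file yields the other edition from the same source topics.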
As you will learn, DITA is a leading technology for “component content management,” which multiplies the value of your work. You need to leverage DITA and structured content to multiply your income.
Architecture Frameworks
Architecture frameworks indicate a large number of views and processes, and beginners may find this a bit daunting. In this talk I attempt to boil enterprise architecture down to a few fundamental techniques and ideas.
In this DCL Webinar, long-time DITA champion Don Day will talk about the basic principles of lightweight structured authoring and the current work of the OASIS Lightweight DITA Subcommittee along those lines. And since this is a work in progress, Don will lay out some practical steps you can take today to start taking advantage of some of these principles as we anticipate the Subcommittee's eventual recommendations.
IBM Connect 2017: Refresh and Extend IBM Domino Applications - Ed Brill
This session covered new capabilities such as additional REST APIs coming in future feature packs of IBM Domino; IBM's partnership with Panagenda ApplicationInsights; and partners such as Darwino, We4IT's Aveedo, and Sapho that provide tools to modernize corporate and situational applications.
Sometimes, a spontaneous road trip can be a lot of fun, as long as you’re willing to take the good with the bad—getting lost, car trouble, unfriendly (or just plain weird) natives, bad diner food. Usually, though, the most successful trips involve planning, roadmaps, and best of all, guidance from people who’ve already been there.
The journey from traditional, deliverable-centric content creation to DITA-based content creation falls into this second category. In this session, we talk about one small publication group’s experience moving to DITA, from the initial discussions to the successful implementation of a FrameMaker-based, end-to-end publication process. Here are some of the high points of the project; we’ll discuss our decision-making process and some of our technical approaches in detail in the session.
Learn what DITA is and why you might need it to create documentation.
These are the slides from a presentation we gave at Write the Docs/PDX DITA joint meetup in December of 2014.
DITA Quick Start: System Architecture of a Basic DITA Toolset - Suite Solutions
Presenter: Joe Gelb, President, Suite Solutions
Abstract: In this webinar, you will learn about the software, integration and customization which enable you to effectively author, manage, localize, publish and share your DITA XML content. We will review how each tool fits into the content lifecycle and discuss options for an incremental DITA XML implementation using a basic toolset as the starting point.
The document compares the DITA and oManual standards for documenting repair procedures. While DITA can represent repair manuals, oManual has specific requirements for semantic information and visual content that directly influence the user experience on sites like iFixit. DITA tasks are more flexible than oManual steps, which drive a structured authoring process. An ideal workflow may convert relevant DITA content into oManual structures while maintaining correlations back to the original data.
ARMA Vancouver (in partnership with ARMA VI) invited Bruce Miller from RIMtech to give his 2 day “Managing Electronic Records with SharePoint” workshop.
Bruce Smith recaps some of the key messages about managing an EDRMS project, the roles of IT and RM, metrics for measuring progress, and 3rd party tools to add recordkeeping capabilities to SharePoint.
Bruce Norman Smith has been a SharePoint champion at Environment Canada and the Medical Council of Canada. A Master’s degree in Library and Information Studies (MLIS, McGill ‘08) provides Bruce with graduate-level training in business process analysis, database design, XML metadata development, and IM theory & methods. Bruce’s talent for bridging the gaps between business needs, RM and Archival requirements, and technical best practices ensures your entire organization can benefit from a SharePoint implementation. His current focus is on mastering the infrastructure and services that support a rock-solid ECM solution.
Bruce's blog site is: http://seek.itgroove.net/
ECAD MCAD Design Data Management with PTC Windchill and Cadence Allegro PCB - EMA Design Automation
Learn how PTC and Cadence have developed a unique collaboration environment to connect Allegro PCB design data with the Windchill PLM system for robust file management and check-in check-out capabilities.
Understanding and Communicating the Financial Impact of XML & DITA - Scott Abel
You already know that XML and DITA for documentation and publishing boast extreme productivity and cost-savings, as well as revenue opportunities. But how can you build your case to the executives to get the green light?
Come to this session to discover your potential financial returns. Learn how to calculate the cost of your current processes, and calculate the potential savings with new technology for authoring, re-use automation, localization, review, and publishing. You can then use these ROI calculations for budget requests and business cases to senior executives, to set expectations with the team and stakeholders, and track project success.
You’ll learn how to:
* Quantify and calculate savings in authoring, localization, reviewing, and publishing
* Build your business case for DITA including sample scenarios and calculations
* Communicate your proposed savings to senior executives
* Save 20-40% on authoring, 25-50% on localization, and over 50% on your existing publishing costs (a worked example follows below).
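As a purely illustrative back-of-the-envelope version of that calculation (the baseline costs are invented; the savings rates are drawn from the ranges above):

    Assumed current annual cost   Savings rate   Projected saving
    Authoring      $400,000           30%           $120,000
    Localization   $300,000           40%           $120,000
    Publishing     $150,000           55%           $ 82,500
                                      Total         $322,500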
Success Factors for DITA Adoption with XMetaL: Best Practices and Fundamentals - Scott Abel
Adopting structured authoring and content management requires managing change across the entire organization. Key factors for success include aligning with business needs, creating an implementation roadmap, mapping content to audience needs, updating processes and procedures, revising staffing models, and creating a plan to handle legacy documentation. Pilot projects allow testing changes in a limited scope before full adoption.
Metrics for Continual Improvements - Nolwenn Kerzreho, LavaCon Dublin 2016 - IXIASOFT
The switch to DITA is often justified using a business plan based on the expected Return On Investment (ROI). However, DITA metrics aren't just about cost savings. They are also extremely valuable in evaluating and optimizing your production process as they can help you answer the following questions:
• Is your content effectively serving your audiences?
• Is reuse optimal?
• What are the ongoing content costs?
In this session, you will learn:
• How to set the right metrics for your organization
• How to use DITA metrics beyond cost savings
• How DITA metrics can contribute to a continual improvement process (one example metric is sketched after this list)
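One common way to put a number on the "is reuse optimal?" question (a standard formulation, not necessarily the exact metric used in this session) is a reuse ratio over published maps:

    reuse ratio = 1 - (unique topics / total topic references across published maps)

    e.g., 1,000 unique topics referenced 1,500 times gives 1 - 1000/1500 ≈ 33% reuse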
Agile Meets DITA: Developing User Documentation in an Agile Environment - Nabayan Roy
The document discusses how writers can develop user documentation in an agile environment using DITA (Darwin Information Typing Architecture). Some key points:
- Agile development focuses on iterative delivery in short cycles called sprints, requiring documentation to also be produced incrementally.
- DITA is a standard for authoring reusable topic-based content that can be more easily updated and assembled incrementally to match iterative development.
- When using DITA, writers can translate user stories into task-based documentation, take a minimalistic approach, and focus on producing "fit for purpose" documentation.
The main focus of my talk is how DITA works well in an Agile environment for technical publication, producing simple, crisp, and lean user documentation per sprint. Just as programmers employ Agile techniques to improve their deliverables, task-oriented documentation using DITA helps technical writers create user deliverables that allow for continuous feedback and improve the documentation’s velocity and adaptability to change, even extreme change.
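For instance, a sprint user story such as "As an administrator, I can reset a user's password" might translate into a DITA task topic like this invented sketch:

    <task id="reset-user-password">
      <title>Resetting a user's password</title>
      <shortdesc>Reset a password when a user is locked out.</shortdesc>
      <taskbody>
        <steps>
          <step><cmd>Open the Admin console and select Users.</cmd></step>
          <step><cmd>Select the user and click Reset Password.</cmd></step>
        </steps>
        <result><p>The user receives a temporary password by email.</p></result>
      </taskbody>
    </task>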
Session at tcworld 2016. Organized by Kristen James Eberlein (Eberlein Consulting LLC); other participants were Joe Gollner (Gnostyx), George Bina (SyncroSoft), Jean-François Ameye (IXIASOFT), and Eliot Kimber (Contrext).
Using Markdown and Lightweight DITA in a Collaborative Environment - IXIASOFT
Using Markdown and Lightweight DITA in a Collaborative Environment, by Keith Schengili-Roberts, IXIASOFT DITA Evangelist and Market Researcher and Leigh W. White, IXIASOFT DITA Specialist, at the CIDM CMS DITA North America, April 2017
From Planning to Publishing: How Business Objects Migrated Documentation to D... - Scott Abel
Presented by Dave Holmes at Documentation and Training West May 6-9, 2008 in Vancouver, BC
In 2006, Business Objects faced a major challenge. How to migrate over 50,000 pages of unstructured non-topic based documentation it had acquired through rapid growth and acquisitions. The answer was to use DITA to standardize content creation, management, translation and publishing processes company-wide. In this session, you will learn how they went from planning to publishing using an iterative approach, and how you can use this method to see the results of a content migration sooner in your project cycle.
Planning our End Game at Automation Anywhere: A Story of Content and Tools St... - LavaConConference
The document discusses the content and tools strategy planning at Automation Anywhere. It describes how the documentation team assembled top talent, adopted DITA and tools like Oxygen and Zoomin for authoring and publishing, converted content to DITA, developed processes using JIRA, and established localization workflows to support 13 languages. The team overcame challenges related to content, tools, people and process. Now the documentation portal supports over 100 automated build jobs, localized content, and a self-support model. Future goals include enhanced information design, metrics, search and reporting.
The document discusses metadata strategies for DITA content at an enterprise scale. It introduces the [A] Content Intelligence Framework, which separates structure and semantics using a Master Content Model and Master Semantic Model. The framework maximizes investments in DITA by enabling metadata-enriched, structured content to be delivered across multiple channels. The document also reviews DITA's built-in metadata and semantic mechanisms and their strengths and weaknesses for implementing metadata at scale.
Canadian Experts Discuss Modern Data Stacks and Cloud Computing for 5 Years o... - Daniel Zivkovic
Two #ModernDataStack talks and one DevOps talk: https://youtu.be/4R--iLnjCmU
1. "From Data-driven Business to Business-driven Data: Hands-on #DataModelling exercise" by Jacob Frackson of Montreal Analytics
2. "Trends in the #DataEngineering Consulting Landscape" by Nadji Bessa of Infostrux Solutions
3. "Building Secure #Serverless Delivery Pipelines on #GCP" by Ugo Udokporo of Google Cloud Canada
We ran out of time for the 4th presenter, so the event will CONTINUE in March... stay tuned! Compliments of #ServerlessTO.
LavaCon 2017 - Implementing a Customer-driven Transition to DITA Content: A S... - Jack Molisani
When customer expectations uproot your documentation processes and PDF content offering, how do you mobilize a team that has used the same tools and processes to create book-based, unstructured content for over two decades? When new demands drive the change for structured content to support a myriad of users and multi-channel publishing, the logical choice is a DITA workflow.
Join Ciena, The Content Era and Adobe Tech Comm at LavaCon 2017 Portland for an immersive workshop that highlights how a DITA workflow is possible with familiar tools, a modest budget, and creative handling of the content.
Reports and DITA Metrics - IXIASOFT User Conference 2016 - IXIASOFT
The document discusses using metrics to measure the production and quality of DITA-authored documentation. It provides examples of different types of metrics that can be captured from DITA files and a DITA CMS, including metrics on topic types, structural elements, readability, consistency, and reuse. It also demonstrates how to generate reports on metrics using the reporting features of the IXIASOFT DITA CMS and the DITA QA plugin.
Provides an overview of the DITA for Small Teams (www.d4st.org) project and the general approach of using off-the-shelf open-source and commercial tools to set up a usable DITA authoring, management, and delivery system.
Building An XML Publishing System With DITA - Scott Abel
Presented at DocTrain East 2007 Conference by Brian Buehling, Dakota Systems -- Since its inception, DITA has rapidly gained acceptance as a standard document structure used in many XML-based content management and publishing systems. DITA is an XML schema developed primarily to support technical documentation for a wide array of applications. This session will cover the commonly used element, attribute and entity constructs that are defined in the schema. More importantly, recommendations concerning how best to implement DITA solutions will be given. Special attention is given to developing practical DITA applications since, in many cases, some DITA elements will have to be extended through a mechanism called specialization to produce a robust XML-based publishing system.
[Case Study] - Nuclear Power, DITA and FrameMaker: The How's and Why's - Scott Abel
Presented by Thomas Aldous at Documentation and Training East 2008, October 29-November 1 in Burlington, MA.
This session is for anyone interested in learning how to manage a transition to specialized DITA, including content management systems, editors, and publishing server issues and resolutions. As an added bonus, we will also convert a Word document to specialized DITA and edit the content in FrameMaker 8. There will be a question and answer period at the end of the session for both technical and project management issues.
This document summarizes a presentation on using SharePoint 2013 for enterprise content management (ECM) and records management (ERM). It discusses why organizations use SharePoint for these purposes due to its cost advantages over competitors and integration capabilities. It outlines SharePoint's ECM and ERM features and limitations. It provides examples of overcoming limitations through custom configurations and third-party tools. The presentation emphasizes aligning ERM with business needs and integrating it with everyday processes rather than creating isolated records systems.
apidays London 2023 - DocOps and Automation in Fintech, Kateryna Osadchenko, ... - apidays
apidays London 2023 - APIs for Smarter Platforms and Business Processes
September 13 & 14, 2023
DocOps and Automation in Fintech: A tech writer's perspective
Kateryna Osadchenko, Content Designer at Kindred Group PLC
------
Check out our conferences at https://www.apidays.global/
Do you want to sponsor or talk at one of our conferences?
https://apidays.typeform.com/to/ILJeAaV8
Learn more on APIscene, the global media made by the community for the community:
https://www.apiscene.io
Explore the API ecosystem with the API Landscape:
https://apilandscape.apiscene.io/
Similar to (Almost) Four Years On: Metrics, ROI, and Other Stories from a Mature DITA CMS Installation
This presentation was made to the Boston DITA Users Group in December 2019. It looks at how the DITA standard is developed, some of the new "features" to expect in DITA 2.0, a brief look at Lightweight DITA, and the possible futures of DITA and structured content in general.
This document provides an overview of the first class in an Information Architecture course. The class covers introductions and an overview of the course schedule and assignments. It also begins to define what information architecture is, noting there is no single agreed upon definition but providing some examples. The instructor introduces himself and his background, as well as the goals and philosophy of the course.
The slidedeck for the fourth and final class of the Information Architecture course (Part 2) I teach at the University of Toronto's iSchool. This class covers:
- Creating a Web Style Guide
- Icons/Expression in Design
- Localization 101
- Change Management
- Creating a Functional Specification for Your CMS
Slidedeck for the second class on Information Architecture: Part 2. This one examines the basics of how to create a set of wireframes and accessibility requirements for the Web.
leewayhertz.com - AI in predictive maintenance: Use cases, technologies, benefits ... - alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
GraphRAG for Life Science to Increase LLM Accuracy - Tomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe - Precisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Monitoring and Managing Anomaly Detection on OpenShift.pdf - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Skybuffer SAM4U Tool for SAP License Adoption - Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Taking AI to the Next Level in Manufacturing.pdf - ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Programming Foundation Models with DSPy - Meetup Slides - Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... - Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
(Almost) Four Years On: Metrics, ROI, and Other Stories from a Mature DITA CMS Installation
1. (Almost) Four Years On: Metrics, ROI, and Other Stories from a Mature DITA CMS Installation
Keith Schengili-Roberts | November 15, 2010
2. Agenda
• Intro + ROI
• Things We Didn’t Expect
• Measuring Productivity: Uses of Metadata
3. Who is This Guy?
Keith Schengili-Roberts
• Manager for documentation and localization for AMD's Professional Graphics division (formerly ATI); prior to becoming manager of the group, was its information architect
• Lecturer at University of Toronto's Professional Learning Center since 1999, teaching courses on information architecture and content management (sample slide decks available from: http://www.infoarchcourse.com/)
• Author of four titles on Internet technologies; last title was “Core CSS, 2nd Edition” (2001)
4. ROI Executive Summary
Proven return on investment (ROI) benefits from using CMS-based DITA over the previous toolchain:
Productivity/output increases
– Somewhere between 2.3 and 3 times more efficient
Can “do more with what we’ve already got”
– Minimalism and content re-use go a long way
– We have fewer writers than when we started, while our output rate continues to increase
Localization cost savings
– Localization budget is now less than half of what we needed the year before we started using the DITA CMS
– We are much more productive
5. What We Do
Documentation & Localization Group at AMD's Graphics Product Group (GPG), formerly ATI
Based in Markham, Ontario
4 writers, 2 process engineers, 2 localizers, 1 manager
CMS: DITA CMS from Ixiasoft (www.ixiasoft.com)
Responsible for:
• End-user documentation, including online help (20%)
• Engineering documentation for ODM/OEM partners (60%)
• Technical training documentation for partners (20%)
Localize in up to 25 languages (mostly end-user and UI content)
Primary outputs are PDF and XHTML
6. Where We Started (i.e., “The Bad Old Days”)
Circa 2003-2006:
• Used unstructured FrameMaker
• Localization costs were very high
• Code page issues made localization QA work hard
• Could not reliably keep in sync with major software releases (a monthly cadence was required for online help; we could only manage it twice a year)
• Writers were deeply siloed
• Very little content was shared; content re-use (especially between different docs) was very low
• Output was efficient, but quality was highly variable
7. Where We Are Now
Have been using Ixiasoft’s DITA CMS in production since February 2007
Have published more than 2,200 documents in that time
• 46% in English
• 54% in the languages to which we localize (21 at most)
Writers and the documentation process are more nimble; any writer can take on another’s projects
Content re-use rate is good (slightly more than 50% monthly)
Quality is uniformly better; re-used topics are edited topics
Localization process is streamlined, with more time now available to focus on QA than on administration or fixing formatting issues
8. Getting ROI by Doing More with What We’ve Already Got
• Using the old toolchain, we spent about 50% of our time formatting content; eliminating that equates to an almost equal boost in productivity using the DITA CMS.
• We automate things that can (and should) be automated; no more TOCs or indexes built by hand.
• Through attrition, we have fewer personnel writing/localizing content; despite this, our output rate has increased.
• An information architecture content audit of existing materials emphasized minimalism and re-use within and between document types.
• Content re-use is considerable, and de-siloed writers are now more flexible about what they can work on.
• We continue our effort to find out what customers find useful, and to give them only the information they require.
9. ROI: Doing More with Less
Comparative numbers from 2007:
• Numbers reflect equivalent work on engineering docs (same types and sizes of docs, same product release cycle)
• The DITA CMS made us faster
• More than doubled output using the same headcount, while taking on an expanded range of document types
10. ROI: Doing More with Less (cont.)
What’s happened since 2007?
11. ROI: Doing More with Less (cont.)
In 2009, 4 writers were responsible for 366 docs.
• On average, each writer produced 91.5 docs in a year = ~23 per writer per quarter
– This figure does include revisions; however, on average we do the same number of revisions as we did under the old toolchain (we just do them faster).
• Compare this to some roughly equivalent numbers from another tech writing team covering a similar subject area using our old toolchain:
– They produced 360 docs with 9 writers over the course of a year; their docs were roughly the same size and type, with a similar release cadence
– That works out to 40 docs per writer per year, or 10 per writer per quarter
– By these numbers, use of the DITA CMS improves efficiency by 2.3 times (your own results may vary); the arithmetic is spelled out in the sketch below
• The two localization coordinators were responsible for producing 432 docs in the system during 2009.
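To make that figure explicit, here is a quick back-of-the-envelope check in Python (the doc and writer counts come from the slide above; the script is only a sanity calculation, not part of our toolchain):

docs_dita, writers_dita = 366, 4  # our team in 2009, on the DITA CMS
docs_old, writers_old = 360, 9    # comparable team on the old toolchain

per_writer_dita = docs_dita / writers_dita  # 91.5 docs/writer/year (~23/quarter)
per_writer_old = docs_old / writers_old     # 40 docs/writer/year (10/quarter)

print(f"{per_writer_dita / per_writer_old:.1f}x")  # -> 2.3x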
12. ROI: Localization Cost Savings
• Content re-use in English corresponds directly to translated content re-use
• Eliminated desktop publishing (DTP) charges
• As a result, we are able to produce publications more quickly, more reliably, and less expensively than with our old toolchain:
– One example is our Catalyst Control Center online help: prior to the DITA CMS, we could only hope to do this at most every 6 months; now we can keep up with the monthly software release cycle.
13. CMS-based DITA and Localization Costs
[Chart: localization budget per quarter (blue line) vs. actual localization spend (red line), annotated from the “Bad Old Days” through the content audit + single-sourcing effort to the CMS ROI period]
Our annual localization budget is now less than half (roughly 40%) of what it was the year before we started using the CMS (2006)
• The DITA CMS has more than paid for itself based only on reduced localization costs
• The volume of localized content has increased over this time period
14. DITA Advantages from a Writer’s Perspective
Moving to and implementing DITA is typically a management decision, but there are advantages for the writers:
• Learning a new and valued skill (I've had two writers hired out from under me by another firm looking to "do DITA").
• As content re-use increases over time, the writers act more as editors, and so have a higher "value-add" in the content process.
• Significant topic re-use means that writers learn more about other subjects through other writers’ topics, effectively de-siloing the writing team.
• Programmatic skills are increasingly called into play, because there is a need for people who understand XSL and text-parsing languages (such as Python) and who also understand publishing.
15. Things We Didn’t Expect
• Need for a “house” DITA Style Guide
– Also found ways to help enforce it
• Conrefs vs. Cloning
• More nimble options available for doing localization
• Use of tracking-based metadata allows us to do thorough productivity measures
– And allows us to measure useful things we had not initially anticipated
16. How Much DITA Do You Need?
In terms of the number of tags you need to use, it may be less than you think:
• Our initial approach was evolutionary; writers could use any tag they felt necessary, and over time DITA tagging styles were established and made uniform (in the DITA Style Guide).
• Using fewer tags decreases formatting issues/clashes when creating XSL output types.
• In all, we actively use fewer than half of all DITA 1.1 tags.
17. Cloud of Relative Tag Usage
• 67 tags displayed, with a minimum-usage threshold of 20+
• Tags not included because they are auto-populated/included in our topic templates: othermeta, metadata, prolog, searchtitle, shortdesc, titlealts, navtitle
• Created using “Wordle” from www.wordle.net
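A tally like the one feeding this cloud is straightforward to produce; here is an illustrative Python sketch (the topics/ directory layout is an assumption, not our actual repository structure):

import glob
from collections import Counter
from xml.etree import ElementTree

# Count element usage across a folder of DITA topics.
counts = Counter()
for path in glob.glob("topics/**/*.dita", recursive=True):
    for _, elem in ElementTree.iterparse(path):  # default event is "end"
        counts[elem.tag] += 1

# Apply the same 20+ minimum-usage threshold used for the cloud above.
for tag, n in counts.most_common():
    if n >= 20:
        print(tag, n)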
18. Creating a DITA Style Guide
A recommendation for any tech docs group that uses DITA extensively:
• Helps new writers/contributors come up to speed
• Usefully narrows the scope of the XSL work that needs to be done
• Many things are “legal” in DITA but may be poor from a “house style” standpoint, for example:
– Unformatted block content can appear between a header and a table in a section
– Tables and figures do not have to have a title
– Lists can be nested without limit
– Alpha lists can contain more than 26 items
– Lists can contain only a single item
19. Schematron Can Help Enforce DITA Style
What is Schematron? “Schematron is a rule-based validation language for making assertions about the presence or absence of patterns in XML trees.” (www.wikipedia.org)
We use Schematron to point out to writers potential errors/lapses in our DITA house style, for example:
• Text between a section title and a table not wrapped in block tags
• A list ought to have more than one item (otherwise, why make it a list?)
20. XSL Can Help Enforce DITA House Style
We have a DITA house style rule that says nested lists should be no more than two levels deep.
Here’s Schematron doing its job: [screenshot: Schematron flagging the over-nested list in the editor]
And here is the result if you try to output it: [screenshot: the corresponding error in the XSL-generated output]
A runnable sketch of this kind of Schematron rule follows.
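Rules like these are short to express. Below is a minimal, runnable sketch using the lxml library in Python; the two rules paraphrase the house style described on the last two slides (single-item lists, and lists nested more than two levels), but the schema and sample fragment are illustrative rather than our production files:

from lxml import etree, isoschematron

RULES = """
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="ul | ol">
      <assert test="count(li) &gt; 1">A list ought to have more than one item.</assert>
    </rule>
    <rule context="ul//ul//ul | ol//ol//ol">
      <report test="true()">Lists should be nested no more than two levels deep.</report>
    </rule>
  </pattern>
</schema>
"""

# Compile the rules and validate a small DITA fragment against them.
schematron = isoschematron.Schematron(etree.fromstring(RULES))
topic = etree.fromstring("<conbody><ul><li>Only one item</li></ul></conbody>")
print(schematron.validate(topic))  # False: the single-item list breaks house style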
21. Conrefs vs. Cloning
At a very early stage we decided not to use conrefs in our DITA content:
• Made localization programmatically complicated/inefficient
• Creating a localization kit would mean finding all conrefs in a doc (however many levels deep they are nested) and then “flattening” them, which leads to inefficient segment-matching
• Did not seem cost-effective from an author’s perspective
• Would limit reuse as conref targets become “fixed”: we dare not change them without affecting many docs
• Searching for and then defining a single phrase or paragraph to reuse is not always an efficient use of time
22. Conrefs vs. Cloning (cont.)
We instead chose a “clone” approach to topic re-use:
• Essentially, make a copy of an existing topic and use only the parts that you need in your current document
• Original and cloned topics are completely separate (though trackable; the parent/child relationship is retained in the CMS, as sketched below)
• Cloning is only done when the amount of change is sufficient that the original topic cannot accommodate it
• Writers can more freely re-use existing topics for their own needs
• When a localization kit is made, the segment-matching process is efficient
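The clone mechanics belong to the CMS, but the idea is simple enough to sketch; here is a hypothetical Python illustration (topic structure and IDs are invented for the example):

import copy

# A toy topic store; in practice the CMS manages this for us.
topics = {"t100": {"title": "Installing the driver", "body": "...", "parent": None}}

def clone_topic(source_id, new_id):
    clone = copy.deepcopy(topics[source_id])
    clone["parent"] = source_id  # parent/child link retained, so clones stay trackable
    topics[new_id] = clone

clone_topic("t100", "t101")
print(topics["t101"]["parent"])  # -> t100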
23. Nimble Localization Processes with DITA XML
Under the old toolchain, localizing a 200+ page document into a single language within a week (without huge expense) was impossible.
DITA XML allows us to be more nimble: for critical large documents, we can send the localization firm finished “parts” as we get them (“70/20/10”):
• When roughly 70% of a large document is done, we send it off for translation, followed a week or two later by another 20% of new and updated material, then the last 10% when we complete it.
• While this process does cost more than sending a whole document at once, it reduces the turnaround time from weeks to days, and quality is much improved because the work is not done in a rush.
• This approach was simply not feasible with our old toolchain; ultimately, the new toolchain is still cheaper and much faster.
24. Measuring Productivity: Uses of Metadata
There are three main purposes for metadata:
• Retrieval
• Re-use
• Tracking
Everyone who has used a search engine is familiar with the “Retrieval” part.
Authors can add their own metadata to topics to aid in later retrieval for re-use; topic and map dependencies can be checked, and associated topics re-used in other publications.
25. Tracking Metadata
Tracking metadata (in our case, mainly dates, author, and topic/map status) is used for understanding trends and managing workflow.
The types of questions we can readily answer include:
• Who created the content (author)?
• When was it created (date)?
• Who modified it (editor)?
• Who reviewed it (reviewer/approver)?
• Where has it been re-used (map relation)?
• Has it been published or translated (status/language)?
26. How We Measure Productivity
The metric we use is a combination of topics created + topics modified in a monthly/quarterly timeframe (see the sketch below):
• Each new topic created counts as 1.
• Modified topics are also counted, though again only as 1; subsequent revisions to the same topic in a given timeframe are not counted.
• Provides us with a very good view of ongoing work, and the numbers align with known product release cycles.
• Works both as an aggregate measure (total output per month) and as a measure of a writer’s individual productivity.
• Maps are also tracked, but are not as good for measuring productivity, since they come in many sizes and have widely varying development timelines.
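As a concrete illustration, here is a Python sketch of that counting rule (the event records and field names are invented; the CMS's tracking metadata supplies the real data):

from datetime import date

# One record per create/modify event, as tracking metadata would supply.
events = [
    {"topic": "t001", "action": "created",  "when": date(2009, 7, 2)},
    {"topic": "t001", "action": "modified", "when": date(2009, 7, 9)},   # revision: not recounted
    {"topic": "t002", "action": "modified", "when": date(2009, 7, 15)},
]

def monthly_output(events, year, month):
    # Each topic touched in the month counts exactly once, created or modified.
    touched = {e["topic"] for e in events
               if e["when"].year == year and e["when"].month == month}
    return len(touched)

print(monthly_output(events, 2009, 7))  # -> 2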
28. Topic Production Matches Product Cadence
[Chart: monthly topic production across three product release cycles, each showing a main peak followed by a secondary peak]
• Regular peak of production in Q3, typically followed by a secondary peak in Q1
29. Localization Segments Auto-translated within CMS Monthly
• The portion in orange is the percentage of segments that were 100% matches and were never sent to a localization vendor = pure ROI!
• From July 2008 to July 2009, an average of 54% of segments were auto-translated within the system.
30. Sample Topic Reuse Rate (Monthly)
From Jan 2008 to June 2009, average monthly topic reuse rate = 53.53%
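The deck does not spell out the reuse formula, so the following Python sketch makes an assumption: a topic counts as reused in a month if more than one map referenced it. Treat this as one plausible reading, not the definition we used:

# Toy data: which maps referenced each topic during the month.
refs = {
    "install-overview": {"user_guide.ditamap", "quick_start.ditamap"},
    "gpu-specs": {"user_guide.ditamap"},
}

reused = sum(1 for maps in refs.values() if len(maps) > 1)
print(f"monthly topic reuse rate: {100 * reused / len(refs):.2f}%")  # -> 50.00% for this toy data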
31. An Interesting Trend: Topic Ratios
Except in year one, reference topics steadily make up ~74% of all topics used
32. What is the Average Size of a Topic?
• Maps avg. = 3.47 kb
• Concepts avg. = 2.46 kb
• References avg. = 7.88 kb
• Tasks avg. = 3.20 kb
(1 byte = 1 character; 1000 bytes (1 kb) = 1000 characters)
• Concepts average 0.65 of a page of Lorem ipsum text in Word
• References average 2.6 pages (smallest: half a page; largest: ~200 pages)
• Tasks average 1 page
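Figures like these are easy to regenerate; here is an illustrative Python sketch that averages file sizes by topic type (the repository/ path and the root-element sniffing are assumptions, not how the CMS reports it):

import os
from collections import defaultdict
from statistics import mean

sizes = defaultdict(list)
for root, _, files in os.walk("repository"):  # assumed export of the CMS content
    for name in files:
        if name.endswith((".dita", ".ditamap")):
            with open(os.path.join(root, name), "rb") as fh:
                data = fh.read()
            # Crude type sniffing on the root element near the top of the file.
            for marker in (b"<concept", b"<reference", b"<task", b"<map"):
                if marker in data[:300]:
                    sizes[marker[1:].decode()].append(len(data))
                    break

for kind, vals in sorted(sizes.items()):
    print(f"{kind}: avg {mean(vals) / 1000:.2f} kb over {len(vals)} files")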