This webinar provided an overview of Darwin Information Typing Architecture (DITA) including definitions of DITA, its key principles and components, and reasons for adopting a DITA approach. The presenter discussed DITA information types and domains, and how DITA supports structural integrity, reusability, and extensibility. The webinar also addressed how to determine if an organization needs DITA, what is required to implement it, and recommended a phased migration approach. The presentation concluded with references and information about training and consulting services.
Introduction to Object Storage Solutions White Paper - Hitachi Vantara
Learn more about Hitachi Content Platform Anywhere by visiting http://www.hds.com/products/file-and-content/hitachi-content-platform-anywhere.html
More information on the Hitachi Content Platform is available at http://www.hds.com/products/file-and-content/content-platform
Use of the Data-Distribution Service (DDS), a publish-subscribe middleware standard from the OMG, as a communication infrastructure for event processing engines.
Data Sharing in Extremely Resource Constrained Environments - Angelo Corsaro
This presentation introduces XRCE, a new protocol for very efficiently distributing data in resource-constrained (power, network, computation, and storage) environments. XRCE greatly improves the wire efficiency of existing protocols and in many cases provides higher-level abstractions.
[Case Study] - Nuclear Power, DITA and FrameMaker: The How's and Why's - Scott Abel
Presented by Thomas Aldous at Documentation and Training East 2008,
October 29-November 1 in Burlington, MA.
This session is for anyone who is interested in learning how to
manage a transition to specialized DITA, including content management
systems, editors, and publishing server issues and resolutions. As an
added bonus, we will also convert a Word document to specialized DITA
and edit the content in FrameMaker 8. There will be a question and
answer period at the end of the session for both technical and project
management issues.
Guests Alyssa Fox (NetIQ) and Toni Mantych (ADP) discuss their differing DITA implementation decisions. They will address the primary factors and decision-making process for when to choose DITA or not.
This session was presented by Suchitra Shettigar, Learning and Development Head at Metapercept. During this session, Suchitra presented the basics of DITA XML-based authoring and its benefits.
Modular Documentation - Joe Gelb, Techshoret 2009 (Suite Solutions)
Designing, building and maintaining a coherent content model is critical to proper planning, creation, management and delivery of documentation and training content. This is especially true when implementing a modular or topic-based XML standard such as DITA, SCORM and S1000D, and is essential for successfully facilitating content reuse, multi-purpose conditional publishing and user-driven content.
During this presentation we will review basic concepts and methods for implementing information architecture. We will then introduce an innovative, comprehensive methodology for information modeling and content development that employs recognized XML standards for representation and interchange of knowledge, such as Topic Maps and SKOS. In this way, semantic technologies designed for taxonomy and ontology development can be brought to bear for creating and managing technical documentation and training content, and ultimately impacting the usability and findability of technical information.
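To make the controlled-vocabulary idea concrete, DITA itself offers a comparable mechanism alongside Topic Maps and SKOS: the subject scheme map. The sketch below uses standard DITA 1.2 elements, but the subject values are invented for illustration and are not from the presentation:

```xml
<!-- A minimal DITA subjectScheme map defining a small controlled vocabulary.
     The subject values ("platform", "windows", "linux") are invented examples. -->
<subjectScheme>
  <!-- Define the taxonomy of allowed values -->
  <subjectdef keys="platform">
    <subjectdef keys="windows"/>
    <subjectdef keys="linux"/>
  </subjectdef>
  <!-- Bind the taxonomy to the @platform filtering attribute,
       so authors can only use the values defined above -->
  <enumerationdef>
    <attributedef name="platform"/>
    <subjectdef keyref="platform"/>
  </enumerationdef>
</subjectScheme>
```

A subject scheme of this kind lets the taxonomy govern attribute values at authoring time, which is one practical bridge between taxonomy work and day-to-day content development.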
DITA, Semantics, Content Management, Dynamic Documents, and Linked Data – A M... - Paul Wlodarczyk
DITA was conceived as a model for improving reuse through topic-oriented modularization of content. Instead of creating new content or copying and pasting information which may or may not be current and authoritative, organizations manage a repository of content assets – or DITA topics – that can be centrally managed, maintained and reused across the enterprise. This helps to accelerate the creation and maintenance of documents and other deliverables and to ensure the quality and consistency of the content organizations publish. But the next frontier of DITA adoption is leveraging semantic technologies—taxonomies, ontologies and text analytics—to automate the delivery of targeted content. For example, a service incident from a customer is automatically matched with the appropriate response, which is authored and managed as a DITA topic. Learn how organizations can leverage DITA, semantics, content management, dynamic documents, and linked data to fully utilize the value of their information.
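The reuse model described above can be sketched in two small files. This is illustrative markup rather than content from the presentation; the file names and IDs are invented:

```xml
<!-- warnings.dita: a centrally managed topic holding a reusable note -->
<concept id="warnings">
  <title>Standard warnings</title>
  <conbody>
    <note id="laser" type="warning">Do not look directly into the laser aperture.</note>
  </conbody>
</concept>
```

Any other topic can then pull that note in by reference instead of copying and pasting it:

```xml
<!-- install.dita: reuses the centrally managed warning via @conref -->
<task id="install">
  <title>Installing the scanner</title>
  <taskbody>
    <context>
      <note conref="warnings.dita#warnings/laser"/>
    </context>
  </taskbody>
</task>
```

Because the warning lives in one place, a correction to warnings.dita propagates to every deliverable that references it.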
Organize and manage master and meta data centrally, built on Kong, Cassandra, Neo4j, and Elasticsearch. Managing master and meta data is a very common problem with no good open-source alternative as far as I know, so I am initiating this project: MasterMetaData.
Putting quality to the test ... How do you define quality in content conversion? Is it only about the output? Or do the performance, speed and reliability of the process matter too? Stilo has put its leading OmniMark content processing solution to the test to see how it stands up against the DITA Open Toolkit. See the results of this technical benchmarking exercise in the following presentation and contact Stilo to learn how you can accelerate the adoption of DITA with OmniMark. www.stilo.com
Click here to listen to the webcast - http://bit.ly/MdAzXd
DITA Tasks are often the most valuable content we create, especially when we present them in support portals. But if end users can't find them, they have no value; avoiding that requires classifying them with metadata and labels from a standard taxonomy.
Taxonomy and metadata can seem like scary or complex turf to the uninitiated – but they don’t have to be. In this 40-minute webinar, Paul Wlodarczyk will walk you through a simple process to begin to assemble a basic taxonomy of controlled vocabularies for tagging your DITA Tasks.
You will learn:
The most critical metadata for classifying tasks – regardless of your industry
How to use tools that you already own to build your taxonomy
Simple rules for keeping your terms consistent
Using existing lists of terms so you don’t have to build a taxonomy from scratch
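As a sketch of what such tagging looks like in DITA markup: classification metadata lives in a task's prolog. The elements below are standard DITA, but the taxonomy values and product name are invented for illustration:

```xml
<!-- A DITA task classified with controlled metadata in its prolog -->
<task id="replace-filter">
  <title>Replacing the air filter</title>
  <prolog>
    <metadata>
      <!-- Classification facet, e.g. a branch of the task taxonomy -->
      <category>Maintenance tasks</category>
      <!-- Keywords drawn from a controlled vocabulary -->
      <keywords>
        <keyword>maintenance</keyword>
        <keyword>air filter</keyword>
      </keywords>
      <!-- Custom name/value metadata for facets the core elements lack -->
      <othermeta name="product-model" content="AX-200"/>
    </metadata>
  </prolog>
  <taskbody>
    <steps>
      <step><cmd>Power off the unit.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Support portals and search engines can then facet and filter on these prolog values rather than on free-text matching alone.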
What “Model” DITA Specializations Can Teach About Information Modeling - Don Day
The DITA Open Toolkit download site includes several demo specializations that few people discover and use. In this webinar, DITA maven Don Day will use these examples to highlight the role of information modeling that led to each specialization. Don will cover how each specialization was created, how semantics were introduced into it, and a whole lot more.
This presentation addresses some of the challenges that have historically confronted implementers of markup technologies (SGML and XML), and how DITA, together with some of the usability innovations associated with Web 2.0, can be used to address them. Presented at Content Convergence and Integration in Vancouver (12 March 2008).
Don Day relates the background and development of IBM's prototype DITA Wiki, a collaborative tool for extending the uptake of DITA within IBM by teams not necessarily trained as technical writers.
Trekk Cross Media Series: Using XML to Create Once - Distribute Everywhere - e... (Jeffrey Stewart)
This ebook is based on a blog series leading up to the IDEAlliance XML 2010: eMedia Revolution conference. In each chapter, I present one of the ideas that provide the foundation of my presentation at that conference.

Reduce costs and complexity with a strategy that includes XML standards adoption, structured content creation and a digital-first workflow.

As eMedia devices and delivery systems proliferate, publishers, agencies and traditional media service providers are challenged to keep up with demand for content conversion. Content distributors can reduce costs and complexity with a strategy that includes adoption of XML standards, a component architecture for structured content creation and a workflow that adheres to a digital-first orientation.
Introduction To Information Modeling With DITA - Scott Abel
Presented at DocTrain East 2007 Conference by Alan Houser, Group Wellesley -- Through effective task analysis and information modeling, organizations can maximize the usability of their technical documentation while minimizing the required development and maintenance effort. During this interactive workshop, students will learn the principles of minimalist documentation, how to perform an effective task and topic analysis, approaches to migrating legacy documentation to DITA or other information models, and methods for mapping content to pre-defined information types. We will also use software tools to assist in performing topic analysis. While this workshop will use DITA information models as examples, the workshop will provide value for anybody who needs to move to a structured authoring environment and improve the usability and maintainability of their technical documentation.
In many organizations, writers are judged by the volume of content that they produce. The larger the manual or help system, the more effective the writer. A fatter manual is considered to be a better manual.
From the user's perspective, however, fatter does not mean better. There is no positive correlation between page or topic count and usability. Large documentation sets may be intimidating and are likely to present usability issues. Furthermore, higher page or topic counts mean higher maintenance, translation, and production costs.
The minimalist documentation strategy provides a way to design and deliver highly usable documentation while minimizing the amount of content that must be developed, maintained, and produced to support a product or service. The increasingly popular DITA information architecture is based on the concepts of minimalist documentation.
During this workshop, we will learn the principles of minimalist documentation, and how minimalist documentation strategies meet both user needs and business needs. We will learn how to design minimalist documentation using the DITA information architecture. We will interactively experience the important prerequisite of task and topic analysis for creating well-designed, highly usable minimalist documentation sets.
We will also demonstrate the use of software tools to support topic analysis. In an interactive session, we will use the IBM Task Modeler to develop a task analysis for a product or service. The instructor will demonstrate how to use the IBM Task Modeler to automatically generate DITA map files and prototype DITA-based output.
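A task analysis of the kind described above typically yields a hierarchical DITA map. The fragment below is an illustrative sketch of such generated output; the product name and topic file names are invented:

```xml
<!-- Prototype DITA map derived from a task analysis:
     one topicref per task topic, grouped under navigation headings -->
<map>
  <title>Using the AcmeWidget</title>
  <topichead navtitle="Getting started">
    <topicref href="unpack_widget.dita"/>
    <topicref href="install_widget.dita"/>
  </topichead>
  <topichead navtitle="Everyday tasks">
    <topicref href="configure_widget.dita"/>
    <topicref href="clean_widget.dita"/>
  </topichead>
</map>
```

Generating the map and stub topics directly from the task model keeps the deliverable's structure aligned with the analysis rather than with legacy book chapters.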
Recent enhancements to Enterprise Vault give your organization new levels of control over your unstructured data. In this session, you'll learn how you can make the most of these new and enhanced capabilities. This includes using intelligent workflows that leverage classification and machine learning to accelerate your compliance activities, taking advantage of flexible new cloud deployment and cloud storage options, and much more. Don't miss this opportunity to explore best practices that will transform Enterprise Vault into one of the most versatile and powerful information management tools in your arsenal.