The DITA Open Toolkit download site includes several demo specializations that few people discover and use. In this webinar, DITA maven Don Day uses these examples to highlight the role of information modelling behind each specialization: how each one was created, how semantics were introduced into it, and a whole lot more.
BUILDING YOUR ADAPTIVE MODEL: Setting Goals Using the Adaptive Content Maturity Model (Don Day)
Presented by Don Day and Jenny Magic
Delivering the right content to the right audience at the right time can be challenging. Enter adaptive content. This session will introduce you to the concept of adaptive content, explain how it works, and outline a step-by-step path via the Adaptive Content Maturity Model.
In this session, you will learn:
The differences between Adaptive Content, Personalized Content, Intelligent Content, and Responsive Web Design.
The key qualities of Adaptive Content with a checklist for evaluating your content.
The five phases of Adaptive Content, as laid out in the Adaptive Content Maturity Model.
We will conclude with tips for assessing planning goals and adopting Adaptive Content in your organization.
DITA for the Web: Make Adaptive Content Simple for Writers and Developers (Don Day)
Lavacon 2013, Portland, Oregon
On the challenges of implementing structured, in-browser editing environments for creating adaptive content for the Web.
Exploiting Layout and Content
Don Day, Contelligence Group
Content Architecture for Rapid Knowledge Reuse (Congility 2011, Don Day)
A familiar content issue is gathering and integrating the knowledge of isolated subject matter experts (SMEs) throughout an organization into a robust content strategy. This presentation will give you some perspectives on how to engage your SMEs in contributing their knowledge as directly as possible in a structured format for ease of integration into a larger, more versatile content strategy. The first part of this presentation will lay out an architecture for a cross-organization, single source content strategy based on DITA (Darwin Information Typing Architecture) for this example. The second part of the presentation considers the use of that architecture for handling information flows during a disaster response. The system must allow people to respond appropriately to the rapid influx of disparate questions at the same time as receiving large quantities of information from multiple data sources of variable reliability. The use of structured content based on DITA can contribute to the effective use of information in a crisis.
Have you gone through articles and presentations on the web and come away with only a half-baked understanding of the Darwin Information Typing Architecture (DITA)?
Refer to my DITA Quick Start presentation from the 2007 STC India Conference to learn how to evaluate, plan, and start implementing DITA.
In this presentation, you will learn about the following:
• Structured authoring and XML
• Key DITA concepts: topics, maps, specialization
• DITA architecture and content model
• Authoring in topics
• Organizing content using DITA maps
• Creating relationship tables
• Conditional text and reuse in DITA
• Metadata support in DITA
• DITA tools, standards and processes
• Publishing with the DITA Open Toolkit
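For readers new to the architecture, here is a minimal sketch of the two core artifacts the agenda covers: a concept topic, and a map that assembles topics into a deliverable. Element names follow the standard OASIS DITA DTDs; the file names and content are illustrative only.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<!-- widgets.dita: a minimal concept topic -->
<concept id="about_widgets">
  <title>About widgets</title>
  <conbody>
    <p>A widget is a reusable unit of product functionality.</p>
  </conbody>
</concept>
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<!-- guide.ditamap: a map assembling topics into a deliverable -->
<map>
  <title>Widget User Guide</title>
  <topicref href="widgets.dita"/>
</map>
```

Running the DITA Open Toolkit against guide.ditamap would then produce HTML, PDF, or other outputs from the same single source.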
Don Day relates the background and development of IBM's prototype DITA Wiki, a collaborative tool for extending the uptake of DITA within IBM by teams not necessarily trained as technical writers.
Is your technical content development organization considering a move to structured authoring and/or DITA (Darwin Information Typing Architecture)? This presentation provides a high-level introduction to what DITA is--and what the benefits of moving to DITA are. DITA is an excellent solution for many--but not all--organizations and projects. This introduction can help you begin to understand why DITA may or may not be a good solution for you.
DITA Quick Start: System Architecture of a Basic DITA Toolset (Suite Solutions)
Presenter: Joe Gelb, President, Suite Solutions
Abstract: In this webinar, you will learn about the software, integration and customization which enable you to effectively author, manage, localize, publish and share your DITA XML content. We will review how each tool fits into the content lifecycle and discuss options for an incremental DITA XML implementation using a basic toolset as the starting point.
Sometimes, a spontaneous road trip can be a lot of fun, as long as you’re willing to take the good with the bad—getting lost, car trouble, unfriendly (or just plain weird) natives, bad diner food. Usually, though, the most successful trips involve planning, roadmaps, and best of all, guidance from people who’ve already been there.
The journey from traditional, deliverable-centric content creation to DITA-based content creation falls into this second category. In this session, we talk about one small publication group’s experience moving to DITA, from the initial discussions to the successful implementation of a FrameMaker-based, end-to-end publication process. Here are some of the high points of the project; we’ll discuss our decision-making process and some of our technical approaches in detail in the session.
Introduction to XML and Structured Authoring • Overview of DITA • Topics: The Basic Information Types • Maps: Assembling Topics into Deliverables • Common elements and attributes • Metadata • Examples and exercises
DITA Quick Start Webinar: Defining Your Style Sheet Requirements (Suite Solutions)
Your DITA implementation is under way, and promises higher content reusability with shorter time to publication. A key aspect of your implementation is automated multi-channel publishing of your content to a variety of outputs: PDF, HTML, online help, mobile, dynamic web, eLearning and more. In this webinar, expert project manager Yehudit Lindblom and Suite Solutions President Joe Gelb go beyond formatting requirements to review best practices that help you cover all the bases for smooth implementation and easy maintenance of your dynamic publishing customizations.
Learn more about DITA Quick Start http://www.suite-sol.com/pages/solutions/dita-quick-start.html
Follow us on LinkedIn http://www.linkedin.com/company/527916
Presented by Alan Houser at Documentation and Training West, May 6-9, 2008
DITA provides an end-to-end architecture for authoring, managing, and publishing topic-oriented technical information. DITA may appear to be an ideal solution for authoring and maintaining online help systems. However, the DITA specification does not directly accommodate many of the customary and expected features provided by conventional help authoring tools. Learn how you can use DITA today to deliver online help, and learn what the DITA Technical Committee is doing to make DITA more directly usable for online help in the future.
Attendees will learn the following:
* How features of the DITA architecture map to the structure of conventional online help systems
* Why DITA can provide an ideal solution for authoring, maintaining, and delivering online help
* Current issues and limitations when using DITA for online help
* Common features of help authoring tools and how they map to (or don’t map to) DITA
* Approaches for maintaining and delivering DITA-based context-sensitive help
Using Tibco SpotFire (via Virtuoso ODBC) as a Linked Data Front-end (Kingsley Uyi Idehen)
Detailed guide covering the configuration of a Virtuoso ODBC Data Source Name (DSN) into the Web of Linked Data en route to utilization via Tibco's SpotFire BI tool.
Basically, SpotFire as a Linked (Open) Data front-end via ODBC.
DITA is an OASIS standard for modular content that can be assembled and published in many different ways. The full DITA standard provides powerful features for single-sourcing and structured authoring but can be intimidating for new adopters who require only a subset of those features.
The OASIS DITA Technical Committee is planning to define a lightweight DITA architecture to allow a broader range of authoring and publishing tools to support a useful subset of the full DITA standard.
This presentation provides a preview of the lightweight DITA proposal for DITA 1.3, including some example markup and possible architectural approaches.
Enterprise & Web-based Federated Identity Management & Data Access Controls (Kingsley Uyi Idehen)
This presentation breaks down issues associated with federated identity management and protected resource access controls (policies). Specifically, it uses Virtuoso and RDF to demonstrate how this longstanding issue has been addressed using the combination of RDF based entity relationship semantics and Linked Open Data.
This presentation provides an overview of the Virtuoso platform with special emphasis on its Knowledge Graph and Data Virtualization functionality.
This was a presentation given to the Ontolog group's session on Ontology Life Cycles and Software. It covers the implications of the Web as the software platform and the realities delivered by the Linked Open Data cloud.
LavaCon 2012: How to Deliver the Wrong Content to the Wrong Person at the Wrong Time (Don Day)
This session will offer a simple primer on how to help good content go bad. It’s surprisingly easy to mess up content delivery, and we’ll prove it by looking at some of the inappropriate and amusing examples that are served up daily all over the Internet. Using a simple three-step approach, you too can be guaranteed to botch content delivery. Of course, communicators in marketing, technical, and other fields insist on excellent content delivery, so we will give in to them as well: we’ll show that delivering relevant, timely, and personalized content is just as easy, and demonstrate how it can be done.
The Internet is Everywhere – So What's Changed? [Noz Urbina, DITA EU 2013]
The word “internet” is 30 years old, the actual networks even older. Email is nearly 40 years old. We now live in a world where professional-and-parenting-age adults have never known a World Without Web. But what has the impact been? This generation—and the internet user population as a whole—is consuming content in wildly different ways. Each new experience immediately sets new expectations for the future, creating a snowball effect. This session will look at that snowball, try to demonstrate quite how enormous it truly is, and discuss how DITA content helps us address a new crop of user expectations. We will look at the true scale of the changes in culture and expectations that impact communication, real-world scenarios where users and products will operate differently, and why DITA is ideal to address the new challenges.
Multidimensional Content Strategy: A Plan for Dodging the Oncoming Train (Noz Urbina)
The conceptual model of 4D content takes into account not just the length and width of a content asset, but also 'depth' (related content, social layers, 'drill down') and 'time' (dynamic, contextually-relevant and personalised content). It is a model to support adaptive content personalisation on any device or channel.
Our audiences are ever more adept at ignoring us on an ever-growing number of channels. We are still reeling from the surge of mobile devices in all their many forms, but we can see wearable technologies and augmented reality bearing down on us like a freight train.
To respond we must rethink how we work with content at a fundamental level. The world is a four-dimensional place (length, width, depth and time), but we were raised and trained to think of content as flat, 2D deliverables. How can we actually create and deliver content for everyone and no one at once? How can we create words and images, like Lego, that can be dynamically built into relevant and valuable content for the right person and the right context? How can we do all this coherently, without the train hitting us and smashing our messages into a fragmented mess?
By changing our mindsets, and adopting a content strategy that can support today’s content initiatives. Check out this session and take the first step in the right direction.
Rebuilding Your Mindset for the Future of Content Work [tekom/tcworld 2013] (Noz Urbina)
[A variant of my session from http://bit.ly/nozu_istc13a, now with "The bright side of the NSA scandal!"]
This session is about getting yourself ready for the future, whatever it may bring. Change is not something that we usually excel at in technical communications.
If we don’t update our thinking, content and methods, each new wave of technology puts us yet another step behind the curve. Even though tablets and smart phones have reached near ubiquity with professional users, most organisations do not have their people, processes, platforms or content ready for mobile delivery. Many are not even internet-ready. Today we’re bombarded by announcements of new content creation and consumption technologies that are wearable, social, dynamic or embedded directly in products.
Although we can talk about how to do something about it, before our content and processes can change, we must change. We must address what is actually holding us back: how we think about our content in the first place.
This session will provide a new and inspiring perspective on how you can and must work with content to be ready for the future. We’ll look at updating our processes, structures and the biases and habits that surround them.
This is Your Brain on Content: Cognitive Science Lessons for Content Strategy (Noz Urbina)
A 'director's cut' of my Biological Imperative for Adaptive Content session from earlier this year.
The thesis: semantic, structured content is more suited to our brains' natural functioning and mechanisms than traditional, unstructured content. It’s counter-intuitive, but is it true?
Our basic understanding of communicating content has changed. Under the pressures of multi-channel and multi-device content challenges, the old rules we learned about good content and processes are breaking down. How do we optimize for all this diversity?
Contemporary research in cognitive science and neurobiology can offer us new ways of thinking about communication at a basic, human level. This session could be considered a study in empathy, looking at how we can break out of our current mindsets, deconstruct old habits, and see justification for new ones around user needs. It offers cognitive science and neurobiology lessons relevant to today’s content landscape, and a common language to help you bridge the communication issues with your clients, colleagues, managers, and end users.
This session will cover models and methodologies to better structure content, optimize editorial processes, and build effective, influential strategies couched in the most human of terms.
Adaptive Content is high on everyone's mind, thanks to Responsive Web Design, new Google ranking strategies, customer demand and more. Problem is, how do we do it? This is where Content Marketing and Content Strategy meet Content Engineering. Get the big picture and see how others are doing it.
Presented to Austin Content Meetup, 21 November 2013 by Don Day.
[Workshop] The incremental steps towards dynamic and embedded content delivery (Noz Urbina)
[A variant of my 2013 Technical Communications UK presentation]
Dynamic delivery is delivery of context-appropriate information that can be assembled at the time of request with the most up-to-date, relevant content appropriate for the user and interface in question.
Embedded content is where content becomes a seamless part of device interfaces. Products become “self-describing”, allowing users to work uninterrupted by the need to open help files or manuals.
Many aspire to working in this way, but few (so far) have achieved it. This workshop looks at the benefits, requirements, and barriers related to these new types of delivery.
We will look at:
Why should we bother with this type of delivery?
What type of techniques, technologies and skills are required to realise such a system?
What are the risks at each stage?
Companies often have a problem capturing the experience of their technical or field personnel when those people fall back, on a whim, to email or a favorite word processor to record their knowledge.
Particularly in the support arena, special tools have been devised to try to capture and correlate the knowledge that is often created in the course of handling support calls. Lately, and across wider domains of knowledge or disciplines, wikis have been used with varying success for capturing at least some of that otherwise misplaced knowledge. But even on a centralized resource like a wiki, there is still the problem of how to retrieve and reuse that content as a more strategically-tagged corporate asset.
The DITA Content Collaboration project seeks to make DITA authoring commonplace for scenarios in which content creators can benefit from the structuring disciplines of this tool.
This presentation demonstrates a structured approach to collaborative writing that benefits the preservation and curation of valued, yet too-often marginalized content of knowledge workers in an organization or company.
Connecting Intelligent Content with Micropublishing and Beyond (Don Day)
This presentation will describe and demonstrate a grand unified vision for pulling together different kinds of single-page products for the Web, for print, and more. Lessons from this model can give you an edge in market-leading adoption of the next great thing after micropublishing, the current trend.
The Biological Imperative for Intelligent Content (Noz Urbina)
[Originally presented at Intelligent Content 2014] It's been about 1000 years since the last time our basic understanding of communicating content has changed as much as it's changing today. Under the pressures of multi-channel and multi-device content challenges, the old rules we learned about good content and processes are breaking down. How do we optimize for all this diversity? There is a way to understand, master, and even leverage all this change before competitors beat you to it. This isn’t an industry issue. The challenges around discussing and making full use of today’s digital communication platforms are faced by all cultures around the world as they adopt them.
Contemporary research in cognitive science and neurobiology leads us to new ways of thinking about communication at a basic, human level. This session could be considered a study in empathy. It offers cognitive science and neurobiology lessons relevant to today's content landscape, and a common language to help you bridge the communication issues with your clients, colleagues, managers, and end users.
Don’t worry – this session isn't a jargon-filled nerd-fest, but a roadmap to navigating the world of content, today and tomorrow. It will cover techniques and methodologies to better structure content, optimize editorial processes, and build effective, influential strategies.
[soap Keynote] The Freedom to Grow: How Standards Facilitate the Techcomm Industry (Noz Urbina)
Standards – whether in the XML sense or simply communication best practices – help grow, accelerate and “professionalise” an industry. Where would construction be without material standards for widths and strengths, or certification for specific skills? How could we have transportation without standards for traffic and processes? Standards are what help ad-hoc processes become enterprise-class and scale beyond our expectations.
Technical communication is in an era of rapid, disruptive and revolutionary change. The true nature of the challenge is understood by a few, and pros and cons of potential solutions by even fewer. The future therefore will require that we work together to exchange knowledge as best we can to help each other hit the many moving targets. We must do this because our old techniques and processes just can’t keep up, and no one organisation has the time or funds to re-invent every solution on their own. Standards help an organisation with little funds tackle larger challenges, and larger organisations implement profound change with reduced risk. The alternative is potentially getting left behind as the industry and community rush forward.
My slides from LavaCon Dublin, 2016:
Overview:
The cutting edge of modern science and thousands of years of communication history lead us to the same conclusion: we are pattern-based, model-building beings. This can seem either obvious or foreign to you, depending on your background, but rarely when we're talking about structuring information do we properly reconnect with the bigger picture outside the world of words and pictures.
Structured content isn't about XML, DITA or publishing, it's about imbuing content with some universal and deeply human qualities. With those qualities come a myriad of follow-on benefits to reader, writer and brand. With just the right amount of structure we're more engaged, more open-minded, and simply happier. This is true for content, but to prove it generally, we're going to first look at art, music, technology, communication and memory. Doing so we'll see how taking a wider view will help us structure content better, better bridge the silos in our organisations, and delight our customers throughout their journeys.
Storming the Castle 2015 [LavaCon Breakout Session] (Noz Urbina)
Updated for 2015....
It sometimes seems that management engagement with your content strategy is a great mystical prize sealed up in the highest tower of a maze-like castle; and there’s a huge moat; and the whole thing is on top of a mountain…
To actually reach it is a challenge that will itself take a strategy, special tools (and weapons?), and a great mountain-climbing, maze-solving team.
Noz Urbina shares some of his experience on how we can get closer to our content strategy objectives by not falling at the first barrier: getting the necessary support to develop and implement it. Based on a career selling content strategies into a diverse range of organisations – from a few hundred staff to tens-of-thousands – some of his tips will involve judicious use of common sense, and others will be potentially surprising. Learn how you can storm that castle, and claim your prize.
COPE Content Modelling for Adaptive UX - Noz Urbina
FIRST PRESENTED AT CONTENT STRATEGY APPLIED 2013, eBay's OFFICES, LONDON, UK
Multi-channel, or COPE (Create Once, Publish Everywhere), content is a bit of a holy grail right now. Our trade is discussing content being freed from the browser, available for reuse, and accessible in apps, kiosks, and responsive mobile deliverables. We need to deliver eBooks and syndication services to our partners – even deliver to wearable technologies. All this for the benefit of users, and of course, the organisations that serve them.
Adaptive content is content that is agile enough to realise all these ambitions. But making our content adaptive means addressing a topic that sends many running for the fire exit or nearest window: semantic modelling of structured content. This session will connect the dots between adaptive content, responsive design, multi-channel delivery and user experiences to show you why you want and even need to have semantic content structures. It will then go through the non-terrifying intro to getting started with modelling your own content in a future-proof way.
The wall falls down: Integrating our online and offline worlds [Confab 2015] - Noz Urbina
[Confab version of my keynote talk]
There are no longer discrete online and offline worlds. Holding onto this idea is hurting our communications.
In this session, we will take a look at communications that seamlessly blend physical and digital experiences. When you take omnichannel, wearable devices, and the internet of things—and put them together in one integrated ecosystem for users—the dividing line disappears.
What do we gain when we fully integrate online and offline? How should communications change to cope with a life of constantly accelerating change? We’ll look at examples and techniques that can help prepare for this new paradigm.
The Wall has Come Down: Integrating our Online and Offline Worlds (IoT / Wear... - Noz Urbina
Thesis: There are no longer discrete online and offline worlds. Holding onto this idea is hurting our communications.
In this session we take a look at the medium- and long-term implications of wearable devices and the internet of things. Walking through a two-year journey of realisation, we'll explore what it means for content when you take omnichannel, wearable devices, and the internet of things and put them together in one integrated ecosystem.
Screens are shrinking and working in tandem; connectivity is marching on towards ubiquity. Eventually there comes a point where the online world or ‘digital space’ and our real-life day-to-day will integrate so seamlessly that differentiating them will seem antiquated.
What does that mean to communication and content? What happens when Moore’s Law applies to our lives? What is the impact on information, technology and eventually culture? How should communications change to cope with a life of constantly accelerating change?
We'll address these questions and more.
Adaptive Content equals Architecture plus Process minus Reality [Noz Urbina, ...] - Noz Urbina
Adaptive content is one of the most powerful and critical concepts of this decade. It is an attempt to address a never-before-seen diversity of content contexts and platforms, as well as sky-high user expectations. We are in an age where our smartphones are already starting to bore us. What were head-spinning miracles of science and technology less than three years ago “lack innovation” today. With customers assimilating new technologies into their lives and resetting expectations at this speed, the pressure to provide innovative, differentiating and strategically significant content experience is higher than ever. New platforms and interface paradigms are just around the corner. Adaptive content promises to help us address these challenges, but it still takes organisations years to adapt themselves. Noz Urbina focuses on how content architecture and process need to be altered for adaptive content, and what to do when reality sets in.
Introduction To Information Modeling With DITA - Scott Abel
Presented at DocTrain East 2007 Conference by Alan Houser, Group Wellesley -- Through effective task analysis and information modeling, organizations can maximize the usability of their technical documentation while minimizing the required development and maintenance effort. During this interactive workshop, students will learn the principles of minimalist documentation, how to perform an effective task and topic analysis, approaches to migrating legacy documentation to DITA or other information models, and methods for mapping content to pre-defined information types. We will also use software tools to assist in performing topic analysis. While this workshop will use DITA information models as examples, the workshop will provide value for anybody who needs to move to a structured authoring environment and improve the usability and maintainability of their technical documentation.
In many organizations, writers are judged by the volume of content that they produce. The larger the manual or help system, the more effective the writer. A fatter manual is considered to be a better manual.
From the user's perspective, however, fatter does not mean better. There is no positive correlation between page or topic count and usability. Large documentation sets may be intimidating and are likely to present usability issues. Furthermore, higher page or topic counts mean higher maintenance, translation, and production costs.
The minimalist documentation strategy provides a way to design and deliver highly usable documentation while minimizing the amount of content that must be developed, maintained, and produced to support a product or service. The increasingly popular DITA information architecture is based on the concepts of minimalist documentation.
During this workshop, we will learn the principles of minimalist documentation, and how minimalist documentation strategies meet both user needs and business needs. We will learn how to design minimalist documentation using the DITA information architecture. We will interactively experience the important prerequisite of task and topic analysis for creating well-designed, highly usable minimalist documentation sets.
We will also demonstrate the use of software tools to support topic analysis. In an interactive session, we will use the IBM Task Modeler to develop a task analysis for a product or service. The instructor will demonstrate how to use the IBM Task Modeler to automatically generate DITA map files and prototype DITA-based output.
Modular Documentation - Joe Gelb, Techshoret 2009 - Suite Solutions
Designing, building and maintaining a coherent content model is critical to proper planning, creation, management and delivery of documentation and training content. This is especially true when implementing a modular or topic-based XML standard such as DITA, SCORM and S1000D, and is essential for successfully facilitating content reuse, multi-purpose conditional publishing and user-driven content.
During this presentation we will review basic concepts and methods for implementing information architecture. We will then introduce an innovative, comprehensive methodology for information modeling and content development that employs recognized XML standards for representation and interchange of knowledge, such as Topic Maps and SKOS. In this way, semantic technologies designed for taxonomy and ontology development can be brought to bear for creating and managing technical documentation and training content, and ultimately impacting the usability and findability of technical information.
This session was presented by Suchitra Shettigar, Learning and Development Head at Metapercept. During this session, Suchitra presented the basics of DITA XML-based authoring and its benefits.
Painless XML Authoring?: How DITA Simplifies XML - Scott Abel
Presented at DocTrain East 2007 by Bob Doyle, DITA Users -- This introduction to XML Authoring will acquaint you with over fifty tools aimed at structuring content with DITA. They are not just DITA-compliant authoring tools (editors) for writers. They also include content management systems (CMS), translation management systems (TMS), and dynamic publishing engines that fully support DITA. You will also need to know about tools that convert legacy documents to DITA and help to design stylesheets for DITA deliverables. The best DITA tools for technical communicators implement the DITA standard while hiding all the complexity of the underlying XML (eXtensible Markup Language).
As a tech writer and not a tech, you should be able to forget about XML - except to know that you are using it (DITA is XML) and that it consists of named content elements (or components) with attributes. You need to know enough about the content elements so you can reference (conref) them for reuse. You need to know about their attributes so you can filter on them for conditional processing. And you should appreciate that because components are uniquely identifiable they lend themselves perfectly to automated dynamic assembly using a publishing engine.
We will describe how you can get started with structured writing without knowing XML or installing anything.
The promise of topic-based structured authoring is not simply better documentation. It is the creation of mission-critical information for your organization, written with a deep understanding of your most important audiences, that can be repurposed to multiple delivery channels and localized for multilingual global markets. You are not just writing content, you are preparing the information deliverables that enhance the value of your organization in all its markets.
To do that well, you must understand the latest tools in structured writing that are revolutionizing corporate information systems - today in documentation but tomorrow throughout the enterprise, from external marketing to internal human resources. Whether you are trying to push a new product into a new market or are “onboarding” a new employee, the need for high quality information to educate the customer or train the new salesperson is a challenge for technical communicators. You need to think outside the docs!
The key idea behind Darwin Information Typing Architecture is to create content in small chunks or modules called topics. A topic is the right size when it can stand alone as meaningful information. Topics are then assembled into documents using DITA maps, which are hierarchical lists of pointers or links to topics. The pointers are called “topicrefs” (for topic references).
Think of documents as assembled from single-source component parts. Assembly can be conditional, dependent on properties or metadata “tags” you attach to a topic. For example, the “audience” property might be “beginner” or “advanced.”
At a still finer level of granularity, individual elements of a topic can also be assigned property tags for conditional assembly. More importantly, a topic element can be assigned a unique ID that makes it a content component reusable in other topics.
As you will learn, DITA is a leading technology for “component content management,” which multiplies the value of your work. You need to leverage DITA and structured content to multiply your income.
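The conref and conditional-processing ideas described above can be sketched in a few lines. Here is a toy filter using Python's standard ElementTree on an invented topic fragment; real DITA filtering is driven by ditaval files and a processor such as the DITA Open Toolkit, so treat this only as an illustration of the mechanism:

```python
import xml.etree.ElementTree as ET

# Invented topic fragment: steps carry an "audience" property for
# conditional assembly, as described above.
TOPIC = """
<task id="install">
  <title>Installing the widget</title>
  <steps>
    <step audience="beginner">Run the guided installer.</step>
    <step audience="advanced">Edit config.xml and run the silent setup.</step>
    <step>Restart the application.</step>
  </steps>
</task>
"""

def filter_audience(root, audience):
    """Drop elements whose audience attribute is set and differs from
    the requested build profile (DITA-style conditional filtering)."""
    for parent in list(root.iter()):
        for child in list(parent):
            if child.get("audience") not in (None, audience):
                parent.remove(child)
    return root

root = filter_audience(ET.fromstring(TOPIC), "beginner")
steps = [s.text for s in root.iter("step")]
print(steps)  # the advanced-only step has been filtered out
```

The same attribute-driven selection is what lets one source topic serve both a beginner guide and an advanced reference.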
[Case Study] - Nuclear Power, DITA and FrameMaker: The How's and Why'sScott Abel
Presented by Thomas Aldous at Documentation and Training East 2008, October 29-November 1 in Burlington, MA.
This session is for anyone who is interested in learning how to manage a transition to specialized DITA, including Content Management Systems, editors, and publishing-server issues and resolutions. As an added bonus, we will also convert a Word document to specialized DITA and edit the content in FrameMaker 8. There will be a question and answer period at the end of the session for both technical and project management issues.
DSM (Domain-Specific Modeling) is a higher-level CASE process: a way to model data structures and logic in domain concepts, independent of programming languages and their syntax details. The final source code in a desired programming language is derived automatically from these high-level concept models by language-specific code generators. In the MetaEdit+ tool, the whole metamodeling process revolves around the meta-types represented together as GOPPRR.
This presentation provides helpful technical detail and context on how HPAC organized the business side to execute this web project. Capturing the business goals remains the critical first step; requirements provide an important starting point but must also retain the flexibility to deliver on the underlying business goals.
A presentation by Mike Jennings and Roger Howard for the Createasphere DAM conference 2011 in Burbank, CA.
The presentation discusses issues in metadata interoperability and tools to improve it -- mostly open-source or free tools.
5. Information Modeling from the Demo DITA* Specializations
Don Day, Contelligence Group LLC
* The Darwin Information Typing Architecture, an OASIS XML markup standard
6. Lead-up: High Octane Content
• Adobe TechComm Central blog post:
http://blogs.adobe.com/techcomm/2014/06/high-octane-documents-june-12-dita-model-webinar.html
Imagine a Content Octane Rating that indicates whether content has the metadata and structural refinement necessary to keep the business engine running smoothly under load.
• 85: Unleaded; conventional text file
• 87: Use of basic styling markers (HTML or Markdown)
• 89: Use of semantic phrase markup (var, cite, kbd, code, etc.)
• 90: Use of complex data models (e.g., structures for sections)
• 91: Premium! Supports interaction with rules-driven processing
• What is the Content Octane Rating (COR) of your documents?
• Note also the formalized rating system, the DITA Maturity Model
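The octane metaphor can even be scored mechanically as a thought experiment. Here is a toy heuristic, entirely invented for illustration and not part of the DITA Maturity Model or any real tool, that grades a snippet by the kind of markup it finds:

```python
import re

# Semantic phrase elements from the slide's 89-octane tier.
SEMANTIC_TAGS = {"var", "cite", "kbd", "code"}

def content_octane(text):
    """Toy Content Octane Rating (invented heuristic): plain text 85,
    basic styling markup 87, semantic phrase markup 89. The higher
    grades (90, 91) would need real model analysis, not tag-spotting."""
    tags = set(re.findall(r"</?([a-zA-Z][\w-]*)", text))
    if not tags:
        return 85
    if tags & SEMANTIC_TAGS:
        return 89
    return 87

print(content_octane("just plain text"))                      # 85
print(content_octane("<p><b>bold</b> styling only</p>"))      # 87
print(content_octane("<p>type <kbd>ls -l</kbd> to list</p>")) # 89
```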
7. About This Presentation
There is value in having structured information.
How to get started? We’ll cover:
1. High-level goals of an Information Model
2. Comparative overview of some sample designs from the DITA community:
• What were they thinking, good or bad?
• How would I organize and structure my own content?
3. A summary of a design approach you can apply to your content
8. 1. Goals of Information Models
• “An Information Model is a set of principles that define how you intend to structure the information you develop.”
-- JoAnn Hackos, CIDM Newsletter, Feb. 2010
• It is a contract between the document and the outside world:
• For querying into the document (not just full-text search)
• For processing the content in ways that support the business
• For publishing the content as its readers need or prefer
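The "contract" view is easy to demonstrate: once content is structured, you can query into it rather than merely search its text. A minimal sketch with Python's standard ElementTree, using an invented message catalog (the element names here are made up, not the msgref plugin's actual markup):

```python
import xml.etree.ElementTree as ET

# Invented message catalog for illustration.
DOC = """
<messages>
  <message id="E1001" severity="error">
    <msgtext>Disk full</msgtext>
    <action>Free up space and retry.</action>
  </message>
  <message id="W2001" severity="warning">
    <msgtext>Low memory</msgtext>
    <action>Close unused applications.</action>
  </message>
</messages>
"""

root = ET.fromstring(DOC)

# Structured query: every error-severity message with its recovery
# action -- a precise question that plain full-text search for the
# word "error" could not answer reliably.
errors = root.findall(".//message[@severity='error']")
for m in errors:
    print(m.get("id"), "->", m.findtext("action"))
```

The same structure also serves the other two clauses of the contract: a build process can route messages by severity, and a publisher can render the fields however each audience prefers.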
9. A Model Is:
A representation of the underlying Information Architecture. It helps:
• Builders (authors, tool vendors) to create conforming instances of the model
• Occupants (readers, publishing tools) to navigate and get best use of those facilities.
Photo credit: Cushing Memorial Library and Archives, Texas A&M / Foter / Creative Commons Attribution 2.0 Generic (CC BY 2.0)
10. An Information Model Promotes:
• Consistency of writing style
• Readers can anticipate where they want to look
• Separation of formatting concerns from the model itself
• Useful data types for processing:
• Semantic intent for search relevance
• Structure to indicate scope of relevant content
• Association of business rules to the content:
• Management of translation process
• Automation of workflow and QA controls
• Automation of backup, version control
• Ability to share interoperable content with business partners, OEMs, and customers, as needed
12. 2. Comparative Review
• Available Cases:
• XML Applications and Initiatives (at last count, 594!)
• http://xml.coverpages.org/xmlApplications.html
• DITA Open Toolkit plugins
• Various locations, but manageable number
• We will go with DITA OT-compatible designs
• Methodology (CSI model):
• The Lineup
• Psychological Profile (What were they thinking?)
• Motive (What were they trying to accomplish?)
• Modus Operandi (How did they do it?)
• Applicable Charges (What can we learn from mistakes and wins?)
14. DITA Open Toolkit Plugins
• For this particular lineup (a spectrum of quality):
• Music: https://github.com/dita-ot/ext-plugins/tree/master/music
• MsgRef: http://sourceforge.net/projects/dita-ot/files/Plug-in_message%20specialization/
• Faq: https://github.com/dita-ot/ext-plugins/tree/master/faq
• eNote: https://github.com/dita-ot/ext-plugins/tree/master/enote
• Known plugin repositories (some duplicates):
• https://github.com/dita-ot/ext-plugins (models, extensions)
• https://github.com/robander/metadita (extension points)
• http://sourceforge.net/projects/dita-ot/files/ (stable releases)
• https://groups.yahoo.com/neo/groups/dita-users/files/Demos/
15. Music plugin
Line of business: Personal demo by Robert Anderson, DITA OT lead
Apparent business driver: Reduce Robert’s time spent teaching plugin concepts; exemplar for plugin authors (DTDs and processing hooks); enable greater DITA-OT uptake
Design methodology: Model a typical “collector’s database” (portfolio)
Use of typed data: Sorting CDs/songs by categories and types; extends <simpletable> as a relational database
Usability: Obvious, meaningful element names
Utility: For CD/song collections, mainly of personal interest; as a teaching tool, highly useful
Compelling virtues: Well documented; complete application with multiple outputs and even some editor support
Odious flaws: None
https://github.com/dita-ot/ext-plugins/tree/master/music
16. Music DTD fragment

<!-- LONG NAME: Music Collection -->
<!ELEMENT songCollection (%title;, (%titlealts;)?, (%shortdesc; | %abstract;)?,
                          (%prolog;)?, (%songBody;)?, (%related-links;)?,
                          (%song-info-types;)* ) >

<!-- LONG NAME: Music Body -->
<!ELEMENT songBody ((%section; | %simpletable; | %songList;)* ) >

<!-- LONG NAME: -->
<!ELEMENT songList ((%songRow;)+) >

<!-- LONG NAME: -->
<!ELEMENT songRow ((%song;)?, (%album;)?, (%artist;)?, (%genre;)?,
                   (%rating;)?, (%count;)?, (%playdate;)?) >

<!-- LONG NAME: -->
<!ELEMENT song (%ph.cnt;)* >

<!-- LONG NAME: -->
<!ELEMENT album (%ph.cnt;)* >

<!-- LONG NAME: -->
<!ELEMENT artist (%ph.cnt;)* >

<!-- LONG NAME: -->
<!ELEMENT genre (%ph.cnt;)* >

<!-- LONG NAME: -->
<!ELEMENT count (%ph.cnt;)* >

<!-- LONG NAME: -->
<!ELEMENT playdate (%ph.cnt;)* >
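To see what this model buys you in practice, here is a tiny conforming instance (song data invented for illustration) queried with Python's standard ElementTree. Because each field is its own element, the "collector's database" operations such as sorting are structural rather than textual:

```python
import xml.etree.ElementTree as ET

# Invented instance of the songCollection model above.
COLLECTION = """
<songCollection>
  <title>My CDs</title>
  <songBody>
    <songList>
      <songRow><song>Hey Jude</song><artist>The Beatles</artist><genre>Rock</genre></songRow>
      <songRow><song>So What</song><artist>Miles Davis</artist><genre>Jazz</genre></songRow>
    </songList>
  </songBody>
</songCollection>
"""

root = ET.fromstring(COLLECTION)
rows = root.findall(".//songRow")

# Each row is strongly fielded (song, artist, genre), so sorting by
# genre is a simple structural operation -- the "relational database"
# behaviour the assessment table attributes to this design.
by_genre = sorted(rows, key=lambda r: r.findtext("genre"))
print([r.findtext("song") for r in by_genre])
```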
18. Msgref plugin
Line of business: Software company (but could be hardware codes)
Apparent business driver: Single source for content that appears in both product interfaces and in documentation (to lower translation redundancy, for example)
Design methodology: Represent the Java Resource Bundle structure
Use of typed data: Deliberate, strongly fielded (see the msgID “title”)
Usability: Abbreviated element names (probably necessary due to wordiness of the domain, but an NLS issue); difficult to write without a fielded editing tool
Utility: Very good fit for the designed purpose (hands-off reuse of message strings)
Compelling virtues: Natural use of a “message” infotype and fields
Odious flaws: Development costs for authors and tools interfaces
http://sourceforge.net/projects/dita-ot/files/Plug-in_message%20specialization/
21. FAQ plugin
Line of business: Support organizations; call centers
Apparent business driver: Capture resolved issues as new best-practice responses for subsequent problem calls
Design methodology: Assess the structure of conventional FAQs on the Web; model the design as a DITA specialization
Use of typed data: Rich information type (top-down) and categories; some internal semantic terms as well
Usability: Functional and obvious; could be extended as needed
Utility: The authoring problem it addresses is already solved by knowledge base applications; better suited as a “delivery aggregator”
Compelling virtues: Simple, clear information model
Odious flaws: None; could actually be used for “DITA on the Web”
https://github.com/dita-ot/ext-plugins/tree/master/faq
24. Enote plugin
Line of business: Mimics existing email tools; demonstrates using content structures for header metadata
Apparent business driver: Demo only; not in response to a business need
Design methodology: Demonstrate “XML data islands” within standard note structures
Use of typed data: Yes, for the header data islands
Usability: Good to see how content can be used for data; to some extent, this need is handled by DITA 1.2+
Utility: Not a real application
Compelling virtues: Good teaching tool (like a car engine cut in half)
Odious flaws: No longer a best practice for embedded data; use the new <data> element
https://github.com/dita-ot/ext-plugins/tree/master/enote
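The "data island" pattern assessed above, metadata carried as structured content inside the note, is easy to visualise in code. A minimal sketch with Python's stdlib; the element names here are invented for illustration (and, per the assessment, DITA 1.2+ would carry such metadata in <data> elements instead):

```python
import xml.etree.ElementTree as ET

# Invented enote-style document: the header is an "XML data island"
# whose content doubles as machine-readable data.
ENOTE = """
<enote>
  <noteHeader>
    <from>alice@example.com</from>
    <to>bob@example.com</to>
    <sent>2014-06-12</sent>
  </noteHeader>
  <noteBody>Meeting moved to 3pm.</noteBody>
</enote>
"""

root = ET.fromstring(ENOTE)

# Lift the header island out as a plain dict -- no screen-scraping of
# the note body required, because the fields are typed elements.
header = {field.tag: field.text for field in root.find("noteHeader")}
print(header["from"], "->", header["to"])
```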
27. 3. A Design Approach for DITA
1. Determine the business imperative
2. Identify stakeholders
3. Get sponsorship and team
4. Analysis & design:
• Top-down: Identify information types and content structures
• Bottom-up: Identify keywords and data types
• Find a good-enough depth of concerns (Best is enemy of good)
• Test usability of names (elements, attributes, value keywords)
• Test usability of design in an actual XML editor
• Test publishing/processing/search effectiveness
• Document early; capture lessons often
5. Report up
6. “Make it so, Number One!”
28. On your own:
Smaller project ideas
• Recipes
• Meeting minutes
• Database for collections (action figures, cameras, stamps, etc.)
• APIs
• Unix-style “man pages”
• Trading cards, baseball or Pokémon style
• Neighborhood newsletter/web site
• “Kleine Kinder, kleine Sorgen, große Kinder, große Sorgen.“ (“Small children, small worries; big children, big worries.”)
29. On your own:
New or reused?
• Port an existing design to your framework (for example, apply this design to a DITA framework: http://www.happy-monkey.net/recipes/)
• Represent an existing process in the model (basically what the enote demo did)
• Port an existing design to your framework, then augment it with your process requirements
30. On your own:
Considerations
• Ease of authoring
• Clear distinction of “things” vs “properties”
• Naming: clarity vs verbosity
• Balance between precision and usability:
• Avoid needing to parse key data in your processor, e.g. <date>June 12 2014</date>, which is ambiguous for European readers
• On the other hand, avoid too much detail:
<sentence>
<word>This</word> <word>is</word> <word>just</word>
<word>wrong!</word>
</sentence>
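The date example above is worth making concrete: an untyped display string forces every downstream processor to guess, whereas carrying the machine-readable value in an attribute keeps display and data separate. Using an ISO 8601 value attribute is a common convention assumed here for illustration, not something the slide or DITA prescribes:

```python
import xml.etree.ElementTree as ET
from datetime import date

# Display text stays human-friendly; the value attribute carries the
# unambiguous machine form (ISO 8601, assumed convention).
el = ET.fromstring('<date value="2014-06-12">June 12 2014</date>')

parsed = date.fromisoformat(el.get("value"))
print(parsed.year, parsed.month, parsed.day)
```

No processor ever has to decide whether "June 12 2014" means 12 June or something else; only the publishing layer touches the display string.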
31. Here be Dragons!
• How will your chunks be used? Each new process represents a
new “application context” for the collection.
• What business rules need to be supported by the process? Are
they part of the application-level information model?
• Roll your own or involve a consultant?
• What are the costs of support and maintenance?
• What are the costs of training and getting up to speed?
32. Wrapping up
• Skills you may want to learn:
• UML or “Data Modeling 101”
• XML schema design
• Editor configuration (EDDs for FrameMaker)
• Web forms for simple fielded interfaces
• Where to find outside help
• https://groups.yahoo.com/neo/groups/dita-users/info
• LinkedIn XML- and DITA-related groups
• Support lists for the authoring and CMS tools you have