The document discusses why technical communicators should care about metadata. It notes that metadata helps users find the right information through search and filtering. When structured properly through topics and relationships, metadata can help manage content, create conditional publications, and interface with machines. The presentation provides tips for technical communicators such as defining style guides and processes for metadata, structuring information, and letting publishing engines utilize metadata to their full potential.
Slides from my Metadata Workshop at Content Strategy Applied 2012. The session included several hands-on exercises, which is where much of the most interesting conversation took place.
18. DITA Links Data
Phrases and variables (reused)
Images and icons (reused)
Segments injected during publishing (captions, links, signal words, …)
+ Filters and relationship tables
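The reuse and injected-link mechanisms on this slide can be sketched in DITA markup. A minimal sketch — the file names, IDs, and hrefs here are invented for illustration:

```xml
<!-- A reused phrase, pulled in by reference (conref);
     target file and element IDs are hypothetical. -->
<p conref="shared/warnings.dita#warnings/hot-surface"/>

<!-- A relationship table in a DITA map: links between the topics in
     each row's cells are injected automatically during publishing. -->
<reltable>
  <relrow>
    <relcell><topicref href="cooling-concept.dita"/></relcell>
    <relcell><topicref href="clean-filter-task.dita"/></relcell>
  </relrow>
</reltable>
```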
20. What Can Metadata DO with DITA?
Help you manage content
– status, keywords, author, reviewer, proprietor
Create conditional publications
– conditional publishing (audience, product, platform…)
Communicate with translators
– translate="no", commenting, language…
Describe linked information
– parents, children, peers… in context
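Most of the management metadata listed above lives in a topic's prolog. A minimal sketch, with invented values (the element names are standard DITA):

```xml
<concept id="cooling-system">
  <title>Cooling system</title>
  <prolog>
    <author>A. Writer</author>
    <critdates>
      <created date="2012-02-01"/>
      <revised modified="2012-03-15"/>
    </critdates>
    <metadata>
      <audience experiencelevel="novice"/>
      <keywords>
        <keyword>cooling</keyword>
        <keyword>maintenance</keyword>
      </keywords>
      <!-- Custom management metadata, e.g. workflow status -->
      <othermeta name="status" content="reviewed"/>
    </metadata>
  </prolog>
  <conbody><p>…</p></conbody>
</concept>
```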
21. Structure the Information
Search in the work environments: categories, keywords, version
Expose what’s here: faceted search
Link connected information: inheritance, status…
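The idea behind faceted search is simply filtering on several metadata fields at once. A minimal sketch in Python — the records and facet names are invented for illustration:

```python
# Each record stands in for one topic's metadata.
topics = [
    {"title": "Cooling system", "category": "concept", "version": "2.0"},
    {"title": "Clean the filter", "category": "task", "version": "2.0"},
    {"title": "Legacy pump", "category": "concept", "version": "1.0"},
]

def facet_search(records, **facets):
    """Keep records whose metadata matches every requested facet value."""
    return [r for r in records if all(r.get(k) == v for k, v in facets.items())]

# Narrow by two facets at once: category AND version.
print([t["title"] for t in facet_search(topics, category="concept", version="2.0")])
# → ['Cooling system']
```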
22. Manage the Documentation Library
Manage projects: % completed, authored topics, reviewed topics…
Plan updates: creation information, review
Control the production: author, reviewer, translator…
23. Adapt the Publications
Reuse fragments, variables, and media resources
Create conditional publications:
audience="novice"
Translate and send for translation (language, indexing, reading direction):
xml:lang="ja-JP"
Control links, inbound and outbound:
linking="targetonly"
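Conditional publishing pairs profiled content in the topic with a DITAVAL filter file. A minimal sketch, with invented values:

```xml
<!-- In the topic: two variants of the same step, profiled by audience. -->
<p audience="novice">Start with the guided setup wizard.</p>
<p audience="expert">Edit the configuration file directly.</p>

<!-- novice.ditaval: keep the novice variant, drop the expert one. -->
<val>
  <prop att="audience" val="novice" action="include"/>
  <prop att="audience" val="expert" action="exclude"/>
</val>
```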
24. AND, on the Machine Side…
Automate processes in the various environments (act on the metadata)
Optimize the search and discovery of elements, as well as reuse, consistency, and usability.
25. Wrapping Up…
Why should technical communicators care about metadata?
• They shape your work environment
• They help users access the right information
• They enable better control of content within the development team
• They interface with machines
27. I. Check the Environments
Are the metadata sufficient outside of your working environments?
> Are the metadata sufficient in and out of context (included metadata)?
> Are the metadata carried through to the final output formats (e.g., the Web)?
28. II. Define Your Style Guide and Process
Choose the metadata to use…
Choose your values
– keywords, versioning…
Adapt your methods
– What is the impact of using “New” or “Changed”?
29. III. Structure the Information
Think BIG…
What is the real scope of the information in the document library? Today? Tomorrow?
What is the library scope?
Do you need in-house metadata?
30. IV. Let the Publishing Engine Do its Job
Sample for the Web
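The Web sample on the original slide is a screenshot; as an illustration of what a publishing engine can emit from prolog metadata, the head of a generated HTML page might look like this (values carried over from the earlier invented example):

```html
<head>
  <title>Cooling system</title>
  <meta name="author" content="A. Writer"/>
  <meta name="keywords" content="cooling, maintenance"/>
  <meta name="DC.Language" content="en-US"/>
</head>
```

The point of the slide stands: the engine, not the author, should map topic metadata onto each output format.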