From the start of my career, I have tried to explain technology to people who may be less trained in it than most of the geeks who run around trying to sell their stuff. With DITA, the concepts can be explained by relating them to real-world examples.
This presentation was delivered at DITA North America in San Jose and at tcworld in Wiesbaden.
Introduction to XML and Structured Authoring • Overview of DITA • Topics: The Basic Information Types • Maps: Assembling Topics into Deliverables • Common elements and attributes • Metadata • Examples and exercises
Have you gone through articles and presentations on the web and come away with only a half-baked understanding of the Darwin Information Typing Architecture (DITA)?
Refer to my DITA Quick Start presentation for the 2007 STC India Conference to learn to evaluate, plan and start implementing DITA.
In this presentation, you will learn about the following:
o Structured authoring and XML
o Key DITA concepts: topics, maps, specialization
o DITA architecture and content model
o Authoring in topics
o Organizing content using DITA maps
o Creating relationship tables
o Conditional text and reuse in DITA
o Metadata support in DITA
o DITA tools, standards and processes
o Publishing with the DITA Open Toolkit
Last updated on Dec 12, 2014
The Sightly template language, shipped with Adobe Experience Manager 6.0, greatly simplifies the component development workflow by allowing front-end developers to edit components directly themselves.
Learn about the main features of that template language, and about the tools available to make project development work more efficient.
Free Advanced JPA / Hibernate training by Ippon, 2014Ippon
ORMs are convenient, but they can quickly become complex or subtle. JPA lets you model the data-access layer quickly and with undeniable ease. However, it is best to understand how it works in order to avoid some unfortunate anti-patterns.
The Advanced JPA training offered by Ippon details the technical aspects and takes you further in understanding and mastering the subject. Enriched with extensive hands-on labs when delivered by Ippon's trainers, it lets you absorb the subtleties in three days and provides the tools to build a high-quality, performant, and maintainable data-access layer.
Modeling techniques, cache management and its subtleties (L1, L2), transaction mechanisms, the query language... All of these aspects, and many more, are detailed and illustrated to give you the keys for your next projects.
Discover the slides of this training today, made available as part of OpenFormation.
Demystifying Data Warehousing as a Service (GLOC 2019)Kent Graziano
Extended deck from the 2019 GLOC event in Cleveland. It discusses what a DWaaS is, the top 10 features of Snowflake that deliver it, and a checklist of questions to ask when choosing a cloud-based data warehouse.
= Manage ontologies and use semantic data in SharePoint with GRASP =
GRASP ("Graph for SharePoint") is the SharePoint solution that introduces ontologies and semantic data into SharePoint. Ontologies are uploaded and managed directly in SharePoint. This fosters collaboration among ontologists and ensures preservation and compliance with your company's ECM strategy.
= SPARQL queries in SharePoint =
Ontologies are uploaded into an attached triple store (RDF store) directly from within SharePoint. With the standard query language SPARQL you can query them and retrieve their data. Additionally, any semantic data that is accessible via a SPARQL endpoint or triple store can be processed in SharePoint. SPARQL query results are available in SharePoint web parts and SharePoint lists in order to generate insights that are important for your SharePoint users and workflows.
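As an illustration only (not the GRASP API), the core of SPARQL-style retrieval is matching triple patterns against a store. This hypothetical Python sketch shows the idea with an in-memory list of triples; all data names are invented.

```python
# Triples as (subject, predicate, object); a SPARQL engine matches
# patterns like the one below against the store. Sample data is hypothetical.
triples = [
    ("ex:Aspirin", "rdf:type", "ex:Drug"),
    ("ex:Aspirin", "ex:treats", "ex:Headache"),
    ("ex:Ibuprofen", "rdf:type", "ex:Drug"),
]

def match(store, s=None, p=None, o=None):
    """Return triples matching the pattern; None acts like a SPARQL variable."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?s WHERE { ?s rdf:type ex:Drug }
drugs = [s for s, _, _ in match(triples, p="rdf:type", o="ex:Drug")]
print(drugs)  # ['ex:Aspirin', 'ex:Ibuprofen']
```

A real triple store evaluates such patterns with indexes and joins between patterns, but the variable-binding idea is the same.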
= Applications with GRASP =
GRASP is optimized for companies that pursue a SharePoint-based strategy and that want to extend this strategy to cover their ontologies or that want to utilize semantic data to improve business processes. Typical industries are: Pharma, Insurance, Manufacturing.
*Central ontology life-cycle management in SharePoint.
*Controlled and standardized user-access, back-up and recovery strategies for ontologies.
*Semantic data from ontologies and SPARQL endpoints becomes accessible to SharePoint users and workflows (requires Triplestore basic, OpenLink Virtuoso, or TopBraid).
Talend Open Studio Fundamentals #1: Workspaces, Jobs, Metadata and Trips & Tr...Gabriele Baldassarre
An introduction to Talend Open Studio for Data Integration, focusing on job architecture, metadata, workspaces, connection types, and commonly used components. Rich Tips & Tricks sections.
Smartsheet’s Transition to Snowflake and Databricks: The Why and Immediate Im...Databricks
Join this session to hear why Smartsheet decided to transition from their entirely SQL-based system to Snowflake and Databricks, and learn how that transition has made an immediate impact on their team, company and customer experience through enabling faster, informed data decisions.
AEM Best Practices for Component DevelopmentGabriel Walt
This presentation describes how to easily get started with an efficient development workflow with Adobe Experience Manager 6.1.
The tools and technologies presented are:
* Project Archetype – https://github.com/Adobe-Marketing-Cloud/aem-project-archetype
* AEM Eclipse Extension – https://docs.adobe.com/docs/en/dev-tools/aem-eclipse.html
* AEM Brackets Extension – https://docs.adobe.com/docs/en/dev-tools/aem-brackets.html
* Sightly Template Language – http://www.slideshare.net/GabrielWalt/component-development
* Sightly REPL Tool – https://github.com/Adobe-Marketing-Cloud/aem-sightly-repl
* Sightly TodoMVC Example – https://github.com/Adobe-Marketing-Cloud/aem-sightly-sample-todomvc
Databricks + Snowflake: Catalyzing Data and AI InitiativesDatabricks
Combining Databricks, the unified analytics platform, with Snowflake, the data warehouse built for the cloud, is a powerful combo.
Databricks offers the ability to process large amounts of data reliably, including developing scalable AI projects. Snowflake offers the elasticity of a cloud-based data warehouse that centralizes access to data. Databricks brings to the table the unparalleled utility of a mature distributed big data processing and AI-enabled tool, capable of integrating with nearly every technology, from message queues (e.g. Kafka) to databases (e.g. Snowflake) to object stores (e.g. S3) and AI tools (e.g. TensorFlow).
Key Takeaways:
How Databricks & Snowflake work;
Why they're so powerful;
How Databricks + Snowflake symbiotically catalyze analytics and AI initiatives
Relational databases are perhaps the most commonly used data management systems. In relational databases, data is modeled as a collection of disparate tables. In order to unify the data within these tables, a join operation is used. This operation is expensive as the amount of data grows. For information retrieval operations that do not make use of extensive joins, relational databases are an excellent tool. However, when an excessive amount of joins are required, the relational database model breaks down. In contrast, graph databases maintain one single data structure---a graph. A graph contains a set of vertices (i.e. nodes, dots) and a set of edges (i.e. links, lines). These elements make direct reference to one another, and as such, there is no notion of a join operation. The direct references between graph elements make the joining of data explicit within the structure of the graph. The benefit of this model is that traversing (i.e. moving between the elements of a graph in an intelligent, direct manner) is very efficient and yields a style of problem-solving called the graph traversal pattern. This session will discuss graph databases, the graph traversal programming pattern, and their use in solving real-world problems.
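A minimal plain-Python sketch of the traversal idea described above (the graph and names are hypothetical): because vertices reference their neighbors directly, a multi-hop query is a series of pointer hops rather than a join.

```python
# Adjacency list: each vertex maps directly to the vertices it links to,
# so there is no join - "who do my friends know?" is two direct hops.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave"],
    "dave": [],
}

def traverse(graph, start, depth):
    """Collect all vertices reachable from `start` in exactly `depth` hops."""
    frontier = {start}
    for _ in range(depth):
        frontier = {nbr for v in frontier for nbr in graph[v]}
    return frontier

# Friends-of-friends of alice, found by two hops instead of a table join.
print(traverse(graph, "alice", 2))  # {'dave'}
```

In a relational schema the same question would require joining a friendship table with itself, which grows expensive as the data grows; here the cost is proportional only to the edges actually touched.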
How do you scale CSS for millions of visitors or thousands of pages? The slides from Nicole's presentation at Web Directions North in Denver will show you how to use Object Oriented CSS to write fast, maintainable, standards-based front end code. Adds much needed predictability to CSS so that even beginners can participate in writing beautiful, standards-compliant, fast websites.
Talend Big Data Tutorial | Talend DI and Big Data Certification | Talend Onli...Edureka!
( Talend Training: https://www.edureka.co/talend-for-big-data )
This Edureka video on the Talend Big Data tutorial will help you understand the basic concepts of Talend and get familiar with Talend Open Studio for Big Data, open-source software provided by Talend to communicate easily with Big Data technologies such as HDFS, Hive, and Pig.
This video helps you learn the following topics:
1. Big Data
2. Talend With Big Data
3. TOS For Big Data
4. TOS Installation
5. Big Data Components In Talend
6. First Job In Talend
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
Using Spark Streaming and NiFi for the next generation of ETL in the enterpriseDataWorks Summit
In recent years, big data has moved from batch processing to stream-based processing, since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today, and the same trend that occurred in the batch-based big data processing realm has taken place in the streaming world: nearly every streaming framework now supports higher-level relational operations.
On paper, combining Apache NiFi, Kafka, and Spark Streaming provides a compelling architecture option for building your next-generation ETL data pipeline in near real time. But what does it look like to deploy and operationalize this in an enterprise production environment?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story?
We discuss the drivers and expected benefits of changing the existing event processing systems. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
Data Wrangling with PySpark for Data Scientists Who Know Pandas with Andrew RayDatabricks
Data scientists spend more time wrangling data than making models. Traditional tools like Pandas provide a very powerful data manipulation toolset. Transitioning to big data tools like PySpark allows one to work with much larger datasets, but can come at the cost of productivity.
In this session, learn about data wrangling in PySpark from the perspective of an experienced Pandas user. Topics will include best practices, common pitfalls, performance considerations, and debugging.
Talend Open Studio for Big Data | Talend Open Studio Tutorial | Talend Online...Edureka!
( Talend Training: https://www.edureka.co/talend-for-big-data )
This Edureka video on Talend Open Studio will guide you through the complete GUI of Talend Open Studio and build a strong foundation in Talend. This video helps you learn the following topics:
1. What is Talend Open Studio?
2. Advantages Of TOS
3. Downloading TOS
4. TOS GUI
5. Demo
A simple explanation of XSLT - what it is, what it does, and how it can help you create well-structured content. No tutorial, just the basic concepts.
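To make the basic concept concrete: an XSLT stylesheet walks an input XML tree and emits a new tree. Python's standard library has no XSLT processor, so this sketch uses xml.etree as a stand-in for the same walk-and-emit pattern; the sample elements are hypothetical.

```python
# Stand-in for an XSLT template: for each <topic> in the input tree,
# emit an <li> carrying its title - structure in, different structure out.
import xml.etree.ElementTree as ET

src = ET.fromstring(
    "<topics><topic title='Intro'/><topic title='Maps'/></topics>"
)

out = ET.Element("ul")
for topic in src.findall("topic"):
    li = ET.SubElement(out, "li")
    li.text = topic.get("title")

print(ET.tostring(out, encoding="unicode"))
# <ul><li>Intro</li><li>Maps</li></ul>
```

An actual XSLT stylesheet declares this mapping with `<xsl:template match="topic">` rules instead of imperative loops, but the input-tree-to-output-tree transformation is the same idea.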
Faster than Agile - Proposal for Lavacon 2015Jang F.M. Graat
This is my second proposal for the LavaCon conference to be held in New Orleans in October 2015. The first version of this talk will be delivered at DITA/CMS NA in Chicago in April.
Trailer for a presentation at the Lavacon conference, October 18-21 in New Orleans. If you would like to see this presentation, go to http://list.ly/list/N4W-lavacon-2015-call-for-speakers, find these slides and vote for them. Those with the most votes get to present their materials.
This short presentation shows how a homegrown flowchart editor, running in a web browser, can be used to design DITA tasks. Instead of painstakingly editing the steps and choice tables, linking them by cross-references, just draw a flowchart and connect its building blocks. Let the software transform your design into a valid and correct DITA task. Of course, you can also skip the DITA and go for interactive HTML5 plus JavaScript.
From user assistance to user guidance: Information appsJang F.M. Graat
Minimalism in technical documentation ultimately leads to using interactive procedures. The advantages of moving control from the user's head to a connected device are countless, but there has to be an easier way of building these so-called information apps (or procedure apps). This presentation shows how the implicit flowchart that designers have in their minds when writing a procedure can be made explicit in a graphical interface, which then produces the HTML5 code. The result is easy design and easy error correction, removing the reasons not to use interactive procedures, rather than relying on service manuals and trained service staff, on a work floor where safety-critical procedures are required and stress must be avoided.
Minimalism in technical documentation ultimately leads to using interactive procedures. This presentation shows why that is the case, and what well-designed interactive procedures can do to bring down the cost of training, the risk in relying on the goodwill and good memory of service staff and the unreliability of debriefing at the end of a tiring working day.
Creating links in technical content greatly supports the user experience, but as technical content evolves, such links are getting harder to handle. Creating static links that are resolved while building the output will not work out in the end, as content creation is moving into the agile world. This is where a new paradigm is required, which enables authors to create semantically defined links, which will be resolved by querying the database of available topics, during runtime. This revolutionises the way we think about cross-references and hyperlinks.
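A hypothetical sketch of that paradigm: the link stores a semantic query instead of a fixed target, and the query is resolved against topic metadata at runtime. All names and fields below are invented for illustration.

```python
# Each topic carries metadata; a "semantic link" is a query over it,
# resolved when the content is served rather than when it is built.
topics = [
    {"id": "t1", "type": "task", "product": "widget", "subject": "install"},
    {"id": "t2", "type": "concept", "product": "widget", "subject": "install"},
    {"id": "t3", "type": "task", "product": "gadget", "subject": "install"},
]

def resolve_link(query, topics):
    """Return the ids of all topics whose metadata satisfies the query."""
    return [t["id"] for t in topics
            if all(t.get(k) == v for k, v in query.items())]

# "Link to the installation task for this product", resolved at runtime:
print(resolve_link({"type": "task", "product": "widget"}, topics))  # ['t1']
```

Because resolution happens at runtime, adding or retiring a topic changes the link targets without any author touching a cross-reference, which is what makes the approach fit agile content creation.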
Maximising the effect of progressive disclosureJang F.M. Graat
Minimalism in technical documentation states that we should only deliver the information that the user needs. But how can we know what each individual user already knows (and does not need)? The answer is: we cannot. And this is why we should use progressive disclosure techniques to optimize the help we offer our customers, so that each individual customer can decide for themselves whether, and on which aspects, more information is needed.
There is one important catch: implementing progressive disclosure can be a lot of work and too costly. The solution to this budgetary problem is to use a well-defined structure in your content (preferably DITA) and an XSLT that automatically adds the required hooks and handles (triggers and targets) to make progressive disclosure work.
Progressive Disclosure - Putting the User in ControlJang F.M. Graat
This 2-hour tutorial explains the basic principles of progressive disclosure and includes a shoot-out between two tools that offer various levels of support for implementing progressive disclosure in web-based help systems: Adobe RoboHelp and MadCap Flare.
XPath-based transformations in structured FrameMakerJang F.M. Graat
XSLT allows you to transform the structure of XML files into anything you need. As structured FrameMaker is not exactly XML but follows the same structured design, the capabilities of XSLT within the FrameMaker environment can be very useful. The FrameSLT plug-in produced by West Street Consulting offers this functionality at a very low price. This presentation gives an introduction to what the tool can do, and what it means to do transformations of structure in technical documents.
Publications in DITA are handled via DITA maps. Even with conditions and DITAVAL options, these are inherently static and bound to an old book-type paradigm. In this presentation I try to outline a new paradigm in which the disclosure of information is made truly dynamic, doing away with maps, or at least with a single top-level map that defines all content. Having a dynamic information disclosure layer in place may prepare our technical content for the fast-moving world of today (which will move even faster tomorrow).
Version control systems do not work well for agile content, as this type of content is not built and published as a whole. If parts are changed, links may require changing as well. The presentation tries to outline a new paradigm for handling change in agile content.
Advanced techniques for conversion to structured FrameMakerJang F.M. Graat
Having well-formatted content available in FrameMaker enables you to automatically convert that content to structured FrameMaker (and then possibly move it into XML). Automating the entire process is possible using a combination of preprocessing (with FrameMaker's built-in ExtendScript), smart conversion tables, and post-processing (using FrameSLT, a low-cost plug-in for FrameMaker, and some more ExtendScript). This tutorial outlines some of the tips and tricks that will get you started.
Create your own $35 CMS in Structured FrameMakerJang F.M. Graat
Content Management Systems for technical documentation can be expensive and do not provide a magical solution to all your reuse problems. Often, going through the exercise of building your own custom CMS helps you define your set of requirements, so you will be much better prepared to purchase exactly the right CMS a couple of years down the road. Building your own CMS based on structured FrameMaker is not a very difficult task and teaches you a lot about all the possible issues when starting with modular documentation and reuse.
DITA Specialization - How you do it, and why you should do it.Jang F.M. Graat
Specialization is DITA's unique selling point and thus the most important thing to happen in technical writing in recent decades. This tutorial shows why specialization matters, what exactly it means, and how to implement it successfully.
Only the user knows what the user does not yet know - Progressive DisclosureJang F.M. Graat
This presentation shows how minimalism in technical documentation ultimately leads to a technique called "progressive disclosure" (in German best rendered as "fortschreitende Offenlegung"). It explains why progressive disclosure matters and how it can be done by hand or automatically (provided the content is available in a sensibly structured form).
Changing the engine without stopping the rickshaw (Jang F.M. Graat)
This presentation is a reworked version of a joint presentation with a customer at TCWorld in Germany in 2012. It shows how the transition from unstructured documentation to the modern world of structured, XML-based, topic-oriented authoring can be made smoothly, without interfering with the publication chain. The presentation describes a project that was done using FrameMaker 10 with its built-in ExtendScript toolkit. It shows how the ability to mix unstructured and structured content, including a DITA-style conref mechanism, can keep the system running while the materials are converted and pushed into a repository for reuse one by one. This lowers the legacy-documentation hurdle that may keep companies from moving to modern authoring practices.
How I killed the webmaster - and got away with it (Jang F.M. Graat)
This presentation was delivered at the STC Summit 2005 in Seattle. It shows how I implemented a website for the TransAlpine Chapter without a webmaster having to do all the stupid work (uploading stuff, taking it down, etc.). We received a Pacesetter Award for the coolest website in all of STC.
How to become a trainer - and make lots of $$$ (Jang F.M. Graat)
This presentation was delivered at the STC Summit 2005 in Seattle. Jobs for technical authors were hard to find, and I tried to show people what you can do with your technical communication skills if you also know how to explain stuff to a live audience. Sorry that the gradient applied to the background does not show on SlideShare.
Getting your hands dirty - How tech authors may be able to survive in the mac... (Jang F.M. Graat)
This presentation was held at the STC Summit 2005 in Seattle. It shows how technical authors, hit by the offshoring of tech comms, can find plenty of work in the machinery business. After all, that business domain is less likely to be offshored, and there are many more small machinery companies than global software corporations.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
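To make the grid-simulation idea above concrete, here is a minimal sketch of the kind of computation a power-flow tool performs. This is not PowSyBl code (PowSyBl's own Python API is a different, much richer interface); it is a toy DC power flow on a hypothetical 3-bus network, with invented per-unit susceptances and injections, and bus 0 taken as the slack bus:

```python
import numpy as np

# Toy 3-bus DC power flow (illustration only; values are made up).
# Each line: (from_bus, to_bus, susceptance in per-unit).
lines = [(0, 1, 10.0), (0, 2, 10.0), (1, 2, 5.0)]
P = np.array([0.9, -0.5])  # net injections at buses 1 and 2 (p.u.); bus 0 is slack

# Build the nodal susceptance matrix B.
n = 3
B = np.zeros((n, n))
for f, t, b in lines:
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

# Solve the reduced system (slack row/column removed) for voltage angles.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P)

# Flow on each line is susceptance times the angle difference.
flows = {(f, t): b * (theta[f] - theta[t]) for f, t, b in lines}
print(flows)
```

A real tool such as PowSyBl layers full AC load flow, security analysis and sensitivity analysis on top of this basic principle, but the linear DC case above already shows how injections determine angles and angles determine line flows.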
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, the aspects they look for in a new TV, and their TV buying preferences.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
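As a concrete illustration of how one Object Calisthenics constraint ("wrap all primitives and strings") lines up with a tactical DDD pattern (the value object), here is a minimal, hypothetical Python sketch; the `Amount` type and its rules are invented for illustration and are not taken from the talk:

```python
from dataclasses import dataclass

# Wrapping a bare number in a small immutable value object gives the
# domain concept a home for its invariants and behaviour.
@dataclass(frozen=True)
class Amount:
    cents: int  # store money as integer cents to avoid float rounding

    def __post_init__(self):
        if self.cents < 0:
            raise ValueError("Amount cannot be negative")

    def add(self, other: "Amount") -> "Amount":
        return Amount(self.cents + other.cents)

total = Amount(1500).add(Amount(250))
print(total.cents)  # 1750
```

Passing `Amount` around instead of raw ints keeps validation in one place and makes the domain model read like the ubiquitous language, which is exactly the "mechanical" nudge the calisthenics rules provide.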
GridMate - End to end testing is a critical piece to ensure quality and avoid... (ThomasParaiso2)
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by Rik Marselis and me from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in the various parts of the DevOps infinity loop.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.