The NASA Earth Science Data and Information System (ESDIS) is migrating documentation for their data and products towards International Standards developed by ISO Technical Committee 211 (ISO/TC211). In order to do this effectively, NASA must understand and participate in the ISO process. This presentation was given at a NASA ISO Seminar during November 2012. It outlines the ISO standards process and describes some extensions to the ISO standards that are being proposed to address ESDIS requirements not addressed in the original standard.
We are interested in developing a standard method for writing ISO TC211 compliant metadata into HDF data files. This presentation shows some initial workflows for this using the HDF Product Designer.
This is an introductory presentation on accessing NASA HDF/HDF-EOS data for beginners. NASA distributes much of its Earth Science data in the HDF/HDF-EOS file formats, and new users often struggle to understand the format and use the NASA HDF/HDF-EOS data properly. This brief presentation helps new users understand the basic concepts of HDF/HDF-EOS and learn about the tools available for accessing the NASA data easily.
FDMEE Scripting - Cloud and On-Premises - It Ain't Groovy, But It's My Bread ... (Joseph Alaimo Jr)
So you've made the transition to FDMEE and now you want more. FDMEE scripting is what empowers you to do more with the application. Explore how to use FDMEE scripting to enhance and streamline your integrations and how to use Jython and the various APIs within the EPM stack to take your application to the next level. Have a hybrid FDMEE cloud deployment? That's cool, let's look at Groovy and EPMAutomate and see how FDMEE can function as your automation hub.
This session will be most useful for those with a solid FDMEE foundation and some familiarity with scripting languages.
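For readers who have not seen FDMEE scripting before, here is a minimal, hypothetical sketch of a Jython event script; the fdmAPI object and fdmContext map are supplied by FDMEE at runtime, and the specific log messages and location check are illustrative only, not taken from the session.

# BefImport.py - hypothetical FDMEE event script sketch (Jython).
# fdmAPI and fdmContext are injected by FDMEE; nothing here comes from the session itself.

# Log where we are in the load for easier debugging.
fdmAPI.logInfo("Starting import for location: " + fdmContext["LOCNAME"])
fdmAPI.logInfo("Period key: " + str(fdmContext["PERIODKEY"]))

# Example of a simple guard: skip custom processing for a test location.
if fdmContext["LOCNAME"] == "TEST_LOC":
    fdmAPI.logWarn("Test location detected; no custom processing applied.")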
A Glide, Skip or a Jump: Efficiently Stream Data into Your Medallion Architec... (HostedbyConfluent)
"The medallion architecture graduates raw data sitting in operational systems into a set of refined tables in a series of stages, ultimately processing data to serve analytics from gold tables. While there is a deep desire to build this architecture incrementally from streaming data sources like Kafka, it is very challenging with current technologies available on lakehouses; a lot of technologies can’t efficiently update records or efficiently process incremental data without recomputing all the data to serve low-latency tables. Apache Hudi is a transactional data lake platform with full mutability support, including streaming upserts, and provides a powerful incremental processing framework. Apache Hudi powers the largest transactional data lakes in the industry, differentiating on fast upserts and change streams to only process and serve the change records.
To further improve the upsert performance, Hudi now supports a new record-level index that deterministically maps the record key to the file location orders of magnitude faster. As a result, Hudi speeds up computationally expensive MERGE operations even more by avoiding full table scans. On the query side, Hudi now supports database-style change data capture with before, and after images to chain flow of inserts, updates and deletes change records from bronze to silver to gold tables.
In this talk, attendees will walk away with:
- The current challenges of building a medallion architecture at low-latency
- How the record index and incremental updates work with Apache Hudi
- How the new Hudi CDC feature unlocks incremental processing on the lake
- How you can efficiently build a medallion architecture by avoiding expensive operations"
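As a rough illustration of the incremental processing described in that abstract, here is a minimal PySpark sketch of a Hudi incremental read; the table path, commit timestamp, and column names are hypothetical, while the option keys are Hudi's standard Spark datasource options.

# Minimal sketch: read only the records that changed after a given commit
# from a Hudi table, instead of rescanning the whole table.
# Assumes a SparkSession ("spark") already configured with the Hudi bundle.
base_path = "s3://my-bucket/bronze/orders"   # hypothetical table location
begin_time = "20240101000000"                # hypothetical commit timestamp

changes = (
    spark.read.format("hudi")
    .option("hoodie.datasource.query.type", "incremental")
    .option("hoodie.datasource.read.begin.instanttime", begin_time)
    .load(base_path)
)

# Push just the changed rows into the next (silver) layer.
changes.createOrReplaceTempView("order_changes")
spark.sql("SELECT order_id, status, _hoodie_commit_time FROM order_changes").show()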
The Digital Object Identifier (DOI) is used for identifying intellectual property in the digital environment. The DOI is like a digital fingerprint: each article receives a unique one at birth, and it can be used to identify the article throughout its lifespan, no matter where it goes. A DOI should be interpreted as 'digital identifier of an object' rather than 'identifier of a digital object'. A DOI can be assigned to any object. In this workshop you will learn how to define a DOI, prepare metadata, and assign a DOI to a journal paper.
Modular Documentation, Joe Gelb, Techshoret 2009 (Suite Solutions)
Designing, building and maintaining a coherent content model is critical to proper planning, creation, management and delivery of documentation and training content. This is especially true when implementing a modular or topic-based XML standard such as DITA, SCORM and S1000D, and is essential for successfully facilitating content reuse, multi-purpose conditional publishing and user-driven content.
During this presentation we will review basic concepts and methods for implementing information architecture. We will then introduce an innovative, comprehensive methodology for information modeling and content development that employs recognized XML standards for representation and interchange of knowledge, such as Topic Maps and SKOS. In this way, semantic technologies designed for taxonomy and ontology development can be brought to bear for creating and managing technical documentation and training content, and ultimately impacting the usability and findability of technical information.
Living Documentation (NCrafts Paris 2015, DDDx London 2015, BDX.io 2015, Code... (Cyrille Martraire)
What if documentation was as fun as coding? Always up-to-date? And what if it could even improve your design? Reconsider how you invest in knowledge to accelerate delivery, with a touch of Domain-Driven Design.
For more, get the book on Leanpub: https://leanpub.com/livingdocumentation
Object Graph Mapping with Spring Data Neo4j 3 - Nicki Watt & Michael Hunger @... (Neo4j)
Nicki and Michael have recently been working together on the project to develop/upgrade the Spring Data Neo4j 3 (SDN) library to take advantage of some of the latest Neo4j 2.0 features. This talk takes a look at what can be expected of the new framework, and how it can be used to help model various different use cases with a simple Java domain model backed by a Neo4j database.
This presentation was provided by Ivan Salcedo of BSI during the NISO Live Connections event, XML for Standards Publishers, held on October 9, 2017 in Geneva, Switzerland.
High Performance JSON Search and Relational Faceted Browsing with Lucene (lucenerevolution)
Presented by Renaud Delbru, Co-Founder, SindiceTech
In this presentation, we will discuss how Lucene and Solr can be used for very efficient search of tree-shaped schemaless documents, e.g. JSON or XML, and can then be made to address both graph and relational data search. We will discuss the capabilities of SIREn, a Lucene/Solr plugin we have developed to deal with huge collections of tree-shaped schemaless documents, and how SIREn is built using Lucene extensibility capabilities (Analysis, Codec, Flexible Query Parser). We will compare it with Lucene's BlockJoin Query API in nested schemaless data-intensive scenarios. We will then go through use cases that show how relational or graph data can be turned into JSON documents using Hadoop and Pig, and how this can be used in conjunction with SIREn to create relational faceting systems with unprecedented performance. Take-away lessons from this session will be awareness about using Lucene/Solr and Hadoop for relational and graph data search, and that it is now possible to have relational faceted browsers with sub-second response times on commodity hardware.
DEVNET-2002 Coding 201: Coding Skills 201: Going Further with REST and Python... (Cisco DevNet)
Are you ready to dive deeper into using Python and REST? This class continues the Coding Skills 101 topics and goes deeper into techniques for parsing JSON, and debugging.
Microsoft Graph provides a unified programmability model that you can use to build apps for organizations and consumers that interact with the data of millions of users. You can use the Microsoft Graph REST APIs to access data in Azure Active Directory, Office 365 services, Enterprise Mobility and Security services, Windows 10 services, Dynamics 365, and more. We will spend the next 30 minutes exploring what you can do with the Microsoft Graph.
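A minimal sketch of calling one of those REST endpoints from Python: the /v1.0/me endpoint is a real Microsoft Graph endpoint, while acquiring the OAuth access token is assumed to have happened already (for example via the MSAL library) and the token value shown is a placeholder.

import requests

# Assume an OAuth access token has already been acquired (e.g. with msal);
# the value here is a placeholder.
access_token = "<ACCESS_TOKEN>"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me",                 # signed-in user's profile
    headers={"Authorization": "Bearer " + access_token},
)
resp.raise_for_status()
print(resp.json().get("displayName"))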
This session was presented by Suchitra Shettigar, Learning and Development Head at Metapercept. During this session, Suchitra presented basics of DITA-XML based authoring and its benefits.
.NET Fest 2018. Anton Moldovan (Антон Молдован). One year of using F# in production at SBTech (NETFest)
In 2017 we started actively using F# to build a high-load, push-based queryable API and to process large data streams (stateful stream processing). At the time, no one on our teams had any prior experience adopting or applying F#, but we decided to give it a try. In this talk I will share our experience introducing F#: its problems and shortcomings, how we learned to use it well, where it makes sense to apply it, and how to make C#/OOP and F#/FP coexist in a single project.
This talk is aimed at an audience with no prior FP/F# experience.
Agenda:
- Why did we choose F# over C#?
- A high-level overview of the architecture of our push-based queryable API.
- Adopting F# for C#/OOP developers (inconveniences, C# interoperability, code style, DDD, TDD)
If you are translating metadata between dialects, do you know what you are losing? There is a way to identify it and quantitatively characterize the lossiness of the translation.
The HDF Product Designer – Interoperability in the First Mile (Ted Habermann)
Interoperable data have been a long-time goal in many scientific communities. The recent growth in analysis, visualization and mash-up applications that expect data stored in a standardized manner has brought the interoperability issue to the fore. On the other hand, producing interoperable data is often regarded as a sideline task in a typical research team, for which resources are not readily available. The HDF Group is developing a software tool aimed at lessening the burden of creating data in standards-compliant, interoperable HDF5 files. The tool, named HDF Product Designer, lowers the threshold needed to design such files by providing a user interface that combines the rich HDF5 feature set with applicable metadata conventions. Users can quickly devise new HDF5 files while at the same time seamlessly incorporating the latest best practices and conventions from their community. That is what the term interoperability in the first mile means: enabling generation of interoperable data in HDF5 files from the onset of their production. The tool also incorporates collaborative features, allowing a team approach to file design as well as easy transfer of best practices as they are developed. The current state of the tool and the plans for future development will be presented. Constructive input from interested parties is always welcome.
HDF Augmentation: Interoperability in the Last Mile (Ted Habermann)
Science data files are generally written to serve well-defined purposes for small science teams. In many cases, the organization of the data and the metadata are designed for custom tools developed and maintained by and for the team. Using these data outside of this context often involves restructuring, re-documenting, or reformatting the data. This expensive and time-consuming process usually prevents data reuse and thus decreases the total life-cycle value of the data considerably. If the data are unique or critically important to solving a particular problem, they can be modified into a more generally usable form or metadata can be added in order to enable reuse. This augmentation process can be done to enhance data for the intended purpose or for a new purpose, to make the data available to new tools and applications, to make the data more conventional or standard, or to simplify preservation of the data. The HDF Group has addressed augmentation needs in many ways: by adding extra information, by renaming objects or moving them around in the file, by reducing complexity of the organization, and sometimes by hiding data objects that are not understood by specific applications. In some cases these approaches require re-writing the data into new files and in some cases it can be done externally, without affecting the original file. We will describe and compare several examples of each approach.
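As one small, hypothetical example of the "adding extra information" style of augmentation described above (not the HDF Group's actual tooling), a file-level attribute can be attached to an existing HDF5 file with h5py; the file name and attribute values are illustrative only.

import h5py

# Open an existing HDF5 file in read/write mode and attach root-level
# attributes; the file name and values are placeholders.
with h5py.File("example_granule.h5", "r+") as f:
    f.attrs["Conventions"] = "CF-1.6"
    f.attrs["augmentation_note"] = "metadata added after production"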
ISO Metadata Improvements - Questions and Answers (Ted Habermann)
The ISO Standards for describing geospatial data, services, and other resources are changing. These slides describe a few of these changes in terms of documentation needs and how the new standards address these needs. I talked with these slides at a recent webinar that is available at https://www.youtube.com/watch?v=un-PtJLclIM&feature=youtu.be
The ISO Metadata Standards include the capability to add citations to many kinds of external resources. This is very important for providing complete documentation required to understand and reproduce scientific results.
Can ISO 19157 support current NASA data quality metadata? (Ted Habermann)
ISO 19157 provides a powerful framework for describing quality of Earth science datasets. As NASA migrates towards using that standard, it is important to understand whether and how existing data quality content fits into the ISO 19157 model. This talk demonstrates that fit and concludes that ISO 19157 can include all existing content and also includes new capabilities that can be very useful for all kinds of NASA data users.
Communities use many different dialects to document their data. We need to be able to translate between these dialects and to understand how much is lost in translation.
Wikis, Rubrics and Views: An Integrated Approach to Improving Documentation (Ted Habermann)
For many years scientists and data managers have focused on creating metadata that supports the discovery of available data. This is important, but once data sets are discovered, users need metadata that supports use and understanding of those data. This talk describes a system developed to support the required metadata improvements using wikis, rubrics, and metadata views. The wikis provide a mechanism for the community to record experiences and lessons learned and to provide high-quality examples. Rubrics provide a mechanism for consistent and clear quantitative evaluation of the completeness of metadata records, and the results displays include integrated links to the wiki. Views present the metadata with connections to the wiki and support ongoing, interactive learning. These tools can be used with metadata from any standard and can facilitate translation of the metadata between multiple standards.
The HDF format is the foundation for sharing data in many communities that have created domain-specific conventions on top of HDF. This presentation was given at the Winter meeting of the Earth Science Information Partnership (ESIP).
For many years metadata development activities have focused on developing and sharing metadata for discovering data. This is important. Once data are discovered, metadata supporting use and understanding become important. Efforts to encourage scientists and data providers to create those metadata have had limited success. This talk describes some approaches and tools for supporting the organizational change efforts required to integrate use and understanding metadata into organizational cultures. These approaches are described in terms of the ideas presented in Switch: How to Change Things When Change is Hard.
New data access paradigms support a variety of human and machine access paths with data servers (THREDDS, https://www.unidata.ucar.edu/software/thredds/current/tds/ and Hyrax, http://opendap.org) that support multiple services for a given dataset. We need metadata that can describe those services and unambiguously differentiate between access paths for humans and for machines. The ISO 19115 metadata standard includes service metadata and allows data and services for that data to be described in the same record. I propose that we use the service metadata for machine access and the more traditional distribution information for human access. This talk was presented at the ESIP (esipfed.org) meeting during January 2014.
NASA's Earth Observing System (EOS) archive includes data collected over many years by many satellite instruments. These data are stored in the HDF format, which includes data and metadata. The content of the metadata was examined for compliance with a set of conventions developed by the NASA science community at the beginning of the EOS Project (the HDF-EOS conventions). The initial results show that ~50% of the data files and 76% of the datasets have metadata that allows them to be used easily in standard tools. This talk was presented at the ESIP (esipfed.org) meeting during January 2014.
Science platforms are made up of (at least) four planks: data formats, services, tools and conventions. I focus here on formats and conventions, specifically the HDF5 format, already used in many disciplines, and the Climate-Forecast and HDF-EOS Conventions. Many science disciplines have already agreed on HDF as the preferred format for storing and sharing data. It is well established in high performance computing and supports arbitrary grouping and annotation. Community conventions are critical for useful data on top of the format. The Climate-Forecast (CF) conventions were created for relatively simple gridded data types while the HDF-EOS conventions originally considered more complex data (swaths). Making simple conventions more complex makes adoption more difficult. Community input and the need for stable data processing systems must be balanced in governance of conventions.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
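A minimal sketch of what that Python binding looks like, using the IEEE 14-bus example network bundled with pypowsybl; the exact notebook content from the webinar may differ.

import pypowsybl as pp

# Load a small example network that ships with pypowsybl and run an AC load flow.
network = pp.network.create_ieee14()
results = pp.loadflow.run_ac(network)

print(results[0].status)            # convergence status of the load flow
print(network.get_buses().head())   # per-bus data as a pandas DataFrame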
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies must adapt and embrace new ideas to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
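As a small, hypothetical illustration (not part of the webinar itself), once JMeter's Backend Listener is writing results to InfluxDB you can also read the metrics back programmatically; the database, measurement, and field names below follow the listener's common defaults but should be treated as assumptions.

from influxdb import InfluxDBClient  # InfluxDB 1.x Python client

# Connection details and database name are assumptions for this sketch.
client = InfluxDBClient(host="localhost", port=8086, database="jmeter")

# Average response time over the last five minutes, assuming the default
# "jmeter" measurement written by JMeter's InfluxDB Backend Listener.
query = 'SELECT MEAN("avg") FROM "jmeter" WHERE time > now() - 5m'
for point in client.query(query).get_points():
    print(point)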
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
2. ISO Process
[Process diagram] The ISO standards process moves through the Preliminary, Proposal, Preparatory, Committee, Enquiry, Approval, and Publication stages. The documents shown along the way are the Draft New Work Item Proposal, the New Work Item Proposal, the Working Draft (or PAS), the Draft International Standard (DIS, technical consensus), the Final Draft International Standard (FDIS), and the International Standard (IS). Individual stages are annotated with nominal durations ranging from 0-6 months up to 12 months; one stage is marked "? months".
3. ISO Project Options
Stages: Proposal, Preparatory, Committee, Enquiry, Approval, Publication.
- Normal Procedure: acceptance of proposal; preparation of working draft; develop and accept committee draft; develop and accept enquiry draft; approval of FDIS; publication of International Standard.
- Draft With Proposal: acceptance of proposal; study by working group; develop and accept committee draft; develop and accept enquiry draft; approval of FDIS; publication of International Standard.
- Fast Track: acceptance of proposal; accept enquiry draft; approval of FDIS; publication of International Standard.
- Technical Specification: acceptance of proposal; prepare and accept draft; publication of Technical Specification.
- Technical Report: prepare and accept draft; publication of Technical Report.
- Publicly Available Specification: acceptance of proposal; approve draft PAS; publication of PAS.
4. ISO Timeline
[Timeline diagram] Development timelines for the relevant standards span the Proposal, Committee, Enquiry, Approval, and Publication stages plus an implementation period; elapsed times of roughly 3 years, 3-5 years, and 6-8 years are indicated. Standards shown:
- 19115-1 (Metadata Conceptual Model: IS 2013/06)
- 19115-3 (Metadata Schema, Transform, Schematron: IS 2015/05)
- 19115-2 (Metadata Acquisition / Images Conceptual Model)
- 19139-2 (Metadata Acquisition / Images Schema: TS 2012/10)
- 19157 (Data Quality Conceptual Model: IS 2013/04)
- 19157 (Data Quality Schema)
- TS 19130 Parts 1-2 (Imagery Sensor Models for Geopositioning Conceptual Model: TS 2012/12)
5. Conceptual Models and Implementations
Model (date) | Delay | Schema (date)
19115 (2003-05) | 3.3 years | 19139 (2006-08)
19115-2 (2009-02) | 3.6 years | 19139-2 (2012-10)
19115-1 (2013-06) | 1.9 years | 19115-3 (2015-05)
19157 (2013-04) | ? | 19157-2 (?)
19130 (2012-12) | ? | ? (?)
The 1.9-year delay between the 19115-1 conceptual model and its 19115-3 schema is roughly a 50% decrease from the earlier model-to-schema delays (an elapsed time of 5.8 years is also annotated on the slide).
Now creating XML schemas directly from UML models using software developed in the OGC Testbed. This capability is also being added directly into the tool by the vendor. Test schemas for 19115-1 and 19157 with proposed NASA extensions are available now.
6. Conventions and Extensions
[Timeline diagram repeating the Proposal, Committee, Enquiry, Approval, and Publication stages with the implementation period.]
Conventions are agreements that ensure consistent usage of the standard and facilitate interoperability across communities. They are expressed as guidance and best practices for the community.
Requirements not considered during the development of the original standard, or that emerge during the implementation period, can be addressed using extensions to the standard. These extensions should be considered as candidate changes when the standard is revised every five years.
Developing NASA conventions and extensions:
1) makes it possible to use the standard now, and
2) increases the chances of addressing NASA needs in the next version of the standard.
8. (Some) Existing Documentation Requirements
More than 2600 Collections have been described using the ECHO metadata dialect. Almost 6000 DIFs include the word NASA.
9. ECHO Dialect
~150 elements with content. Keywords, identifiers, and contact information are the most commonly occurring fields.
324 AdditionalAttributes occur over 3000 times. Most common: PROCESSVERSION, QAPERCENTGOODQUALITY, QAPERCENTOTHERQUALITY, QAPERCENTNOTPRODUCEDCLOUD, QAPERCENTNOTPRODUCEDOTHER.
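Counts like these can be produced with a small amount of XML processing. The following is a minimal, hypothetical sketch that tallies which elements carry content across a directory of ECHO collection records; the directory layout and file names are assumptions.

import glob
from collections import Counter
import xml.etree.ElementTree as ET

# Count how often each element appears with non-empty text across a set of
# ECHO collection records; the directory name is illustrative only.
element_counts = Counter()
for path in glob.glob("echo_collections/*.xml"):
    root = ET.parse(path).getroot()
    for elem in root.iter():
        if elem.text and elem.text.strip():
            element_counts[elem.tag] += 1

for tag, count in element_counts.most_common(10):
    print(tag, count)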
10. DIF Dialect
~130 elements with content. Keywords, identifiers, and contact information are the most commonly occurring fields.
99% translation to ISO; ~79% translation to ECHO.
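One simple way to quantify translation completeness like this is to compare which populated source fields survive the crosswalk. Here is a minimal, hypothetical sketch; the field lists and mapping are stand-ins, not the actual DIF/ISO/ECHO crosswalks.

# Minimal sketch: measure how much of one dialect's populated content is
# representable in another, given a field-level crosswalk. The mapping and
# field names are illustrative placeholders.
dif_fields_with_content = {"Entry_Title", "Summary", "ISO_Topic_Category",
                           "Data_Center", "Spatial_Coverage", "Quality"}

dif_to_echo = {"Entry_Title": "DataSetId", "Summary": "Description",
               "Data_Center": "ArchiveCenter", "Spatial_Coverage": "Spatial"}

translatable = dif_fields_with_content & set(dif_to_echo)
coverage = len(translatable) / len(dif_fields_with_content)
print("Translated: %d of %d fields (%.0f%%)" %
      (len(translatable), len(dif_fields_with_content), 100 * coverage))
print("Lost in translation:", sorted(dif_fields_with_content - set(dif_to_echo)))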
11. More Overlap Than Difference
[Venn diagram: ECHO, DIF, ISO]
The metadata dialects currently used by ESDIS have much more overlap than difference. The mappings are generally well understood: we are in a tweaking stage.
The translations can be implemented using well-known, standard tools that are designed for XML processing. These are different from the programming languages generally used for scientific data processing.
12. Additional Attributes
The ECHO AdditionalAttributes are an undifferentiated pile of information. They can be translated to ISO, but we need to know where to put them. In order to do that, we need to identify their types.
13. AdditionalAttributes and ISO
Many of the AdditionalAttributes fit into standard ISO elements. Others are implementation-specific elements of existing ISO types.
Rules (a rule-matching sketch follows below):
- Percent/Pct > contentInformation
- Quality > qualityInformation
- Gain/Bias > instrumentInformation
75% of the attributes, 81% of content.
ISO elements: citation.date, citation.identifier, citation.pointOfContact, contentInformation, descriptiveKeywords, distributionInformation, extent, geographicIdentifier, instrumentInformation, lineage, processingInformation, qualityInformation
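A minimal sketch of rule-based sorting along the lines of the rules above; the rule table mirrors the slide, the sample attribute names come from slide 9, and anything unmatched is left for manual review. This is an illustration, not the ESDIS implementation.

# Sort ECHO AdditionalAttribute names into candidate ISO homes using simple
# substring rules, mirroring the Percent/Quality/Gain-Bias rules above.
RULES = [
    (("PERCENT", "PCT"), "contentInformation"),
    (("QUALITY",),       "qualityInformation"),
    (("GAIN", "BIAS"),   "instrumentInformation"),
]

def classify(attribute_name):
    upper = attribute_name.upper()
    for keywords, target in RULES:
        if any(k in upper for k in keywords):
            return target
    return "unclassified"   # needs manual mapping

for name in ["PROCESSVERSION", "QAPERCENTGOODQUALITY", "QAPERCENTNOTPRODUCEDCLOUD"]:
    print(name, "->", classify(name))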
14. RecordType/Record - ISO with XML Reference
RecordType:
<eos:otherPropertyType>
  <gco:RecordType xlink:href="http://www.echo.nasa.gov/ingest/schemas/operations/Collection.xsd#xpointer(//element[@name='AdditionalAttribute'])">Echo Additional Attribute</gco:RecordType>
</eos:otherPropertyType>
Record:
<eos:otherProperty>
  <gco:Record xmlns:echo="http://www.echo.nasa.gov/ingest/schemas/operations">
    <echo:AdditionalAttribute>
      <echo:Name>ScanAngle</echo:Name>
      <echo:DataType>float</echo:DataType>
      <echo:Description>The angle between the sensor view vector and the nadir axis</echo:Description>
      <echo:MeasurementResolution/>
      <echo:ParameterRangeBegin/>
      <echo:ParameterRangeEnd/>
      <echo:ParameterUnitsOfMeasure>degrees</echo:ParameterUnitsOfMeasure>
      <echo:ParameterValueAccuracy/>
      <echo:ValueAccuracyExplanation/>
      <echo:Value>47.5</echo:Value>
    </echo:AdditionalAttribute>
  </gco:Record>
</eos:otherProperty>
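A minimal sketch of pulling the embedded ECHO content back out of such a record with lxml; only the echo: namespace shown on the slide is used, the surrounding eos:/gco: wrappers are ignored, and the input file name is a placeholder.

from lxml import etree

ECHO_NS = "http://www.echo.nasa.gov/ingest/schemas/operations"

def read_additional_attributes(iso_xml_path):
    """Return {name: value} for every echo:AdditionalAttribute in the record."""
    tree = etree.parse(iso_xml_path)
    attrs = {}
    for elem in tree.iter("{%s}AdditionalAttribute" % ECHO_NS):
        name = elem.findtext("{%s}Name" % ECHO_NS)
        value = elem.findtext("{%s}Value" % ECHO_NS)
        if name is not None:
            attrs[name] = value
    return attrs

print(read_additional_attributes("collection_iso.xml"))  # e.g. {'ScanAngle': '47.5'}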