A walkthrough of the CIDOC-CRM based, LOD data model developed and maintained at https://linked.art/ for describing cultural heritage resources and activities.
Tiers of Abstraction and Audience in Cultural Heritage Data Modeling - Robert Sanderson
A walkthrough of a framework based on the distinctions between Abstraction, Implementation, and Audience for considering the value and utility of data modeling patterns and paradigms in cultural heritage information systems. In particular, a focus on CIDOC-CRM, BibFrame, RiC-CM/RiC-O, EDM, and IIIF, with the intent to demonstrate best practices and anti-patterns in modeling.
An introduction to the linked.art LOD data model, based on a carefully selected profile of CIDOC-CRM, and expressed as JSON-LD. It focuses on developer happiness and data usability, while trying to also maintain as much of the richness of CRM as possible.
Linked Art: Sustainable Cultural Knowledge through Linked Open Usable Data - Robert Sanderson
An introduction to Linked Art - why we need it, what it is, and how it works. A great starting point if you're interested in linked open usable data in cultural heritage, especially art museums.
IIIF and Linked Data: A Cultural Heritage DAM Ecosystem - Robert Sanderson
Presentation at DAMLA, November 15, 2017, on the adoption of the IIIF image interoperability APIs across the cultural heritage sector for access to digital assets, and on how Linked Open Data then provides interoperable discovery solutions for that content.
Sanderson CNI 2020 Keynote - Cultural Heritage Research Data Ecosystem - Robert Sanderson
There have been, and continue to be, many initiatives to address the social, technological, financial and policy-based challenges that throw up roadblocks towards achieving this vision. However, it is hard to tell whether we are making progress, or whether we are eternally waiting for the hyperloop that will never come. If we are to ever be able to answer research questions that require a broad, international corpus of cultural data, then we need an ecosystem that can be characterized by five “C”s: Collaborative, Consistent, Connected, Correct and Contextualized. Each of these has implications for the sustainability, innovation, usability, timeliness and ethical considerations that must be addressed in a coherent and holistic manner. As with autonomous vehicles, technology (and perhaps even machine “intelligence”) is a necessary but insufficient component.
In this presentation, I will frame and motivate this grand challenge and propose where we can build connections between the academy, the cultural heritage sector, and industry. The discussion will explore the issues, and highlight some of the successful endeavors and more approachable opportunities where, together, progress can be made.
Presentation about usability of linked data, following LODLAM 2020 at the Getty. Discusses JSON-LD 1.1, IIIF, Linked Art, in the context of the design principles for building usable APIs on top of semantically accurate models, and domain specific vocabularies.
In particular a focus on the different abstraction layers between conceptual model, ontology, vocabulary, and application profile and the various uses of the data.
Standards and Communities: Connected People, Consistent Data, Usable Applications - Robert Sanderson
Keynote presentation at JCDL 2019 at UIUC, on the interaction between standards (development and usage) and communities. Looking at Linked Open Data, digital library protocols, and evaluation of standards practices.
Illusions of Grandeur: Trust and Belief in Cultural Heritage Linked Open Data - Robert Sanderson
What is the notion of trust, when it comes to publishing linked open data in the cultural heritage sector? This presentation discusses some aspects with relation to three primary questions: How do we trust what was said, trust that the institution said it, and trust what it means?
Invited seminar for UIUC's IS 575 class on metadata in theory and practice, about structural metadata practice in RDF/LOD. Touches on OAI-ORE, PCDM, Annotation, IIIF and Linked Art. Challenges explored are graph boundaries, APIs and context specific metadata.
To be useful, Linked Open Data requires shared identities and the reuse of their identifiers (URIs). This presentation argues that exact identity matching is both theoretically and practically impossible, and proposes some practical considerations for how to create an actual web of data.
Presented as invited seminar at UC Berkeley, February 24th, 2017
Community Challenges for Practical Linked Open Data - Linked Pasts keynote - Robert Sanderson
A call to action to discuss and agree on practical considerations around the creation, publication and discovery of linked open data about historical activities and objects.
Approximate text of the talk: http://bit.ly/usable_lod
Towards digitizing scholarly communication - Sören Auer
Slides of the VIVO 2016 Conference keynote: Despite the availability of ubiquitous connectivity and information technology, scholarly communication has not changed much in the last hundred years: research findings are still encoded in and decoded from linear, static articles and the possibilities of digitization are rarely used. In this talk, we will discuss strategies for digitizing scholarly communication. This comprises in particular: the use of machine-readable, dynamic content; the description and interlinking of research artifacts using Linked Data; and the crowd-sourcing of multilingual educational and learning content. We discuss the relation of these developments to research information systems and how they could become part of an open ecosystem for scholarly communication.
Haystack 2019 - Natural Language Search with Knowledge Graphs - Trey Grainger (OpenSource Connections)
To optimally interpret most natural language queries, it is necessary to understand the phrases, entities, commands, and relationships represented or implied within the search. Knowledge graphs serve as useful instantiations of ontologies which can help represent this kind of knowledge within a domain.
In this talk, we'll walk through techniques to build knowledge graphs automatically from your own domain-specific content, how you can update and edit the nodes and relationships, and how you can seamlessly integrate them into your search solution for enhanced query interpretation and semantic search. We'll have some fun with some of the more search-centric use cases of knowledge graphs, such as entity extraction, query expansion, disambiguation, and pattern identification within our queries: for example, transforming the query "bbq near haystack" into
{
  "filter": ["doc_type:restaurant"],
  "query": {
    "boost": {
      "b": "recip(geodist(38.034780,-78.486790),1,1000,1000)",
      "query": "bbq OR barbeque OR barbecue"
    }
  }
}
We'll also specifically cover use of the Semantic Knowledge Graph, a particularly interesting knowledge graph implementation available within Apache Solr that can be auto-generated from your own domain-specific content and which provides highly-nuanced, contextual interpretation of all of the terms, phrases and entities within your domain. We'll see a live demo with real world data demonstrating how you can build and apply your own knowledge graphs to power much more relevant query understanding within your search engine.
ePADD: Overview & Project Update -- Society of American Archivists (SAA) Annual Meeting - Josh Schneider
Presentation delivered at the Society of American Archivists (SAA) Annual Meeting, 2016, to the Metadata and Digital Object Roundtable.
ePADD is a software package developed by Stanford University's Special Collections & University Archives that supports archival processes around the appraisal, ingest, processing, discovery, and delivery of email archives. More information, including links to the software, user guide, and community forums, can be found at https://library.stanford.edu/projects/epadd.
Vectors in Search - Towards More Semantic Matching - Simon Hughes
With the advent of deep learning and algorithms like word2vec and doc2vec, vector-based representations are increasingly being used in search to represent anything from documents to images and products. However, search engines work with documents made of tokens, not vectors, and are typically not designed for fast vector matching out of the box. In this talk, I will give an overview of how vectors can be derived from documents to produce a semantic representation of a document that can be used to implement semantic / conceptual search without hurting performance. I will then describe a few different techniques for efficiently searching vector-based representations in an inverted index, such as learning sparse representations of vectors, clustering, and learning binary vectors. Finally, I will discuss some of the pitfalls of vector-based search, and how to get the best of both worlds by combining vector-based scoring with traditional relevancy metrics such as BM25.
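As a back-of-the-envelope illustration of that final point -- blending vector similarity with lexical BM25 -- here is a minimal, self-contained Python sketch. It is not taken from the talk; the toy corpus, the `k1`/`b` defaults, and the `alpha` blend weight are all illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Tiny BM25 over a corpus given as a list of token lists."""
    n_docs = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n_docs
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        tf = doc_terms.count(term)
        norm = tf + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * (tf * (k1 + 1)) / norm
    return score

def hybrid_score(query_terms, query_vec, doc_terms, doc_vec, corpus, alpha=0.7):
    """Blend lexical BM25 with vector similarity; alpha weights the lexical part."""
    return (alpha * bm25_score(query_terms, doc_terms, corpus)
            + (1 - alpha) * cosine(query_vec, doc_vec))
```

In a real system the two scores would typically be normalized before blending (their scales differ), but the sketch shows the shape of the combination.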
A walkthrough of the Linked Art data model, API, and community processes. Presented originally at the Rijksmuseum for the 5th Linked Art face-to-face meeting. Linked Art is a linked open usable data specification created by the community to describe artwork, museum objects, and related bibliographic and archival content.
This presentation was provided by Rob Sanderson of the J. Paul Getty Trust during the NISO Virtual Conference, Open Data Projects, held on Wednesday, June 13, 2018.
With the advent of deep learning and algorithms like word2vec and doc2vec, vector-based representations are increasingly being used in search to represent anything from documents to images and products. However, search engines work with documents made of tokens, not vectors, and are typically not designed for fast vector matching out of the box. In this talk, I will give an overview of how vectors can be derived from documents to produce a semantic representation of a document that can be used to implement semantic / conceptual search without hurting performance. I will then describe a few different techniques for efficiently searching vector-based representations in an inverted index, including LSH, vector quantization and k-means trees, and compare their performance in terms of speed and relevancy. Finally, I will describe how each technique can be implemented efficiently in a Lucene-based search engine such as Solr or Elasticsearch.
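To make the LSH idea above concrete, here is a tiny Python sketch of signed random projections, one common LSH family for angular distance. It is an illustrative toy, not the talk's implementation; the seed, dimensionality, and bit count are arbitrary assumptions:

```python
import random

def random_hyperplanes(dim, n_bits, seed=42):
    """Generate n_bits random hyperplanes (Gaussian normals) for signed projections."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(vec, planes):
    """Hash a dense vector to a bit string: one bit per hyperplane side.
    Vectors pointing in similar directions tend to share many bits."""
    return "".join(
        "1" if sum(p * x for p, x in zip(plane, vec)) >= 0 else "0"
        for plane in planes
    )
```

The resulting bit string (or n-gram "bands" of it) can be indexed as an ordinary token, so an inverted index can retrieve approximate candidates cheaply before exact vector scoring.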
Presentation of the Semantic Knowledge Graph research paper at the 2016 IEEE 3rd International Conference on Data Science and Advanced Analytics (Montreal, Canada - October 18th, 2016)
Abstract—This paper describes a new kind of knowledge representation and mining system which we are calling the Semantic Knowledge Graph. At its heart, the Semantic Knowledge Graph leverages an inverted index, along with a complementary uninverted index, to represent nodes (terms) and edges (the documents within intersecting postings lists for multiple terms/nodes). This provides a layer of indirection between each pair of nodes and their corresponding edge, enabling edges to materialize dynamically from underlying corpus statistics. As a result, any combination of nodes can have edges to any other nodes materialize and be scored to reveal latent relationships between the nodes. This provides numerous benefits: the knowledge graph can be built automatically from a real-world corpus of data, new nodes - along with their combined edges - can be instantly materialized from any arbitrary combination of preexisting nodes (using set operations), and a full model of the semantic relationships between all entities within a domain can be represented and dynamically traversed using a highly compact representation of the graph. Such a system has widespread applications in areas as diverse as knowledge modeling and reasoning, natural language processing, anomaly detection, data cleansing, semantic search, analytics, data classification, root cause analysis, and recommendation systems. The main contribution of this paper is the introduction of a novel system - the Semantic Knowledge Graph - which is able to dynamically discover and score interesting relationships between any arbitrary combination of entities (words, phrases, or extracted concepts) through dynamically materializing nodes and edges from a compact graphical representation built automatically from a corpus of data representative of a knowledge domain.
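To sketch the core mechanism the abstract describes -- term nodes whose edges materialize on demand from intersecting postings lists -- here is a minimal Python toy. The PMI-style score stands in for the paper's actual relatedness measure and is an assumption for illustration only:

```python
import math
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it (a postings list)."""
    index = defaultdict(set)
    for doc_id, tokens in enumerate(docs):
        for tok in tokens:
            index[tok].add(doc_id)
    return index

def edge_score(index, term_a, term_b, n_docs):
    """Materialize the edge between two term nodes from the intersection of
    their postings lists, scored with a simple PMI-style relatedness."""
    a, b = index[term_a], index[term_b]
    both = len(a & b)
    if both == 0:
        return 0.0
    p_a, p_b, p_ab = len(a) / n_docs, len(b) / n_docs, both / n_docs
    return math.log(p_ab / (p_a * p_b))
```

Because edges are computed on demand from set intersections, any pair (or arbitrary combination) of terms can be scored without precomputing or storing the graph explicitly, which is the compactness property the paper emphasizes.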
Improving Search in Workday Products using Natural Language ProcessingDataWorks Summit
Workday is a leading provider of cloud-based enterprise software products such as Human Capital Management, Talent, Finance, Student, and Planning. These products produce a wealth of natural language data. However, this data is unstructured and denormalized, and retrieving relevant information from it is a challenging task. Simple index-based search methods can only take us so far. The Data Science team at Workday is determined to apply machine learning and AI to make search better across Workday’s products.
In this session, we present how we use word embeddings to normalize the data and add structure to it. We will also talk about using word representations to make search intelligent. The specific use cases we will discuss are synonym detection and entity recommendation.
In this talk, we will focus on the word-embedding techniques explored, the metrics used to evaluate natural language processing models, the tools built, and future work as part of improving search.
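As a hedged illustration of the synonym-detection use case, the sketch below ranks embedding neighbors by cosine similarity. The toy vectors and the 0.8 threshold are invented for the example and are not Workday's actual model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_terms(word, embeddings, k=3, threshold=0.8):
    """Return up to k terms whose vectors are most similar to `word`,
    filtered by a similarity threshold -- a crude synonym candidate list."""
    target = embeddings[word]
    scored = [(other, cosine(target, vec))
              for other, vec in embeddings.items() if other != word]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(w, s) for w, s in scored[:k] if s >= threshold]
```

In practice the candidates would be vetted (by curation or downstream signals) before being used for query expansion, since embedding neighbors include related-but-not-synonymous terms.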
Speaker
Namrata Ghadi, Workday Inc, Software Development Engineer (Data Science)
Adam Baker, Workday Inc, Sr Software Engineer
Domain Driven Design main concepts
This presentation is a summary of the InfoQ minibook "Domain-Driven Design Quickly".
Here is the link: http://www.infoq.com/minibooks/domain-driven-design-quickly
Dice.com Bay Area Search - Beyond Learning to Rank Talk - Simon Hughes
This talk describes how to implement conceptual search (semantic search) within a modern search engine using the word2vec algorithm to learn concepts. We also cover how to auto-tune the search engine parameters using black box optimization techniques, and the problems of feedback loops encountered when building machine learning systems that modify the user behavior used to train the system.
Digital Share 2017 presentation about Linked Open Data at The Getty, starting from what LOD is, to why we're interested in it, and some of the practical approaches we're using to make it real.
Data Science Keys to Open Up openNASA Datasets - PyData
By Noemi Derzsy
PyData New York City 2017
Open source data has enabled society to engage in community-based research, and has provided government agencies with more visibility and trust from individuals. I will briefly introduce the openNASA platform with over 32,000 open NASA datasets, and I will present open NASA metadata analysis, and tools for applying NLP/topic modeling techniques to understand open government dataset associations.
An introduction to Linked Open Usable Data (LOUD) through the lens of a zooming paradigm, and thoughts on how such a paradigm can help to address some grand challenges of LOUD, including search granularity, trust and reconciliation. Presented to the IDLab / Knowledge at Web Scale department of the University of Ghent in February 2023.
LUX - Cross Collections Cultural Heritage at Yale - Robert Sanderson
A brief presentation based on the CNI talk for the Linked Data for Libraries Discovery affinity group about LUX, Linked Open Usable Data and our discovery processes based on graphs rather than documents.
Data is our Product: Thoughts on LOD Sustainability - Robert Sanderson
Invited keynote presentation for the LINCS Project, June 23rd 2022 at the University of Guelph, Canada. It describes thoughts on a framework for sustainability of linked open usable data products in the cultural heritage domain.
A Perspective on Wikidata: Ecosystems, Trust, and Usability - Robert Sanderson
Brief and skeptical presentation about Wikidata and its potential for use and abuse in the cultural heritage data ecosystem, presented at the PCC/LDAC forum on Wikidata, November 12th, 2021.
Euromed2018 Keynote: Usability over Completeness, Community over Committee - Robert Sanderson
Discussion of cultural heritage issues around balancing usability against completeness, with a focus on bringing together communities rather than small and transient committees. Focus on Linked Open Usable Data, Annotations, JSON-LD, IIIF and Linked.Art.
Background for linked open data at the J Paul Getty Trust, followed by a summary of Linked Open Usable Data, and an initial walkthrough of the https://linked.art/ model.
Linked Open Data offers great recommendations for publishing data, but we need five more stars for the consumer -- how can it be both complete and usable? Design principles for Linked Open Usable Data.
US2TS Conference position paper on publishing and retrieving not just LOD, but LOUD -- Linked Open Usable Data.
APIs are the UIs of developers, and need:
* Correct Abstraction level for the Audience
* Few Barriers to Entry
* Comprehensible by introspection
* Thorough Documentation with copy-able examples
* Few Exceptions, instead consistent patterns
Discussion of the needs around updating Shared Canvas data model for IIIF's Presentation API, and aligning with new work such as the Web Annotation specs.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need in order to apply AI to our own infrastructure and make it work from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
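One Object Calisthenics constraint that maps directly onto a tactical DDD pattern is "wrap all primitives and strings": instead of passing raw integers and strings around, the domain concept becomes a value object that guards its own invariants. The sketch below is illustrative (the `Money` example is not from the talk), showing how the constraint mechanically produces a DDD-style value object.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Value object wrapping the primitives (int, str) that would otherwise
    leak through the domain model. Invariants live inside the object."""
    amount_cents: int
    currency: str

    def __post_init__(self):
        if self.amount_cents < 0:
            raise ValueError("amount must be non-negative")
        if len(self.currency) != 3:
            raise ValueError("currency must be a 3-letter ISO 4217 code")

    def add(self, other: "Money") -> "Money":
        # Domain rules are enforced here, not scattered across callers.
        if other.currency != self.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)

price = Money(1999, "EUR").add(Money(501, "EUR"))
print(price.amount_cents)  # → 2500
```

The "mechanical" benefit the abstract describes is visible here: following the calisthenics rule leaves nowhere for an invalid `Money` to exist, which is exactly what the DDD value-object pattern asks for.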
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
• UI automation introduction
• UI automation sample
• Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how they work. He has around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
If you only need half of the completeness, you should not be punished in terms of usability; you should be able to get close to the maximum usability for a particular use case's completeness requirements.