First, an interdisciplinary Digital Humanities research approach will be presented, using a research project at the Bibliotheca Hertziana (Max Planck Institute for Art History) in Rome as an example. In the Institute's research, questions about the historical understanding of social space and its change during the so-called Long Middle Ages play a central role. The investigation of the relationship between historical maps and texts, which takes up approaches from cognitive linguistics, is intended to explore the historical understanding of space and the knowledge associated with it. Cognitive maps depict culturally specific spatial knowledge and practices. The annotation and analysis of historical texts and maps pose particular challenges for knowledge representation and processing. The research data will be published as Linked Open Data in the Semantic Web. Subsequently, problems and challenges for machine learning in this area will be discussed.
The document discusses geographic information retrieval (GIR). It introduces GIR as a specialized branch of information retrieval that deals with georeferenced information. It describes some general problems in GIR, such as ambiguity of place names and fuzzy geographic boundaries. It also discusses how cognitive models of human understanding of geography can impact GIR. The document then covers techniques for geo-referencing documents using gazetteers and ontologies. It concludes by discussing related projects, evaluation of GIR systems, and a gazetteer server and service developed for UK academia.
The document discusses using maps and timelines for visualization. It provides four ways that maps and timelines can be used: 1) Plotting precise spatial and/or temporal coordinates, 2) Drawing artistic overlays, 3) Appropriating maps and timelines as metaphors for abstract dimensions, 4) Combinations of these approaches with layers. The functions of these visualizations include telling stories or starting conversations at the border between narrative and data. Historical examples like Minard's map of Napoleon's march are discussed.
GIS offers archaeologists an exciting tool to analyze and interpret spatial and temporal archaeological data. Main applications include cultural resource management, landscape analysis, and site catchment analysis. GIS allows visualization of 3D relationships, time series analysis, and predictive modeling. It provides advantages like integrating diverse data types, interpreting landscapes at various scales, and analyzing issues like site distributions. However, GIS also has limitations like being dependent on original data quality and having a bias towards spatial over other types of analysis. Future uses may include more 3D modeling and accounting for seasonal landscape changes.
This document provides an introduction to geographic information systems (GIS). It discusses that GIS is a system for storing and analyzing spatial data through digital mapping and allows users to query data and perform spatial analysis. It also summarizes applications of GIS such as natural hazards assessment, habitat analysis, and location analysis as well as its connections to fields like cartography, remote sensing, geography, and spatial analysis.
This document discusses qualitative spatial reasoning, using cardinal directions as an example. It introduces cardinal directions as a way to reason about large-scale spaces without precise quantitative measurements. Two specific systems for determining and reasoning with cardinal directions are discussed. The document outlines a comprehensive research agenda for qualitative spatial reasoning, including extending it to reasoning about extended objects and other spatial relations beyond just distance and direction.
Towards a graph of ancient world geographical knowledge (Elton Barker)
Presentation on three collaborative projects: Hestia (http://hestia.open.ac.uk/), GAP (http://googleancientplaces.wordpress.com/gapvis/), and Pelagios (pelagios-project.blogspot.com)
This document provides an introduction to spatial data mining. It discusses how spatial data mining can be used to discover interesting patterns in large spatial datasets. Some examples of patterns that can be found include crime hotspots, cancer clusters, and habitat ranges for endangered species. Spatial data mining is useful because it can analyze huge amounts of spatial data at a scale not possible through manual analysis. It can also be used to generate hypotheses about geographic processes that can then be explored further with traditional statistical methods. The document outlines some key characteristics of spatial data mining, such as spatial autocorrelation and the importance of regional knowledge.
The document summarizes a presentation on setting up a domain-specific language model for natural language processing (NLP) tasks at DATEV eG. It discusses adapting a pretrained BERT model to DATEV's domain by fine-tuning it on DATEV corpus data. A proof-of-concept evaluation on two classification tasks showed improved results over baselines, demonstrating that incorporating domain data enhances NLP models for a company's specific needs. Key insights included the importance of domain knowledge and corpus composition for different subdomains.
Erlangen Artificial Intelligence & Machine Learning Meetup #16: Daria Stepanova, Bosch Center for Artificial Intelligence, "Rule Induction and Reasoning in Knowledge Graphs"
Advances in information extraction have enabled the automatic construction of large knowledge graphs (KGs) like DBpedia, Freebase, YAGO and Wikidata.
Learning rules from KGs is a crucial task for KG completion, cleaning and curation. This tutorial presents state-of-the-art rule induction methods, recent advances, research opportunities as well as open challenges along this avenue. We put a particular emphasis on the problems of learning exception-enriched rules from highly biased and incomplete data. Finally, we discuss possible extensions of classical rule induction techniques to account for unstructured resources (e.g., text) along with the structured ones.
In the quest to improve the quality of education, Flexudy leverages the power of AI to help people learn more efficiently.
During the talk, I will show how we trained an automatic extractive text summarizer based on concepts from Reinforcement Learning, Deep Learning, and Natural Language Processing. I will also talk about how we use pre-trained NLP models to generate simple questions for self-assessment.
Like other fields of computer vision, image retrieval has been revolutionized by deep learning in recent years. Convolutional neural networks are now the tool of choice for computing feature representations of images. Many successful architectures employ global pooling layers to aggregate feature maps into a compact image representation. Using the standard neural network training procedure based on backpropagation and gradient descent, the global pooling operation can be learned from the training data.
We review existing approaches to learned pooling and propose two new layers: a learnable, extended variant of LSE pooling, and a generalized max pooling layer based on an aggregation function from classical computer vision.
Our experiments show that learned global pooling can improve the performance of image retrieval networks over the average pooling baseline for both evaluated tasks. For writer identification, our generalized max pooling layer outperforms all other tested pooling layers. Our learnable LSE pooling performs better than global average pooling and yields the best rank-1 score in our experiments on the Market-1501 dataset.
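To make the idea concrete, here is a minimal sketch of a learnable log-sum-exp (LSE) global pooling layer in PyTorch. The abstract does not specify the "extended variant" from the talk, so the parameterization and initial value below are assumptions.

```python
# Sketch of learnable LSE pooling: (1/r) * log( mean_i exp(r * x_i) ) per channel.
# As r -> 0 this approaches average pooling, as r -> inf it approaches max pooling,
# so learning r lets the network interpolate between the two.
import math
import torch
import torch.nn as nn

class LSEPool2d(nn.Module):
    def __init__(self, r_init: float = 10.0):
        super().__init__()
        self.r = nn.Parameter(torch.tensor(r_init))  # learned by backpropagation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.reshape(b, c, h * w)
        lse = torch.logsumexp(self.r * flat, dim=-1) - math.log(h * w)
        return lse / self.r  # compact descriptor of shape (B, C)

pool = LSEPool2d()
features = torch.randn(2, 512, 7, 7)   # e.g. CNN feature maps
print(pool(features).shape)            # torch.Size([2, 512])
```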
Slides by Alexander März:
The language of statistics is probabilistic. Any model that falls short of quantifying the uncertainty attached to its outcome is likely to provide an incomplete and potentially misleading picture. While this is an irrevocable consensus in statistics, machine learning approaches usually lack proper ways of quantifying uncertainty. In fact, a possible distinction between the two modelling cultures can be attributed to the (non-)existence of uncertainty estimates that allow for, e.g., hypothesis testing or the construction of estimation/prediction intervals. Quantification of uncertainty in general, and probabilistic forecasting in particular, doesn't just provide an average point forecast; it equips the user with a range of outcomes and the probability of each of those occurring.
In an effort to bring both disciplines closer together, the audience is introduced to a new XGBoost framework that predicts the entire conditional distribution of a univariate response variable. In particular, XGBoostLSS models all moments of a parametric distribution (i.e., location, scale and shape [LSS]) instead of the conditional mean only. Choosing from a wide range of continuous, discrete, and mixed discrete-continuous distributions, modelling and predicting the entire conditional distribution greatly enhances the flexibility of XGBoost: it yields additional insight into the data-generating process and produces probabilistic forecasts from which prediction intervals and quantiles of interest can be derived. As such, XGBoostLSS contributes to the growing literature on statistical machine learning that aims at weakening the separation between Breiman's "Data Modelling Culture" and "Algorithmic Modelling Culture", so that models designed mainly for prediction can also be used to describe and explain the underlying data-generating process of the response of interest.
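The sketch below illustrates the general point about moving beyond a conditional-mean point forecast. It is not XGBoostLSS's actual API: as a rough stand-in using plain xgboost, a second model predicts a log-variance from which Gaussian prediction intervals follow. Data, hyperparameters, and the two-stage scheme are illustrative assumptions.

```python
import numpy as np
import xgboost as xgb
from scipy.stats import norm

# Toy data with heteroscedastic noise (the variance grows with |x|).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.2 * np.abs(X[:, 0]))

# Stage 1: conditional mean. Stage 2: conditional (log-)variance from residuals.
mean_model = xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)
log_sq_resid = np.log((y - mean_model.predict(X)) ** 2 + 1e-9)
var_model = xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(X, log_sq_resid)

# A 90% Gaussian prediction interval instead of a bare point forecast.
mu = mean_model.predict(X)
sigma = np.sqrt(np.exp(var_model.predict(X)))
lower, upper = norm.ppf(0.05, mu, sigma), norm.ppf(0.95, mu, sigma)
print(f"in-sample coverage: {np.mean((y >= lower) & (y <= upper)):.2f}")
```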
How to use Azure Machine Learning service to manage the lifecycle of your models. Azure Machine Learning uses a Machine Learning Operations (MLOps) approach, which improves the quality and consistency of your machine learning solutions.
The structure of a Machine Learning code base can have a large impact on effective collaboration and time to production.
In this talk I will present our solution developed for the FutureOps Matching Automation project and talk about lessons learned and best practices.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GridMate - End to end testing is a critical piece to ensure quality and avoid... (ThomasParaiso2)
End-to-end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
AI for cultural history
1. Guenther Goerz, Chiara Seidl, Martin Thiering
Bibliotheca Hertziana – Max Planck Institute for the History of Art, Rome
FAU Erlangen-Nuremberg, Department of Computer Science, Digital Humanities
Technical University Berlin, Department of Linguistics
Linked Biondo: Modelling Geographical Features in Renaissance Texts and Maps
3. Research Question and Methodology
• Cognition of geographical space in history: how would you start?
– Toponyms and definite place descriptions
• Text annotation: Named Entity Recognition, geographic verification
– Spatial relations: topology, orientation, distance, …
• Cognitive-linguistic annotation: constructions figure – spatial_indicator – ground
– Comparisons with contemporary maps
• Cognitive maps (Common Sense Geography)
– Spatial objects and relations
– Epistemological modelling
4. Problem statement
• Common-sense conceptualizations of geographic concepts and relations in ancient and early modern texts and maps
– Analytic methods of cognitive computational linguistics: corpus construction, annotation, and parsing
– Formal two-level representation:
• (cognitive) linguistic
• conceptual – general semantics (onto-logical)
– Linguistic and historical interpretation and evaluation
– Long-term goal: synthetic reconstruction of cognitive maps/sketches
6. Toponyms and spatial relations
• Text annotation (semi-automatic)
– Recogito 2 annotation tool with geographic verification (gazetteers, e.g. Pleiades; see the lookup sketch below)
– Spatial Role Labeling (manually: brat)
• Machine learning? … corpus problem
– Narrative order: virtual trips
• Map annotation
– Recogito 2 (incl. visualization)
– Cartometric investigations (Guckelsberger)
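A minimal sketch of the geographic-verification step against the Pleiades gazetteer. The place ID (423025 = Roma) and the JSON field names reflect the public Pleiades service as I understand it and should be checked against the live API:

```python
# Fetch a Pleiades place record and extract its representative point and names.
import requests

def pleiades_place(place_id: str) -> dict:
    url = f"https://pleiades.stoa.org/places/{place_id}/json"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()

record = pleiades_place("423025")           # Roma
lon, lat = record["reprPoint"]              # representative point, [lon, lat]
names = [n.get("romanized") for n in record.get("names", [])]
print(record["title"], (lat, lon), names[:5])
```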
7. Investigations on Renaissance maps
• Biondo's use of maps? ("Petrarca", Ptolemy)
• Which maps to choose?
– Six large maps of Italy (15th c.; Milanesi 2007/08); Paulinus Minorita (14th c.); Tabula Peutingeriana
– Ptolemaic maps (ca. 30 traditional and "novae" – mostly after 1450!)
– Sea charts (portolans) before 1450 (max. 10)
11. DARE map view of Ptolemaeus L23n
12. Generated research data
• Recogito data export
– CSV (tables), GeoJSON, RDF/OA, JSON-LD, …
– Case study: Ptolemaic tabulae novae & text
• Spatial Role Labeling according to a predefined cognitive-linguistic taxonomy
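A sketch of post-processing such an export: converting a Recogito-style CSV of verified place annotations into GeoJSON. The column names are assumptions about the concrete export format and should be adjusted to the actual header row:

```python
# Turn a Recogito-style CSV export (hypothetical file name and columns)
# into a GeoJSON FeatureCollection of verified place annotations.
import csv
import json

def recogito_csv_to_geojson(path: str) -> dict:
    features = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if not row.get("LAT") or not row.get("LNG"):
                continue  # skip annotations without a verified location
            features.append({
                "type": "Feature",
                "geometry": {"type": "Point",
                             "coordinates": [float(row["LNG"]), float(row["LAT"])]},
                "properties": {"uuid": row.get("UUID"),
                               "transcription": row.get("QUOTE_TRANSCRIPTION"),
                               "gazetteer_uri": row.get("URI"),
                               "verification": row.get("VERIFICATION_STATUS")},
            })
    return {"type": "FeatureCollection", "features": features}

print(json.dumps(recogito_csv_to_geojson("biondo_annotations.csv"), indent=2))
```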
14. Cognitive linguistics and Gestalt theory
• Mental models are based on universal cognitive mechanisms and Gestalt-theoretical principles (!)
– cf. "primary theory" (Smith/Mark 2001)
• Particularly relevant: the spatial division of figure and background in identifying and locating objects
• Spatial relations are represented through grammatical markers and semantic fields
• From a representational viewpoint: mental models store information on events and objects of the external world, especially for orientation in space and references to places, and for topological and geometrical knowledge
15. Spatial Construals and their Spatial Parameters
• Gestalt principles of figure–ground (TRAJECTOR–LANDMARK) asymmetries; trajectory/path of TR and LM
• OBJECT CLASSIFICATIONS, mental rotations, 2.5-D/3-D sketch, geometrical dimensions
• FRAMES OF REFERENCE (relative; intrinsic; absolute)
• TOPONYMS (place/city names, buildings, bridges, churches, fountains, walls, streets, squares, rivers, hills, gates, memorials, temples, sites, regions, etc.)
16. Spatial Construals and their Spatial Parameters (cont.)
• LANDMARKS
• DISTANCES (scale, scope, size), encoded in adjectives, adverbs, and verbs, but mostly in adpositions and case systems
• METRICAL SYSTEMS (verbal systems such as posture verbs, classificatory verbs, and case systems)
• PERSPECTIVE (bird's-eye, hodological, vectorial perspective)
• ELEMENTS OF COMMON-SENSE KNOWLEDGE (traveller reports, myths, etc.)
17. Spatial Construals and their Spatial Parameters (cont.)
• MOTION EVENT: SOURCE = point of departure of TR
• PATH/TRAJECTORY = movement of TR from SOURCE[TR(X)] to GOAL[LM(Y)]
• GOAL = goal of TR's movement to LM(Y); often a container such as a room, city, town, church, etc.
• DISTANCE = proximate [PROX], medial [MED], or distal [DIST] between TR and LM
• PROFILE = TRAJECTOR's specification of LANDMARK
• Conceptualization of spatial structure: static concepts include REGION and LOCATION; dynamic concepts include PATH and PLACEMENT of TR
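The spatial parameters of slides 15-17 can be gathered into a small data structure. The following sketch is purely illustrative, with field names of my own choosing:

```python
# Illustrative encoding of a figure-ground (trajector-landmark) construal
# with frame of reference, distance category, and optional motion-event roles.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FrameOfReference(Enum):
    RELATIVE = "relative"
    INTRINSIC = "intrinsic"
    ABSOLUTE = "absolute"

class Distance(Enum):
    PROX = "proximate"
    MED = "medial"
    DIST = "distal"

@dataclass
class SpatialConstrual:
    figure: str                      # trajector (TR)
    spatial_indicator: str           # adposition, case marker, verb, ...
    ground: str                      # landmark (LM)
    frame: FrameOfReference
    distance: Optional[Distance] = None
    source: Optional[str] = None     # motion event: point of departure of TR
    goal: Optional[str] = None       # motion event: goal of TR's movement

# e.g. "the bridge across the Tiber", construed in a relative frame:
ex = SpatialConstrual("bridge", "across", "Tiber",
                      FrameOfReference.RELATIVE, Distance.PROX)
print(ex)
```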
18. Spatial Role Labeling in a Cognitive Linguistic Framework
• Given: SpatialML annotation scheme
• Definition of a brat "configuration" (taxonomy)
• Annotation sentence by sentence with brat, manually
• Parallel text: transfer to Latin
• XML/RDF export
– to be combined with dependency relations
– information integration with NER results
• Evaluation and interpretation
– Evaluation of the use of prototypical functions with lemmata (Latin/English), in order of frequency
– Especially: landmarks, toponyms, frames of reference, perspectives
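Since brat stores annotations in its standoff .ann format, the export step can start from a parser like the sketch below. The standoff line layout (T entity lines, R relation lines) is brat's real format; the entity and relation type names stand in for the project's cognitive-linguistic taxonomy and are assumptions:

```python
# Parse brat standoff annotations into entities and relations, as a first
# step towards [figure, spatial_indicator, ground] triples.
def parse_brat(ann_text: str):
    entities, relations = {}, []
    for line in ann_text.splitlines():
        if line.startswith("T"):      # entity: "T1<TAB>Figure 0 4<TAB>Roma"
            tid, type_span, surface = line.split("\t")
            entities[tid] = (type_span.split()[0], surface)
        elif line.startswith("R"):    # relation: "R1<TAB>HasFigure Arg1:T2 Arg2:T1"
            _, body = line.split("\t")
            rtype, arg1, arg2 = body.split()
            relations.append((rtype, arg1.split(":")[1], arg2.split(":")[1]))
    return entities, relations

ann = ("T1\tFigure 0 4\tRoma\n"
       "T2\tSpatialIndicator 5 10\tiuxta\n"
       "T3\tGround 11 18\tTiberim\n"
       "R1\tHasFigure Arg1:T2 Arg2:T1\n"
       "R2\tHasGround Arg1:T2 Arg2:T3")
entities, relations = parse_brat(ann)
print(entities, relations)
```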
22. Machine Learning for Spatial Role Labeling?
• Kordjamshidi et al. (ca. 2010 ff.), KU Leuven
– Hybrid approach (with kLog rules; cf. "explainable AI")
– CoNLL competition, training corpus (campus)
– New hybrid framework since 2015 (Univ. of Illinois)
• Problem of training data
– Only available for English, and a different text type
– Latin data from manual labeling, very small
• Experiment with Sarah Schulz (U. Stuttgart, 2018)
– Software (above) incomplete: missing modules
– Therefore using open software: mod:nlpnet (based on NLTK, numpy)
23. Machine Learning for Spatial Role Labeling? (cont.)
• Experiment with Sarah Schulz: mod:nlpnet
– Multilayer perceptron for POS tagging
– Convolutional NN for SRL tagging
• adapted to SpRL, concretely figure – sp_ind – ground
– Results for English with a problematic training corpus (anything but representative):
• find identifier: accuracy 0.96
• find figure and ground: precision 0.54, recall 0.25
• recall for figure: 0.19, for ground: 0.30
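For reference, span-level precision and recall of the kind reported above can be computed as follows (the gold and predicted spans here are invented):

```python
# Span-level precision/recall/F1 against gold role annotations.
def prf(gold: set, predicted: set):
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Spans as (start, end, role) triples; values purely illustrative.
gold = {(0, 4, "figure"), (5, 10, "sp_ind"), (11, 18, "ground")}
pred = {(0, 4, "figure"), (5, 10, "sp_ind"), (20, 25, "ground")}
print("P=%.2f R=%.2f F1=%.2f" % prf(gold, pred))
```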
26. Semantic enhancement of data
• What is the meaning of annotations?
• Semantics of annotation components defined in terms of a formal ontology
• Formal ontology: knowledge modelling
– Two methodological levels
• Reference ontology CIDOC CRM (ISO 21127) with extension CRMgeo
– CRM is event-based; linguistic-pragmatic approach
– Refining of domain object descriptions with technical terms ("types") from thesauri ("Pleiades vocabulary")
– Use of authority files
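A hedged sketch of such a semantic enhancement with rdflib: an annotated place becomes an E53 Place identified by an appellation and linked to its gazetteer record. The ECRM namespace URI, the base URI, and the property choices (P1_is_identified_by, P3_has_note) follow common CRM practice but should be verified against the ontology version actually used (ECRM/CRMgeo):

```python
# Map one verified place annotation to CIDOC CRM-style RDF.
from rdflib import Graph, Literal, Namespace, OWL, RDF, URIRef

ECRM = Namespace("http://erlangen-crm.org/current/")       # assumed namespace
BASE = Namespace("http://example.org/linkedbiondo/")        # illustrative base URI

g = Graph()
g.bind("ecrm", ECRM)

place = BASE["place/roma"]
appellation = BASE["appellation/roma"]
g.add((place, RDF.type, ECRM.E53_Place))
g.add((appellation, RDF.type, ECRM.E41_Appellation))
g.add((appellation, ECRM.P3_has_note, Literal("Roma")))
g.add((place, ECRM.P1_is_identified_by, appellation))
# Link to the gazetteer record used for geographic verification:
g.add((place, OWL.sameAs, URIRef("https://pleiades.stoa.org/places/423025")))

print(g.serialize(format="turtle"))
```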
28. CRM + CRMgeo in OWL-DL (Protégé) [Protégé screenshot]
29. CRM + CRMgeo in OWL-DL (Protégé) [Protégé screenshot]
30. hmap + CRM + CRMgeo
• hmap: domain ontology for historical maps
– Map metadata: ID, cartographer, creator, title, place, time, size, material, technique, projection, scale, orientation, reference system, …
– Image (reproduction) metadata: map, URL, dimension, rights, …
– Content (annotated places and connections): UUID, transcription, anchor, type, URI (gazetteer), label, lat, lng, place type, verification status, …
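An illustrative hmap map-metadata record using the fields above (all values invented for a hypothetical 15th-century map):

```python
# Hypothetical hmap map-metadata record; field names follow slide 30.
map_record = {
    "id": "hmap:italia-15c-001",
    "cartographer": "unknown",
    "creator": "unknown",
    "title": "Italia",
    "place": "Rome",
    "time": "ca. 1450",
    "material": "parchment",
    "technique": "manuscript",
    "projection": None,          # often unknown for pre-modern maps
    "scale": None,
    "orientation": "east-up",
    "reference_system": None,
}
```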
31. In search of a semantic platform
• How can we perform the semantic enhancement of annotation data?
• How can we publish it as Linked Open Data?
• Transformation with the VRE WissKI, used as a Linked Open Data platform
• SPARQL query interface, RDF export, …
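The published data could then be queried along these lines. The sketch runs SPARQL locally with rdflib, against a hypothetical RDF export file and the same illustrative CRM vocabulary as in the sketch above:

```python
# Query all annotated places and their appellations from an RDF export.
from rdflib import Graph

g = Graph()
g.parse("linked_biondo_export.ttl", format="turtle")  # hypothetical export

q = """
PREFIX ecrm: <http://erlangen-crm.org/current/>
SELECT ?place ?name WHERE {
  ?place a ecrm:E53_Place ;
         ecrm:P1_is_identified_by ?app .
  ?app ecrm:P3_has_note ?name .
}
"""
for row in g.query(q):
    print(row.place, row.name)
```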
32. What is WissKI? ("Scholarly Communication Infrastructure")
• Extension of the CMS Drupal, customizable
• Web-based multi-user system
• Open source & open standards (Semantic Web)
• Object-based documentation, multiple media types
• Ontology-based representation (ECRM/OWL), extensible by application ontologies & controlled vocabularies
35. WissKI
• Create, Navigate, Find: the main modes
• Create: data input
– Form-based or text-based
– (Automatic) linking
– Data enrichment from external sources
– Various import and export formats
• Pathbuilder: defines the semantics of fields
37. Linked Open Data (… SPARQL endpoint)
[LOD cloud diagram, with DNB GND, Geonames, and the British Museum highlighted. Linking Open Data cloud diagram 2017, by Andrejs Abele, John P. McCrae, Paul Buitelaar, Anja Jentzsch and Richard Cyganiak. http://lod-cloud.net/]
38. Next steps
• "In the last analysis all maps are cognitive maps" (Blakemore & Harley)
• Abstract conceptual knowledge represented by object schemata – propositional and "depictional"
• Translation of descriptions of spatial objects and their spatial relations
– Triples [figure, spatial_indicator, ground]
– Constructing a spatial property graph (see the sketch below)
– Generating sketches of cognitive maps to represent and process reifications of cognitive objects on an epistemological level, i.e. frame of reference, topology, direction, trajectory, distance, and shape
• Spatial reasoning?
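A sketch of the property-graph construction from such triples with networkx; the triples and the edge attributes are invented for illustration:

```python
# Build a spatial property graph from [figure, spatial_indicator, ground]
# triples, as a precursor to generated cognitive-map sketches.
import networkx as nx

triples = [
    ("Roma", "iuxta", "Tiberim"),
    ("Tiberim", "inter", "Etruriam"),
]

G = nx.MultiDiGraph()
for figure, indicator, ground in triples:
    # Edge direction encodes the figure-ground asymmetry; the spatial
    # indicator and derived parameters become edge properties.
    G.add_edge(figure, ground, relation=indicator, frame="relative")

for u, v, data in G.edges(data=True):
    print(f"{u} -[{data['relation']}]-> {v}")
```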