The document proposes building a Quechua Knowledge Graph called QUIPU to increase access to information and technology in the Quechua language. It discusses extracting knowledge from sources like Wikidata and Wiktionary to build the graph, hosting it using GraphDB, and curating the knowledge. The goal is to pilot conversational interfaces like chatbots and voice assistants that can retrieve information from QUIPU in Quechua to promote inclusion of indigenous communities.
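As a rough illustration of what a QUIPU-backed assistant would need, the sketch below keeps a handful of (subject, predicate, object) triples in memory and looks up a Quechua label. The predicate names and the sample facts are placeholders of our own, not data from the actual QUIPU graph.

```python
# A minimal sketch of the QUIPU idea: a tiny in-memory triple store plus the
# lookup step a Quechua chatbot could call. Predicate names are invented;
# Q419 is Peru's real Wikidata ID, whose Quechua label is "Piruw".

triples = [
    ("Q419", "label_qu", "Piruw"),        # Peru's Quechua label
    ("Q419", "instance_of", "country"),
]

def label_qu(entity_id):
    """Return the Quechua label for an entity, if the graph has one."""
    for s, p, o in triples:
        if s == entity_id and p == "label_qu":
            return o
    return None

print(label_qu("Q419"))  # -> Piruw
```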
Presentation "Applying Linked Open Data to a digital library: best practices and lessons learnt" by Gustavo Candela, Fundación Biblioteca Virtual Miguel de Cervantes, at the IMPACT Annual Members' Meeting 2017. data.cervantesvirtual.com
VALA Tech Camp 2017: Intro to Wikidata & SPARQL, by Jane Frazier
A hands-on introduction to interrogating Wikidata content using SPARQL, the query language for data represented in RDF, SKOS, OWL, and other Semantic Web standards.
Presented with Peter Neish, Research Data Specialist at the University of Melbourne.
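A query of the kind such a session introduces might look like the following, held here as a Python string (actually submitting it to the Wikidata endpoint is omitted). The query itself is only an illustrative example, though P31/Q515/P17/Q408 are real Wikidata identifiers.

```python
# An illustrative Wikidata SPARQL query: cities in Australia with their
# English labels. P31 = instance of, Q515 = city, P17 = country,
# Q408 = Australia. Sending it to https://query.wikidata.org is left out.

query = """
SELECT ?city ?cityLabel WHERE {
  ?city wdt:P31 wd:Q515 .
  ?city wdt:P17 wd:Q408 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

print("wdt:P31 wd:Q515" in query)  # -> True
```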
Scio - Moving to Google Cloud, A Spotify Story, by Neville Li
Talk at Philly ETE Apr 28 2017
We will talk about Spotify’s story of migrating our big data infrastructure to Google Cloud. Over the past year or so we moved away from maintaining our own 2500+ node Hadoop cluster to managed services in the cloud. We replaced two key components in our data processing stack, Hive and Scalding, with BigQuery and Scio, and are able to iterate at a much faster speed. We will focus on the technical aspects of Scio, a Scala API for Apache Beam and Google Cloud Dataflow, and how it changed the way we process data.
The Metadata Provenance Task Group aims to define a data model that allows for making assertions about description sets. Creating a shared model of the data elements required to describe an aggregation of metadata statements allows us to collectively import, access, use and publish facts about the quality, rights, timeliness, data source type, trust situation, etc. of the described statements. In this paper we describe the preliminary model created by the task group, together with first examples that demonstrate how the model is to be used.
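The abstract's notion of asserting facts about a whole set of statements can be sketched roughly as follows; the field names are ours for illustration, not the task group's actual model.

```python
# A rough sketch of the idea: a "description set" of metadata statements
# gets its own identifier, so provenance facts (source, date, trust) can be
# asserted about the set as a whole rather than about each statement.
# All names and values here are invented for illustration.

description_set = {
    "id": "set:001",
    "statements": [
        ("book:123", "dc:title", "Don Quijote"),
        ("book:123", "dc:creator", "Miguel de Cervantes"),
    ],
    # provenance assertions about the set, not about the book
    "provenance": {
        "source": "library-catalogue-A",
        "created": "2011-05-01",
        "trust": "curated",
    },
}

print(description_set["provenance"]["source"])  # -> library-catalogue-A
```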
Sparkling Water Webinar October 29th, 2014, by Sri Ambati
Sparkling Water is the newest application on the Apache Spark in-memory platform to extend Machine Learning for better predictions and to quickly deploy models into production. H2O is proud to partner with Cloudera and Databricks to bring this capability to a wide audience.
H2O is for data scientists and business analysts who need scalable and fast machine learning. H2O is an open source predictive analytics platform. Unlike traditional analytics tools, H2O provides a combination of extraordinary math and high performance parallel processing with unrivaled ease of use. H2O speaks the language of data science with support for R, Python, Scala, Java and a robust REST API. Smart business applications are powered by H2O’s NanoFast™ Scoring Engine. Learn more by going to http://www.h2o.ai and contact us for more information.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Apache Spark Toronto Meetup, July 27, 2016.
Wattpad talks about their experiences with Apache Spark: starting in 2014 with Shark, building distributed recommendation algorithms using ANN, and improving search results using a sessionized query log. We also talk about some of the issues we faced building our analytics pipeline, including getting Spark to work with Luigi, an open source project by Spotify.
Babar: Knowledge Recognition, Extraction and Representation, by Pierre de Lacaze
Babar is a research project in the field of Artificial Intelligence. It aims to bridge Neural AI and Symbolic AI. As such, it is implemented in three different programming languages: Clojure, Python and CLOS.
The Clojure component (Clobar) implements the graphical user interface to Babar. Examples of the Clojure Hiccup library and of interfacing Clojure to JavaScript will be presented. The Python module (Pybar) implements the web crawling and scraping and the Neural Network aspects of Babar. The Word Embedding and LSTM (Long Short-Term Memory) components of Pybar will be described in detail. Finally, the Common Lisp module (Lispbar) implements the Symbolic AI aspect of Babar. The latter includes an English Language Parser and Semantic Networks implemented as an in-memory Hypergraph.
We will present each of these components and target individual aspects with code examples. Specifically, we will first present the web development and Neural Network components. Then the English Language Parser will be examined in detail. We will also present the knowledge extraction aspect and bridge this with the Neural Network component.
Ultimately we will argue that what can be termed "Neural AI" and "Symbolic AI" are not at odds with each other but rather complement each other. In summary, Artificial Intelligence is not a question of "brain" or "mind", but rather a question of "brain" and "mind".
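The in-memory hypergraph mentioned for Lispbar can be sketched, in Python rather than Common Lisp and with names of our own choosing, as a store of n-ary relations:

```python
# A minimal in-memory hypergraph for a semantic network: a hyperedge links
# any number of nodes under one relation, so 3-ary facts like
# gives(Mary, book, John) fit naturally. Names are illustrative only.

from collections import defaultdict

class Hypergraph:
    def __init__(self):
        self.edges = []                    # (relation, nodes) pairs
        self.by_node = defaultdict(list)   # node -> indices into edges

    def add_edge(self, relation, *nodes):
        self.edges.append((relation, nodes))
        for n in nodes:
            self.by_node[n].append(len(self.edges) - 1)

    def relations_of(self, node):
        return [self.edges[i][0] for i in self.by_node[node]]

g = Hypergraph()
g.add_edge("gives", "Mary", "book", "John")   # a 3-ary relation
g.add_edge("likes", "John", "books")
print(g.relations_of("John"))  # -> ['gives', 'likes']
```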
The nature.com ontologies portal: nature.com/ontologies, by Tony Hammond
Presentation by Tony Hammond and Michele Pasin to Linked Science workshop, co-located with International Semantic Web Conference (ISWC) 2015, on October 12, 2015
Hacktoberfest 2020 'Intro to Knowledge Graph' with Chris Woodward of ArangoDB and reKnowledge. Accompanying video is available here: https://youtu.be/ZZt6xBmltz4
As of Drupal 7, RDFa markup ships in core. In this session I will:
-explain what the implications are of this and why this matters
-give a short introduction to the Semantic web, RDF, RDFa and SPARQL in human language
-give a short overview of the RDF modules that are available in contrib
-talk about some of the potential use cases of all these magical technologies
EuropeanaTech x AI: Qurator.ai @ Berlin State Library, by cneudecker
The EuropeanaTech Community and Europeana Foundation are delighted to introduce a new webinar series to explore the opportunities and challenges of working with Artificial Intelligence in the cultural heritage and arts sector.
Out of the box, Accumulo's strengths are difficult to appreciate without first building an application that showcases its capabilities to handle massive amounts of data. Unfortunately, building such an application is non-trivial for many would-be users, which affects Accumulo's adoption.
In this talk, we introduce Datawave, a complete ingest, query, and analytic framework for Accumulo. Datawave, recently open-sourced by the National Security Agency, capitalizes on Accumulo's capabilities, provides an API for working with structured and unstructured data, and boasts a robust, flexible, and scalable backend.
We'll do a deep dive into Datawave's project layout, table structures, and APIs in addition to demonstrating the Datawave quickstart—a tool that makes it incredibly easy to hit the ground running with Accumulo and Datawave without having to develop a complete application.
Solving New School with the Old School (Clojure), by C4Media
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2Jq8Jb2.
Jearvon Dharrie discusses Clojure, a language that's taking some older ideas and solving 21st-century problems. Topics that are discussed: Clojure's answer to types, clojure.spec, the ability to write and reason about parallelism and concurrency with core.async, and more. Filmed at qconnewyork.com.
Jearvon Dharrie is a software engineer at Comcast. He spends his day working with JavaScript, Ruby, and Python. In his free time he enjoys toying with programming languages. He is currently interested in Clojure and ClojureScript.
Silicon Valley Cloud Computing Meetup
Mountain View, 2010-07-19
Examples of Hadoop Streaming, based on Python scripts running on the AWS Elastic MapReduce service, which show text mining on the "Enron Email Dataset" from Infochimps.com plus data visualization using R and Gephi
Source at: http://github.com/ceteri/ceteri-mapred
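In the same spirit as those examples, a minimal Hadoop Streaming pair (word count) might look like this. It is written as plain functions so it can run without a cluster, with Hadoop's shuffle replaced by a `sorted` call; in a real job, each would be a script passed via `-mapper` and `-reducer`.

```python
# A minimal Hadoop Streaming pair: the mapper emits "word\t1" per token,
# Hadoop sorts by key, and the reducer sums counts per word. Written as
# generators here so the pipeline can be exercised locally.

def mapper(lines):
    for line in lines:
        for word in line.lower().split():
            yield f"{word}\t1"

def reducer(pairs):            # pairs arrive sorted by key, as Hadoop delivers
    current, count = None, 0
    for pair in pairs:
        word, n = pair.split("\t")
        if word != current:
            if current is not None:
                yield f"{current}\t{count}"
            current, count = word, 0
        count += int(n)
    if current is not None:
        yield f"{current}\t{count}"

mapped = sorted(mapper(["enron mail enron"]))   # stand-in for Hadoop's sort
print(list(reducer(mapped)))  # -> ['enron\t2', 'mail\t1']
```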
Towards Knowledge Graphs Validation through Weighted Knowledge Sources, by Elwin Huaman
Knowledge graphs (KGs) have shown to be an important asset for large companies, provided they offer correct and reliable knowledge. A critical task to this end is knowledge validation, which measures whether statements from KGs are semantically correct and correspond to the so-called "real" world. We propose a Knowledge Graph Validation Framework.
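One hedged reading of "weighted knowledge sources" is a weighted vote over external sources, sketched below; the weights and the 0.5 threshold are our own choices, not the paper's.

```python
# A toy validation step: each external source either confirms or rejects a
# KG statement, and a weighted vote decides whether it counts as valid.
# Weights, threshold, and the sample statement are all invented.

def validate(statement, sources, threshold=0.5):
    """sources: list of (weight, confirms) pairs; returns (score, is_valid)."""
    total = sum(w for w, _ in sources)
    support = sum(w for w, confirms in sources if confirms)
    score = support / total if total else 0.0
    return score, score >= threshold

score, valid = validate(
    ("Cusco", "country", "Peru"),
    [(0.9, True), (0.6, True), (0.2, False)],  # three weighted sources
)
print(round(score, 2), valid)  # -> 0.88 True
```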
Knowledge Graph Curation: A Practical Framework, by Elwin Huaman
Knowledge Graphs (KGs) are very important for applications such as personal assistants, question-answering systems, and search engines. However, KGs inevitably contain wrong assertions, duplicates, or missing values, i.e., low-quality KGs produce low-quality applications that are built on top of them. Therefore, we propose a KG Curation Framework, which involves the assessment, cleaning, and enrichment of KGs.
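A toy pass over the three steps the abstract names (assessment, cleaning, enrichment) might look like this; the triples and the enrichment lookup are invented for illustration.

```python
# Toy KG curation: assess (count duplicates and missing values), clean
# (drop them), enrich (fill gaps from an external source). All data below,
# including the population figure, is hypothetical.

triples = [
    ("Cusco", "country", "Peru"),
    ("Cusco", "country", "Peru"),      # duplicate assertion
    ("Cusco", "population", None),     # missing value
]

# assessment: count the problems
duplicates = len(triples) - len(set(triples))
missing = sum(1 for t in triples if t[2] is None)

# cleaning: drop duplicates (order-preserving) and empty assertions
cleaned = [t for t in dict.fromkeys(triples) if t[2] is not None]

# enrichment: fill gaps from an external source (hypothetical value)
external = {("Cusco", "population"): 428450}
enriched = cleaned + [(s, p, v) for (s, p), v in external.items()]

print(duplicates, missing, len(enriched))  # -> 1 1 2
```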
Similar to Quipu: Quechua Knowledge Graph [Pilot: Building virtual assistants based on Quechua Language]
Hacia la Publicación Digital en Idioma Quechua - Towards Publishing in Quechu..., by Elwin Huaman
Hacia la Publicación Digital en Idioma Quechua
Towards Publishing in Quechua Language
"An introduction to digital publishing"
Outline
● Motivation (Why? How? What?)
● Wikipedia (the pillars of Wikipedia: neutrality, …; what Wikipedia is and is not)
● Wikimedia Commons (publication rules, e.g. plagiarism; media, e.g. photos)
● Wikidata (structuring knowledge; linking wiki pages)
Nowadays knowledge is an important asset in every company, and it is continuously collected and maintained to serve various purposes. During this presentation we will see how knowledge is created, transformed, and ends up being used by Google, Alexa, and Siri. This presentation also shows the importance of an ecology of knowledge, and how this knowledge affects our identity and the way we live.
Kipu (Knowledge that Inspires People like U) - Sustainable Travel, by Elwin Huaman
Kipu is a knowledge engine (knowledge graph), chatbot (Kipu), web search engine (Smart Travel), and app (KipuLab) that stores, assesses, curates, and represents sustainable travel destinations.
“Knowledge Graphs are very large semantic nets that integrate various and heterogeneous information sources to represent knowledge about certain domains of discourse” [Fensel et al., 2020]
Kipu: Knowledge that Inspires People like you
Dare to Change the World
Imagine if we could speed up time in the fight against climate change
LINKED DATA AND PUBLIC DATA TO IMPROVE TOURIST INFORMATION SERVICES, by Elwin Huaman
Abstract: Peru is one of the developing countries of Latin America and has not improved the availability or quality of public tourist data (e.g. hotels, tourist places, restaurants, shops) [1]. These data are generated and held by public sector agencies such as ministries and municipalities.
Currently, the Peruvian government is promoting the publication of open data and has a National Open Government Data Strategy for 2021, which is enabling geolocation applications that locate the museums of Lima, identify the most problematic districts, and support data-based decision making [2]. However, Peru offers information about tourist places on its website only in an unstructured format, which means the information has no pre-defined data model and is not linked to other data; nor is it included in the national open data repository. The main aim of this study is to enhance the tourist information service by means of: i) analysis of the public data; ii) modeling a linked data format; and iii) publication of the tourist data.
To achieve this goal, we analyzed the properties of the public tourist data and modeled a format using standard vocabularies and ontologies to link the information with external data. We generated RDF files using a linked data publishing methodology [3]. Finally, we launched a Fuseki server that allows SPARQL queries and data exploitation [4].
The results of this study can improve access for private and public organizations that use tourist information, supporting industrial competitiveness through the querying, sharing, reuse, distribution and exploitation of public data.
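Steps ii) and iii) can be sketched as emitting Turtle from one record using schema.org terms, standard library only; the hotel, its URI, and the simplified address modeling are ours for illustration, not the study's.

```python
# Emit a Turtle description of one tourist record with schema.org terms.
# Simplification: schema:addressLocality normally hangs off a
# schema:PostalAddress node rather than the Hotel itself.

record = {"name": "Hotel Plaza", "city": "Lima", "type": "schema:Hotel"}

def to_turtle(uri, rec):
    lines = [
        "@prefix schema: <http://schema.org/> .",
        f"<{uri}> a {rec['type']} ;",
        f'    schema:name "{rec["name"]}" ;',
        f'    schema:addressLocality "{rec["city"]}" .',
    ]
    return "\n".join(lines)

print(to_turtle("http://example.org/hotel/plaza", record))
```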
References
[1] A. Young and S. Verhulst, The Global Impact of Open Data: Key Findings from Detailed Case Studies Around the World. 2016.
[2] M. Castillo, J. Patiño, Organización de las Naciones Unidas, and Comisión Económica para América Latina y el Caribe, “Datos Abiertos y Ciudades Inteligentes en América Latina: Casos de Estudio,” p. 10, 2014.
[3] B. Villazón-Terrazas, L. M. Vilches-Blázquez, O. Corcho, and A. Gómez-Pérez, “Methodological Guidelines for Publishing Government Linked Data,” in Linking Government Data, 2011, pp. 27–49.
[4] T. Berners-Lee, “Linked Data for a Global Community,” 2010.
● Will reinforce their knowledge of citing and referencing
● Will get to know Mendeley: what Mendeley is, and what GRF (reference managers) are for
● Will create a profile account and use Mendeley
● Will be able to install Mendeley on their desktop
● Will get to know the working environment and run searches in the databases
● Will create and manage their own library
● Will be able to install a citation plugin in Microsoft Word
● Will be able to generate the bibliography automatically
● Will be able to share their own publications
● Will be able to participate in the scientific community
Introducción a DSpace (Introduction to DSpace) - Universidad Nacional del Altiplano, Puno, by Elwin Huaman
At the end of this presentation, participants:
Will know the history of DSpace
Will understand what DSpace is and what it can be used for
Will know about installing DSpace
Will know the features of DSpace and how it is organized
Will know basic concepts of Institutional Repositories (IR)
Will know the advantages of implementing an IR
This report presents statistics and a corresponding analysis of the most popular CMSs in the web portals category. It surveys the CMSs that are most popular today, outlines approximate trends for these CMSs during 2015, and analyzes the features that make them so popular in today's Internet market. Finally, it presents a conclusion as a personal opinion on the use of WordPress.
Comercio Internacional: La importancia del comercio electrónico en Perú (International Trade: The Importance of E-commerce in Peru), by Elwin Huaman
This work seeks to understand the importance of e-commerce in Peru, focusing not only on consumers but also on the business perspective. It addresses topics that help define the Peruvian consumer and how e-commerce can be generated in Peru, and describes beneficial viewpoints for practicing e-commerce in Peru.
Tutorial Web Services en PHP, REST, SOAP (Tutorial: Web Services in PHP, REST, SOAP), by Elwin Huaman
What is PHP?
What are Web Services?
❏ What is SOAP?
❏ SOAP libraries
❏ Creating a SOAP service
❏ Creating a SOAP client
❏ What is REST?
❏ REST libraries
❏ Creating a REST service
❏ Creating a REST client
Conclusion
Bibliography
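The "creating a REST service" step can be sketched in Python (rather than the tutorial's PHP) as a routing table mapping an HTTP method and path to a handler; all names below are invented for illustration.

```python
# A minimal REST-style dispatcher: routes pair an HTTP verb with a resource
# path, the core idea behind building a REST service. No real HTTP server
# is started; dispatch() stands in for the request-handling step.

import json

routes = {}

def route(method, path):
    def register(handler):
        routes[(method, path)] = handler
        return handler
    return register

@route("GET", "/users")
def list_users():
    return json.dumps(["ana", "jose"])

@route("POST", "/users")
def create_user():
    return json.dumps({"created": True})

def dispatch(method, path):
    handler = routes.get((method, path))
    return handler() if handler else '{"error": "404"}'

print(dispatch("GET", "/users"))  # -> ["ana", "jose"]
```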
Practicando análisis cibermétrico en redes de investigadores (Practicing cybermetric analysis on researcher networks), by Elwin Huaman
This work carries out a cybermetric analysis of a network of researchers, a network of journals, and a network of keywords; the analysis is practical and exploratory in nature. The methodology underpinning this work is cybermetrics, which yields measures of density, degree centrality, betweenness, and closeness for the actors, from which community analysis is derived. The results for the analyzed networks are represented graphically for better understanding.
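Two of the measures listed, density and degree centrality, computed on a toy co-authorship network (the nodes are invented):

```python
# Density and degree centrality on a small undirected network, plain Python.
# The edge list is an invented co-authorship network for illustration.

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
nodes = sorted({n for e in edges for n in e})

# density: edges present / edges possible in an undirected graph
n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))

# degree centrality: a node's degree over the maximum possible (n - 1)
degree = {v: sum(v in e for e in edges) for v in nodes}
centrality = {v: d / (n - 1) for v, d in degree.items()}

print(round(density, 3), centrality["C"])  # -> 0.667 1.0
```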
Evaluacion de Sistemas de Busqueda Google, Carrot2, Usal.es (Evaluation of the search systems Google, Carrot2, Usal.es), by Elwin Huaman
Evaluation of the search systems Google, Carrot2, and Usal.es.
Comparative averages showing precision at every 5 results for each search system.
The overall precision of each engine: Google 56%, Carrot2 53%, and Usal.es 26%.
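The measure behind these percentages is precision at k, the share of the first k results judged relevant; the relevance judgments below are invented.

```python
# Precision at k: the fraction of the top-k results judged relevant.
# The judgment list is a made-up example, not data from the study.

def precision_at_k(relevant_flags, k):
    top = relevant_flags[:k]
    return sum(top) / k

# judged results for one query on a hypothetical engine (1 = relevant)
judgments = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]

print(precision_at_k(judgments, 5))   # -> 0.6
print(precision_at_k(judgments, 10))  # -> 0.5
```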
Análisis del uso del paquete de la editorial Elsevier, ScienceDirect, en el a... (Analysis of the use of the Elsevier ScienceDirect package), by Elwin Huaman
This work analyzes the use of the Elsevier ScienceDirect package at the University of Salamanca and the University of León, covering January to December 2010. The methodology is quantitative, describing use through the number of downloads and the titles subscribed. We first analyzed how both universities used the package, then showed representativeness at the Top10 and Top25 ranking levels, then computed the ratio of journals per researcher, and finally present the most-used titles representing ScienceDirect within the universities.
Hack is a programming language for HHVM that interoperates seamlessly with PHP. Hack reconciles the fast development cycle of PHP with the discipline provided by static typing, while adding many features commonly found in other modern programming languages.
Pushing the limits of ePRTC: 100ns holdover for 100 days, by Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Securing your Kubernetes cluster: a step-by-step guide to success!, by KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024, by Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024, by Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Elevating Tactical DDD Patterns Through Object Calisthenics, by Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor..., by SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Climate Impact of Software Testing at Nordic Testing Days, by Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
2. What does Quipu mean?
Quipu [key-poo], or talking knots, is an ancient Incan knowledge base and "writing system", consisting of various types and colours of knots tied to a main cord, which represents both statistical information (crops grown, taxes, workers, mines, etc.) and narrative information (stories and histories).
@ringmar.net
5. “The global economy has been
transformed from a material-based
economy into a knowledge-based
economy. Whereas you can conquer oil
fields through war, you cannot acquire
knowledge that way. Hence today the
main source of wealth is knowledge.”
(Yuval Noah Harari)
6. Content
● Why QUIPU [The purpose]
● How we can build QUIPU [The process]
● What we achieved [The result]
7. Content
● Why QUIPU [The purpose]
● How we can build QUIPU [The process]
● What we achieved [The result]
15. Content
● Why QUIPU [The purpose]
● How we can build QUIPU [The process]
● What we achieved [The result]
16. Do we need the help of Machines?
Is information understandable by both humans and machines?
Example: "Machu Picchu was built in c. 1450"
"Machu Picchu" can refer to:
● Inca citadel: https://www.wikidata.org/entity/Q676203
● Town: https://www.wikidata.org/entity/Q397990
● Store: https://www.wikidata.org/entity/Q2886434
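The ambiguity above is what unique IRIs resolve: each sense of "Machu Picchu" has its own Wikidata identifier. A minimal Python sketch of picking the right IRI for a label; the candidate table is copied from the slide, and the lookup function is a hypothetical illustration (a real system would query a Wikidata search API):

```python
# Candidate IRIs for an ambiguous label, as listed on the slide.
CANDIDATES = {
    "Machu Picchu": [
        ("https://www.wikidata.org/entity/Q676203", "Inca citadel"),
        ("https://www.wikidata.org/entity/Q397990", "town"),
        ("https://www.wikidata.org/entity/Q2886434", "store"),
    ],
}

def disambiguate(label, wanted_type):
    """Pick the candidate IRI whose type description matches the hint."""
    for iri, description in CANDIDATES.get(label, []):
        if wanted_type.lower() in description.lower():
            return iri
    return None

# "Machu Picchu was built in c. 1450" refers to the citadel:
print(disambiguate("Machu Picchu", "citadel"))
```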
17. Do we need the help of Machines?
How do machines represent information?
Example: "Machu Picchu was built in c. 1450"
Simple statement: (Subject, Predicate, Object)
Subject: https://www.wikidata.org/entity/Q676203 (Machu Picchu)
Predicate: https://www.wikidata.org/prop/direct/P571 (built)
Object: "c. 1450"^^http://www.w3.org/1999/02/22-rdf-syntax-ns#langString (c. 1450)
18. Do we need the help of Machines?
How do machines represent information?
Example: "Machu Picchu was built in c. 1450"
Simple statement: (Subject, Predicate, Object)
Subject: wd:Q676203 (Machu Picchu)
Predicate: wdp:P571 (built)
Object: "c. 1450"^^rdf:langString (c. 1450)
Prefix declarations:
wd: <https://www.wikidata.org/entity/>
wdp: <https://www.wikidata.org/prop/direct/>
rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
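The prefixed names above are just abbreviations for full IRIs. A short sketch of expanding them, using exactly the prefix declarations given on the slide:

```python
# Prefix table from the slide.
PREFIXES = {
    "wd": "https://www.wikidata.org/entity/",
    "wdp": "https://www.wikidata.org/prop/direct/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
}

def expand(curie):
    """Turn a prefixed name like 'wd:Q676203' into its full IRI."""
    prefix, _, local = curie.partition(":")
    return PREFIXES[prefix] + local

# Expand the subject and predicate of the slide's example triple.
print(expand("wd:Q676203"))
print(expand("wdp:P571"))
```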
19. What is a Knowledge Graph?
Prefix declarations:
wd: <https://www.wikidata.org/entity/>
wdp: <https://www.wikidata.org/prop/direct/>
rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
xsd: <http://www.w3.org/2001/XMLSchema#>
[Diagram: a small knowledge graph around Machu Picchu]
● wd:Q676203 (Machu Picchu) -- built (wdp:P571) --> "c. 1450"^^rdf:langString
● wd:Q676203 -- location (wdp:P131) --> wd:Q5582862 (Cusco)
● wd:Q676203 -- image (wdp:P18) --> ...jpg
● wd:Q676203 -- culture (wdp:P2596) --> wd:Q28573 (Inca Empire)
● wd:Q28573 (Inca Empire) -- built (wdp:P571) --> "c. 1438"^^rdf:langString
● wd:Q28573 -- capital (wdp:P36) --> wd:Q5582862 (Cusco)
● wd:Q5582862 (Cusco) -- population (wdp:P1082) --> "428450"^^xsd:integer
20. What is the Quechua Knowledge Graph?
Prefix declarations:
wd: <https://www.wikidata.org/entity/>
wdp: <https://www.wikidata.org/prop/direct/>
rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
xsd: <http://www.w3.org/2001/XMLSchema#>
[Diagram: the same graph with Quechua labels]
● wd:Q676203 (Machu Pikchu) -- Hatarichiska [built] (wdp:P571) --> "c. 1450"^^rdf:langString
● wd:Q676203 -- suyu [location] (wdp:P131) --> wd:Q5582862 (Qusqu)
● wd:Q676203 -- wanki [image] (wdp:P18) --> ...jpg
● wd:Q676203 -- kawsay [culture] (wdp:P2596) --> wd:Q28573 (Tawantinsuyu)
● wd:Q28573 (Tawantinsuyu) -- Hatarichiska [built] (wdp:P571) --> "c. 1438"^^rdf:langString
● wd:Q28573 -- umalli_llaqta [capital] (wdp:P36) --> wd:Q5582862 (Qusqu)
● wd:Q5582862 (Qusqu) -- kawsaqkuna [population] (wdp:P1082) --> "428450"^^xsd:integer
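The English and Quechua graphs share the same Wikidata IRIs; only the human-readable labels differ. A minimal sketch of a per-language label store, with the Quechua labels taken from the slide (treated as illustrative data, not authoritative vocabulary):

```python
# Bilingual labels keyed by the shared Wikidata IRIs.
LABELS = {
    "wd:Q676203": {"en": "Machu Picchu", "qu": "Machu Pikchu"},
    "wd:Q5582862": {"en": "Cusco", "qu": "Qusqu"},
    "wd:Q28573": {"en": "Inca Empire", "qu": "Tawantinsuyu"},
    "wdp:P571": {"en": "built", "qu": "Hatarichiska"},
    "wdp:P36": {"en": "capital", "qu": "umalli_llaqta"},
    "wdp:P1082": {"en": "population", "qu": "kawsaqkuna"},
}

def label(iri, lang="qu"):
    """Label for an IRI in the requested language, falling back to English, then the IRI."""
    entry = LABELS.get(iri, {})
    return entry.get(lang) or entry.get("en") or iri

# Render one triple in both languages.
triple = ("wd:Q28573", "wdp:P36", "wd:Q5582862")
print(" ".join(label(t, "en") for t in triple))
print(" ".join(label(t, "qu") for t in triple))
```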
21. How to represent a Knowledge Graph?
RDF (Resource Description Framework) can represent knowledge graphs using syntaxes such as Turtle, N-Triples, and JSON-LD.
e.g. an RDF model using Turtle:
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix dbo: <http://dbpedia.org/ontology/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
dbr:Peru dbo:longName "Republic of Peru"^^xsd:string ;
    dbo:capital dbr:Lima ;
    dbo:currency dbr:Peruvian_sol ;
    dbo:demonym "Peruvian"^^xsd:string ;
    dbo:populationTotal "31151643"^^xsd:integer .
dbr:Lima dbo:populationTotal "8852000"^^xsd:integer ;
    dbo:country dbr:Peru .
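The same content can be written as N-Triples, where every term is a full IRI and each line is one triple. A stdlib-only sketch that expands the prefix table by hand (a real pipeline would use a library such as rdflib):

```python
# Prefix table matching the Turtle example.
PREFIXES = {
    "dbr": "http://dbpedia.org/resource/",
    "dbo": "http://dbpedia.org/ontology/",
    "xsd": "http://www.w3.org/2001/XMLSchema#",
}

def iri(curie):
    """Expand a prefixed name into an angle-bracketed full IRI."""
    prefix, _, local = curie.partition(":")
    return f"<{PREFIXES[prefix]}{local}>"

def literal(value, datatype):
    """Render a typed RDF literal."""
    return f'"{value}"^^{iri(datatype)}'

triples = [
    ("dbr:Peru", "dbo:capital", iri("dbr:Lima")),
    ("dbr:Peru", "dbo:populationTotal", literal(31151643, "xsd:integer")),
]

ntriples = "\n".join(f"{iri(s)} {iri(p)} {o} ." for s, p, o in triples)
print(ntriples)
```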
22. How to build a Knowledge Graph?
Talking Knowledge graphs: https://www.slideshare.net/STI-Innsbruck/talking-knowledge-graphs-ny
23. How to build a Knowledge Graph?
Requirements:
● a well-known "standard" ontology or vocabulary, e.g. the DBpedia Ontology
● homogeneous structures/models, e.g. every Place represented using similar properties
● correct and complete information, i.e. how accurate the knowledge is
@ontology2.com
24. Knowledge Creation
Methods:
● Manual: uses an annotation tool for a specific domain.
● Semi-automatic: uses intermediate tools for extracting information (e.g. crawlers) and for mapping it (e.g. an annotation editor).
● Mapping: maps different formats to a specific ontology and integrates large knowledge bases.
● Automatic: applies Natural Language Processing (NLP), Machine Learning (ML), and more.
25. Knowledge Creation
Sources for creating QUIPU:
● RDF exports from Wikidata
● Wikidata Toolkit
● Wikidata SPARQL endpoint (export/consume)
● Wikidata API sandbox (search entities)
● Quechua Wiktionary
● Quechua Wikipedia
● Wikipedia Extractor
● Quechua dictionary
● Microsoft Translator
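One of the sources above, the Wikidata SPARQL endpoint, can be queried directly for Quechua ("qu") labels. A sketch that only builds the query string, to keep the example offline; sending it (e.g. with urllib) is left out:

```python
# Public Wikidata SPARQL endpoint.
ENDPOINT = "https://query.wikidata.org/sparql"

def quechua_label_query(qid):
    """SPARQL query fetching the Quechua label of one Wikidata entity.
    The wd: and rdfs: prefixes are predefined by the Wikidata endpoint."""
    return f"""
SELECT ?label WHERE {{
  wd:{qid} rdfs:label ?label .
  FILTER(LANG(?label) = "qu")
}}
""".strip()

query = quechua_label_query("Q676203")  # Machu Picchu
print(query)
```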
27. Knowledge Hosting
Requirements:
● Annotation tool
e.g. a platform for creating and hosting annotations
● Document store for hosting semantic web annotations
e.g. MongoDB for hosting semantically annotated data based on JSON-LD
● Graph database for hosting the knowledge graph
e.g. GraphDB for hosting semantically annotated data based on RDF
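A semantic-web annotation in JSON-LD, the shape a document store such as MongoDB would hold under the requirements above; this particular document is an illustrative assumption, with @context keys mirroring the prefixes used earlier in the deck:

```python
import json

# A minimal JSON-LD annotation: Machu Picchu's "built" statement.
annotation = {
    "@context": {
        "wd": "https://www.wikidata.org/entity/",
        "wdp": "https://www.wikidata.org/prop/direct/",
    },
    "@id": "wd:Q676203",
    "wdp:P571": "c. 1450",
}

# Serialize as the JSON document the store would receive.
doc = json.dumps(annotation, indent=2)
print(doc)
```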
28. Knowledge Curation
Requirements:
● Assessment
i.e. assess the quality
● Cleaning
i.e. ensure the correctness
● Enrichment
i.e. ensure the completeness
[Diagram: the example graph with mixed English/Quechua edge labels (Hatarichiska/built, kawsay/culture, umalli_llaqta/capital, location) around Machu Pikchu, Qusqu and Tawantinsuyu, including a conflicting date literal (c. 1450 vs 1536) to be curated; legend: Entity, Literal, Relationship]
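The Cleaning and Enrichment steps can be sketched as grouping triples by (subject, predicate) and flagging properties that carry conflicting values, like the two "built" dates in the diagram; the triple data here is an illustrative assumption:

```python
from collections import defaultdict

# Example triples, including a conflicting "built" date to resolve.
triples = [
    ("wd:Q676203", "wdp:P571", "c. 1450"),
    ("wd:Q676203", "wdp:P571", "1536"),        # conflicting value
    ("wd:Q676203", "wdp:P131", "wd:Q5582862"),
]

def find_conflicts(triples):
    """Return (subject, predicate) pairs that have more than one value."""
    values = defaultdict(set)
    for s, p, o in triples:
        values[(s, p)].add(o)
    return {key: vals for key, vals in values.items() if len(vals) > 1}

conflicts = find_conflicts(triples)
print(conflicts)
```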
29. Knowledge Deployment
Requirements:
● Knowledge management technology
○ e.g. GraphDB
● Data accessibility
○ e.g. personalized agents
● Conversational interfaces
○ e.g. automating customer communication, chatbots
Conversational user interfaces (e.g. chatbots, voice assistants)
@amazon.com @google.com @slack.com @facebook.com @telegram.org
Talking Knowledge graphs: https://www.slideshare.net/STI-Innsbruck/talking-knowledge-graphs-ny
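A toy sketch of such a conversational lookup: match a question keyword to a graph property and read the answer from the triples. The keyword table and data are illustrative assumptions, not a real DialogFlow or Mycroft skill:

```python
# Graph facts from the earlier slides, keyed by (subject, property).
TRIPLES = {
    ("wd:Q28573", "wdp:P36"): "wd:Q5582862",   # Tawantinsuyu -> capital -> Qusqu
    ("wd:Q676203", "wdp:P571"): "c. 1450",     # Machu Pikchu -> built
}

# Hypothetical question keywords mapped to graph properties.
INTENTS = {
    "capital": "wdp:P36",
    "built": "wdp:P571",
}

def answer(entity, keyword):
    """Resolve a keyword to a property and look up the value for the entity."""
    prop = INTENTS.get(keyword)
    if prop is None:
        return None
    return TRIPLES.get((entity, prop))

print(answer("wd:Q28573", "capital"))
```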
30. Content
● Why QUIPU [The purpose]
● How we can build QUIPU [The process]
● What we achieved [The result]
32. Pilot: QUIPU (Quechua Knowledge Graph)
● Knowledge Hosting
e.g. use GraphDB to store the knowledge graph
● Knowledge Curation
○ Assessment: measure the quality using metrics
○ Cleaning: detect and correct errors
○ Enrichment: detect duplicates and resolve conflicting property values
33. Pilot: QUIPU (Quechua Knowledge Graph)
● Knowledge Deployment
○ e.g. personalized agents
○ e.g. DialogFlow
○ e.g. MycroftAI
■ skill-Wiki
■ *develop a Quechua speech recognition skill
● based on a Spanish voice assistant
■ *use the Quechua Wikipedia
■ *develop skill-Quechua-Wiki
36. Take away
● Facilitate sustainable development of cultural-heritage knowledge in developing countries by promoting technological support in a native language.
● Increase access to information and communication technology in people's native language and decrease digital illiteracy.
● Reduce inequalities by giving indigenous communities access to new technologies; this can ensure that decisions about new technologies (e.g. interfaces) also consider the Quechua language.
● Support quality education, e.g. ensure that children and elderly people can acquire knowledge and skills in their native language.