This was presented at ISWC 2013 in Sydney, Australia.
Abstract:
With thousands of RDF data sources available on the Web covering disparate and possibly overlapping knowledge domains, the problem of providing high-level descriptions (in the form of metadata) of their content becomes crucial. In this paper we introduce a theoretical framework for describing data sources in terms of their completeness. We show how existing data sources can be described with completeness statements expressed in RDF. We then focus on the problem of the completeness of query answering over plain and RDFS data sources augmented with completeness statements. Finally, we present an extension of the completeness framework for federated data sources.
The document discusses an iterative clustering technique for conceptual clustering. It begins by asking what clustering and concepts are, then describes an iterative approach to cluster extraction and navigation of a graph. The approach involves multiple instances of clustering, first for packages and then for subprograms. Issues addressed include cyclic dependencies and multiple allocation. Metrics for precision and recall are provided for the two instances.
This document discusses artificial intelligence for game playing. It introduces different types of games and optimal-strategy techniques such as minimax and alpha-beta pruning. It also discusses the challenges posed by games of imperfect information and games that include elements of chance, as well as techniques for heuristic evaluation and expected-value calculations when chance is involved.
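As an illustrative sketch (not code from the document) of the minimax-with-alpha-beta-pruning strategy the summary mentions, here is a minimal version over a game tree encoded as nested lists, where leaves are payoffs for the maximizing player:

```python
def alphabeta(state, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over nested-list game trees."""
    if not isinstance(state, list):      # leaf node: return its payoff
        return state
    if maximizing:
        best = float("-inf")
        for child in state:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:            # prune: opponent never allows this branch
                break
        return best
    else:
        best = float("inf")
        for child in state:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Two min-nodes under a max root: min(3,5)=3 and min(2,9)=2, so the root picks 3.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 3
```

Pruning kicks in on the second min-node: once it sees the leaf 2, no value from that branch can beat the 3 already guaranteed, so remaining leaves are skipped.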
This document discusses different informed search strategies for artificial intelligence problems. It begins by introducing best-first search and how it selects nodes for expansion based on an evaluation function. A* search is then described, which uses an admissible heuristic function to estimate costs. The document provides an example of running A* search on a problem involving traveling between cities in Romania. It evaluates A* search and discusses variants like iterative-deepening A* and recursive best-first search that aim to reduce its space complexity issues.
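A minimal A* sketch over a small fragment of the Romania road map described above; the road distances and straight-line-distance heuristic values follow the standard AIMA textbook example, and the code itself is an illustration rather than the document's own:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand nodes by f(n) = g(n) + h(n), h admissible."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):   # found a cheaper route
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Fragment of the Romania map (AIMA textbook distances).
graph = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
    "Timisoara": [], "Zerind": [], "Bucharest": [],
}
# Straight-line distances to Bucharest (admissible heuristic).
sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu": 193,
       "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

cost, path = a_star("Arad", "Bucharest", lambda n: graph[n], lambda n: sld[n])
print(cost, path)  # → 418 ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest']
```

Note that A* avoids the tempting Arad-Sibiu-Fagaras-Bucharest route (cost 450) because the heuristic steers it to the cheaper route through Rimnicu and Pitesti (cost 418).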
Postselection technique for quantum channels and applications for QKD (wtyru1989)
1) The document presents a technique for analyzing the security of real quantum key distribution (QKD) protocols by comparing them to an ideal protocol.
2) It introduces a formalism using completely positive and trace preserving maps and the diamond norm to measure the distance between real and ideal processes.
3) The main result is that for quantum evolutions that are permutation covariant, the diamond norm between them is bounded by a polynomial in the number of particles, when compared to evolutions corresponding to maximally entangled states.
4) This bound allows proving the security of QKD protocols against general attacks by showing they are secure against collective attacks, improving on previous analyses that relied on an exponential de Finetti theorem.
Distributed Keyword Search over RDF via MapReduce (Antonio Maccioni)
Non-expert users need support to access linked data available on the Web. To this end, keyword-based search is considered an essential feature of database systems.
The distributed nature of the Semantic Web demands that query processing techniques evolve towards a scenario where data is scattered across distributed data stores. Existing approaches to keyword search cannot guarantee scalability in a distributed environment because, at runtime, they are unaware of the location of the data relevant to the query and thus cannot optimize join tasks.
In this paper, we illustrate a novel distributed approach to keyword search over RDF data that exploits the MapReduce paradigm by switching the problem from graph-parallel to data-parallel processing. Moreover, our framework considers ranking during the building phase so as to return the best (top-k) answers directly among the first k generated results, greatly reducing the overall computational load and complexity.
Finally, a comprehensive evaluation demonstrates that our approach exhibits very good efficiency while guaranteeing a high level of accuracy, especially with respect to state-of-the-art competitors.
Prosody is an essential component of human speech. Prosody, broadly, describes all of the production qualities of speech that are not involved in conveying lexical information. Where the words are “what is said”, prosody is “how it is said”. Prosody of speech plays an important role not only in communicating the syntax, semantics and pragmatics of spoken language, but also in conveying information about the speaker and their internal state (e.g. emotion or fatigue).
Understanding prosody is critical to understanding speech communication. Spoken language processing (SLP) technology that approaches human levels of competence will necessarily include automatic analysis of prosody. Despite the importance of prosody in spoken communication, researchers are often unable to reliably incorporate prosodic information into applications. One explanation is a lack of compact, consistent, and universal representations of prosodic information. This talk will describe the state of the art in prosodic analysis and its use in spoken language processing with a focus on the development of new representations of prosody.
A Concurrent Language for Argumentation: Preliminary Notes (Carlo Taticchi)
While agent-based modelling languages naturally implement concurrency, the currently available languages for argumentation do not allow this type of interaction to be modelled explicitly. In this paper we introduce a concurrent language for handling processes that argue and communicate through a shared argumentation framework (reminiscent of the shared constraint store in concurrent constraint programming). We also introduce basic expansion, contraction, and revision procedures as the main building blocks for enforcement, debate, negotiation, and persuasion.
Data X Museum - Hari Museum Internasional 2022 - WMID (Fariz Darari)
This document discusses the importance of preserving cultural heritage through museums and digitizing cultural artifacts and traditions. It provides statistics on the diversity of Indonesian culture and examples of how structured data and APIs can be used to catalog and provide access to cultural works, including examples from Wikidata and the Metropolitan Museum of Art. The document encourages utilizing structured data to digitally preserve traditions like rendang and making museum data widely available to promote cultural heritage for all.
Tryout quiz 1 for the Fasilkom UI course Dasar-Dasar Pemrograman 2 (Fundamentals of Programming 2) contains multiple-choice and essay questions on basic Java concepts such as data types, inheritance, packages, classes, objects, and StringBuilder. The questions are intended to test students' understanding of the basic programming material taught so far.
Game theory is the study of strategic decision making between interdependent parties. It analyzes situations where players make decisions that will impact outcomes for themselves and others. The document provides examples of classic game theory scenarios like the prisoner's dilemma and discusses concepts like dominant strategies, Nash equilibriums, and mixed strategies. It also presents a two-player "two-finger Morra game" to illustrate game theory principles.
Neural Networks and Deep Learning: An Intro (Fariz Darari)
This document provides an overview of neural networks and deep learning. It describes how artificial neurons are arranged in layers to form feedforward neural networks, with information fed from the input layer to subsequent hidden and output layers. Networks are trained using gradient descent to adjust weights between layers to minimize error. Convolutional neural networks are also discussed, which apply convolution and pooling operations to process visual inputs like images for tasks such as image classification. CNNs have achieved success in applications involving computer vision, natural language processing, and more.
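The gradient-descent training loop the summary describes can be sketched in miniature. This toy example (illustrative, not from the slides) trains a single linear neuron y = w·x on data generated by y = 2x, repeatedly stepping the weight against the gradient of the mean squared error:

```python
# Training data sampled from the target function y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0          # initial weight
lr = 0.05        # learning rate

for epoch in range(200):
    # Gradient of mean squared error wrt w: mean of 2 * (w*x - y) * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                      # step downhill against the gradient

print(round(w, 3))  # → 2.0
```

In a multi-layer network the same idea applies, except the gradient of the error with respect to each layer's weights is computed via backpropagation rather than by a single closed-form expression.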
A summary of the document is as follows:
1. The document discusses developing AI talent in higher education and its relationship with industry, particularly in meeting the demand for AI skills.
2. AI talent in higher education is not focused solely on AI education, but also on research and community service through AI technology.
3. Dibut
Basic Python Programming: Part 01 and Part 02 (Fariz Darari)
This document discusses basic Python programming concepts including strings, functions, conditionals, loops, imports and recursion. It begins with examples of printing strings, taking user input, and calculating areas of shapes. It then covers variables and data types, operators, conditional statements, loops, functions, imports, strings, and recursion. Examples are provided throughout to demonstrate each concept.
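As a small illustration of several of the concepts listed (functions, conditionals, loops, imports), here is a sketch in the spirit of the slides' area-of-shapes examples; the function name and shapes are assumptions for illustration:

```python
import math

def area(shape, *dims):
    """Compute the area of a few basic shapes."""
    if shape == "rectangle":
        width, height = dims
        return width * height
    elif shape == "circle":
        (radius,) = dims
        return math.pi * radius ** 2
    else:
        raise ValueError("unknown shape: " + shape)

# Loop over a few test cases and print each area.
for shape, dims in [("rectangle", (3, 4)), ("circle", (1,))]:
    print(shape, round(area(shape, *dims), 2))
```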
This document discusses several topics related to properly implementing AI in education, including:
1) Ensuring AI teacher evaluation and models are not biased toward specific demographic groups or teaching styles.
2) The importance of data quality when training AI models, such as removing duplicates and standardizing formats.
3) The need for explainable AI models.
4) Examples of non-machine learning AI applications, such as an automated study topic scheduler.
5) A reminder that we have a choice in how AI is designed to have a positive impact.
Featuring pointers for: single-layer and multi-layer neural networks, gradient descent, and backpropagation. These slides are introductory; for an in-depth treatment of deep learning, please consult other slides.
Current situation: the focus is limited to merely implementing the Tridharma, that is, education, research, and community service, with little concern for the openness aspect.
Opening up the Tridharma can potentially be a breakthrough in mitigating the quality gap: making Tridharma outputs publicly available would help increase citizens' inclusion in accessing quality Tridharma content, thereby narrowing the quality gap in higher education.
Defense Slides of Avicenna Wisesa - PROWD (Fariz Darari)
This document presents ProWD, a tool for analyzing completeness in Wikidata. It introduces Wikidata and knowledge graphs, discusses issues like knowledge imbalance and inference errors due to lack of completeness awareness. It then presents a formal framework for completeness analysis using class, facet, and attribute profiles. This framework is implemented in ProWD, a proof of concept tool that allows analyzing Wikidata's completeness through single and compare views. ProWD is designed to be updated live and make completeness analysis accessible to laymen. Future work aims to expand the framework, improve scalability, and extend ProWD features.
This document provides an introduction to object-oriented programming concepts using Java. It begins by demonstrating how object-oriented thinking is natural through everyday examples of objects like cars and cats. It then defines key object-oriented programming terminology like class, object, attributes, and methods. The document walks through creating a sample Cube class to demonstrate these concepts in code. It shows how to define the class, instantiate objects, access attributes and call methods. The document also covers other OOP concepts like constructors, the toString() method, passing objects by reference, and the null value. Finally, it provides examples of real-world classes like String, LocalDate, Random and how to work with static variables and methods.
Testing in Python: doctest and unittest (Updated) - Fariz Darari
The document discusses testing in Python. It defines testing vs debugging, and explains why testing is important even for professional programmers. It provides examples of manually testing a square area function that initially had a bug, and how the bug was detected and fixed. It then introduces doctest and unittest as systematic ways to test in Python, providing examples of using each. Finally, it discusses test-driven development as a software development method where tests are defined before writing code.
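The doctest approach described above embeds tests directly in a function's docstring; a minimal sketch (the square-area function echoes the example in the summary, though the exact code is an assumption):

```python
def square_area(side):
    """Return the area of a square with the given side length.

    >>> square_area(3)
    9
    >>> square_area(0)
    0
    """
    return side * side

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # runs the examples in the docstrings above
```

Running the module reports nothing when all doctests pass; a buggy implementation (e.g. `side + side`) would print a failure showing the expected and actual values.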
Testing in Python: doctest and unittest (Fariz Darari)
The document discusses testing in Python. It defines testing vs debugging, and explains why testing is important even for professional programmers. It introduces doctest and unittest as systematic ways to test Python code. Doctest allows embedding tests in docstrings, while unittest involves writing separate test files. The document also covers test-driven development, which involves writing tests before coding to define desired behavior.
Dissertation Defense - Managing and Consuming Completeness Information for RD... (Fariz Darari)
The ever increasing amount of Semantic Web data gives rise to the question: How complete is the data? Though generally data on the Semantic Web is incomplete, many parts of data are indeed complete, such as the children of Barack Obama and the crew of Apollo 11. This thesis aims to study how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques of completeness reasoning and conduct experimental evaluations to show the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query completeness checking. We further enrich completeness information with timestamps, enabling query answers to be checked up to when they are complete. We then introduce two demonstrators, i.e., CORNER and COOL-WD, to show how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation cardinality extraction.
The document provides information about research writing. It discusses that everyone can be considered a researcher through everyday activities like using social media or traveling. Research is defined as a careful, diligent search to establish new facts or reach conclusions. The constituents of research are outlined as defining problems, formulating hypotheses, collecting and analyzing data, and validating conclusions. The document emphasizes that research writing is important and discusses choosing the right research topic and venue for publication. It provides tips for writing different sections of a research paper and following the common three-phase model of initial workshop or conference papers leading to a journal publication.
KOI - Knowledge Of Incidents - SemEval 2018 (Fariz Darari)
We present KOI (Knowledge Of Incidents), a system that given news articles as input, builds a knowledge graph (KOI-KG) of incidental events.
KOI-KG can then be used to efficiently answer questions such as "How many killing incidents happened in 2017 that involve Sean?" The required steps in building the KG include:
(i) document preprocessing involving word sense disambiguation, named-entity recognition, temporal expression recognition and normalization, and semantic role labeling;
(ii) incidental event extraction and coreference resolution via document clustering; and (iii) KG construction and population.
Slides made and presented by Paramita.
Comparing Index Structures for Completeness Reasoning (Fariz Darari)
Data quality is a major issue in the development of knowledge graphs. Data completeness is a key factor in data quality that concerns the breadth, depth, and scope of information contained in knowledge graphs. As for large-scale knowledge graphs (e.g., DBpedia, Wikidata), it is conceivable that, given the amount of information they contain, they may be complete for a wide range of topics, such as children of Donald Trump, cantons of Switzerland, and presidents of Indonesia. Previous research has shown how one can augment knowledge graphs with statements about their completeness, stating which parts of the data are complete. Such meta-information can be leveraged to check query completeness, that is, whether the answer returned by a query is complete. Yet, it is still unclear how such a check can be done in practice, especially when a large number of completeness statements are involved. We devise implementation techniques to make completeness reasoning in the presence of large sets of completeness statements feasible, and experimentally evaluate their effectiveness in realistic settings based on the characteristics of real-world knowledge graphs.
OpenID AuthZEN Interop Read Out - Authorization (David Brossard)
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
CAKE: Sharing Slices of Confidential Data on Blockchain
[ISWC 2013] Completeness statements about RDF data sources and their use for query answering
1. Completeness Statements about RDF Data Sources and
Their Use for Query Answering
Fariz Darari
Werner Nutt
Giuseppe Pirrò
Simon Razniewski
Oct 23rd, 2013
Supported by the project MAGIC: Managing Completeness of Data,
funded by the province of Bozen-Bolzano
Fariz Darari (ISWC 2013) Completeness Reasoning @ Linked Data 1 / 31
3. Motivation
Completeness
statement about the
IMDB data source
Quentin Tarantino
played the character
Mr. Brown
http://www.imdb.com/title/tt0105236/fullcredits?ref_=tt_ov_st_sm#cast
7. Incomplete Data Source
Quentin Tarantino is missing...
8. Incomplete Data Source
An incomplete data source of Tarantino movies,
G_dbp = (G^a_dbp, G^i_dbp):
11. Incomplete Data Source
Incomplete Data Source
An incomplete data source is a pair of graphs,
G = (G^a, G^i), where G^a ⊆ G^i.
We call G^a the available graph and G^i the ideal graph.
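The definition can be made concrete with a small sketch in Python (the names and toy data are illustrative assumptions, not the paper's implementation): an incomplete data source is simply a pair of triple sets with the containment invariant G^a ⊆ G^i.

```python
# Illustrative sketch (assumed names, not the paper's code): an incomplete
# data source modeled as a pair (Ga, Gi) of RDF-triple sets with Ga ⊆ Gi.

def incomplete_data_source(ga, gi):
    """Build G = (Ga, Gi), enforcing that the available graph
    is contained in the ideal graph."""
    if not ga <= gi:
        raise ValueError("Ga must be a subset of Gi")
    return (ga, gi)

# Ideal graph: everything that holds; available graph: what is stored.
gi = {("reservoirDogs", "type", "Movie"),
      ("reservoirDogs", "director", "tarantino"),
      ("reservoirDogs", "actor", "tarantino")}
ga = gi - {("reservoirDogs", "actor", "tarantino")}  # actor triple missing

G_dbp = incomplete_data_source(ga, gi)
```

Here the available graph lacks the actor triple, mirroring the "Quentin Tarantino is missing" example on slide 7.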
14. Completeness Statements: Examples
To express that a source is complete
for all the triples about movies directed by Tarantino,
we use the statement
Cdir = Compl((?m, type, Movie), (?m, director, tarantino) | ∅)
To express that a source is complete
for all triples about actors in movies directed by Tarantino,
we use
Cact = Compl((?m, actor, ?a) | (?m, type, Movie), (?m, director, tarantino))
15. Completeness Statement: Definition
Let P1 be a non-empty BGP (Basic Graph Pattern) and
P2 a BGP.
A completeness statement is defined as
Compl(P1 | P2)
where we call P1 the pattern and P2 the condition
of the statement.
16. Satisfaction of Completeness Statements
To the statement
C = Compl(P1 | P2),
we associate the CONSTRUCT query
Q_C = (P1, P1 ∪ P2).
Then, we say:
C is satisfied by an incomplete data source G = (G^a, G^i),
written G |= C, if [[Q_C]](G^i) ⊆ G^a.
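The satisfaction check above can be sketched end to end in Python. This is a minimal in-memory matcher under stated assumptions (variables are strings starting with "?", BGPs are tuples of triples); it is not the CoRner implementation.

```python
# Illustrative sketch: completeness statements Compl(P1 | P2) over toy
# triple sets, with satisfaction checked via the associated CONSTRUCT
# query Q_C = (P1, P1 ∪ P2) evaluated on the ideal graph.

def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match_bgp(bgp, graph, binding=None):
    """Yield every variable binding that maps the BGP into the graph."""
    binding = binding or {}
    if not bgp:
        yield dict(binding)
        return
    first, rest = bgp[0], bgp[1:]
    for fact in graph:
        b, ok = dict(binding), True
        for pat, val in zip(first, fact):
            if is_var(pat):
                if b.setdefault(pat, val) != val:
                    ok = False
                    break
            elif pat != val:
                ok = False
                break
        if ok:
            yield from match_bgp(rest, graph, b)

def construct(pattern, where, graph):
    """Evaluate the CONSTRUCT query (pattern, where) over a graph."""
    return {tuple(b.get(t, t) for t in triple)
            for b in match_bgp(where, graph) for triple in pattern}

def satisfies(statement, ga, gi):
    """G |= Compl(P1 | P2)  iff  the result of Q_C over Gi is within Ga."""
    p1, p2 = statement
    return construct(p1, p1 + p2, gi) <= ga

# Running example: the ideal graph knows Tarantino acted in
# Reservoir Dogs, the available graph does not (cf. slides 7-8).
gi = {("pulpFiction", "type", "Movie"),
      ("pulpFiction", "director", "tarantino"),
      ("reservoirDogs", "type", "Movie"),
      ("reservoirDogs", "director", "tarantino"),
      ("reservoirDogs", "actor", "tarantino")}
ga = gi - {("reservoirDogs", "actor", "tarantino")}

Cdir = ((("?m", "type", "Movie"), ("?m", "director", "tarantino")), ())
Cact = ((("?m", "actor", "?a"),),
        (("?m", "type", "Movie"), ("?m", "director", "tarantino")))
```

On this data, satisfies(Cdir, ga, gi) returns True, while satisfies(Cact, ga, gi) returns False because (reservoirDogs, actor, tarantino) is produced over the ideal graph but absent from the available one, matching the reasoning on slide 20.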
20. Completeness Statements
Cact = Compl((?m, actor, ?a) | (?m, type, Movie), (?m, director, tarantino))
Question: G_dbp |= Cact ?
No, because among the results of [[Q_Cact]](G^i_dbp)
there is (reservoirDogs, actor, tarantino), which is not in G^a_dbp.
23. Query Completeness: Example
Q_dir = ({?m}, {(?m, type, Movie), (?m, director, tarantino)})
Then, [[Q_dir]](G^i_dbp) = [[Q_dir]](G^a_dbp),
and therefore G_dbp |= Compl(Q_dir).
24. Query Completeness: Definition
Definition
Let Q be a SELECT query. We write
Compl(Q)
to say that Q is complete.
An incomplete data source G = (G^a, G^i) satisfies Compl(Q),
written G |= Compl(Q), if and only if [[Q]](G^i) = [[Q]](G^a).
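The slide-23 example can be replayed directly against this definition. The sketch below hand-codes Q_dir over toy graphs (illustrative data and names, not the authors' code) and checks that its answer over the ideal and available graphs coincides.

```python
# Illustrative check of query completeness: [[Q_dir]](Gi) = [[Q_dir]](Ga).

def answer_qdir(graph):
    """[[Q_dir]](G): bindings of ?m with (?m, type, Movie) and
    (?m, director, tarantino) both in the graph."""
    return {m for (m, p, o) in graph
            if p == "type" and o == "Movie"
            and (m, "director", "tarantino") in graph}

gi = {("pulpFiction", "type", "Movie"),
      ("pulpFiction", "director", "tarantino"),
      ("reservoirDogs", "type", "Movie"),
      ("reservoirDogs", "director", "tarantino"),
      ("reservoirDogs", "actor", "tarantino")}
ga = gi - {("reservoirDogs", "actor", "tarantino")}

# The missing actor triple does not affect Q_dir's answer,
# so G_dbp |= Compl(Q_dir).
complete = answer_qdir(gi) == answer_qdir(ga)
```

Both evaluations return {pulpFiction, reservoirDogs}: the available graph is incomplete overall, yet complete for this particular query.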
25. Completeness Entailment
Let C be a set of completeness statements and
Q a SELECT query.
We say that C entails the completeness of Q, written
C |= Compl(Q),
if every incomplete data source that satisfies C
also satisfies Compl(Q).
27. Completeness Reasoning
Transfer Operator
For any set C of completeness statements and a graph G,
we define the transfer operator TC:
T_C(G) = ⋃_{C ∈ C} [[Q_C]](G)
Prototypical Graph
Let Q = (W, P) be a SELECT query.
The prototypical graph ˜P is the graph resulting
from the mapping of variables in P to fresh, unique IRIs.
28. Completeness of Basic Queries
Theorem
Let C be a set of completeness statements and
Q = (W, P) a basic query. Then,
C |= Compl(Q) if and only if P̃ = T_C(P̃).
29. Completeness Reasoning: Example
Consider the set of completeness statements
Cdir,act = { Cdir , Cact }
and the query
Qdir+act = ({ ?m }, Pdir+act )
where
Pdir+act =
{ (?m, type, Movie), (?m, director, tarantino), (?m, actor, tarantino) }.
30. Completeness Reasoning: Example
Consider the set of completeness statements
Cdir,act = { Cdir , Cact }
and the query
Qdir+act = ({ ?m }, Pdir+act )
Then,
P̃_dir+act =
{ (c?m, type, Movie), (c?m, director, tarantino), (c?m, actor, tarantino) }
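The whole entailment check of slides 26-30 can be sketched as a fixpoint test: freeze the query body into the prototypical graph P̃, apply the transfer operator T_C, and test whether P̃ is reproduced. The matcher, the fresh-constant convention ("?m" becomes "c_m"), and all names are illustrative assumptions, not the CoRner code.

```python
# Illustrative entailment check: C |= Compl(Q) iff P~ = T_C(P~) (slide 28).

def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match_bgp(bgp, graph, binding=None):
    """Yield every variable binding that maps the BGP into the graph."""
    binding = binding or {}
    if not bgp:
        yield dict(binding)
        return
    first, rest = bgp[0], bgp[1:]
    for fact in graph:
        b, ok = dict(binding), True
        for pat, val in zip(first, fact):
            if is_var(pat):
                if b.setdefault(pat, val) != val:
                    ok = False
                    break
            elif pat != val:
                ok = False
                break
        if ok:
            yield from match_bgp(rest, graph, b)

def construct(pattern, where, graph):
    """Evaluate the CONSTRUCT query (pattern, where) over a graph."""
    return {tuple(b.get(t, t) for t in triple)
            for b in match_bgp(where, graph) for triple in pattern}

def transfer(statements, graph):
    """T_C(G): the union of [[Q_C]](G) over all statements C in the set."""
    return {t for (p1, p2) in statements
            for t in construct(p1, p1 + p2, graph)}

def prototypical(bgp):
    """P~: replace each variable by a fresh constant ('?x' -> 'c_x')."""
    return {tuple("c_" + t[1:] if is_var(t) else t for t in triple)
            for triple in bgp}

Cdir = ((("?m", "type", "Movie"), ("?m", "director", "tarantino")), ())
Cact = ((("?m", "actor", "?a"),),
        (("?m", "type", "Movie"), ("?m", "director", "tarantino")))

P_dir_act = (("?m", "type", "Movie"),
             ("?m", "director", "tarantino"),
             ("?m", "actor", "tarantino"))

proto = prototypical(P_dir_act)
entailed = transfer({Cdir, Cact}, proto) == proto  # C |= Compl(Q_dir+act)?
```

Applying Cdir to the prototypical graph reproduces its type and director triples, and Cact (whose condition now matches) reproduces the actor triple, so the fixpoint test succeeds and the completeness of Q_dir+act is entailed.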
36. The framework can also be applied to:
DISTINCT Queries
OPT Queries
Queries with RDFS Data Sources
38. CoRner: Completeness Reasoner
Checks the completeness of queries in a fragment of SPARQL
with respect to the completeness statements of
a single data source.
Developed in Java using the Apache Jena library.
Takes three inputs:
Completeness statements in RDF format
A SPARQL query
(optional) an RDFS ontology
Available at http://rdfcorner.wordpress.com/.
39. Conclusions and Future Work
We developed a theoretical framework for expressing
completeness statements about data sources, with which
query completeness can be ensured.
We provide a compact RDF representation of completeness
statements, which can be embedded in VoID descriptions,
and implemented the framework in CoRner.
We are interested in studying completeness reasoning with
more expressive queries and OWL 2.