Domain Identification for Linked Open Data - Sarasi Sarangi
Linked Open Data (LOD) has emerged as one of the largest collections of interlinked structured datasets on the Web. Although the adoption of such datasets for applications is
increasing, identifying relevant datasets for a specific task or topic is still challenging. As an initial step to make such identification easier, we provide an approach to automatically identify the topic domains of given datasets. Our method utilizes existing knowledge sources, more specifically Freebase, and we present an evaluation which validates the topic domains we can identify with our system. Furthermore, we evaluate the effectiveness of identified topic domains for the purpose of finding relevant datasets, thus showing that our approach improves reusability of LOD datasets.
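As a rough illustration of the kind of pipeline the abstract describes, the sketch below maps a dataset's instance labels to topic domains via a knowledge-base lookup and picks the majority domain. The ENTITY_DOMAINS table is a stand-in assumption for illustration (the paper uses Freebase, which has since been retired); this is not the paper's actual method.

```python
from collections import Counter

# Toy stand-in for a knowledge-base lookup (hypothetical entries);
# maps an entity label to one or more topic domains.
ENTITY_DOMAINS = {
    "Aspirin": ["medicine"],
    "Ibuprofen": ["medicine"],
    "Berlin": ["location"],
    "Paracetamol": ["medicine"],
}

def identify_domains(instance_labels, top_k=1):
    """Aggregate per-entity domain votes and return the top-k domains."""
    votes = Counter()
    for label in instance_labels:
        for domain in ENTITY_DOMAINS.get(label, []):
            votes[domain] += 1
    return [d for d, _ in votes.most_common(top_k)]

print(identify_domains(["Aspirin", "Ibuprofen", "Berlin", "Paracetamol"]))
# ['medicine']
```

A dataset whose instances mostly resolve to medical entities would thus be labeled with the "medicine" domain, even if a few entities (here, "Berlin") vote otherwise.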
Improving Semantic Search Using Query Log Analysis - Stuart Wrigley
Despite the attention Semantic Search is continuously gaining, several challenges affecting tool performance and user experience remain unsolved. Among these are: matching user terms with the search space, adopting view-based interfaces on the Open Web, and supporting users while building their queries. This paper proposes an approach that moves a step towards tackling these challenges by creating models of the usage of Linked Data concepts and properties, extracted from semantic query logs as a source of collaborative knowledge. We use two sets of query logs from the USEWOD workshops to create our models and show the potential of using them in the mentioned areas.
Evaluating Semantic Search Systems to Identify Future Directions of Research - Stuart Wrigley
Recent work on searching the Semantic Web has yielded a wide range of approaches with respect to the style of input, the underlying search mechanisms and the manner in which results are presented. Each approach has an impact upon the quality of the information retrieved and the user's experience of the search process. This highlights the need for formalised and consistent evaluation to benchmark the coverage, applicability and usability of existing tools and provide indications of future directions for advancement of the state-of-the-art. In this paper, we describe a comprehensive evaluation methodology which addresses both the underlying performance and the subjective usability of a tool. We present the key outcomes of a recently completed international evaluation campaign which adopted this approach and thus identify a number of new requirements for semantic search tools from both the perspective of the underlying technology as well as the user experience.
Date: March 3rd, 2016
Venue: Trondheim, Norway. Doctoral Seminar at NTNU
Please cite, link to or credit this presentation when using it or part of it in your work.
A statistical and schema independent approach to determine equivalent properties between linked datasets. The approach utilizes interlinking between datasets and property extensions to understand the equivalence of properties.
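The extension-overlap idea can be sketched as below: for subjects linked across the two datasets (e.g., via owl:sameAs), count how often the two candidate properties agree on their values. This is a hypothetical simplification for exposition, with illustrative names (`ext_a`, `ext_b`, `same_as`), not the work's actual statistical method.

```python
def property_overlap(ext_a, ext_b, same_as):
    """Score how often two properties agree on interlinked subjects.

    ext_a, ext_b: dicts mapping a subject URI to the set of values the
                  property takes for that subject, in datasets A and B.
    same_as:      dict mapping dataset-A subjects to their interlinked
                  counterparts in dataset B.
    Returns the fraction of linked subjects whose value sets intersect.
    """
    linked = [(s, same_as[s]) for s in ext_a
              if s in same_as and same_as[s] in ext_b]
    if not linked:
        return 0.0
    matches = sum(1 for sa, sb in linked if ext_a[sa] & ext_b[sb])
    return matches / len(linked)

# Toy data: a birth-place property in two datasets (illustrative URIs).
ext_a = {"dbp:Alice": {"Paris"}, "dbp:Bob": {"Rome"}}
ext_b = {"kb:Alice": {"Paris"}, "kb:Bob": {"Milan"}}
same_as = {"dbp:Alice": "kb:Alice", "dbp:Bob": "kb:Bob"}
print(property_overlap(ext_a, ext_b, same_as))  # 0.5
```

A pair of properties scoring near 1.0 over many linked subjects would be a strong candidate for equivalence, with no schema knowledge required.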
Talk given by Prof. Amit Sheth at the ICMSE-MGI Digital Data Workshop held at the Kno.e.sis Center, November 13-14, 2013.
workshop page: http://wiki.knoesis.org/index.php/ICMSE-MGI_Digital_Data_Workshop
Amit Sheth, "Semantic Interoperability and Information Brokering in Global Information Systems," Keynote given at IEEE Meta-Data, Bethesda, MD, April 6, 1999.
Amit Sheth, Pramod Anantharam, Krishnaprasad Thirunarayan, "kHealth: Proactive Personalized Actionable Information for Better Healthcare", Workshop on Personal Data Analytics in the Internet of Things at VLDB2014, Hangzhou, China, September 5, 2014.
Accompanying Video: http://youtu.be/pqcbwGYHPuc
Paper: http://www.knoesis.org/library/resource.php?id=2008
Presentation given by Chris Welty (IBM Research) at Kno.e.sis. We obtained Chris Welty's permission to upload this presentation. Event details are at: http://j.mp/Welty-at-Knoesis and the associated video is at: https://www.youtube.com/watch?v=grDKpicM5y0
Krishnaprasad Thirunarayan and Amit Sheth: Semantics-empowered Approaches to Big Data Processing for Physical-Cyber-Social Applications, In: Proceedings of AAAI 2013 Fall Symposium on Semantics for Big Data, Arlington, Virginia, November 15-17, 2013.
With the rapid proliferation of mobile phones, social media, and sensors, it is critical to collect the big data so generated and convert it into actionable information that is relevant for decision making. In this session, we explore challenges and approaches for synthesizing relevant background knowledge and inferences that can enable smart healthcare and ultimately benefit the community at large.
Paper: http://www.knoesis.org/library/resource.php?id=1903
Krishnaprasad Thirunarayan, Pramod Anantharam, Cory Henson, and Amit Sheth, 'Trust Networks', In: 5th Indian International Conference on Artificial Intelligence (IICAI-11), December 14-16, 2011 (invited tutorial).
Amit Sheth, 'Semantic Computing in Real-World: Vertical and Horizontal application, within Enterprise and on the Web,' Panel Presentation at International Conference on Semantic Computing (ICSC2011), Palo Alto, CA, September 20, 2011.
Harshal Patni, "Real Time Semantic Analysis of Streaming Sensor Data," MS Thesis Defense, Kno.e.sis Center, Wright State University, Dayton, OH, March 21, 2011.
More at: http://wiki.knoesis.org/index.php/SSW
Dissertation Advisor: Prof. Amit Sheth
Cursing is not uncommon during conversations in the physical world: 0.5% to 0.7% of all the words we speak are curse words, given that 1% of all the words are first-person plural pronouns (e.g., we, us, our). On social media, people can instantly chat with friends without face-to-face interaction, usually in a more public fashion and with broad dissemination through a highly connected social network. Will these distinctive features of social media lead to a change in people's cursing behavior? In this paper, we examine the characteristics of cursing activity on a popular social media platform, Twitter, involving the analysis of about 51 million tweets and about 14 million users. In particular, we explore a set of questions that prior studies have recognized as crucial for understanding cursing in offline communications, including the ubiquity, utility, and contextual dependencies of cursing.
Original paper: http://knoesis.org/library/resource.php?id=1937
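The ubiquity measurement the abstract mentions reduces to simple corpus statistics: the share of tokens that are curse words, and the share of tweets containing at least one. The sketch below, with a deliberately tiny illustrative lexicon (the study uses a far larger curse-word list), is an assumption about the general shape of such an analysis, not the paper's code.

```python
# Illustrative mini-lexicon only; the actual study uses a large curated list.
CURSE_WORDS = {"damn", "hell"}

def cursing_stats(tweets):
    """Return (curse-word rate over all tokens,
    fraction of tweets containing at least one curse word)."""
    total_words = curse_words = cursing_tweets = 0
    for tweet in tweets:
        tokens = tweet.lower().split()
        hits = sum(1 for t in tokens if t in CURSE_WORDS)
        total_words += len(tokens)
        curse_words += hits
        cursing_tweets += hits > 0
    return curse_words / total_words, cursing_tweets / len(tweets)

rate, per_tweet = cursing_stats(
    ["damn that was close", "nice day today", "what the hell"])
print(rate)  # 0.2 -- i.e., 2 curse words out of 10 tokens
```

At the scale reported above (~51 million tweets), the same two counters are all that is needed; only the tokenization and lexicon matching get more careful.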
Pavan Kapanipathi, Prateek Jain, Chitra Venkataramani, Amit Sheth, User Interests Identification on Twitter Using a Hierarchical Knowledge Base, ESWC 2014, May 2014.
Paper at: http://j.mp/user-ig
More at: http://wiki.knoesis.org/index.php/Hierarchical_Interest_Graph
Invited talk presented by Hemant Purohit (http://knoesis.org/researchers/hemant) at the NCSU workshop on IT for sustainable tourism development. The talk presents the application of technology developed for crisis coordination to more general marketplace coordination via social media, helping suppliers (micro-entrepreneurs) and demanders (tourists).
The recent emergence of the "Linked Data" approach for publishing data represents a major step forward in realizing the original vision of a web that can "understand and satisfy the requests of people and machines to use the web content" – i.e. the Semantic Web. This new approach has resulted in the Linked Open Data (LOD) Cloud, which includes more than 70 large datasets contributed by experts belonging to diverse communities such as geography, entertainment, and life sciences. However, the current interlinks between datasets in the LOD Cloud – as we will illustrate – are too shallow to realize much of the benefit promised. If this limitation is left unaddressed, the LOD Cloud will merely be more data suffering from the same kinds of problems that plague the Web of Documents, and the vision of the Semantic Web will fall short.
This thesis presents a comprehensive solution to the problem of alignment and relationship identification using a bootstrapping-based approach. By alignment we mean the process of determining correspondences between the classes and properties of ontologies. We identify subsumption, equivalence, and part-of relationships between classes; part-of relationships between instances; and subsumption and equivalence relationships between properties. By bootstrapping we mean utilizing the information contained within the datasets to improve the data within them. The work showcases the use of bootstrapping-based methods to identify and create richer relationships between LOD datasets. The BLOOMS project (http://wiki.knoesis.org/index.php/BLOOMS) and the PLATO project, both built as part of this research, provide evidence of the feasibility and applicability of the solution.
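As a rough illustration of the relationship types listed above, the sketch below guesses the relation between two classes from instance-set containment. Note this is a hypothetical simplification for exposition: BLOOMS itself aligns ontology schemas (e.g., via the Wikipedia category hierarchy) rather than comparing raw instance extensions, and the threshold value is arbitrary.

```python
def class_relation(ext_a, ext_b, threshold=0.9):
    """Guess the relation between two classes from their instance sets."""
    if not ext_a or not ext_b:
        return "unknown"
    overlap = len(ext_a & ext_b)
    cov_a = overlap / len(ext_a)  # share of A's instances also in B
    cov_b = overlap / len(ext_b)  # share of B's instances also in A
    if cov_a >= threshold and cov_b >= threshold:
        return "equivalent"
    if cov_a >= threshold:
        return "A subclass-of B"  # (nearly) all of A's instances lie in B
    if cov_b >= threshold:
        return "B subclass-of A"
    return "unrelated"

print(class_relation({"x1", "x2", "x3"}, {"x1", "x2", "x3", "x4", "x5"}))
# A subclass-of B
```

The same containment test, applied in both directions, distinguishes equivalence (mutual containment) from subsumption (one-way containment), which is why both coverage ratios are computed.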
Talk given by Prof. T.K. Prasad at the workshop on Semantics in Geospatial Architectures: Applications and Implementation. The workshop was held October 28-29, 2013 at the Pyle Center (702 Langdon Street, Madison, WI), University of Wisconsin-Madison.
What is data discovery and how do people find out about data?
Metadata: What information helps potential users decide whether that data might be useful?
How and why do machines exchange information about research data?
Data without metadata and connections is useless:
Linked data
How Scholix is helping publishers and others to link data with publications and more
Metadata, controlled vocabularies, linked data and crosswalks
Things #11, #12, #13 of 23 Things
How do we make FAIR data? Findable, Accessible, Interoperable, Reusable?
MUDROD - Mining and Utilizing Dataset Relevancy from Oceanographic Dataset Metadata, Usage Metrics, and User Feedback to Improve Data Discovery and Access - Yongyao Jiang
Using Lucene/Solr to Build CiteSeerX and Friends - lucenerevolution
Presented by C. Lee Giles, Pennsylvania State University - See complete conference videos - http://www.lucidimagination.com/devzone/events/conferences/lucene-revolution-2012
Cyberinfrastructure, or e-science, has become crucial in many areas of science, as data access often defines scientific progress. Open source systems have greatly facilitated the design, implementation, and support of cyberinfrastructure. However, no open source integrated system exists for building a search engine and digital library that covers all phases of information and knowledge extraction, such as citation extraction, automated indexing and ranking, chemical formula search, and table indexing. We propose the open source SeerSuite architecture, a modular, extensible system built on successful OS projects such as Lucene/Solr, and discuss its uses in building enterprise search and cyberinfrastructure for the sciences and academia. We highlight application domains with examples of specialized search engines we have built: for computer science, CiteSeerX; chemistry, ChemXSeer; archaeology, ArchSeer; acknowledgements, AckSeer; reference recommendation, RefSeer; collaboration recommendation, CollabSeer; and others, all using Solr/Lucene. Because such enterprise systems require unique information extraction approaches, several different machine learning methods, such as conditional random fields, support vector machines, mutual-information-based feature selection, and sequence mining, are critical for performance.
Lesson 7 in a set of 10 created by DataONE on Best Practices for Data Management. The full module can be downloaded from the DataONE.org website at: http://www.dataone.org/educaiton-modules. Released under a CC0 license; attribution and citation requested.
High resolution mass spectrometry (HRMS) and non-targeted analysis (NTA) are of increasing interest in chemical forensics for the identification of emerging contaminants and chemical signatures of interest. At the US Environmental Protection Agency, our research using HRMS for non-targeted and suspect screening analyses utilizes databases and cheminformatics approaches that are applicable to chemical forensics. The CompTox Chemicals Dashboard is an open chemistry resource and web-based application containing data for ~760,000 substances. Basic functionality for searching through the data is provided through identifier searches, such as systematic name, trade names and CAS Registry Numbers. Advanced Search capabilities supporting mass spectrometry include mass and formula-based searches, combined substructure-mass searches and searching experimental mass spectral data against predicted fragmentation spectra. A specific type of data mapping in the underpinning database, using “MS-Ready” structures, has proven to be a valuable approach for structure identification that links structures that can be identified via HRMS with related substances in the form of salts, and other multi-component mixtures that are available in commerce. This presentation will provide an overview of the CompTox Chemicals Dashboard and demonstrate its utility for supporting structure identification and NTA in chemical forensics. This abstract does not necessarily represent the views or policies of the U.S. Environmental Protection Agency.
Building a Dataset Search Engine with Spark and Elasticsearch: Spark Summit E... - Spark Summit
Elasticsearch provides native integration with Apache Spark through ES-Hadoop. However, especially during development, it is at best cumbersome to have Elasticsearch running on a separate machine/instance. Leveraging Spark Cluster with Elasticsearch Inside, it is possible to run an embedded instance of Elasticsearch on the driver node of a Spark cluster. This opens up new opportunities to develop cutting-edge applications. One such application is Dataset Search.
Oscar will give a demo of a Dataset Search Engine built on Spark Cluster with Elasticsearch Inside. The motivation is that once Elasticsearch is running on Spark, it becomes possible and interesting to have the Elasticsearch in-memory instance join an (existing) Elasticsearch cluster. This in turn enables indexing of datasets that are processed as part of data pipelines running on Spark. Dataset Search and Data Management are R&D topics that should be of interest to Spark Summit East attendees who are looking for a way to organize their data lake and make it searchable.
Similar to Domain Identification for Linked Open Data
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides from Nordic Testing Days, 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Rik Marselis' slides and mine from the DASA Connect conference, 30.5.2024. We discuss what testing is, what agile testing is, and finally what Testing in DevOps is. We also held a lovely workshop in which the participants tried out different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
1. Domain Identification for Linked Open Data
Sarasi Lalithsena, Pascal Hitzler, Amit Sheth
Kno.e.sis Center, Wright State University, Dayton, OH
Prateek Jain
IBM T.J. Watson Research Center, Yorktown, NY, USA
WI 2013, Atlanta, GA, USA
2. Motivation
The LOD cloud: 262 datasets; 870 alive datasets
“Linking Open Data cloud diagram, by Richard Cyganiak and Anja Jentzsch. http://lod-cloud.net/”
4. Problem
• How do we identify the relevant datasets from this structured knowledge space?
– How do we create a registry of topics which describe the domain of a dataset?
5. State of the Art - CKAN
• To organize this large cloud, CKAN encourages users to tag their datasets into the following domains:
– media
– geography
– life sciences
– publications
– government
– e-commerce
– social web
– user-generated content
– schemata
– cross-domain
• CKAN administrators then manually review these tags and organize the diagram
• CKAN provides dataset search based on this manual tagging and on keywords
6. State of the Art - CKAN
But,
• A fixed set of tags cannot cope with the increasing diversity of the datasets
– For example, what would the tags be for the Lingvoj dataset?
• The manual reviewing process will soon become unsustainable
• Classification is subjective
7. State of the Art - LODStats
• Stream-based approach to collecting statistics about datasets
• Allows searching for datasets based on keywords and metadata provided by data publishers
8. State of the Art – Other
• Semantic Search Engines (SSEs)
– SSEs such as Sigma, Swoogle, and Watson allow searching for instances and return the related instance URIs
– But they are not designed for dataset search
• Federated querying systems over LOD datasets
– Need to know seed URIs to find the relevant datasets
9. State of the Art – Existing Problems with Dataset Lookup
• Relies on manual tagging provided by users and a manual reviewing process
• Relies on keywords and metadata provided by users
• Requires seed URIs to find the relevant datasets
• Requires known instances to start exploring the datasets
10. What Do We Propose?
• Introduce a systematic and sophisticated way to identify possible domains, topics, and tags (topic domains) to better describe these datasets
• What can these topic domains be?
– A predefined list
– The types in the schema of each dataset
12. How Do We Address the Previous Problems?
• Use the category system of existing knowledge sources as the vocabulary to describe the domain
– No need to rely on a predefined set of tags
– No need to rely on metadata and keywords
• Automatic identification of the topic domains
• The vocabulary can be used to search and organize the datasets
13. Our Approach - Freebase
• Use Freebase as our knowledge source to identify the topic domains
• Why Freebase?
– Wide coverage: 39 million topics
– Simple category hierarchy system
• The Freebase category system categorizes each topic into types, and types are grouped into domains (e.g., the type Artist belongs to the domain music)
• We utilize Freebase types and domains as our topic domains
15. Our Approach
• STEP 1: Instance Identification
– Extract the instances of the dataset with their types
– Extract the human-readable values of the instances and types, e.g., Granite and its type Rock
– Identify the closely related Freebase topic for each instance in our dataset, e.g., Ignimbrite, Slate, and Granite (of type Rock) map to the Freebase topics http://www.freebase.com/m/01tx7r, http://www.freebase.com/m/01c_9j, and http://www.freebase.com/m/03fcm
16. Our Approach
• Instance Identification (continued)
– We attach the type information to the query string to disambiguate: the query “Apple” could match Apple (the company) or Apple (the fruit), while the query “Apple Fruit” matches Apple (the fruit)
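The type-qualified lookup above can be sketched offline. Since the Freebase search API has been retired, this minimal sketch stands in for its ranking with a simple token-overlap score; the candidate topic strings are made-up examples, not real Freebase records.

```python
def type_qualified_query(label, type_label):
    """Append the human-readable type to the instance label (STEP 1)."""
    return f"{label} {type_label}"

def best_match(query, candidates):
    """Pick the candidate topic sharing the most tokens with the
    type-qualified query -- a stand-in for the retired Freebase
    search ranking, not the paper's actual matcher."""
    q_tokens = set(query.lower().split())
    return max(candidates, key=lambda c: len(q_tokens & set(c.lower().split())))

# Hypothetical candidate topics for the ambiguous label "Apple":
candidates = ["Apple Inc. company", "Apple fruit plant"]
print(best_match(type_qualified_query("Apple", "Fruit"), candidates))
# -> "Apple fruit plant"
```

With the bare query "Apple" both candidates overlap equally; appending the type "Fruit" breaks the tie in favor of the fruit topic, which is the point of the slide.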
17. Our Approach
• STEP 2: Category Hierarchy Creation
– For each matched Freebase topic, retrieve its {domain/type} categories, e.g., Ignimbrite → /geology/rock_type (domain geology, type rock type)
– Per-instance category hierarchies:
– Ignimbrite: geology → rock type; geography → mountain, mountain range
– slate: geology → rock type; geography → mountain; music → release track, recording
– granite: geology → rock type; geography → mountain
18. Our Approach
• Category Hierarchy Merging
– Merge the per-instance hierarchies into a single hierarchy for the dataset:
– Ignimbrite: geology → rock type; geography → mountain, mountain range
– slate: geology → rock type; geography → mountain; music → release track, recording
– granite: geology → rock type; geography → mountain
– Merged: geology → rock type; geography → mountain, mountain range; music → release track, recording
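The merging step can be sketched as a union of per-instance {domain: types} maps. This is a minimal illustration using the slide's Ignimbrite/slate/granite example; the dictionary encoding is an assumption, not the paper's data structure.

```python
from collections import defaultdict

def merge_hierarchies(per_instance):
    """Merge per-instance {domain: [types]} hierarchies into one
    dataset-level hierarchy (Category Hierarchy Merging)."""
    merged = defaultdict(set)
    for hierarchy in per_instance.values():
        for domain, types in hierarchy.items():
            merged[domain].update(types)
    return {domain: sorted(types) for domain, types in merged.items()}

hierarchies = {
    "Ignimbrite": {"geology": ["rock type"],
                   "geography": ["mountain", "mountain range"]},
    "slate": {"geology": ["rock type"], "geography": ["mountain"],
              "music": ["release track", "recording"]},
    "granite": {"geology": ["rock type"], "geography": ["mountain"]},
}
print(merge_hierarchies(hierarchies))
```

The result is one hierarchy per dataset: geology and geography are shared by all three instances, while music enters only via slate.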
19. Our Approach
• Candidate Category Hierarchy Selection
– Filter out insignificant category hierarchies using a simple heuristic
– e.g., slate’s music hierarchy (release track, recording) is supported by only one instance and is filtered out, while geology and geography are retained
20. Our Approach
• Frequency Count Generation
– Count the number of occurrences of each category (the number of instances having the given category):
Term | Frequency | Parent Node
geology | 3 |
rock type | 3 | geology
mountain range | 1 | geography
… | … | …
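Frequency count generation, together with a simple selection heuristic like the one in the previous slide, can be sketched as follows. The threshold value of 2 is an illustrative assumption; the slide does not state the paper's actual heuristic.

```python
from collections import Counter

def category_frequencies(per_instance):
    """Count, for each domain and type, the number of instances
    whose category hierarchy contains it (Frequency Count Generation)."""
    freq = Counter()
    for hierarchy in per_instance.values():
        for domain, types in hierarchy.items():
            freq[domain] += 1
            for t in types:
                freq[t] += 1
    return freq

def significant(freq, threshold=2):
    """Toy selection heuristic (assumed): keep categories supported
    by at least `threshold` instances."""
    return {term for term, n in freq.items() if n >= threshold}

hierarchies = {
    "Ignimbrite": {"geology": ["rock type"],
                   "geography": ["mountain", "mountain range"]},
    "slate": {"geology": ["rock type"], "geography": ["mountain"],
              "music": ["release track", "recording"]},
    "granite": {"geology": ["rock type"], "geography": ["mountain"]},
}
freq = category_frequencies(hierarchies)
print(freq["geology"], freq["rock type"], freq["mountain range"])
# -> 3 3 1, matching the frequency table on the slide
```

Under this toy threshold, music (frequency 1) is dropped while geology and geography survive, mirroring the candidate-selection example.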
21. Implementation
• MapReduce deployment
– Mappers (map 1 … map n) take <instance, type> pairs and run STEPs 2 and 3
– Instances belonging to the same type go into a single reducer
– Reducers (reducer 1 … reducer m) run STEP 4
– STEP 5 is post-processing
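The key property of the deployment, that all instances of one type reach the same reducer, follows from keying the map output by type. This is a minimal in-memory simulation of that partitioning, not the actual Hadoop job; the per-type count in the reducer stands in for the real STEP 4 aggregation.

```python
from collections import defaultdict

def map_phase(pairs):
    """Mapper: emit (type, instance) so the shuffle routes all
    instances of a type to the same reducer."""
    for instance, rdf_type in pairs:
        yield rdf_type, instance

def shuffle(mapped):
    """Group map output by key, as the MapReduce shuffle does."""
    buckets = defaultdict(list)
    for key, value in mapped:
        buckets[key].append(value)
    return buckets

def reduce_phase(buckets):
    """Reducer: per-type aggregation (here, a simple instance count)."""
    return {rdf_type: len(instances) for rdf_type, instances in buckets.items()}

pairs = [("Granite", "Rock"), ("Slate", "Rock"),
         ("Ignimbrite", "Rock"), ("Everest", "Mountain")]
counts = reduce_phase(shuffle(map_phase(pairs)))
print(counts)
# -> {'Rock': 3, 'Mountain': 1}
```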
22. Evaluation
• We ran our experiments with 30 LOD datasets
• Two evaluations, both carried out as user studies:
– Appropriateness of the identified domain
– Effectiveness in finding the datasets
23. Evaluation: Appropriateness of the Identified Domain
• Selected the four most frequent domains and types from our results
• Mixed them with four other randomly selected domains and types
• Asked users to select the terms that best represent the higher-level domains for the dataset (20 users)
• At least 50% of the users agreed on 73% of the terms (88 out of 120)
24. Evaluation: Appropriateness of the Identified Domain
Terms with the highest user agreement for each dataset; a star (*) indicates that the term was also the highest ranked by our system (for 22 datasets)
26. Evaluation – Effectiveness in Finding the Datasets
• Developed a search application using the normalized frequency counts
• User study against three existing state-of-the-art systems:
– CKAN, LODStats, and Sigma
• Term selection; top ten results retrieved
• Asked users to rank which set of results they preferred, from 1 (best) to 4 (worst)
• Calculated a user preference score using a weighted average
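The preference score can be sketched as follows. The slide does not give the exact weighting, so this is one plausible instantiation, assuming each rank r from 1 (best) to 4 (worst) is inverted into a weight of 5 - r and averaged over users.

```python
def preference_score(ranks, worst_rank=4):
    """Hypothetical weighted-average preference score: rank 1 (best)
    earns weight 4, rank 4 (worst) earns weight 1; weights are averaged
    over users. The paper's exact formula is not shown on the slide."""
    weights = [worst_rank + 1 - r for r in ranks]
    return sum(weights) / len(weights)

# Five users ranked one system's result set 1, 1, 2, 1, 3:
print(preference_score([1, 1, 2, 1, 3]))
# -> 3.4 (on a scale where 4.0 means every user ranked it best)
```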
28. Evaluation
• Appropriateness of the identified domain (user study)
• Effectiveness in finding the datasets:
1. User study with three other search engines
2. Evaluation with CKAN as the baseline
30. Evaluation
• Appropriateness of the identified domain (user study)
• Effectiveness in finding the datasets:
1. User study with three other search engines
2. Evaluation with CKAN as the baseline
3. Evaluation of both CKAN and our approach using a manually curated gold standard
32. Conclusion and Future Work
• Our approach helps to systematically categorize the datasets
• Demonstrated the potential of using the categorization for finding relevant datasets
• Utilized a diverse classification hierarchy, namely Freebase
• Other potential applications where this work might be important: browsing, interlinking, and querying
• Plan to improve domain coverage by using knowledge sources such as Wikipedia
• Compare the interpretations given by multiple knowledge sources to see which one gives a better interpretation