These slides were presented as part of a W3C tutorial at the CSHALS 2010 conference (http://www.iscb.org/cshals2010). The slides are adapted from a longer introduction to the Semantic Web available at http://www.slideshare.net/LeeFeigenbaum/semantic-web-landscape-2009 .
A PDF version of the slides is available at http://thefigtrees.net/lee/sw/cshals/cshals-w3c-semantic-web-tutorial.pdf .
The document discusses semantic technologies and formal languages. It provides an overview of upcoming seminars on topics like the semantic web, linked open data, and topic maps. It then discusses what semantic technologies are and why they are useful. Finally, it covers elements of formal languages like syntax, semantics, and deduction rules.
The document outlines plans to deliver a unified web presence for Jisc by the end of 2014. It proposes making the Jisc website at jisc.ac.uk the default home for customer-facing content, while allowing some associate sites that would have a diminished user experience if integrated. By the end of 2014, 100% of sites will carry the Jisc banner, 80% of associate sites will use the Jisc domain, and key sites will be integrated or have common language/analytics. Involvement is sought from website managers and experts through meetings, secondments, and project teams.
This document discusses linked open data and how to publish data in a standardized, machine-readable format using semantic web technologies. It explains that linked data uses the Resource Description Framework (RDF) to represent information as a graph of interconnected resources identified by URIs. By publishing data according to linked data principles, separate databases can be connected to work as a single global database. The document provides examples of how cultural heritage data from different domains can be represented and linked in RDF, and outlines six steps for publishing linked data on the web.
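The merging behaviour described above can be sketched in a few lines of plain Python, with triples as tuples. The URIs, property names, and data below are invented for illustration; a real system would use RDF libraries and full IRIs for predicates.

```python
# Toy sketch of the linked-data idea: two independent datasets describe the
# same resource with the same URI, so their triples merge into one graph.

museum = {
    ("http://example.org/painting/42", "title", "The Night Watch"),
    ("http://example.org/painting/42", "creator", "http://example.org/person/rembrandt"),
}

archive = {
    ("http://example.org/person/rembrandt", "name", "Rembrandt van Rijn"),
    ("http://example.org/person/rembrandt", "birthYear", "1606"),
}

# Because both sets use the same URI for Rembrandt, a plain set union
# already behaves like a single global database.
graph = museum | archive

def objects(graph, subject, predicate):
    """Return every object linked to `subject` via `predicate`."""
    return {o for s, p, o in graph if s == subject and p == predicate}

# Follow the link from the painting to its creator, then into the other dataset.
creator = objects(graph, "http://example.org/painting/42", "creator").pop()
print(objects(graph, creator, "name"))
```

The point of the sketch is the join: neither dataset knows about the other, yet the shared URI is enough to traverse from one into the other.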
The document discusses using JSON-LD and RDF to add semantic meaning to web APIs while maintaining compatibility with existing JSON formats. It explains how RDF uses triples to make statements about resources, and how JSON-LD allows embedding RDF semantics in JSON without changing the format. This allows merging data from multiple sources and facilitates data interchange and evolution of schemas over time.
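A minimal sketch of that idea, using only the standard library: the `@context` maps plain JSON keys to IRIs, so an RDF-aware client can read the same bytes as triples while an ordinary JSON client simply ignores the context. The document, subject IRI, and the naive expansion function below are illustrative assumptions, not the full JSON-LD algorithm.

```python
import json

doc = json.loads("""
{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": "http://schema.org/url"
  },
  "@id": "http://example.org/person/ada",
  "name": "Ada Lovelace",
  "homepage": "http://example.org/~ada"
}
""")

def to_triples(doc):
    """Naive expansion: turn each context-mapped key into one RDF triple."""
    ctx, subject = doc["@context"], doc["@id"]
    return {(subject, ctx[k], v) for k, v in doc.items() if k in ctx}

print(to_triples(doc))
```

Deleting the `@context` key leaves a perfectly ordinary JSON object, which is the compatibility property the summary describes.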
In this talk I review some of the early visions of the Semantic Web, some of the different views, and I follow through on a thread of how Semantic Web technology has been adopted in search engines (and other companies). I end with a challenge to the research community to keep pursuing this research, rather than letting industry take over the "low end" and keep new work from flourishing.
Open data and reuse of public information - Vestforsk.no
A presentation of open data and its potential, especially seen in light of the linked open data development.
Presentation held for Institute of Information and Media Science at the University of Bergen, 14.04.2011
The document discusses why a dedicated search engine like Elasticsearch is better than a traditional database for search tasks. It explains that databases are optimized for data storage and retrieval by unique IDs, but are slow and inefficient for full text search. Elasticsearch uses an inverted index which allows it to quickly search text fields and return relevant results. It analyzes, normalizes, and indexes documents upfront so queries can be executed rapidly against the index. Ranking algorithms ensure the most relevant documents are prioritized in results.
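The inverted index mentioned above can be sketched in a few lines: analysis (here just lowercasing and splitting) happens once at index time, so a query becomes a dictionary lookup plus a set intersection rather than a scan over every row. Real Elasticsearch analyzers and scoring are far richer; this is only the core data structure.

```python
from collections import defaultdict

docs = {
    1: "Elasticsearch is a search engine",
    2: "Databases retrieve rows by unique IDs",
    3: "Full text search needs an inverted index",
}

# Build the inverted index: token -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():  # toy analyzer
        index[token].add(doc_id)

def search(query):
    """Return IDs of documents containing every query term."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(search("inverted index"))  # {3}
print(search("search"))          # {1, 3}
```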
Radically Open Cultural Heritage Data on the Web - Julie Allinson
What happens when tens of thousands of archival photos are shared with open licenses, then mashed up with geolocation data and current photos? Or when app developers can freely utilize information and images from millions of books? On this panel, we'll explore the fundamental elements of Linked Open Data and discover how rapidly growing access to metadata within the world's libraries, archives and museums is opening exciting new possibilities for understanding our past, and may help in predicting our future. Our panelists will look into the technological underpinnings of Linked Open Data, demonstrate use cases and applications, and consider the possibilities of such data for scholarly research, preservation, commercial interests, and the future of cultural heritage data.
The document discusses how to build your own search engine like Google using Apache Solr and Tika. It explains that Solr uses an inverted index and represents documents as vectors in a common term space to allow fast search across large datasets. Tika is used to extract text from different file formats. The document provides examples of how companies like Netflix use Solr for search capabilities on their websites.
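The "documents as vectors in a common term space" idea can be illustrated with raw term-frequency vectors and cosine similarity. Solr's actual scoring (TF-IDF, BM25) weights terms more carefully; this sketch only shows the geometry of ranking, and the sample documents are made up.

```python
import math
from collections import Counter

def vectorize(text):
    """Term-frequency vector for a text (toy tokenisation)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = ["solr is a search platform", "tika extracts text", "search across large datasets"]
query = vectorize("search datasets")

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine(query, vectorize(d)), reverse=True)
print(ranked[0])  # "search across large datasets"
```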
This document discusses efforts to build out linked data resources in Japanese, including DBpedia Japanese and connecting it to other datasets. It describes creating extractions from the Japanese Wikipedia to generate over 80 million triples for DBpedia Japanese. It also discusses the Linked Open Data Initiative in Japan, which aims to promote linked data usage in both government and private sectors. Their projects include building platforms like DBpedia Japanese and CKAN Japanese as well as collaborating on open data projects.
The document provides an introduction to Prof. Dr. Sören Auer and his background in knowledge graphs. It discusses his current role as a professor and director focusing on organizing research data using knowledge graphs. It also briefly outlines some of his past roles and major scientific contributions in the areas of technology platforms, funding acquisition, and strategic projects related to knowledge graphs.
A talk given at the annual Computer Science for High School Teachers event at Victoria University of Wellington. I presented on some basics of the World Wide Web and why it is worth preserving, our work on non-expert tools for populating semantically enriched content, a current project to identify NZ native birds from their calls that combines citizen science with contemporary deep learning using TensorFlow, a project investigating the impact of online citizen science on the development of science capabilities in primary school children, and my collaboration with Adam Grener from the School of English, Film, Theatre and Media Studies at VUW, with whom I am working on computational tools for literary studies.
The document provides an overview of the Semantic Web and linked data. It defines the Semantic Web as publishing structured data on the web in a format that computers can understand, rather than just documents. Linked data follows principles like using URIs to identify things and linking data across sources to integrate information. Query languages like SPARQL can then be used to search across linked data. Examples show how data can be published as RDF and linked to create a global database. Applications that consume and combine linked data from multiple sources are discussed.
A Multifaceted Look At Faceting - Ted Sullivan, Lucidworks
This document discusses using facets in Solr to facilitate relevant search. It provides an overview of facet history and how facets represent metadata that provides context about search results. Facets can be used for visualization, analytics, and understanding language semantics from text. The document argues that facets are dynamic context discovery tools that can be leveraged to find similar items and enhance search in various ways such as query autofiltering, typeahead suggestions, and text analytics.
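At its core, a facet is a count of distinct values of a metadata field over the current result set, which is what gives the user context for drilling down. A minimal sketch, with invented field names and documents:

```python
from collections import Counter

results = [
    {"title": "Intro to RDF", "format": "slides", "year": 2012},
    {"title": "SPARQL basics", "format": "slides", "year": 2011},
    {"title": "Linked data report", "format": "pdf", "year": 2012},
]

def facet(results, field):
    """Counter of value -> hit count, like a Solr facet on `field`."""
    return Counter(doc[field] for doc in results)

print(facet(results, "format"))  # Counter({'slides': 2, 'pdf': 1})
print(facet(results, "year"))    # Counter({2012: 2, 2011: 1})
```

Query autofiltering and typeahead then amount to feeding these counts back into query construction rather than only displaying them.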
This document discusses how the Semantic Web and linked open data can help address issues with isolated, disconnected biodiversity data sets by establishing common vocabularies and linking related resources. Key points include using URIs to identify concepts, representing data as subject-predicate-object triples, expressing data in formats like RDF and making data accessible via SPARQL querying. Adopting these linked data principles allows previously separate data sets to become interoperable components of a larger knowledge base.
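The subject-predicate-object model mentioned above can be queried with a simple triple pattern, where a wildcard slot matches anything; this is the same shape of request a SPARQL basic graph pattern makes. The prefixed names and the tiny biodiversity graph below are illustrative only.

```python
triples = [
    ("ex:puffin", "rdf:type", "ex:Seabird"),
    ("ex:puffin", "ex:observedIn", "ex:Iceland"),
    ("ex:gannet", "rdf:type", "ex:Seabird"),
]

def match(pattern, graph):
    """Return triples matching (s, p, o); None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# All seabirds in the graph, regardless of subject:
print(match((None, "rdf:type", "ex:Seabird"), triples))
```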
RDF and Open Linked Data, a first approach - horvadam
This document discusses the potential benefits of libraries publishing their data as linked open data using semantic web technologies. It describes how linked data allows for standardized access to data across the web as a single API. Libraries can make their data more discoverable on the web and searchable by services like Google by publishing it as linked open data. Semantic web technologies like RDF and SPARQL allow for more powerful search capabilities. Several large libraries are already publishing portions of their data as linked open data, including authority files and entire catalogs. The document outlines some semantic web applications libraries could use to enhance discovery and provides examples of vocabularies for describing different types of metadata.
The document provides an overview of how the LOCAH project is applying Linked Data concepts to expose archival and bibliographic data from the Archives Hub and Copac as Linked Open Data. It describes the process of (1) modeling the data as RDF triples, (2) transforming existing XML data to RDF, (3) enhancing the data by linking to external vocabularies and datasets, (4) loading the RDF into a triplestore, and (5) creating Linked Data views to expose the data on the web. The goal is to publish structured data that can be interconnected across domains to enable new uses by both humans and machines.
The web of interlinked data and knowledge stripped - Sören Auer
Linked Data approaches can help solve enterprise information integration (EII) challenges by complementing text on web pages with structured, linked open data from different sources. This allows for intelligently combining, integrating, and joining structured information across heterogeneous systems. A distributed, iterative, bottom-up integration approach using Linked Data may help solve the EII problem in large companies by taking a pay-as-you-go approach.
Semantic Integration with Apache Jena and Stanbol - All Things Open
The document provides an overview of semantic integration using Apache Jena and Apache Stanbol. It discusses using semantic web technologies like RDF, ontologies, and vocabularies to integrate data from various sources and allow machines and people to better understand and use the integrated information. It also provides technical details on Apache Jena, which can store and query RDF data, and Apache Stanbol, a semantic processing engine that can enhance content with metadata.
The document discusses the benefits of a federated and decentralized approach to knowledge and data on the web. It argues that centralized approaches like Big Data fail at web scale, as knowledge is inherently distributed and heterogeneous. A federated future based on light interfaces like Triple Pattern Fragments is envisioned, one where clients can query multiple data sources simultaneously for better performance and reliability compared to centralized endpoints. Serendipity and realistic expectations are important principles for this vision.
Similar to Linked Open Data - Seminar 25.04.12 (20)
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Fueling AI with Great Data with Airbyte Webinar - Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Main news related to the CCS TSI 2023 (2023/1695) - Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
What is an RPA CoE? Session 1 – CoE Vision - DianaGray10
In the first session, we will review the organization's vision and how it shapes the CoE structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Essentials of Automations: Exploring Attributes & Automation Parameters - Safe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
HCL Notes and Domino license cost reduction in the world of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to use it best
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices you can apply immediately
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency - ScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
The Microsoft 365 Migration Tutorial For Beginner.pptx - operationspcvita
This presentation will help you understand the power of Microsoft 365. It covers every productivity app included in Office 365, outlines common Office 365 migration scenarios, and explains how we can help.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
4. From Gopher to Super-Mashups
http://reegle.info/countries
www.vestforsk.no
5. Why do we want to add meaning to data?
When a computer understands what data means, it can search, reason, and combine data.
6. Meaning is about understanding
To understand we need a language
A language starts with words
8. Look at my coin collection
The first coin is called "Silver Tram" and is from Armenia. It was made in 1246-47 AD.
The second coin is called "Gold Stater of Lahor" and is from India. It was made in 127-151 AD.
< ... etc >
12. subject, predicate, object
"This coin" (subject) "is from" (predicate) "India" (object)
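A triple maps naturally onto a plain 3-tuple. A minimal sketch in Python, using made-up coin: and ex: prefixes for the coins from the earlier slide:

```python
# Each statement about the coin collection becomes one
# (subject, predicate, object) tuple. The coin: and ex: prefixes
# are invented for illustration; they are not a real vocabulary.
triples = [
    ("coin:SilverTram",        "ex:isFrom", "ex:Armenia"),
    ("coin:SilverTram",        "ex:madeIn", "1246-47 AD"),
    ("coin:GoldStaterOfLahor", "ex:isFrom", "ex:India"),
    ("coin:GoldStaterOfLahor", "ex:madeIn", "127-151 AD"),
]

# "This coin is from India" is the triple whose predicate is
# ex:isFrom and whose object is ex:India.
for s, p, o in triples:
    if p == "ex:isFrom" and o == "ex:India":
        print(s)  # coin:GoldStaterOfLahor
```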
13. With RDF Schema we can define concepts and make simple relations between them
14. This coin is from India, hence from South Asia
15. But RDF Schema is limited
A language needs more expressiveness and logic to make good reasoning possible.
That's why OWL (the Web Ontology Language) was invented.
21. So:
Words in XML
Grammar in RDF (Schema) and OWL
Rules in RL
There are a lot of things that can be described using standard formats.
22. Suppose I want to search for a specific coin
23. "I want all the golden coins, designed in Asia, but used in Europe, between 1958 and 1989"
24. We can use SPARQL (the SPARQL Protocol and RDF Query Language)
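A SPARQL engine would evaluate the coin query against a triple store. As a rough stdlib sketch of the same selection logic (the coin records and property names here are invented for illustration, not real data):

```python
# Stand-in for the query "all the golden coins, designed in Asia,
# but used in Europe, between 1958 and 1989".
coins = [
    {"name": "coin:A", "material": "gold",   "designedIn": "Asia",
     "usedIn": "Europe", "year": 1975},
    {"name": "coin:B", "material": "silver", "designedIn": "Asia",
     "usedIn": "Europe", "year": 1960},
    {"name": "coin:C", "material": "gold",   "designedIn": "Asia",
     "usedIn": "Asia",   "year": 1970},
]

def matches(c):
    # Each condition corresponds to one triple pattern / FILTER
    # a real SPARQL query would contain.
    return (c["material"] == "gold"
            and c["designedIn"] == "Asia"
            and c["usedIn"] == "Europe"
            and 1958 <= c["year"] <= 1989)

result = [c["name"] for c in coins if matches(c)]
print(result)  # ['coin:A']
```

The point of SPARQL is that this filtering happens declaratively, over data you do not control, rather than in hand-written loops.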
25. Because the Web is decentralized and data lives in many places, language alone is not enough.
The ultimate goal is the exchange of data between different databases for knowledge creation.
26. To make a connection, a machine needs a source.
For this we use resource identifiers. The best-known resource identifier is the URI, which covers both names (URNs) and locations (URLs).
27. URI, URL and URN
(diagram: a URI can be a location, a URL such as http://www.mycollection.in/, or a name, a URN such as goldStater for the "Gold Stater of Lahor")
28. With all this background we are capable of using the power of all the different data resources on the Web.
29. Linked Data vs. Semantic Web
The Semantic Web, or the Web of Data, is the ultimate goal.
Linked Data provides the means to reach that goal.
Linked Data helps build the Web of Data, which can later be exploited by more advanced technologies such as intelligent agents.
31. Databases store data to answer questions (1)
Persons:
- How old is Rajendra?
- Where does Rajendra work?
- What is Rajendra interested in?
Organisations:
- When was VF founded?
- Where is VF located?
- What can VF do for me?
32. Databases store data to answer questions (2)
Persons:
- Rajendra is .. years old.
- Rajendra works in Sogndal.
- Rajendra is interested in Linked Data.
Organisations:
- VF was founded 27 years ago.
- VF is located in Norway.
- VF offers IT-Consulting & Research.

Persons table:
name | date_birth | work_place | interests
Rajendra | 08-08 | Sogndal | Linked Data
Svein | …. | …. | ….

Organisations table:
organisation | date_founded | location | services
VF | 1985 | Norway | IT-Consulting & Research
nLink | …. | …. | ….
33. Data from databases can be exposed to the Web via HTML
(diagram: the Persons and Organisations databases rendered as web pages)
34. Data from databases can be accessed via APIs
getWorkplace("Rajendra") returns <workPlace>Sogndal</workPlace>
getLocation("VF") returns <location>Norway</location>
35. (Some) information on the Web can be found via search engines
Questions won't necessarily be answered by Google.
36. But how to get answers to complex questions? (1)
Who is interested in "Linked Data" and is working in the same country as VF is located?
37. But how to get answers to complex questions? (2)
Who is interested in "Linked Data" and is working in the same country as VF is located?
work_place (Sogndal) and location (Norway): are they the same thing? In the same country?

Persons table:
name | date_birth | work_place | interests
Rajendra | 08-08 | Sogndal | Linked Data
Svein | …. | …. | ….

Organisations table:
organisation | date_founded | location | services
VF | 1985 | Norway | IT-Consulting & Research
nLink | …. | …. | ….

Still no answer.
38. Is mapping the solution?
Mapped! work_place (Sogndal) and location (Norway): same country?
Still not clear.
And what if we need to add another database?

Students table:
name | date_birth | university | course
Rajendra | 08-08 | NTNU | Computer Science
Svein | …. | …. | ….

What if DB owners can't agree on a common model?
39. Mapping is no solution for a distributed Web of data.
40. Before I come up with a solution, let us understand four simple things.
41. Resources
(diagram: Sogndal and Norway are resources of type place; Sogndal is partOf Norway; work_place and location are each a kind of place)
42. URIs & Namespaces
(diagram: the same graph with namespaced URIs: dbpedia:Sogndal and dbpedia:Norway linked by p:subdivisionName and geonames:country; rdf:type relates them to geo:point and umbel:place, which are related by rdfs:subClassOf)
dbpedia:Sogndal = http://dbpedia.org/resource/Sogndal
rdfs:subClassOf = http://www.w3.org/2000/01/rdf-schema#subClassOf
A namespace is an abstract container or environment created to hold a logical grouping of unique identifiers or symbols.
43. Ontologies
(diagram: the place graph extended with classes: a Person has a work_place and worksFor an Organisation, which has a location; a Person studiesAt a University, which isA Organisation)
44. What if each resource (classes and individuals) had a URI?
45. Expose data from databases as resources & triples on the Web
persons:Rajendra: foaf:name "Rajendra", foaf:birthday "08-08", foaf:based_near dbpedia:Sogndal, foaf:topic_interest dbpedia:LinkedData
orgs:VF: foaf:name "VF", foaf:birthday "1985", foaf:based_near dbpedia:Norway, orgs:services "IT-Consulting & Research"
persons:Svein and orgs:nLink likewise ( …. )
46. Link data and do queries all over the Web
(diagram: persons:Rajendra foaf:based_near dbpedia:Sogndal; dbpedia:Sogndal p:subdivisionName dbpedia:Norway; orgs:VF foaf:based_near dbpedia:Norway; persons:Rajendra foaf:topic_interest dbpedia:LinkedData)
Who is interested in "Linked Data" and is working in the same country as VF is located?
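Once the data is in triple form, the query on this slide can be answered mechanically. A minimal sketch in Python over the four triples shown (a toy matcher, not a real SPARQL engine):

```python
# The four triples from the slide; the prefixes abbreviate the
# full namespace IRIs.
triples = [
    ("persons:Rajendra", "foaf:based_near",     "dbpedia:Sogndal"),
    ("persons:Rajendra", "foaf:topic_interest", "dbpedia:LinkedData"),
    ("dbpedia:Sogndal",  "p:subdivisionName",   "dbpedia:Norway"),
    ("orgs:VF",          "foaf:based_near",     "dbpedia:Norway"),
]

def objects(s, p):
    """All o such that (s, p, o) is in the data."""
    return {o for (s2, p2, o) in triples if s2 == s and p2 == p}

# Country where VF is located.
vf_countries = objects("orgs:VF", "foaf:based_near")

# People interested in Linked Data whose workplace lies in that country.
answers = set()
for (person, p, o) in triples:
    if p == "foaf:topic_interest" and o == "dbpedia:LinkedData":
        for place in objects(person, "foaf:based_near"):
            if objects(place, "p:subdivisionName") & vf_countries:
                answers.add(person)

print(answers)  # {'persons:Rajendra'}
```

The key step is following the p:subdivisionName link from Sogndal to Norway, which only works because both datasets use the same URI for Norway.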
47. Link data from more than 40 datasets
Make use of more than 2 billion triples!
http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData
48. The Linking Open Data cloud diagram
Link data from more than 295 datasets
Last updated: 2011-09-19
http://richard.cyganiak.de/2007/10/lod/
49. How to get answers to really complex questions?
(diagram: the slide-46 graph extended with dbpedia:Norway owl:sameAs Scandinavia:Norge, which has a Scandenavia:unemployment_rate_total of 3.6)
Who is interested in "Linked Data" and is working in a country where the unemployment rate is lower than 4%?
50. A new way to get knowledge and answers: not by searching the web, but by doing dynamic computations based on a vast collection of data, algorithms, and methods
http://www.wolframalpha.com/
51. Comprehensive Knowledge Archive Network
Open Knowledge Foundation http://no.ckan.net/
Licensed under the Open Database License
52. A collaboration between: the Norwegian Press Association, the Association of Norwegian Editors, the Norwegian Union of Journalists, and the Department of Journalism
http://www.offentlighet.no/Registeroffentlighet/Alle-registre
53. Linked data ...
... publishing data on the Web ...
... to enable integration, linking and reuse across silos
54. Six Steps to Publishing Linked Data
1. Understand the Principles
2. Model Your Data
3. Choose URIs for Things in your Data
4. Setup Your Infrastructure
5. Link to other Data Sets
6. Describe and Publicise your Data
55. Can't we just publish data as files?
PDF: easy to read and publish
Excel: allows further processing and analysis
CSV: processing without need for proprietary tools
But ...
- the structure of the data is not explained
- no connection between different data sets: silos
- static and fixed: can't retrieve just the slices relevant to a problem
56. Linked data
Apply the principles of the Web to the publication of data.
The Web:
- is a global network of pages
- each identified by a URL
- fetching a URL gives a document
- pages are connected by links
- open: anyone can say anything about anything else
57. Linked data
Apply the principles of the Web to the publication of data.
The linked data web:
- is a global network of things
- each identified by a URI
- fetching a URI gives a set of statements
- things are connected by typed links
- open: anyone can say anything about anything else
Linked data is "data you can click on"
58. Linked Data Paradigm
1. Use URIs as names for things.
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful information.
4. Include links to other URIs, so that they can discover more things.
59. LOD Benefits
- other humans and applications can easily access your data using Web technologies
- they can follow the links in order to obtain further contextual information
- links to your data and search engine indices can increase the visibility of your data
60. JSON-LD - JSON for Linking Data
JSON-LD (JavaScript Object Notation for Linking Data) is a lightweight Linked Data format that gives your data context.
It is easy for humans to read and write. It is easy for machines to parse and generate.
It is based on the already successful JSON format and provides a way to help JSON data interoperate at Web scale.
If you are already familiar with JSON, writing JSON-LD is very easy.
These properties make JSON-LD an ideal Linked Data interchange language for JavaScript environments, Web services, and unstructured databases such as CouchDB and MongoDB.
http://json-ld.org/spec/latest/json-ld-syntax/
61. This RDF model in standard XML notation
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="/wiki/Tony_Benn">
    <dc:title>Tony Benn</dc:title>
    <dc:publisher>Wikipedia</dc:publisher>
  </rdf:Description>
</rdf:RDF>
62. written in JSON-LD like this:
{
  "@context": {
    "title": "http://purl.org/dc/elements/1.1/title",
    "publisher": "http://purl.org/dc/elements/1.1/publisher"
  },
  "@id": "/wiki/Tony_Benn",
  "title": "Tony Benn",
  "publisher": "Wikipedia"
}
A context is used to allow developers to use aliases for IRIs.
63. JSON-LD object
An Internationalized Resource Identifier (IRI) is a mechanism for representing unique identifiers on the web. In Linked Data, IRIs (or URI references) are commonly used for describing entities and properties.
{
  "a": "Person",
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "avatar": "http://twitter.com/account/profile_image/manusporny"
}
64. Unambiguous Identifiers for JSON
If a set of terms, like Person, name, and homepage, are defined in a context, and that context is used to resolve the names in JSON objects, machines can automatically expand the terms to something meaningful and unambiguous:
{
  "http://www.w3.org/1999/02/22-rdf-syntax-ns#type": "http://xmlns.com/foaf/0.1/Person",
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  "http://xmlns.com/foaf/0.1/homepage": "http://manu.sporny.org",
  "http://rdfs.org/sioc/ns#avatar": "http://twitter.com/account/profile_image/manusporny"
}
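This expansion step can be imitated with nothing but the standard json module. A toy sketch of what a JSON-LD processor does (real processors also handle @id, @type, nesting, and much more):

```python
import json

# A document whose @context aliases two FOAF IRIs.
doc = """
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage"
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org"
}
"""

data = json.loads(doc)
context = data.pop("@context")

# Replace each aliased key with the full IRI from the context;
# keys without an alias are kept as-is.
expanded = {context.get(k, k): v for k, v in data.items()}
print(expanded["http://xmlns.com/foaf/0.1/name"])  # Manu Sporny
```

After expansion, two documents that used different aliases for the same IRIs become directly comparable.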
65. JSON-LD Example
Let's start by building up a fictitious bike store called "Links Bike Shop". We've already got our bike store set up at http://store.example.com/ and are using linked data principles.
Here are some of the URLs:
http://store.example.com/: The home page of the store.
http://store.example.com/products/links-swift-chain: A chain product.
http://store.example.com/products/links-speedy-lube: A chain lube product.
66. We want to start creating some linked data for this fictitious store, starting with rough JSON data on the store itself.
{
  "@id": "http://store.example.com/",
  "@type": "Store",
  "name": "Links Bike Shop",
  "description": "The most \"linked\" bike store on earth!"
}
67. Next let's create some rough data for our two premier products
{
  "@id": "http://store.example.com/products/links-swift-chain",
  "@type": "Product",
  "name": "Links Swift Chain",
  "description": "A fine chain with many links.",
  "category": ["http://store.example.com/categories/parts",
               "http://store.example.com/categories/chains"],
  "price": "10.00",
  "stock": 10
}
69. To make this into a full JSON-LD document we combine the data, add a @context, and adjust some values.
{
  "@id": "http://store.example.com/",
  "@type": "Store",
  "name": "Links Bike Shop",
  "description": "The most \"linked\" bike store on earth!",
  "product": [
    ...
  ...
71. Publishing Solutions and Tools
Triplify
Goal: expose the semantics available in an RDBMS as simply as possible
Available for the most popular Web app languages: PHP (ready), Ruby/Python (under dev.)
Works with the most popular Web app databases: MySQL, PHP-PDO DBs (SQLite, Oracle, DB2, MS SQL, PostgreSQL)
72. Virtuoso RDF Views
Transforms the result of SQL SELECT statements into RDF.
Mapping steps:
- define RDFS class IRIs for each table
- define construction of subject IRIs from primary key column values
- define construction of predicate IRIs from each non-key column
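The three mapping steps above can be sketched as one small function. The base IRI, table name, and sample row below are illustrative assumptions, not Virtuoso's actual output format:

```python
def row_to_triples(table, pkey, row, base="http://example.org/"):
    """Map one relational row to RDF-style triples:
    a class IRI from the table name, a subject IRI from the
    primary key value, and predicate IRIs from the other columns."""
    subject = f"{base}{table}/{row[pkey]}"
    # Step 1: class IRI for the table.
    triples = [(subject, "rdf:type", f"{base}schema/{table}")]
    # Steps 2-3: subject IRI from the key, predicates from the columns.
    for col, value in row.items():
        if col != pkey:
            triples.append((subject, f"{base}schema/{table}#{col}", value))
    return triples

# A row from the Persons table used earlier in the deck.
row = {"name": "Rajendra", "work_place": "Sogndal",
       "interests": "Linked Data"}
for t in row_to_triples("Persons", "name", row):
    print(t)
```

Real mapping tools additionally handle foreign keys (which become links to other subjects) and datatype annotations.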
73. Marrying DBs with RDF & Ontologies
Relational databases vs. RDF & ontologies, compared on:
- data model: relational (tables, columns, rows) vs. triples (subject, predicate, object)
- schema and data separation
- implicit information
- scalability
- schema flexibility
- Web data integration readiness
Using DBs for storage and querying of RDF & ontologies
Publishing DB content as RDF
74. DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link other data sets on the Web to Wikipedia data.
The DBpedia knowledge base currently describes more than 2.6 million things, including at least 213,000 persons, 328,000 places, 57,000 music albums, 36,000 films, and 20,000 companies. The knowledge base consists of 274 million pieces of information (RDF triples).
http://dbpedia.org/
DBpedia and all other linked data is searchable with SPARQL
http://en.wikipedia.org/wiki/SPARQL
75. OpenStreetMap
OpenStreetMap is a free editable map of the whole world. It is made by people like you.
OpenStreetMap allows you to view, edit and use geographical data in a collaborative way from anywhere on Earth.
www.openstreetmap.org
GeoNames
The GeoNames geographical database is available for download free of charge under a Creative Commons attribution license. It contains over eight million geographical names and consists of 6.5 million unique features.
www.geonames.org
76. Creating Open Data
Public Domain: only after the expiration of copyright
Science Commons protocol for open data
Creative Commons Zero
Public Domain Dedication & License with Community Norms
- Avoid technical protection measures
- Give credit where credit's due
- Use open formats
- Let others know!
- Share your work too!
Photo by suttonhoo @ Flickr, CC BY-NC-SA