Linked GeoRef is an open dataset and API that aims to improve the geographical annotation and discovery of public sector information by integrating different geographical reference systems into a single linked data framework. It assigns unique identifiers (URIs) to codes from systems like AGS, NUTS and Berlin's LOR, and links related codes to express hierarchical and equivalence relationships. This allows the different systems to be used together and converted between one another. The dataset is hosted with a SPARQL endpoint for looking up codes and converting them between systems.
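The code conversion described above can be sketched as a SPARQL lookup. The endpoint layout and the predicates below (SKOS notation and equivalence links) are assumptions for illustration, not the actual Linked GeoRef schema:

```python
# Hypothetical sketch: follow an (assumed) equivalence link from a German
# AGS code resource to the corresponding NUTS code. Predicate names and
# graph layout are invented; only the SKOS terms are standard.

def build_conversion_query(ags_code: str) -> str:
    """Build a SPARQL query mapping an AGS code to a NUTS code."""
    return f"""
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?nuts WHERE {{
  ?ags skos:notation "{ags_code}" ;
       skos:exactMatch ?nutsRes .
  ?nutsRes skos:notation ?nuts .
}}"""

query = build_conversion_query("11000000")  # AGS for Berlin
print(query)
```

Such a query would be posted to the dataset's SPARQL endpoint to resolve the equivalent code in the other system.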
Digitales Graffiti und vernetzte Daten für digitale Städte (Knud Möller)
German-language presentation about the data foundations of digital, smart cities: open data, citizen participation ("digital graffiti") and linked data. Presented on 10 September at Xinnovations 2012 in Berlin.
(http://lod2.eu/BlogPost/webinar-series) In this webinar Michael Martin presents CubeViz, a faceted browser for statistical data utilizing the RDF Data Cube vocabulary, the state of the art in representing statistical data in RDF. This vocabulary is compatible with SDMX and is increasingly being adopted. Based on the vocabulary and the encoded Data Cube, CubeViz generates a faceted browsing widget that can be used to interactively filter the observations to be visualized in charts. Based on the selected structure, CubeViz offers suitable chart types and options which can be selected by users.
If you are interested in Linked (Open) Data principles and mechanisms, LOD tools & services and concrete use cases that can be realised using LOD then join us in the free LOD2 webinar series!
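To make the Data Cube vocabulary mentioned above concrete, here is a minimal observation emitted as Turtle from plain Python. The dataset, dimension and measure URIs (`ex:`) are invented for illustration; only the `qb:` terms come from the W3C vocabulary:

```python
# Minimal sketch of an RDF Data Cube observation, built as a plain
# Turtle string. The ex: names are illustrative, not a real dataset.

def make_observation(dataset: str, year: int, value: float) -> str:
    return (
        "@prefix qb: <http://purl.org/linked-data/cube#> .\n"
        "@prefix ex: <http://example.org/> .\n\n"
        f"ex:obs-{year} a qb:Observation ;\n"
        f"    qb:dataSet ex:{dataset} ;\n"
        f"    ex:refYear {year} ;\n"
        f"    ex:population {value} .\n"
    )

turtle = make_observation("population-ds", 2012, 3400000)
print(turtle)
```

A tool like CubeViz reads the dataset's structure definition to decide which dimensions become facets and which measures become chart values.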
Presentation given by Ander García, Maria Teresa Linaza, Javier Franco and Miriam Juaristi during the "Data Management" workshop of the ENTER2015 eTourism conference.
Open Source Statistical Software for the Statistical Office (Mark Van Der Loo)
A general overview of what open source software is and how communities surrounding open source software interact. I also discuss FOSS tools for Official Statistics through the 'awesome official statistics awesomelist' and how FOSS tools are used at Statistics Netherlands.
Automated interpretability of linked data ontologies: an evaluation within th... (Nuno Freire)
Publication and usage of linked data have been highly pursued by cultural heritage institutions and service providers in this domain. Much research and cooperation are taking place in adapting and improving cultural heritage data models for linked data and in defining ontologies and vocabularies, as well as in setting up services based on linked data. This article presents an evaluation of ontologies and vocabularies published as linked data, which originate from the cultural heritage domain or are frequently used and linked to in this domain. Our study aims to evaluate their usability by crawlers operating on the web of data, according to the specifications and practices of linked data, the Semantic Web and ontology reasoning. We evaluate with the use case in mind of general data consumption applications based on RDF, RDF Schema, OWL, SKOS and linked data guidelines. We evaluated twelve ontologies and vocabularies and identified that four were not fully compliant, and that alignments between ontologies are not included in the definitions of the ontologies. This study contributes to research on novel services consuming linked data. It also allows a better assessment of the automation that can be achieved in handling the variety and large volume of linked data, when assessing the viability of new services based on linked data in cultural heritage.
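A toy version of one machine-interpretability check of the kind the article describes: does every class declared in an ontology carry an `rdfs:label` that a crawler could use? The triples and the single check below are illustrative; the paper's actual compliance criteria are broader:

```python
# Illustrative compliance check over a tiny in-memory triple list.
# Real evaluations would parse the published RDF and apply many rules.

RDF_TYPE = "rdf:type"
RDFS_CLASS = "rdfs:Class"
RDFS_LABEL = "rdfs:label"

triples = [
    ("ex:Painting", RDF_TYPE, RDFS_CLASS),
    ("ex:Painting", RDFS_LABEL, "Painting"),
    ("ex:Sculpture", RDF_TYPE, RDFS_CLASS),  # no label -> flagged
]

def unlabeled_classes(triples):
    """Return declared classes that lack an rdfs:label."""
    classes = {s for s, p, o in triples if p == RDF_TYPE and o == RDFS_CLASS}
    labeled = {s for s, p, o in triples if p == RDFS_LABEL}
    return sorted(classes - labeled)

print(unlabeled_classes(triples))  # ['ex:Sculpture']
```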
The Power of Semantic Technologies to Explore Linked Open Data (Ontotext)
Presentation by Atanas Kiryakov, Ontotext's CEO, at the first edition of Graphorum (http://graphorum2017.dataversity.net/) – a new forum that taps into the growing interest in Graph Databases and Technologies. Graphorum is co-located with the Smart Data Conference, organized by the digital publishing platform Dataversity.
The presentation demonstrates the capabilities of Ontotext’s own approach to contributing to the discipline of more intelligent information gathering and analysis by:
- graphically exploring the connectivity patterns in big datasets;
- building new links between identical entities residing in different data silos;
- gaining insight into what types of queries can be run against various linked data sets;
- reliably filtering information based on relationships, e.g., between people and organizations, in the news;
- demonstrating the conversion of tabular data into RDF.
Learn more at http://ontotext.com/.
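The last bullet, converting tabular data into RDF, can be sketched in a few lines: each CSV row becomes a subject and each column an (invented) property. The `ex:` names are placeholders, not Ontotext's actual mapping:

```python
# Hedged sketch of tabular-to-RDF conversion: one subject per row,
# one triple per non-empty cell. Property URIs are illustrative.

import csv
import io

def rows_to_triples(csv_text: str, base: str = "ex:"):
    reader = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for i, row in enumerate(reader):
        subject = f"{base}row{i}"
        for column, value in row.items():
            triples.append((subject, f"{base}{column}", value))
    return triples

data = "name,city\nAlice,Sofia\nBob,Berlin\n"
for t in rows_to_triples(data):
    print(t)
```

Production mappings typically also assign stable URIs, datatypes and links to existing entities rather than opaque row numbers.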
As part of the final BETTER Hackathon, project partners prepared four hackathon exercises. Fraunhofer IAIS organised this exercise in conjunction with external partner MKLab ITI-CERTH (EOPEN project). This step-by-step exercise featured the setup of local Docker images on Linux, using Docker Compose and (pre-installed) Python, SANSA, Hadoop, Apache Spark and Apache Zeppelin. It featured semantic transformation and the use of SANSA (Scalable Semantic Analytics Stack - http://sansa-stack.net/) libraries on a sample of tweets ahead of geo-clustering.
Project website (Hackathon information): https://www.ec-better.eu/pages/2nd-hackathon
GitHub repository: https://github.com/ec-better/hackathon-2020-semanticgeoclustering
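The geo-clustering step at the end of the exercise can be illustrated at toy scale by bucketing tweet coordinates into grid cells. This is a much simpler scheme than what the SANSA/Spark pipeline uses, but it shows the idea:

```python
# Illustrative stand-in for geo-clustering: group (lat, lon) pairs
# into fixed-size grid cells. Real pipelines would use proper spatial
# clustering over a distributed RDF representation of the tweets.

def grid_cluster(points, cell_deg=1.0):
    """Bucket (lat, lon) pairs into cells of cell_deg x cell_deg degrees."""
    clusters = {}
    for lat, lon in points:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        clusters.setdefault(key, []).append((lat, lon))
    return clusters

tweets = [(41.0, 28.9), (41.2, 28.7), (52.5, 13.4)]
clusters = grid_cluster(tweets)
print(len(clusters))  # 2 cells: one around Istanbul, one around Berlin
```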
Linked Statistical Data: does it actually pay off? (Oscar Corcho)
Invited keynote at the ISWC2015 Workshop on Semantics and Statistics (SemStats 2015). http://semstats.github.io/2015/
The release of the W3C RDF Data Cube recommendation was a significant milestone towards improving the maturity of the area of Linked Statistical Data. Many Data Cube-based datasets have been released since then. Tools for the generation and exploitation of such datasets have also appeared. While the benefits of using RDF Data Cube and generating Linked Data in this area seem clear, there are still many challenges associated with the generation and exploitation of such data. In this talk we reflect on them, based on our experience in generating and exploiting this type of data, and hopefully provoke some discussion about what the next steps should be.
What to do with the existing spatial data in planning (Karel Charvat)
Spatial planning operates across all levels of government, so planners face important challenges in the development of territorial frameworks and concepts every day.
Spatial planning systems, the legal situation and spatial planning data management are completely different and fragmented throughout Europe.
Nevertheless, planning is a holistic activity.
All tasks and processes must be solved comprehensively with input from various sources.
It is necessary to make these inputs interoperable, because interoperability allows the user to search data from different sources, view them, download them and use them with the help of geoinformation technologies (GIT).
Slides of my talk at OSLCfest in Stockholm Nov 6, 2019
Video recording of the talk is available here:
https://www.facebook.com/oslcfest/videos/2261640397437958/
The need of Interoperability in Office and GIS formats (Markus Neteler)
Free GIS and Interoperability: The need of Interoperability in Office and GIS formats
GIS Open Source, interoperabilità e cultura del dato nei SIAT della Pubblica Amministrazione
[GIS Open Source, interoperability and the 'culture of data' in the spatial data warehouses of the Public Administration]
New tools are being developed by the Czech Living Lab WirelessInfo, which allow users to easily publish their data and metadata as part of a Spatial Data Infrastructure (SDI). The paper describes the design of a technological infrastructure on the basis of ISO and OGC standards, as well as the implementation of a prototype and first experiences. The solution is designed as a distributed system which provides the connection to metadata about spatial data and services. This solution tests the principle of catalogue services at both national and international level, which could be used in the UN SDI context. A catalogue portal is one of the independent components of GeoHosting, a complex system for raster and vector spatial data sharing. The catalogue portal provides data source searching on the basis of metadata records through structured queries. The portal also contains edit functionality for creating and editing metadata records. The metadata catalogue system conforms to the ISO 19115/19119/19139 standards [1], [2], [3], [4] and provides cascading search across other standardized catalogue systems. Unlike other initiatives that offer publishing of one's own content, such as Google technologies or OpenStreetMap, GeoHosting is based fully on the European INSPIRE standards and supports the establishment of a network of distributed servers.
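The catalogue portal's core function, searching data sources by their metadata records, can be modelled very simply. The field names below are an illustrative minimum, not the full ISO 19115 record structure:

```python
# Very small model of catalogue search: metadata records as dicts with
# a couple of ISO-19115-style core fields, queried by keyword. A real
# catalogue would answer structured CSW queries over full records.

records = [
    {"title": "Orthophoto Prague", "keywords": ["raster", "imagery"]},
    {"title": "Road network CZ", "keywords": ["vector", "transport"]},
]

def search(records, keyword):
    """Return titles of records whose keyword list contains `keyword`."""
    return [r["title"] for r in records if keyword in r["keywords"]]

print(search(records, "vector"))  # ['Road network CZ']
```

Cascading search, as described above, would forward the same query to other standardized catalogues and merge the results.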
Overview of the world of geospatial metadata, and the role of the EDINA service GoGeo in creating, saving, and discovering it. Presented on 19 June 2014 by Tony Mathys in Aberdeen, Scotland.
The global need to securely derive (instant) insights has motivated data architectures from distributed storage to data lakes, data warehouses and lakehouses. In this talk we describe Tag.bio, a next-generation data mesh platform that embeds vital elements such as domain centricity/ownership, Data as Products and self-serve architecture, with a federated computational layer. Tag.bio data products combine data sets, smart APIs, and statistical and machine learning algorithms into decentralized data products for users to discover insights using the FAIR principles. Researchers can use its point-and-click (no-code) system to instantly perform analyses and share versioned, reproducible results. The platform combines a dynamic cohort builder with analysis protocols and applications (low-code) to drive complex analysis workflows. Applications within data products are fully customizable via R and Python plugins (pro-code), and the platform supports notebook-based developer environments with individual workspaces.
Join us for a talk/demo session on the Tag.bio data mesh platform and learn how major pharma companies and university health systems are using this technology to promote value-based healthcare and precision healthcare, find cures for diseases, and promote collaboration (without explicitly moving data around). The talk also outlines Tag.bio's secure data exchange features for real-world evidence datasets, privacy-centric data products (confidential computing), as well as integration with cloud services.
A Taxonomy of the Data Resource in the Networked Industry (Boris Otto)
This presentation reports on the design of a taxonomy of the data resource in the networked industry. It was held at the 7th International Scientific Symposium on Logistics on June 6, 2014, in Cologne, Germany. The presentation motivates the topic, analyzes four networked-industry cases and discusses a first version of the taxonomy. It argues that for companies aiming to design a future-proof data architecture that leverages the potential of the industrial internet, collaborative forms of organization etc., transparency about data sources, data ownership, criticality, compliance with data standards and data quality is key to success. In addition, the presentation introduces a first sketch of a method supporting businesses in applying the taxonomy.
Linked Data Overview - structured data on the web for US EPA 20140203 (3 Round Stones)
This presentation provides a jargon-free overview of Linked Open Data. Linked Data is being used by the US EPA for US Government data publication. The Linked Data approach allows for an increased ability to combine data from multiple sources at decreased cost.
Closing of the EU Data Cloud event at the European Data Forum 2012 in Copenhagen: http://www.data-forum.eu/program/eu-data-cloud
What comes after LATC? What are other examples of Linked Data usage?
Kasabi, an online data market based on linked data principles, offers data publishers an easy way to publish, link and monetise data, while giving developers of data-centric applications access to this data in different formats and through a number of different interfaces.
1h SPARQL tutorial given at the "Practical Cross-Dataset Queries on the Web of Data" tutorial at WWW2012. Supported by the LATC FP7 Project. http://latc-project.eu/
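The cross-dataset queries the tutorial covers can be illustrated with a federated SPARQL query that joins data across endpoints via the `SERVICE` keyword. The DBpedia endpoint URL is real; the query pattern is a simple illustrative example, not one from the tutorial itself:

```python
# Sketch of a federated SPARQL query: fetch a city's population from
# DBpedia via SERVICE. Built as a string; executing it would require
# posting it to a SPARQL 1.1 endpoint that supports federation.

def federated_query(city: str) -> str:
    return f"""
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?pop WHERE {{
  SERVICE <http://dbpedia.org/sparql> {{
    ?c rdfs:label "{city}"@en ;
       dbo:populationTotal ?pop .
  }}
}}"""

print(federated_query("Berlin"))
```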
Linked Data and semantic technologies have seen a remarkable uptake in recent years. However, there is still a significant divide in organisations and companies between implementers and executive decision makers regarding the adoption of Linked Data. While implementers are early adopters, enthusiastic about new technologies, executives who decide whether or not new and potentially costly projects go ahead tend to be sceptical, thinking in terms of costs and benefits. What is needed to bridge this divide is a kind of “executive whispering”: presenting a potential Linked Data project not in terms of technology, but of the concrete benefits it will bring to a particular organisation or company.
The talk draws from the significant experience of the Talis and Kasabi Consulting team.
What is RDFa, what is new in RDFa 1.1, why is it important for Linked Data, and who uses it and how? - Talk given at WebDirectionsSouth 2010 in Sydney, 14/10/2010
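As a quick taste of what the talk covers: RDFa annotates ordinary HTML with attributes such as `vocab` (introduced in RDFa 1.1) and `property`. The regex scrape below is a crude stand-in for a real RDFa parser, used only to show what the markup carries:

```python
# Toy RDFa 1.1 illustration: the HTML snippet declares a schema.org
# Person with a `name` property; we pull the property/value pairs out
# with a naive regex instead of a proper RDFa distiller.

import re

html = """
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Knud Moeller</span>
</div>
"""

properties = dict(re.findall(r'property="([^"]+)">([^<]+)<', html))
print(properties)  # {'name': 'Knud Moeller'}
```

A conforming processor would instead produce full RDF triples with resolved URIs, e.g. `<http://schema.org/name>`.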
The Semantic Web (and what it can deliver for your business) (Knud Möller)
3-hour talk I gave on behalf of Social Bits and the Irish Internet Association (IIA). Contains an introduction to the general idea of the Semantic Web and Linked Data, its relevance and opportunities for businesses, and a look under the hood - how does it all work?
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
Linked GeoRef
1. Linked GeoRef
An Open Dataset and API for Improving Geographical Annotation and Finding of Public Sector Information
Knud Möller (Datalysator), Florian Marienfeld (Willhöft IT)
2nd Open Data Dialog, 19 November 2013, Berlin, Germany
DATALYSATOR – web data solutions and training
2. The Challenge
• Most (Open (Government)) datasets refer to some location.
• Let’s declare that in the metadata!
http://www.opendata-hro.de
3. Which Standard?
• There are plenty of options
• ISO authority: ISO 3166-1/2, Codes for the representation of names of countries and their subdivisions
• EU regulation: NUTS (Nomenclature of Territorial Units for Statistics)
• German elaboration: Amtlicher Gemeindeschlüssel (AGS) / Amtlicher Regionalschlüssel (ARS)
• Geometry: geographical bounding boxes
4. Best fit for Germany?
• AGS would work great
• From federal states down to boroughs:
• 05: Nordrhein-Westfalen
• 3: Regierungsbezirk Köln
• 82: Rhein-Sieg-Kreis
• 064: Gemeinde Swisttal
➡ 05382064
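The hierarchical composition above can be sketched in a few lines of code. The segment lengths (2 + 1 + 2 + 3 digits) and the example values come from the slide; the function name and dict keys are illustrative choices:

```python
# Split a full 8-digit AGS into its hierarchical components:
# federal state (2 digits), Regierungsbezirk (1), Kreis (2), Gemeinde (3).
def split_ags(ags: str) -> dict:
    assert len(ags) == 8, "a full municipality AGS has 8 digits"
    return {
        "land": ags[:2],               # e.g. 05  = Nordrhein-Westfalen
        "regierungsbezirk": ags[2:3],  # e.g. 3   = Regierungsbezirk Köln
        "kreis": ags[3:5],             # e.g. 82  = Rhein-Sieg-Kreis
        "gemeinde": ags[5:8],          # e.g. 064 = Gemeinde Swisttal
    }

parts = split_ags("05382064")
print(parts["land"], parts["kreis"])  # 05 82
```

Prefixes of the full code identify the coarser regions, which is exactly the containment shown on the next slide.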
5. 05: Nordrhein-Westfalen
053: Regierungsbezirk Köln
05382: Rhein-Sieg-Kreis
05382064: Gemeinde Swisttal
Maps taken from Wikipedia
6. Problems with AGS
• only Germany
• not even all of it:
• no further detail for city-states like Berlin or Hamburg (no “Neukölln” in AGS)
• city-states have their own local systems, e.g. “Lebensweltlich orientierte Räume” (LOR) in Berlin
7. Problems with others
• NUTS and ISO:
• not as detailed as AGS
• not “at home” in administration
• bounding boxes:
• not precise enough
• coordinates not as easy to read as text
• Everybody likes their own approach best...
8. Why Choose?
• Proposal: a Linked Data solution
• combine all approaches in one dataset
• every code in every system gets a unique identifier: a URI
• dereference it (surf to it) and find out what it means!
• codes are linked: what is the same? what is contained in what (hierarchy)?
9. Linked GeoRef
• http://lgeoref.org/nuts/DE (Germany)
• http://lgeoref.org/nuts/DEA (North Rhine-Westphalia)
• http://lgeoref.org/ags/05 (North Rhine-Westphalia)
• http://lgeoref.org/ags/12 (Brandenburg)
• http://lgeoref.org/ags/05/3/82 (Rhein-Sieg-Kreis)
• http://lgeoref.org/berlin/lor/08 (Neukölln)
• http://lgeoref.org/berlin/lor/02/02/02/06 (Graefekiez)
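The URI patterns above encode the hierarchy in their path segments, so the chain of containing regions can be read straight off a URI. This is an illustration only: the real dataset expresses containment as explicit links between resources, not via URL slicing, and `base_depth` is a made-up parameter marking where the namespace prefix ends:

```python
# Derive the chain of parent URIs from a hierarchical Linked GeoRef URI
# by dropping trailing path segments one at a time.
from urllib.parse import urlparse

def parent_uris(uri: str, base_depth: int = 1) -> list:
    parsed = urlparse(uri)
    segments = parsed.path.strip("/").split("/")
    parents = []
    for i in range(len(segments) - 1, base_depth, -1):
        parents.append(f"{parsed.scheme}://{parsed.netloc}/" + "/".join(segments[:i]))
    return parents

print(parent_uris("http://lgeoref.org/ags/05/3/82"))
# ['http://lgeoref.org/ags/05/3', 'http://lgeoref.org/ags/05']
```

For the Berlin LOR URIs the namespace prefix is two segments long (`berlin/lor`), so `base_depth=2` would apply there.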
14. Use case: govdata.de
• geo-referencing data
• map-based search and filtering
• hard to enforce one scheme in metadata
• Linked GeoRef could help: a spatial-uri field in addition to spatial (coordinates) and spatial-text (human-readable):
"spatial-uri": "http://lgeoref.org/berlin/lor/08"
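A metadata record combining the three fields could look roughly like this. Only the field names and the spatial-uri value come from the slide; the title, coordinates, and text value are made-up illustration data:

```python
import json

# Sketch of a govdata.de-style metadata record with the three spatial
# fields named on the slide: machine-readable coordinates, a
# human-readable label, and a dereferenceable Linked GeoRef URI.
record = {
    "title": "Some open dataset about Neukölln",        # illustrative
    "spatial": "POINT(13.44 52.48)",                    # coordinates (illustrative)
    "spatial-text": "Berlin, Bezirk Neukölln",          # human-readable
    "spatial-uri": "http://lgeoref.org/berlin/lor/08",  # from the slide
}
print(json.dumps(record, indent=2))
```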
15. Use case: Code Converter
• convert between different reference systems
• “which AGS is this NUTS ‘DEE’?”
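At its core such a converter is a lookup over equivalence links between codes. A minimal sketch: the DEA/05 and DE4/12 pairs can be read off slide 9; the DEE/15 pair (Sachsen-Anhalt) is added here to answer the slide's own example question, and the table is of course far from complete:

```python
# Minimal NUTS-to-AGS converter backed by a hand-written equivalence
# table (the real service would query the linked dataset instead).
NUTS_TO_AGS = {
    "DEA": "05",  # Nordrhein-Westfalen
    "DE4": "12",  # Brandenburg
    "DEE": "15",  # Sachsen-Anhalt
}

def nuts_to_ags(nuts: str) -> str:
    return NUTS_TO_AGS[nuts]

print(nuts_to_ags("DEE"))  # 15
```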
16. Collateral Usage
• central reference point for different code systems
• lookup for relations between codes (containment, identity, overlap, etc.)
• links to other widely-used reference datasets, e.g. DBpedia, GeoNames
17. Internals
• data in various formats (mostly Excel) from various sources converted to RDF
• enriched and interlinked, linked to external data (DBpedia, GeoNames)
• each reference system in its own namespace:
• http://lgeoref.org/ags/
• http://lgeoref.org/nuts/
• http://lgeoref.org/berlin/lor/
• etc.
• hosted in an RDF store with SPARQL endpoint (dydra.com)
• simple Linked Data frontend with Pubby
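Against the SPARQL endpoint mentioned above, the converter lookup from slide 15 could be phrased roughly as follows. This is a hypothetical query: the slides do not name the dataset's vocabulary, so the use of owl:sameAs as the equivalence predicate is an assumption:

```python
# A hypothetical SPARQL query against the Linked GeoRef endpoint,
# asking which AGS resource is equivalent to NUTS 'DEE'.
# owl:sameAs is an assumed predicate; the actual vocabulary may differ.
QUERY = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?ags WHERE {
  <http://lgeoref.org/nuts/DEE> owl:sameAs ?ags .
  FILTER(STRSTARTS(STR(?ags), "http://lgeoref.org/ags/"))
}
"""
print(QUERY.strip())
```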
18. Take-home
• challenge: Open Data needs geographical context
• many useful geo reference systems (AGS, NUTS, ...)
• but: everybody likes a different one
• so: don’t choose, use all of them together
• Linked GeoRef provides an integrated, dereferenceable framework for this