Introducing Generalized Deduplication for Energy-efficient IoT Networks with ... (LEGATO project)
Abstract: The growing number of Internet of Things (IoT) devices is increasing the network traffic load globally. As a solution to this problem, we propose Hermes, a network protocol that leverages the concept of generalized deduplication (GD) to reduce not only the network traffic but also the energy consumption of devices. GD is a novel technique that maps identical and similar data chunks to a so-called basis while applying an error-correcting code to them. The difference between the original data chunk and the basis is encoded in a deviation.
With this poster we motivate the use and purpose of the Hermes protocol. Then we outline the concept of GD and describe the architecture of a Hermes network including the message exchange. Finally, we highlight our micro- and macro-benchmark results on a synthetic data set.
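The basis/deviation split described in the abstract can be illustrated with a toy sketch. This is not the actual Hermes protocol or its error-correcting code; the chunk width and split point are arbitrary choices for the example:

```python
# Toy sketch of generalized deduplication (GD), NOT the actual Hermes
# algorithm: each 16-bit chunk is split into a high-order "basis" and a
# low-order "deviation". Similar chunks share the same basis, so each
# basis is stored (or transmitted) only once.

def split_chunk(chunk: int, dev_bits: int = 4):
    """Map a chunk to (basis, deviation)."""
    basis = chunk >> dev_bits                   # shared part
    deviation = chunk & ((1 << dev_bits) - 1)   # per-chunk difference
    return basis, deviation

def deduplicate(chunks, dev_bits: int = 4):
    bases = {}    # basis value -> basis id
    stream = []   # compact (basis id, deviation) pairs
    for c in chunks:
        b, d = split_chunk(c, dev_bits)
        bid = bases.setdefault(b, len(bases))
        stream.append((bid, d))
    return bases, stream

def reconstruct(bases, stream, dev_bits: int = 4):
    inv = {bid: b for b, bid in bases.items()}
    return [(inv[bid] << dev_bits) | d for bid, d in stream]

chunks = [0x1234, 0x1235, 0x1238, 0x9ABC]
bases, stream = deduplicate(chunks)
assert reconstruct(bases, stream) == chunks
# The first three chunks share the basis 0x123, so only two bases are kept.
```

Classical deduplication only collapses identical chunks; the deviation is what lets GD also collapse merely similar ones, at the cost of storing a few extra bits per chunk.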
Poster presented by Christian Göttel at the LEGaTO Final Event: 'Low-Energy Heterogeneous Computing Workshop'
Wikidata as a linking hub for knowledge organization systems? Integrating an ... (Joachim Neubert)
Wikidata was created to support all of the roughly 300 Wikipedia projects. Besides interlinking all Wikipedia pages about a specific item (e.g., a person) in different languages, it also connects to more than 1900 different sources of authority information.
We will present lessons learned from using Wikidata as a linking hub for two personal name authorities in economics (GND and RePEc author identifiers) and demonstrate the benefits of moving a mapping from a closed environment to Wikidata as a public and community-curated linking hub. We will further ask to what extent these experiences can be transferred to knowledge organization systems and how the limitation to simple 1:1 relationships (as for authorities) can be overcome. Using STW Thesaurus for Economics as an example, we will investigate how we can make use of existing cross-concordances to "seed" Wikidata with external identifiers, and how transitive mappings to yet un-mapped vocabularies can be derived.
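The transitive-mapping idea above amounts to composing two identifier mappings through a shared Wikidata item. A minimal sketch, with all concept and item IDs invented for illustration:

```python
# Hypothetical sketch of deriving a transitive mapping through Wikidata:
# if a thesaurus concept maps to a Wikidata item, and that item also
# carries an external identifier for another vocabulary, a candidate
# mapping between the two vocabularies falls out by composition.
# All identifiers below are invented examples.

stw_to_wikidata = {"stw:10042-5": "Q8134", "stw:18012-3": "Q43015"}
wikidata_to_other = {"Q8134": "jel:A1", "Q186363": "jel:G2"}

def compose(a_to_b, b_to_c):
    """Derive a -> c wherever both hops exist."""
    return {a: b_to_c[b] for a, b in a_to_b.items() if b in b_to_c}

stw_to_other = compose(stw_to_wikidata, wikidata_to_other)
# Only concepts with both hops present get mapped; the rest stay unmapped.
```

In practice each derived pair would still need human review, since the two hops may encode different notions of "same concept".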
This document provides a history of Semantic MediaWiki (SMW) and Wikidata, including:
- SMW was created in 2006 as a MediaWiki extension to add structured data to wiki pages. It is now used on over 1500 sites.
- Wikidata launched in 2012 as a structured database to provide common facts across Wikimedia projects.
- SMW allows adding both unstructured text and structured data to wiki pages through features like online forms, queries, and semantic web standards.
- An example FINA wiki uses several SMW extensions to integrate structured data from Wikidata into person pages and visualizations.
- Opportunities are discussed to further link SMW and Wikidata data through mappings and reconciliation.
This document discusses mapping financial data from XBRL filings to Linked Open Data. It describes using ReDeFer to convert XBRL XML instances and schemas to RDF, generating over 1.3 million triples. A prototype demonstrates publishing and querying the semantic XBRL dataset using the Rhizomer tool. While the initial mapping is straightforward, the author notes the resulting RDF could be tailored further and semantic mappings could facilitate cross-querying financial data across filings, companies and accounting principles.
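The XBRL-to-RDF mapping reduces, at its core, to emitting one triple per reported fact. The sketch below is illustrative only (it is not ReDeFer's actual mapping, and the namespace and filing URI are invented), but it shows the shape of the output:

```python
# Illustrative-only sketch: turn one parsed XBRL fact into an N-Triples
# line. The namespace and filing URI are invented for the example; a real
# mapping would reuse the XBRL taxonomy's own concept URIs and model
# units and contexts as separate resources.

def fact_to_ntriple(filing_uri: str, concept: str, value: str, unit: str) -> str:
    subj = f"<{filing_uri}>"
    pred = f"<http://example.org/xbrl#{concept}>"
    obj = f'"{value} {unit}"'
    return f"{subj} {pred} {obj} ."

triple = fact_to_ntriple(
    "http://example.org/filing/2011/acme", "Assets", "1200000", "EUR")
print(triple)
```

With one line per fact, a filing with thousands of facts expands to a comparable number of triples, which is how a modest set of filings reaches the 1.3 million triples mentioned above.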
Testing spatial data deliverance in SQL and NoSQL Database (Dany Laksono)
This document summarizes research that tested the performance of delivering spatial data through SQL and NoSQL databases using a NodeJS fullstack web application. It describes testing different sized spatial datasets stored in PostGIS and MongoDB and delivered via a MEAN (MongoDB, Express, AngularJS, NodeJS) framework. The results showed that MongoDB performed better than PostGIS at delivering large spatial datasets, with response times increasing more sharply with dataset size for PostGIS. The document concludes that a MEAN framework could be used as an alternative to traditional LAMP frameworks for web GIS applications.
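The query both back ends answer for a map viewport is essentially a bounding-box filter. PostGIS would express it with `ST_MakeEnvelope` and the `&&` operator, MongoDB with `$geoWithin`; the pure-Python sketch below (with made-up features) just shows the predicate being evaluated:

```python
# Conceptual sketch of a viewport query: keep features whose point
# geometry falls inside a bounding box. Feature names and coordinates
# are made up for the example.

def in_bbox(feature, min_lon, min_lat, max_lon, max_lat):
    lon, lat = feature["geometry"]["coordinates"]
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

features = [
    {"name": "UGM",  "geometry": {"type": "Point", "coordinates": [110.378, -7.771]}},
    {"name": "Oslo", "geometry": {"type": "Point", "coordinates": [10.75, 59.91]}},
]
visible = [f["name"] for f in features if in_bbox(f, 110, -8, 111, -7)]
# visible == ["UGM"]
```

The performance gap reported above comes from how each database indexes and serializes such results at scale, not from the predicate itself, which is the same in both systems.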
The Evolution of Blue Ocean Databases, from SQL to Blockchain (Trent McConaghy)
1. The evolution of blue ocean databases, from Oracle to MySQL to MongoDB to BigchainDB
2. Decentralized software stacks, including decentralized file systems, decentralized databases, and decentralized processing (smart contracts)
[This was presented at a BigchainDB Hackfest, Feb 2017 in Berlin]
This document discusses open data and real-time data sharing protocols. It defines open data as data that is freely available to access, use, and share, with an open license. It notes that open data licenses specify conditions of reuse, such as requiring attribution and share-alike licensing of derived works. The document then discusses challenges of scaling data sharing to billions of IoT devices and protocols like CoAP and MQTT that can help address this through pub/sub messaging and small packet sizes. It also mentions using transducers to build processing pipelines that can operate on streaming data sources.
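The transducer idea mentioned above — small processing steps composed into one pipeline that works over any streaming source — can be sketched with plain Python generators (the readings and threshold are invented):

```python
# Sketch of a transducer-style pipeline: each stage consumes and yields
# a stream, so the same composition works over a list, a socket, or an
# MQTT subscription. Sample values are invented.

def parse(readings):
    for r in readings:
        yield float(r)

def deduplicate(values):
    last = object()          # sentinel that never equals a reading
    for v in values:
        if v != last:
            yield v
            last = v

def threshold(values, limit):
    for v in values:
        if v >= limit:
            yield v

raw = ["21.0", "21.0", "23.5", "19.0", "25.1"]
alerts = list(threshold(deduplicate(parse(raw)), 22.0))
# alerts == [23.5, 25.1]
```

Because every stage is lazy, nothing is buffered beyond one element, which is what makes this style suitable for unbounded IoT streams.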
SC7 Webinar 5 13/12/2017 UoA Presentation "Technical aspects of the 3rd secur..." (BigData_Europe)
This document summarizes the technical aspects of the third Secure Societies pilot project. The pilot uses multiple data sources including satellite imagery, news articles, and Twitter posts. It has two main workflows: a change detection workflow that analyzes satellite images to detect changes over time using image processing techniques, and an event detection workflow that monitors social media and news to cluster events. Both workflows store results as linked geospatial data for visualization and querying through an interface. The change detector component parallelizes change detection algorithms on images using Spark.
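The core of the change-detection workflow is a per-pixel comparison of co-registered images. This is a minimal sketch, not the pilot's actual Spark code — real change detection would work on calibrated rasters, and Spark would distribute image pairs or tiles as tasks:

```python
# Minimal change-detection sketch: compare two co-registered rasters
# (here, tiny lists of pixel rows) and flag cells whose value difference
# exceeds a threshold. Values and threshold are invented.

def detect_changes(before, after, threshold=50):
    """Return (row, col) cells where pixel values differ strongly."""
    changed = []
    for i, (row_b, row_a) in enumerate(zip(before, after)):
        for j, (b, a) in enumerate(zip(row_b, row_a)):
            if abs(a - b) > threshold:
                changed.append((i, j))
    return changed

before = [[10, 10], [200, 10]]
after  = [[12, 90], [100, 10]]
print(detect_changes(before, after))  # [(0, 1), (1, 0)]
```

Each flagged cell would then be georeferenced and stored as linked geospatial data, so the event- and change-detection results share one query interface.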
A document briefly describes the case of Paola Díaz in Uruguay, where someone usurped her identity. It includes sections on the presentation of the case, points to consider, and following the right trail, but gives no further details on the facts of the case itself.
This document contains recommendations for the European Union Location Framework (EULF) to improve how location data is used across different EU policy areas and in e-government services. It includes 15 recommendations targeted at public administrations, EU institutions, and EULF stakeholders. Each recommendation proposes guidance documents or standards to better align location data practices with INSPIRE and integrate location information into policy and digital services.
This document discusses linking smart meter data to 3D City Information Models (CityGML) to improve energy efficiency in buildings. It provides background on Sinergis and the Sunshine project, which uses open standards to support energy efficiency. It then discusses EU energy consumption trends, the role of buildings, relevant data models like CityGML and INSPIRE, and initiatives like Green Button that could be used to link smart meter readings to 3D city models using open protocols. Integrating smart meter data with 3D models in this way may help analyze and improve building energy performance.
This document discusses challenges around measuring and improving building energy efficiency, including a lack of standardized data formats and definitions. It provides examples of projects aiming to open up energy data using standards like INSPIRE and Green Button. Realizing the full benefits of building energy performance data requires addressing ongoing barriers like inconsistent data formats and time spent on data cleaning rather than analysis.
Presentation at the 2nd OpenGeoData National Conference.
Working group on street registries ("Stradari"):
http://www.opengeodata.it/index.php?option=com_content&view=article&id=251:il-tavolo-di-lavoro-sugli-stradari&catid=50:eventi-ogd&Itemid=76
This document lists several aerophone instruments, such as the flute, the oboe, and the bassoon, that are used in bands and symphony orchestras. It also mentions instruments such as the horn, the shakuhachi flute, baroque flutes, and Andean instruments such as the pincuyo and the sikus. The document was created by Andres Senges and Ivan Garcia.
Italian local governments use ECO, an open Territorial Information System, to detect illegal building construction by comparing satellite maps to public records. ECO integrates data from various databases and devices to create a unified Municipal Building Registry. It correlates geospatial and IoT data on buildings, utilities, and sensors to enable applications like public lighting optimization, traffic monitoring, and pollution analysis. ECO customizes maps with user-added layers and APIs to integrate with other systems according to users' needs.
The LOD Gateway: Open Source Infrastructure for Linked Data (David Newbury)
Presented at the CIDOC conference in Mexico City, 2023, this talk provides a walkthrough of the digital infrastructure behind the LOD Gateway, a critical part of Getty's digital API infrastructure.
It discusses the differences between graphs and documents, and how both are important for different use cases.
The document discusses the Semantic Web and its potential to make web data more accessible and useful for machines. It describes how the Semantic Web aims to standardize how data is published and linked on the web so that machines can more easily interpret and combine data from different sources. Examples are given of early applications that demonstrate aspects of this vision, like semantic search engines, linked open data projects, and microformats for annotating web pages.
(Or, building better UX / Apps with distributed databases and data synchronisation techniques).
This was my talk at Cocoaheads Berlin 17th February 2016.
Fog computing is a model that processes and stores data near network edge devices rather than solely in cloud data centers. It extends cloud computing to the edge of the network to provide low latency services to end users. Key characteristics include proximity to users, dense geographical distribution, and support for mobility. Fog computing is well-suited for applications requiring real-time processing like industrial automation and IoT networks of sensors. It helps improve quality of service by bringing services closer to users and enabling real-time analytics on distributed data sources.
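The bandwidth and latency argument above boils down to aggregating at the edge and forwarding only a summary. A minimal sketch, with invented sensor samples:

```python
# Sketch of edge pre-aggregation in a fog node: summarize raw sensor
# readings locally and upload one small record instead of every sample.
# Sample values are invented.

def edge_summarize(samples):
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

samples = [21.0, 21.5, 22.0, 35.0]   # one upload instead of four
summary = edge_summarize(samples)
# summary["mean"] == 24.875
```

Local decisions (e.g., raising an alarm on the 35.0 outlier) can then happen without a cloud round trip, which is the low-latency property fog computing is after.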
The Graph is a decentralized protocol that helps with indexing and querying data from blockchains like Ethereum. It functions as a layer that sits between decentralized applications and their data sources, making the data easily accessible. Developers define how data should be indexed and structured from smart contracts using a subgraph manifest file. Graph nodes then continuously scan blockchains for new data and update the stored indices based on events from smart contracts. Decentralized applications can perform complex queries on the indexed data through the GraphQL interface.
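What a Graph-style indexer does per event can be sketched as a fold over the event stream. This is a hypothetical illustration only — event names and fields are invented, and the real protocol defines mappings in a subgraph manifest compiled to WASM rather than in Python:

```python
# Hypothetical sketch of an indexer loop: consume contract events and
# fold them into a queryable index (here, net token balances). Event
# names, addresses, and values are invented.

def handle_event(index, event):
    if event["name"] == "Transfer":
        index.setdefault(event["from"], 0)
        index.setdefault(event["to"], 0)
        index[event["from"]] -= event["value"]
        index[event["to"]] += event["value"]
    return index

events = [
    {"name": "Transfer", "from": "0xa", "to": "0xb", "value": 5},
    {"name": "Transfer", "from": "0xb", "to": "0xc", "value": 2},
]
index = {}
for e in events:
    handle_event(index, e)
# index == {"0xa": -5, "0xb": 3, "0xc": 2}
```

The point of maintaining such an index is exactly the one made above: a dapp can ask "what is 0xb's balance?" in one GraphQL query instead of replaying the chain's event log itself.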
Building with JavaScript - write less by using the right tools (Christian Heilmann)
This document discusses strategies for writing concise JavaScript code that achieves a lot through the use of the right tools and progressive enhancement techniques. It advocates an approach of writing robust, understandable code over writing the smallest amount of code possible. Specific techniques mentioned include progressive enhancement, event delegation, rendering on the server side, and caching on the client side. The document cautions that writing less code does not inherently mean writing better code, and that copy-pasting examples can lead to large, complex codebases if not integrated properly.
Internet of Things (IoT) represents a remarkable transformation of the way in which our world will soon interact. Much like the World Wide Web connected computers to networks, and the next evolution connected people to the Internet and other people, IoT looks poised to interconnect devices, people, environments, virtual objects and machines in ways that only science fiction writers could have imagined.
This document discusses using cloud technologies to provide social services. It outlines Oracle's strategy of using a service-oriented architecture and componentized enterprise functional architecture to deliver social welfare and human services applications in the cloud. The document aims to address common myths about public sector cloud usage, including that everything will go to the public cloud, that you're either cloud or not cloud, that clouds are one size fits all, that cloud will lock you in, and that reducing costs is the sole benefit of cloud. It emphasizes the importance of a hybrid cloud model and standards-based interoperability.
The document discusses how the growing Internet of Things (IoT) and increase in data collection will impact businesses. It notes that while IoT and big data are not revolutionary on their own, together they will require changes in how data is managed and analyzed. Specifically, it argues that to succeed with the rise of IoT, systems must be optimized to reduce data transfers, data must be fragmented into smaller transactions instead of bulk transfers, and data must be made widely accessible through open platforms and tools. The document provides examples of how companies like Netflix, Facebook, and others have optimized data handling and argues this approach will be needed as IoT devices proliferate into the billions.
The document discusses how the growing Internet of Things (IoT) and increase in data collection will impact businesses. It notes that while IoT and big data are not revolutionary on their own, together they will require changes in how data is managed and analyzed. Specifically, it argues that to succeed with the rise of IoT, systems must be optimized to reduce data transactions, data must be fragmented into smaller pieces to ease analysis, and data must be made widely accessible through open platforms and tools. The document cautions that failing to properly manage the growing amounts of connected devices and data could lead to security risks and negatively impact businesses.
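The "fragment bulk transfers" point above is simple to sketch: split a large payload into small pieces so constrained IoT links and brokers handle each independently (the payload and fragment size here are arbitrary):

```python
# Sketch of fragmenting a bulk transfer into small transactions, so each
# piece can be sent, retried, or processed independently on a constrained
# link. Payload and fragment size are arbitrary example values.

def fragment(payload: bytes, max_size: int):
    return [payload[i:i + max_size] for i in range(0, len(payload), max_size)]

payload = b"x" * 1000
parts = fragment(payload, 128)
assert b"".join(parts) == payload   # lossless split
print(len(parts))  # 8 fragments of <= 128 bytes each
```

The trade-off is per-fragment overhead (headers, acknowledgements) against the ability to resume and parallelize, which is what the document argues tips in favor of small transactions at IoT scale.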
Mary Barnsdale article about Fog Computing for Cisco
Fog computing is a new networking paradigm that extends cloud computing to the edge of the network, enabling data processing to occur closer to sensors and devices. It involves using small computing devices with storage and networking capabilities at the edge of the network, rather than sending all data to the cloud. This allows for real-time response and processing of data from the billions of devices that will be connected as part of the Internet of Everything. Cisco researchers are developing the architecture for fog computing to help support the growth of smart grids, smart cities, and other applications requiring low latency processing of data from devices.
A Review - Fog Computing and Its Role in the Internet of Things (IJERA Editor)
Fog computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) low latency and location awareness; b) wide-spread geographical distribution; c) mobility; d) very large number of nodes; e) predominant role of wireless access; f) strong presence of streaming and real-time applications; g) heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
This presentation discusses moving enterprise IT to public cloud. It notes that enterprise IT organizations face complex environments, growing costs, and lack of resources. The cloud looks like an option to help address these issues and generate business advantage. While there are challenges with cloud adoption related to security, control, and trust, the presentation argues that cloud providers may offer greater availability, security, and efficiency than traditional IT environments through their large scale operations. It advocates a hybrid approach for enterprises, moving commodity services to public cloud while using private cloud for high value services and legacy systems, with a goal of saying goodbye to legacy over time.
IoT and the pervasive nature of fast data and Apache Spark (Stephen Dillon)
This white paper and the associated blog http://bit.ly/1X4t9YH will introduce the Fast Data paradigm and provide context within the scope of the Internet of Things and analytics. We will review Big Data and the architectural building blocks of Fast Data, then briefly survey state-of-the-art solutions in the open-source market, which are readily available to everyone regardless of budget constraints. We will then dive into Apache Spark and explore the Lambda architecture, a popular approach to Fast Data and one that Apache Spark supports well. We will conclude with a look at what is next for Fast Data as the IoT market trends towards the need to support "Fog computing", a.k.a. Edge Computing, use cases.
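The Lambda architecture mentioned above pairs a batch layer (complete, slow recomputation over all history) with a speed layer (incremental, low-latency updates), merging the two views at query time. A minimal, framework-free Python sketch of that merge step (the event names and counts are illustrative, not from the paper):

```python
from collections import Counter

def batch_view(events):
    """Batch layer: recompute the full view from all historical events."""
    return Counter(e["type"] for e in events)

def speed_view(recent_events):
    """Speed layer: count only the events not yet absorbed by the batch layer."""
    return Counter(e["type"] for e in recent_events)

def query(batch, speed):
    """Serving layer: merge batch and speed views at query time."""
    merged = Counter(batch)
    merged.update(speed)
    return dict(merged)

historical = [{"type": "click"}, {"type": "view"}, {"type": "click"}]
recent = [{"type": "click"}]
result = query(batch_view(historical), speed_view(recent))
# "click" is counted twice by the batch layer plus once by the speed layer
```

In Spark the batch layer would typically be a periodic batch job and the speed layer a streaming job, but the merge-at-query-time idea is the same.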
IoT and the Pervasive Nature of Fast Data and Apache SparkStephen Dillon
This document discusses the relationship between IoT, fast data, and Apache Spark. It begins with an introduction to fast data and how IoT has served as a catalyst for its adoption. Next, it reviews big data concepts and defines fast data. It then discusses the lambda architecture and state-of-the-art fast data technologies like Apache Storm, Flink, Ignite, and Spark. It focuses on describing Spark's streaming, SQL, machine learning, and graph processing capabilities. Finally, it concludes that fast data is still evolving with trends around edge computing and the need to support low-latency insights from IoT data.
Databases have been used for over 40 years to organize information in a variety of contexts like inventory, class schedules, and personal records. Relational databases remain popular today despite attempts to replace them with object-oriented databases. Cloud computing and big data have further transformed databases by allowing extremely large datasets to be analyzed for trends and patterns. Modern databases can provide targeted recommendations and offers by analyzing individual user information and behaviors.
Francesco Baldassarri - Deliver Data at Scale - Codemotion Amsterdam 2019 - Codemotion
The IoT revolution has already happened. Thanks to hardware improvements, building an intelligent ecosystem is easier than ever before for both startups and large-scale enterprises. The real challenge now is to connect, process, store and analyze data: in the cloud, but also at the edge. We'll take a quick look at frameworks that aggregate dispersed device data into a single globally optimized system, making it possible to improve operational efficiency, predict maintenance, track assets in real time, secure cloud-connected devices and much more.
Cappellacci di zucca open e standard (@ Smart City Exhibition 2015)Piergiorgio Cipriano
To be really useful, open (geo)data need to be harmonised using standard semantics.
Moreover, we also need data with good quality level ... otherwise we only have junk food.
Beyond the ingredients (content specifications), we also need recipes for implementation models and representations. An example is the "Buildings" data recipe of Reggio Emilia in the GeoSmartCity project (www.geosmartcity.eu).
(In Italian)
Presentation given at the workshop "How geoICT can support territorial governance", organized by INU and AMFM on 26 March 2015 (http://www.inu.it/wp-content/uploads/seminario_scandicci.pdf).
Talk given together with Patrizia Saggini (Comune di Anzola dell'Emilia) and Stefano Olivucci (Regione Emilia-Romagna).
Standard geodata models for Energy Performance of Buildings: experiences from...Piergiorgio Cipriano
Presentation at the workshop "Benchmarking Energy Sustainability in Cities" organised by the Joint Research Centre of the European Commission (Torino, 25/11/2014):
http://iet.jrc.ec.europa.eu/energyefficiency/workshop/benchmarking-energy-sustainability-cities
Presentation at the Smart City Exhibition 2014 (Bologna, IT).
Is it possible to estimate energy performance and monitor energy consumption and CO2 emissions using geodata at city level?
It is what we are doing in the GeoSmartCity project (http://www.geosmartcity.eu) with the Municipality of Reggio Emilia and 4 other European Municipalities already involved in the Covenant of Mayors.
In June 2014 an online survey was launched* to collect information about the need for automatic large-scale assessment of building energy behaviour, based on “location” information (geodata) available from public registers (e.g. cadastre, urban planning data etc.).
* http://snipurl.com/energy-mapping-survey
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on collecting data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to production.
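At the heart of the RAG use cases mentioned here is a retrieval step that ranks collected documents by similarity to a query. A minimal, dependency-free Python sketch using bag-of-words cosine similarity (the corpus and query are illustrative; a production pipeline would use learned embeddings and a vector database):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "airbyte moves data from many sources into a warehouse",
    "vector databases store embeddings for similarity search",
    "generative ai models produce text from prompts",
]
top = retrieve("how do embeddings enable similarity search", corpus)
```

The retrieved documents would then be stuffed into the language model's prompt to ground its answer.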
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
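Vector search, as covered in the points above, ranks stored embedding vectors by similarity to a query vector. A brute-force sketch of that ranking in plain Python (the vectors and document IDs are illustrative; MongoDB Atlas performs this over indexed embeddings at scale):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def vector_search(query_vec, documents, k=2):
    """Brute-force k-nearest-neighbour search over (id, embedding) pairs."""
    scored = [(doc_id, cosine_similarity(query_vec, emb)) for doc_id, emb in documents]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

docs = [
    ("doc_a", [1.0, 0.0, 0.0]),
    ("doc_b", [0.9, 0.1, 0.0]),
    ("doc_c", [0.0, 1.0, 0.0]),
]
top = vector_search([1.0, 0.05, 0.0], docs)
# doc_a and doc_b point in nearly the same direction as the query
```

A vector index replaces this linear scan with an approximate nearest-neighbour structure, which is what makes the approach practical at scale.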
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
The introduction of DLAU and the CCB & CCX licensing model has been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep track of what is going on. You will be able to reduce your costs through an optimized Domino configuration and keep them low going forward.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
1. GetLOD
GetLOD is a solution jointly designed and developed by Planetek Italia and Sinergis during the development of the Geoportal of the Emilia-Romagna Region.
2. dati.regione.it
[Architecture diagram: an LOD back-end — a GeoRepository holding GI data & metadata, GI Middleware, an OGC server exposing OGC WFS download and OGC CSW / MD 19115 metadata services, an RDF dump and a triple store/server — feeds an LOD front-end with a Java API, an F2R mapping file, cataloguing and search functions, API connectors and a CKAN API, published to the web.]
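The pipeline sketched above turns OGC WFS features into Linked Open Data via a mapping file. A minimal Python illustration of that mapping step, serializing one feature's attributes as RDF N-Triples by hand (the base URI and property names are illustrative assumptions, not GetLOD's actual mapping):

```python
def feature_to_ntriples(feature_id, attributes, base="http://example.org/building/"):
    """Serialize one WFS feature's attributes as RDF N-Triples lines."""
    subject = f"<{base}{feature_id}>"
    triples = []
    for prop, value in attributes.items():
        predicate = f"<{base}prop/{prop}>"
        triples.append(f'{subject} {predicate} "{value}" .')
    return triples

# Illustrative feature: a building with an energy certificate class
lines = feature_to_ntriples("B001", {"energyClass": "C", "municipality": "Reggio Emilia"})
for line in lines:
    print(line)
```

Loading such triples into a triple store is what makes the data queryable and "interlinkable" with other LOD sources.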
25. … and “energy certificates” taken from the regional database …
26. … and photovoltaic arrays already mounted on roofs, taken from the national service of Gestore Servizi Energetici, …
27. … and gas/electricity consumption of building units in 2012 and 2013 from the service provided by the National Tax Agency
28. GetLOD could help the Local Administration to configure “ad hoc” downloads from different services (like WFS) and provide just the data I need, already “interlinked”
30. This means that the local authority may play as a sort of “broker”, able to connect multiple download services and provide you …
32. Well … I can get both simple “flat” shapefiles and RDF XML-encoded data, together with metadata provided by catalogue services, if available at source
33. This way you may access the interlinked data you need, available “on demand”, and navigate them in one click … right?
34. Exactly! And I save a lot of time, so I am able to produce a better proposal at a lower cost.
35. Wow, this is very interesting … also considering recent research, like the LOD4WFS approach recently presented at AGILE …