HDF4 and HDF-EOS format reading has recently been added to the NetCDF-Java 4.0 library, while HDF5 / NetCDF-4 format reading has been improved. This talk will summarize the status of reading the HDF family of formats through the NetCDF-Java library, with particular attention to the mapping between these formats and the Common Data Model.
This slide will provide an overview of current functionality, techniques, and tips for visualization and query of HDF and netCDF data in ArcGIS, as well as future plans. Hierarchical Data Format (HDF) and netCDF (network Common Data Form) are two widely used data formats for storing and manipulating scientific data. The NetCDF format also supports temporal data by using multidimensional arrays. The basic structure of data in this format and how to work with it will be covered in the context of standardized data structures and conventions. This slide will demonstrate the tools and techniques for ingesting HDF and netCDF data efficiently in ArcGIS, as well as some common workflows to employ the visualization capabilities of ArcGIS for effective animation and analysis of your data.
HDF AND HDF-EOS WORKSHOP II (1998)
Source: http://hdfeos.org/workshops/ws02/presentations/ilg2/ilg2.ppt
Slide 11 discusses the key difference between HDF and HDF-EOS: it operates at the object level, not the file level.
The Earth System Grid Federation (ESGF) is a large international collaboration that operates a global infrastructure for management and access of Earth System data. Some of the most valuable data collections served by ESGF include the output of global climate models used for the IPCC reports on climate change (CMIP3, CMIP5 and the upcoming CMIP6), regional climate model output (CORDEX), and observational data from several American and European agencies (Obs4MIPs). This talk will present a brief introduction to ESGF, describe the data access and analysis methods currently available or planned for the future, and conclude with some ideas on how this infrastructure could be used as a testbed for executing distributed analytics on a global scale.
Generating Executable Mappings from RDF Data Cube Data Structure Definitions (Christophe Debruyne)
Data processing is increasingly the subject of various internal and external regulations, such as GDPR which has recently come into effect. Instead of assuming that such processes avail of data sources (such as files and relational databases), we approach the problem in a more abstract manner and view these processes as taking datasets as input. These datasets are then created by pulling data from various data sources. Taking a W3C Recommendation for prescribing the structure of and for describing datasets, we investigate an extension of that vocabulary for the generation of executable R2RML mappings. This results in a top-down approach where one prescribes the dataset to be used by a data process and where to find the data, and where that prescription is subsequently used to retrieve the data for the creation of the dataset “just in time”. We argue that this approach to the generation of an R2RML mapping from a dataset description is the first step towards policy-aware mappings, where the generation takes into account regulations to generate mappings that are compliant. In this paper, we describe how one can obtain an R2RML mapping from a data structure definition in a declarative manner using SPARQL CONSTRUCT queries, and demonstrate it using a running example. Some of the more technical aspects are also described.
Reference: Christophe Debruyne, Dave Lewis, Declan O'Sullivan: Generating Executable Mappings from RDF Data Cube Data Structure Definitions. OTM Conferences (2) 2018: 333-350
Using the Data Cube vocabulary for Publishing Environmental Linked Data on la... (Laurent Lefort)
Canberra Semantic Web Meetup.
Initiatives have been launched to develop semantic vocabularies representing statistical classifications and discovery metadata. Tools are also being created by statistical organizations to support the publication of dimensional data conforming to the Data Cube specification, now in Last Call at W3C.
The meeting will be an opportunity to hear about two Semantic Web and Linked Data initiatives for statistical data that are driven by the Australian Government. The Bureau of Meteorology and CSIRO have recently released a Linked Data version of the ACORN-SAT historical climate data at http://lab.environment.data.gov.au, and the ABS has released Census data modelled in the Data Cube vocabulary, which is part of a challenge the ABS is organising in the context of the SemStats Workshop (http://www.datalift.org/en/event/semstats2013/challenge) at the International Semantic Web Conference (ISWC) in Sydney (http://iswc2013.semanticweb.org).
Come along to hear about these two projects, the challenges encountered and the solutions developed.
Is there a way that we can build our Azure Data Factory all with parameters b... (Erwin de Kreuk)
Is there a way that we can build our Data Factory all with parameters, all based on metadata? Yes there is, and I will show you how. During this session I will show how you can load incremental or full datasets from your SQL database into your Azure Data Lake. The next step is to track the history of these extracted tables; we will do this with Azure Databricks using Delta Lake. The last step is to make this data available in Azure SQL Database or Azure Synapse Analytics. Oh, and we want to have some logging from our processes as well. A lot to talk about and demo during this session.
Rainer Schmidt, AIT Austrian Institute of Technology, presented Scalable Preservation Workflows from SCAPE at the 5-days ‘Digital Preservation Advanced Practitioner Training’ event (http://bit.ly/1fYCvMO), hosted by DPC, in Glasgow on 15-19 July 2013.
The presentation gives an introduction to the SCAPE Platform, it presents scenarios from SCAPE Testbeds and it finally describes how to create scalable workflows and execute them on the SCAPE Platform.
In these slides we analyze why aggregate data models change the way data is stored and manipulated. We introduce MapReduce and its open-source implementation, Hadoop, and consider how MapReduce jobs are written and executed by Hadoop.
Finally, we introduce Spark using a Docker image and show how to use anonymous functions in Spark.
The topics of the next slides will be:
- Spark Shell (Scala, Python)
- Shark Shell
- Data Frames
- Spark Streaming
- Code Examples: Data Processing and Machine Learning
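To make the MapReduce model above concrete, here is a toy word count in plain Python (the canonical MapReduce example). This only sketches the map / shuffle / reduce phases that Hadoop runs in a distributed fashion; it is not Hadoop's own API:

```python
from collections import defaultdict

def map_phase(documents):
    """Mapper: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle/sort: group intermediate pairs by key, as the framework does."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reduce_phase(grouped):
    """Reducer: sum the counts emitted for each word."""
    for word, counts in grouped:
        yield (word, sum(counts))

docs = ["big data is big", "data is data"]
counts = dict(reduce_phase(shuffle(map_phase(docs))))
# counts: {'big': 2, 'data': 3, 'is': 2}
```

In Hadoop, the mapper and reducer run on many machines and the shuffle happens over the network; the program structure, however, is exactly this.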
DSD-INT 2020 BlueEarth Engine - hydroMT - model builder framework (Deltares)
Presentation by Dirk Eilander, Deltares, at the BlueEarth User Day: Explain the past, explore the future, during Delft Software Days - Edition 2020. Monday, 16 November 2020.
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME (Safe Software)
Following the popularity of “Cloud Revolution: Exploring the New Wave of Serverless Spatial Data,” we’re thrilled to announce this much-anticipated encore webinar.
In this sequel, we’ll dive deeper into the Cloud-Native realm by uncovering practical applications and FME support for these new formats, including COGs, COPC, FlatGeoBuf, GeoParquet, STAC, and ZARR.
Building on the foundation laid by industry leaders Michelle Roby of Radiant Earth and Chris Holmes of Planet in the first webinar, this second part offers an in-depth look at the real-world application and behind-the-scenes dynamics of these cutting-edge formats. We will spotlight specific use-cases and workflows, showcasing their efficiency and relevance in practical scenarios.
Discover the vast possibilities each format holds, highlighted through detailed discussions and demonstrations. Our expert speakers will dissect the key aspects and provide critical takeaways for effective use, ensuring attendees leave with a thorough understanding of how to apply these formats in their own projects.
Elevate your understanding of how FME supports these cutting-edge technologies, enhancing your ability to manage, share, and analyze spatial data. Whether you’re building on knowledge from our initial session or are new to the serverless spatial data landscape, this webinar is your gateway to mastering cloud-native formats in your workflows.
Cloud Revolution: Exploring the New Wave of Serverless Spatial Data (Safe Software)
Once in a while, there really is something new under the sun. The rise of cloud-hosted data has fueled innovation in spatial data storage, enabling a brand new serverless architectural approach to spatial data sharing. Join us in our upcoming webinar to learn all about these new ways to organize your data, and leverage data shared by others. Explore the potential of Cloud Native Geospatial Formats in your workflows with FME, as we introduce five new formats: COGs, COPC, FlatGeoBuf, GeoParquet, STAC and ZARR.
Learn from industry experts Michelle Roby from Radiant Earth and Chris Holmes from Planet about these cloud-native geospatial data formats and how they can make data easier to manage, share, and analyze. To get us started, they’ll explain the goals of the Cloud-Native Geospatial Foundation and provide overviews of cloud-native technologies including the Cloud-Optimized GeoTIFF (COG), SpatioTemporal Asset Catalogs (STAC), and GeoParquet.
Following this, our seasoned FME team will guide you through practical demonstrations, showcasing how to leverage each format to its fullest potential. Learn strategic approaches for seamless integration and transition, along with valuable tips to enhance performance using these formats in FME.
Discover how these formats are reshaping geospatial data handling and how you can seamlessly integrate them into your FME workflows and harness the explosion of cloud-hosted data.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... (pchutichetpong)
M Capital Group ("MCG") expects demand to keep growing and supply to keep evolving, facilitated by institutional investment rotating out of offices and into work from home ("WFH") and by the ever-expanding need for data storage as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, illustrated by the recent second bankruptcy filing of Sungard, which blames "COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services", the industry has seen key adjustments, and MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that more favorable market conditions over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment, will drive market momentum forward. The continuous injection of capital by alternative investment firms, as well as growing infrastructure investment from cloud service providers and social media companies, whose revenues are expected to grow more than 3.6x by value by 2026, will likely help propel data center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in cyberattack sophistication aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
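As a minimal sketch of the automated-validation idea above (the field names and rules here are hypothetical, purely to illustrate a rule-based check at the source):

```python
def validate_record(record, rules):
    """Return the names of all rules the record violates."""
    return [name for name, check in rules.items() if not check(record)]

# Hypothetical rules for a payments feed; real rules come from the data owner.
rules = {
    "id_present": lambda r: bool(r.get("id")),
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "currency_known": lambda r: r.get("currency") in {"EUR", "USD", "GBP"},
}

good = {"id": "a1", "amount": 10.0, "currency": "EUR"}
bad = {"id": "", "amount": -5.0, "currency": "XXX"}
violations = validate_record(bad, rules)
# violations: all three rule names, since every check fails for `bad`
```

Running such checks in the ingestion pipeline flags bad records before they propagate downstream.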
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
2. Agenda
• Achieved / Ongoing Tools
  • Assisted RDF Data Cube Schema Mapping
  • Generic RDF Data Cube Builder
  • RDF Data Cube Explorer
  • RDF Data Cube geo-data supported Dashboard
• Planned Tools
  • Real-Time LOSD Publishing Pipeline
  • Marine Institute and Lithuanian Pilots’ Dashboards updates
  • Collaboration Space and a Feedback Mechanism
3. LOSD Publishing pipeline [NUIG Approach]
Assisted RDF Data Cube Schema Mapping → Generic RDF Data Cube Builder → RDF Data Cube Explorer → LOSD Dashboards [ET - MI]
4. LOSD Publishing pipeline [flow chart]
Schema Mapping Stage
• Pilot / user dataset to RDF Data Cube Vocabulary mapping [spreadsheet]
• RDF Data Cube schema as an RDF Refine configuration file [JSON], produced with [OpenRefine] and the [RDF Refine extension]
• Output: pilot / user mapped RDF Data Cube schema [rdf/xml or ttl]
Data Cube Building Stage
• Input: pilot / user dataset [csv]
• RDF Data Cube Builder application [web app, desktop, command line, or web service]
• Output: pilot / user RDF Data Cube [rdf/xml or ttl]
Data Cube Exploration Stage
• JSON QB REST API implementation [Java web service]
• RDF data store [virtuoso/f]
• Data Cube Explorer [django / wordpress]
• ET dashboard and MI dashboard
5. Assisted RDF Data Cube Schema Mapping
• This tool/pipeline was created to ease the mapping of government statistical datasets into RDF according to the RDF Data Cube vocabulary.
• The tool integrates [spreadsheets, OpenRefine, RDF Refine, and the RDF Data Cube vocabulary] to produce a generic LOSD mapping tool that fits any pilot or use case.
• The following demo shows the Schema Mapping pipeline steps/stages.
27. LOSD Publishing pipeline [flow chart]
28. Generic RDF Data Cube Builder
• The RDF Data Cube Builder is used to transform government statistical data [csv] into RDF Data Cubes [rdf] based on the predefined mapping produced in the previous stage.
• The Cube Builder application is implemented with different interfaces: web app, desktop app, command line, and web service.
• The following demo shows the Cube Builder usages.
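The transformation the Cube Builder performs can be sketched in a few lines of Python. This is a toy illustration, not the actual builder: the column-to-property mapping and the `eg:` URIs are hypothetical stand-ins for what the Schema Mapping stage would produce.

```python
import csv
import io

# Hypothetical column-to-property mapping; the real one comes from the
# Schema Mapping stage (OpenRefine / RDF Refine output).
MAPPING = {
    "station": "eg:refStation",    # dimension
    "date": "eg:refPeriod",        # dimension
    "wind_speed": "eg:windSpeed",  # measure
}

def build_cube(csv_text, dataset="eg:dataset/wind"):
    """Emit one qb:Observation (as Turtle-like strings) per CSV row."""
    triples = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        obs = f"eg:obs/{i}"
        triples.append(f"{obs} a qb:Observation ; qb:dataSet {dataset} .")
        for column, prop in MAPPING.items():
            triples.append(f'{obs} {prop} "{row[column]}" .')
    return triples

csv_text = "station,date,wind_speed\nM2,2016-09-12,14.3\n"
triples = build_cube(csv_text)
```

Each CSV row becomes one `qb:Observation` carrying its dimension and measure values, which is the core of the CSV-to-cube step regardless of which interface (web app, desktop, CLI, or web service) drives it.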
29-39. Generic RDF Data Cube Builder • Web Application Interface [using the Web Service] (screenshot slides)
45. LOSD Publishing pipeline [flow chart]
46. RDF Data Cube Explorer
• The RDF Data Cube Explorer is a web server that integrates [JSON QB API, RDF data stores, pivot tables, maps, etc.] to produce exploration and visualization dashboards for LOSD previously created and loaded into RDF data stores.
• These dashboards are designed and implemented to meet the pilot use-case requirements.
• The following demo shows the Cube Explorer's abilities.
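The pivot-table behaviour the Explorer builds on amounts to grouping cube observations by a chosen dimension and aggregating a chosen measure. A minimal sketch (the observation field names are hypothetical; a real deployment would fetch observations via the JSON QB API):

```python
from collections import defaultdict

def pivot(observations, dimension, measure, agg=lambda xs: sum(xs) / len(xs)):
    """Group cube observations by one dimension and aggregate one measure."""
    groups = defaultdict(list)
    for obs in observations:
        groups[obs[dimension]].append(obs[measure])
    return {key: agg(values) for key, values in groups.items()}

# Hypothetical observations, as a dashboard might receive them.
observations = [
    {"station": "M2", "wind_speed": 10.0},
    {"station": "M2", "wind_speed": 14.0},
    {"station": "M3", "wind_speed": 8.0},
]
avg_by_station = pivot(observations, "station", "wind_speed")
# avg_by_station: {'M2': 12.0, 'M3': 8.0}
```

Swapping the `agg` function (mean, min, max, count) and the chosen dimension yields the different aggregation and filtering views the dashboards expose.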
48. • Considering the sad incident in March 2017, the Cube Explorer could assist the search-and-rescue efforts by pointing out possible search points for the remains using wind and wave direction readings.
RDF Data Cube Explorer [Maritime Search and Rescue]
49. • Average wind direction in (+) or (-) 24 hours of the incident date (assuming the incident happened on 12/09/2016), grouped by longitude and latitude points, using data sourced from the [Irish Weather Buoy Network].
• As shown in the next slide, using our Cube Explorer rescue staff can choose the dataset containing the relevant data, choose the visualization and aggregation types, and then filter the data to keep only the relevant dimensions (e.g. longitude) and measures (e.g. day) and the relevant values (e.g. 11, 12, and 13/09/2016).
• The visualizations then show the conditions during the incident.
RDF Data Cube Explorer [Maritime Search and Rescue]
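One subtlety in averaging wind direction: directions are angles, so a naive arithmetic mean is wrong across the 0°/360° boundary (readings of 350° and 10° would naively average to 180°, due south, instead of roughly 0°, due north). The slides don't specify how the Explorer aggregates directions, but a correct circular mean can be sketched as:

```python
import math

def circular_mean(degrees):
    """Mean of compass directions, correct across the 0/360 wrap-around."""
    s = sum(math.sin(math.radians(d)) for d in degrees)
    c = sum(math.cos(math.radians(d)) for d in degrees)
    return math.degrees(math.atan2(s, c)) % 360

# 350 and 10 degrees straddle due north: the naive mean is 180,
# while the circular mean is (approximately) 0.
mean_dir = circular_mean([350.0, 10.0])
```

Any dashboard averaging directional readings over a time window should aggregate this way rather than with a plain mean.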
50. Maritime Search and Rescue, avg. wind direction
RDF Data Cube Explorer [Maritime Search and Rescue]
51. Maritime Search and Rescue, mean wave direction
RDF Data Cube Explorer [Maritime Search and Rescue]
52. • Using the visualizations, the rescue staff can see that, during the relevant time period and at the geo-coordinates of the incident, the avg. wind direction was between 181.8 and 355 degrees and the mean wave direction was between 216.75 and 273 degrees, so they can focus their efforts accordingly.
RDF Data Cube Explorer [Maritime Search and Rescue]
54. • The installation process for new generators (wind- or wave-based) starts with locating the best-fitting spot, one that will guarantee the best operating conditions for maximum productivity and safety.
RDF Data Cube Explorer [Marine Renewable Energy]
55. • Using the avg. wind speed across the year [for fixed-location generators] or in certain periods [for movable generators], it can be decided where and when to install the generators.
RDF Data Cube Explorer [Marine Renewable Energy]
56. • Marine Renewable Energy, avg. wind speed {year based}
RDF Data Cube Explorer [Marine Renewable Energy]
57. Marine Renewable Energy, avg. wind speed {monthly based}
RDF Data Cube Explorer [Marine Renewable Energy]
58. • Using the visualizations we can see that:
• (1) The buoy at geo point (5.42, 58.48) doesn't report wind-speed data; the wind-speed sensor could be in need of replacement or maintenance.
• (2) Geo points (9.99, 54.99) and (10.54, 51.21) have the highest avg. wind speed across the year.
• (3) Geo point (6.70, 51.69) has the highest avg. wind speed in November, and geo point (9.99, 54.99) has the highest avg. wind speed in August.
• So, subject to other decision inputs, for fixed wind generators marine staff could place them in the area around geo points (9.99, 54.99) and (10.54, 51.21).
• For movable wind generators, the best spot in August is (9.99, 54.99) and in November is (6.70, 51.69).
• The same process fits the wave generators.
RDF Data Cube Explorer [Marine Renewable Energy]
59. RDF Data Cube Explorer [Maritime Tourism and Leisure]
60. • For setting up safe diving and swimming locations, and for avoiding dangerous ones, swimmers and surfers can use wave speed and height, wind speed, gust, sea temperature, and air temperature readings.
RDF Data Cube Explorer [Maritime Tourism and Leisure]
61. Maritime Tourism and Leisure, avg. sea temperature in August
RDF Data Cube Explorer [Maritime Tourism and Leisure]
62. Maritime Tourism and Leisure, avg. air temperature in August
RDF Data Cube Explorer [Maritime Tourism and Leisure]
63. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
64. • The Cube Explorer works by pointing it at an RDF Data Cube server; it then fetches all available cube datasets and offers the user a variety of RDF Data Cube visualizations and filtering options.
• e.g. choosing the Foreign direct investment per capita RDF Data Cube for exploration, as shown below.
RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
65. • The Cube Explorer offers a variety of visualization and exploration options, e.g. bar charts, heat maps, line charts, table bar charts, etc.
RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
66. • e.g. highlighting the Foreign direct investment per capita for the country of Lithuania.
• Visualizing the Data Cube at country-wide, county, and municipality levels with temporal filtering would surely assist decision makers to achieve more with this information in hand.
RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
67. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
68. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
69. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
70. • e.g. highlighting the Average Monthly Earnings for the country of Lithuania.
• Visualizing the Data Cube at country-wide, county, and municipality levels with temporal filtering would surely assist decision makers to achieve more with this information in hand.
RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
71. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
72. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
73. • e.g. highlighting the Resident Population Demography for the country of Lithuania.
• Visualizing the Data Cube at country-wide, county, and municipality levels with temporal filtering would surely assist decision makers to achieve more with this information in hand.
RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
74. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
75. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
76. RDF Data Cube Explorer [Generic Use Cases of the Cube Explorer Tool]
• Git repo: https://github.com/OpenGovIntelligence/data-cube-explorer
77. RDF Data Cube geo-data supported Dashboard
• This dashboard is mainly designed and implemented to meet the Lithuanian pilot use-case requirements.
• The following demo shows what the Data Cube geo-data supported Dashboard looks like.
78-80. RDF Data Cube geo-data supported Dashboard • Lithuanian Data Cubes geo-based visualization screen samples (screenshot slides)
81. Planned
• Real-Time LOSD Publishing Pipeline
• Marine Institute / Lithuanian Pilot Dashboards updates
• Collaboration Space / Feedback Mechanism
82. Real-Time LOSD Publishing Pipeline
• To target the real-time marine data use case [search and rescue], we are planning to design and implement a real-time version of the LOSD publishing pipeline.
• A draft vision of this pipeline follows.
83. Real-Time LOSD Publishing Pipeline [draft flow chart]
• MI servers APIs → API calls (with cursor tracking)
• Cube annotation → temporary CSV DB → Cube Builder
• Push to RDF store → OGI RDF store (data of the current week only) {weekly}
• Drop outdated data