Strategies for effective web-based data dissemination include identifying different types of users (tourists, harvesters, miners and builders) and tailoring content and features to their needs. An optimal strategy weighs technical aspects such as platform, hosting and design against administrative aspects such as content management, user support and resource allocation, balancing cost and usability. The goal is to make data access a two-way communication channel and to promote statistical knowledge.
Workshop Rio de Janeiro Strategies for Web Based Data Dissemination
1. Strategies for web based data dissemination
A strategy is a plan of action designed to achieve a vision - from Greek "στρατηγία" (strategia).
Zoltan Nagy – Statistics Division, Department of Economic and Social Affairs, United Nations
United Nations Regional Workshop on Data Dissemination and Communication
Rio de Janeiro, Brazil, 5 - 7 June 2013
2. Existing Strategies
Fundamental Principles of Official Statistics
“statistics that meet the test of practical utility are to be compiled and made available on an impartial basis by official statistical agencies to honor citizens' entitlement to public information”
Handbook of Statistical Organizations
National Strategies for the Development of Statistics (NSDS)
The Generic Statistical Business Process Model (GSBPM)
3. Data dissemination = communication
[Flattened diagram: independent official statistics sit at the centre of a two-way communication loop between policy making and professionals & the public. One side feeds in statistical needs, education, and analysis & research for a knowledge-based society; the other returns policy accountability, analysis & assessment, policy options, policy decisions and policy validations. Together they drive economic and social progress.]
4. The importance of web-based data dissemination
Everyone who has access to internet is becoming a potential user of statistics.
From 2008 to 2013 the number of Internet users grew by 67%
Forget the last war.
5. Identifying users
User groups
Decision makers (government at central and local level, businesses)
Academia (institutions that use, research and analyze data)
Educational (primary, secondary, tertiary)
Public at large
Tourists, Harvesters and (Data) Miners
6. Tourists
Novice or infrequent users, and typically make up the majority of individual users.
Looking for basic data either out of curiosity, or to inform personal decisions.
Want to be able to find and view data quickly and easily; they prefer low levels of complexity and need only limited functionality.
7. Harvesters
Intermediate and fairly frequent users, who are looking for data to inform basic research or economic decisions.
They will accept increased complexity if it results in additional functionality and flexibility in the way they can view and download data.
8. (Data) Miners
Expert users, typically small in number, but using large volumes of data on a regular basis, often for detailed research or analysis.
They want simplicity, easy download functionality and flexibility, and the ability to take data offline.
9. A new type - Builders
Experts who want to reuse statistical data without copying or downloading it.
They request the ability to access data servers 24/7 and to feed data into maps, visualizations and other applications.
Web services - "interoperable machine-to-machine interaction over a network"
Mashups - hybrid web applications
Visualizations
Mappings
Data aggregators
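For builders, machine-to-machine access usually means a simple HTTP/JSON interface. A minimal sketch of the consumer side (the endpoint, field names and response shape below are hypothetical, not any real statistical office's API):

```python
import json
from urllib.parse import urlencode

def build_query_url(base, dataset, filters):
    """Build a REST query URL for a hypothetical statistics API."""
    return f"{base}/data/{dataset}?" + urlencode(filters)

def extract_series(payload):
    """Pull (period, value) pairs out of a hypothetical JSON response."""
    return [(obs["period"], obs["value"]) for obs in payload["observations"]]

url = build_query_url("https://stats.example.org/api", "population",
                      {"area": "BR", "startPeriod": "2008", "endPeriod": "2013"})

# A canned response stands in for the network call.
sample = json.loads("""{"observations": [
    {"period": "2008", "value": 191.5},
    {"period": "2013", "value": 200.4}]}""")

series = extract_series(sample)
```

Real deployments would more likely expose a standard such as SDMX, but the pattern (build a query, parse a structured response) is the same.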
10. Defining the content
Data
Topic – domain specific or across-domain
Coverage – geographical and time
Aggregation level - micro and macro data
Nature of the data – tables, tabulations, time-series, datapoints
Documentation
Metadata (descriptive and structural)
Methodologies and standards
Classifications
Best practices, business processes, etc.
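The split between data, descriptive metadata and structural metadata can be pictured as a single catalogue entry. A sketch (field names are illustrative only, not drawn from any metadata standard):

```python
# One hypothetical catalogue entry: the data's scope plus its
# descriptive (what it is) and structural (how it is shaped) metadata.
dataset = {
    "data": {
        "topic": "population",
        "coverage": {"geo": "BR", "time": "2008-2013"},
        "aggregation": "macro",
        "form": "time-series",
    },
    "metadata": {
        "descriptive": {"title": "Population of Brazil",
                        "source": "National statistical office"},
        "structural": {"dimensions": ["area", "year"],
                       "unit": "thousands"},
    },
}
```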
11. Subscription models
Registration
No registration required
Registration required (provides better tracking, communication, etc.)
Subscription
Free (preferred by many countries)
For fee (cost recovery or profit; one-time, periodical or service based)
Multi-tier (free basic and for fee premium services)
12. User management
User access (registered vs unregistered users)
User support, helpdesk
User surveys (online polling)
User activity tracking
Web server statistics
Analytics services (Google Analytics)
Custom built tracking services
Social networking (Facebook, Twitter..)
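User activity tracking can start from the web server's own access logs before any analytics service is involved. A minimal sketch (the log lines below are invented, in the style of the Common Log Format):

```python
import re
from collections import Counter

# Match the request path and status code in a Common-Log-Format line.
LOG_RE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_dataset_hits(log_lines):
    """Tally successful (HTTP 200) requests per URL path."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and m.group("status") == "200":
            hits[m.group("path")] += 1
    return hits

logs = [
    '10.0.0.1 - - [05/Jun/2013:10:00:00] "GET /data/gdp HTTP/1.1" 200 512',
    '10.0.0.2 - - [05/Jun/2013:10:00:01] "GET /data/gdp HTTP/1.1" 200 512',
    '10.0.0.3 - - [05/Jun/2013:10:00:02] "GET /data/cpi HTTP/1.1" 404 0',
]
hits = count_dataset_hits(logs)
```

Such tallies feed directly into the management reporting and user-survey activities listed above.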
13. Site administration
Data Management
Data correction facility
Data upload facility
Data availability
Metadata Management
Structural metadata
Descriptive metadata
Data upload calendar
Management Reporting
14. Resource allocation
+ Data dissemination group
(Centralized or Decentralized)
+ Systems/Application development
+ Hardware and software requirements
+ Long-term maintenance
+ Operation
+ Helpdesk
--------------------------------------------------
= TOTAL COST OF OWNERSHIP (TCO)
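The point of the TCO line is simply that every component adds up. A toy illustration (all figures invented):

```python
def total_cost_of_ownership(costs):
    """Sum all cost components into a single TCO figure."""
    return sum(costs.values())

# Hypothetical annual figures for the components listed on the slide.
costs = {
    "dissemination_group": 120_000,
    "application_development": 80_000,
    "hardware_software": 40_000,
    "maintenance": 30_000,
    "operation": 25_000,
    "helpdesk": 15_000,
}
tco = total_cost_of_ownership(costs)  # 310_000
```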
17. Software platform and architecture
Off-the-shelf products
Custom development (in-house, outsourcing)
Open source platforms
Proprietary platforms
Self hosting
Outsourced hosting
18. Design considerations
Simplicity and ease of use
Ease of navigation
Bookmarking
Searchability
Drill down
Dimensional search
Full text search
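Full-text search over a dataset catalogue can be prototyped in a few lines. A naive sketch (the catalogue entries are invented; a production site would use a proper search-engine index):

```python
def full_text_search(records, query):
    """Return titles of records whose title or description contains
    every query term (case-insensitive)."""
    terms = query.lower().split()

    def matches(rec):
        haystack = (rec["title"] + " " + rec["description"]).lower()
        return all(t in haystack for t in terms)

    return [rec["title"] for rec in records if matches(rec)]

catalogue = [
    {"title": "Population by region",
     "description": "Annual population estimates"},
    {"title": "GDP quarterly",
     "description": "Gross domestic product by quarter"},
]
results = full_text_search(catalogue, "population annual")
```

Dimensional search and drill-down would layer filters on structural metadata (e.g. area, year) on top of this kind of keyword match.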
19. Conclusions
One size does not fit all
Web-based data dissemination should work as two-way communication
Focus has to be on users who frequently visit our sites
The maintenance of web-based data dissemination products is a long-term commitment
We have to be aware of TCO