This presentation was given at the 2011 DoD symposium on SOA & Semantic Technology, and demonstrates the use of open standard metadata tags to implement the Government Performance and Results Act Modernization Act (GPRAMA), using topical examples such as cloud computing and the meaningful use of electronic health record exchanges.
This is a presentation given for a HealthData.gov Developer Challenge; see
http://www.health2con.com/devchallenge/health-data-platform-metadata-challenge/
and/or
http://www.health2con.com/devchallenge/health-data-platform-simple-sign-on-challenge/
(both links contain the same embedded video and deck)
Interacting with Linked Data to Facilitate its Sustainability (Roberto García)
A presentation on the importance of user participation for the sustainability of Linked Data publishing. It also shows an approach to automatic user interface generation for Linked Data that facilitates user participation.
Data Harvesting, Curation and Fusion Model to Support Public Service Recommen... (Citadelh2020)
CITADEL is an H2020 European project that is creating an ecosystem of best practices, tools, and recommendations to transform Public Administrations (PAs) through an inclusive approach, providing stakeholders with more efficient, inclusive, and citizen-centric services. The CITADEL ecosystem will allow PAs to combine what they already know with new data in order to shape and co-create more efficient and inclusive public services around what really matters to citizens. CITADEL innovates by using ICTs to find out why citizens stop using public services, and by using this information to re-adjust provision to bring them back. It also identifies why citizens are not using a given public service (affordability, accessibility, lack of knowledge, embarrassment, lack of interest, etc.) and, where appropriate, uses this information to make public services more attractive, so that citizens start using them.
The DataTank, a tool designed and developed by IMEC's IDLab, will be extended to provide the Data Harvesting/Curation/Fusion (DHCF) component of the platform. The DataTank is an open-source open data platform that not only allows publishing datasets according to standardised guidelines and taxonomies (DCAT-AP), but also transforms the data into a variety of reusable formats. The extension will add an intelligent way of harvesting and fusing different data sources using semantics and the Linked Data mapping technologies developed by IDLab. In the context of CITADEL, the new DHCF component will enable the visualization and analysis of trends in the usage of public services in European cities, playing a key role in generating personalized recommendations both to citizens and to PAs, suggesting improvements to the current suite of public services.
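To make the DCAT-AP publishing idea above concrete, here is a minimal JSON-LD sketch of a dataset description of the kind such a platform might expose. The dataset URI, titles, and download URL are hypothetical placeholders, not real CITADEL or DataTank data.

```python
import json

# Minimal JSON-LD sketch of a DCAT dataset entry; all identifiers below
# are invented example.org placeholders for illustration only.
dataset = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@id": "http://example.org/dataset/public-service-usage",
    "@type": "dcat:Dataset",
    "dct:title": "Public service usage statistics (example)",
    "dct:publisher": {"@id": "http://example.org/org/example-city"},
    "dcat:distribution": [{
        "@type": "dcat:Distribution",
        "dcat:mediaType": "text/csv",
        "dcat:downloadURL": "http://example.org/data/usage.csv",
    }],
}

print(json.dumps(dataset, indent=2))
```

A real DCAT-AP record carries more mandatory properties (themes, contact points, licences); this sketch only shows the dataset/distribution shape.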
https://twitter.com/Citadelh2020
https://twitter.com/gayane_sedraky
https://twitter.com/imec_int
https://twitter.com/IDLabResearch
Enabling Self-service Data Provisioning Through Semantic Enrichment of Data |... (Ahmad Assaf)
Publicly available datasets contain knowledge from various domains, such as encyclopedic, government, geographic and entertainment data. The increasing diversity of these datasets makes it difficult to annotate them with a fixed number of pre-defined tags. Moreover, manually entered tags are subjective and may not capture their essence and breadth. We propose a mechanism that automatically attaches meta-information to data objects by leveraging knowledge bases like DBpedia and Freebase, which facilitates data search and acquisition for business users.
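The automatic tagging idea can be sketched as a lookup of known surface forms against a knowledge base. In the abstract this would be DBpedia or Freebase; here a tiny invented dictionary stands in for the knowledge base, purely for illustration.

```python
import re

# Toy stand-in for a knowledge-base lookup: maps surface forms found in
# a dataset description to topical tags. The table is invented for this
# sketch; a real system would query DBpedia/Freebase entities instead.
KB_TAGS = {
    "medicare": ["Government", "Health"],
    "budget": ["Government", "Finance"],
    "film": ["Entertainment"],
}

def auto_tag(description: str) -> set:
    """Attach tags by matching known surface forms in a description."""
    tokens = re.findall(r"[a-z]+", description.lower())
    tags = set()
    for tok in tokens:
        tags.update(KB_TAGS.get(tok, []))
    return tags

print(auto_tag("Medicare budget figures by state"))
```

Because the tags come from a shared knowledge base rather than free-text entry, the same dataset described by different people converges on the same vocabulary.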
Linked Open Data (LOD) has emerged as one of the largest collections of interlinked datasets on the web. In order to benefit from this mine of data, one needs access to descriptive information about each dataset (its metadata). This metadata enables dataset discovery, understanding, integration and maintenance. Data portals, which are the access points to these datasets, offer metadata represented in different, heterogeneous models. We first propose a harmonized dataset model, based on a systematic literature survey, that provides complete metadata coverage to enable data discovery, exploration and reuse by business users. Second, rich metadata information is currently limited to a few data portals, where it is usually provided manually and is thus often incomplete and inconsistent in quality. We propose a scalable automatic approach for extracting, validating, correcting and generating descriptive linked dataset profiles. This approach applies several techniques to check the validity of the metadata provided and to generate descriptive and statistical information for a particular dataset or for an entire data portal.
Traditional data quality is a thoroughly researched field with several benchmarks and frameworks to grasp its dimensions. Ensuring data quality in Linked Open Data is much more complex: it consists of structured information supported by models, ontologies and vocabularies, and contains queryable endpoints and links. We propose an objective assessment framework for Linked Data quality based on quality metrics that can be automatically measured. We further present an extensible quality measurement tool implementing this framework that helps data owners rate the quality of their datasets and get hints on possible improvements, and helps data consumers choose their data sources from a ranked set.
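An "automatically measurable" quality metric of the kind described above can be illustrated with a completeness check over a small in-memory set of triples. The metric definition below is a simplified illustration, not the framework from the abstract.

```python
# Sketch: measure completeness of a required predicate across datasets.
# The triples and prefixes are invented examples.
triples = [
    ("ex:ds1", "dct:title", "Budget data"),
    ("ex:ds1", "dct:license", "cc:by"),
    ("ex:ds2", "dct:title", "Health data"),
    # ex:ds2 carries no license statement -> incomplete
]

def completeness(triples, subjects, required_predicate):
    """Fraction of subjects that carry the required predicate."""
    having = {s for s, p, _ in triples if p == required_predicate}
    return len(having & subjects) / len(subjects)

subjects = {s for s, _, _ in triples}
score = completeness(triples, subjects, "dct:license")
print(f"license completeness: {score:.2f}")
```

Metrics like this can be computed without human judgment, which is what makes a ranked, objective comparison of data sources feasible.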
Web Services Discovery and Recommendation Based on Information Extraction and... (ijwscjournal)
This paper shows that the problem of web service representation is crucial and analyzes the various factors that influence it. It presents the traditional representation of web services, based on the textual descriptions contained in WSDL files. Unfortunately, textual web service descriptions are noisy and need significant cleaning to keep only useful information. To deal with this problem, we introduce a rule-based text tagging method that filters web service descriptions to keep only significant information; a new representation based on this filtered data is then introduced. Since many web services have empty descriptions, we also consider representations based on the WSDL file structure (types, attributes, etc.). Alternatively, we introduce a new representation, called symbolic reputation, which is computed from the relationships between web services. The impact of these representations on web service discovery and recommendation is studied and discussed in experiments using real-world web services.
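The rule-based filtering step can be sketched as a small pipeline that strips markup and URLs from a WSDL-derived description and drops stop words. The rules and the stop list below are illustrative stand-ins, not the paper's actual tagging rules.

```python
import re

# Minimal sketch of rule-based cleaning of a noisy service description:
# strip XML/HTML tags, strip URLs, keep only content words. The stop
# list here is a tiny invented example.
STOP = {"this", "service", "the", "a", "an", "of", "for", "and", "to", "is"}

def filter_description(text: str) -> list:
    text = re.sub(r"<[^>]+>", " ", text)       # rule 1: strip markup
    text = re.sub(r"https?://\S+", " ", text)  # rule 2: strip URLs
    words = re.findall(r"[a-zA-Z]{3,}", text.lower())
    return [w for w in words if w not in STOP] # rule 3: drop stop words

raw = "<doc>This service returns weather forecasts. See http://example.org</doc>"
print(filter_description(raw))  # ['returns', 'weather', 'forecasts', 'see']
```

The filtered token list is then what feeds the vector representation used for discovery, instead of the raw noisy description.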
A new model for interoperable administrative data (Rob Worthington)
This presentation shares Kwantu's work on interoperable administrative systems. It was given at the Global Partnership for Sustainable Development Data National Data Roadmap Workshop in Costa Rica in 2018.
Service innovation: the hidden value of open data (Slim Turki, Dr.)
> Presented at the Share-PSI Krems Workshop: A self sustaining business model for open data
- http://www.w3.org/2013/share-psi/workshop/krems/papers/ServiceInnovation-theHiddenValueOfOpenData
- http://www.w3.org/2013/share-psi/workshop/krems/
> Summary
The development of a data-driven economy has been a major orientation of economic policies over the past few years, based on (i) the wider availability of data, promoted in particular by the Open Data movement, and (ii) the development of dedicated tools to support heterogeneous data and data in large quantities (Big Data). Reports anticipate the creation of enormous amounts of economic activity and growth opportunities. However, the promise of the data-driven economy lies to a large extent in the development of new services; the return on investment of open data policies, for instance, should be evaluated through the services created on top of open data sets. Open data promoters increasingly couple open data initiatives with actions dedicated to promoting the datasets for the creation of new services. Nevertheless, the results in terms of services created remain below the expectations of open data promoters: most services created are not sustainable and/or do not use the variety of available datasets, relying to a wide extent on a limited number of very popular ones. To make the promise of the data-driven economy a reality, it is therefore necessary to increase the reuse of, and the value extracted from, data by services. Our hypothesis is that service innovation approaches can help understand the mechanisms that drive the creation of services. We therefore propose to analyse the roles that data can play in the design of services, based on a theoretical framework of service innovation.
Presentation on a new system for regional document/information management, given by Timo Baur (CCCCC, Belize) at the Marketplace for Techies session at the 4th I-K-Mediary workshop in Bangladesh, January 2011.
Authoring Linked Data using Semantic MediaWiki (Laurent Alquier)
Semantic wikis have matured to become much more than wikis. Recent advances in Semantic MediaWiki make it an ideal, low-cost platform for data integration and the authoring of Linked Data. A practical example applied to translational research will be provided.
Slides from BioIT World 2011 talk - Informatics track.
HDI III - Healthdata.gov - Now, Next and Challenges (George Thomas)
This is a presentation that will be given at the 2012 Health Datapalooza (http://hdiforum.org), describing the new healthdata.gov site, its PaaS/DaaS direction, and related i2/ONC developer challenges.
Continuous Delivery and Micro Services - A Symbiosis (Eberhard Wolff)
Continuous Delivery profits from Micro Services - and the other way round. This presentation shows how the two technologies work together - and how Micro Services can be used to simplify the transition to Continuous Delivery.
Structure, Personalization, Scale: A Deep Dive into LinkedIn Search (C4Media)
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1Gel2jo.
The authors discuss some of the unique challenges they've faced delivering highly personalized search over semi-structured data at massive scale. Filmed at qconnewyork.com.
Asif Makhani heads Search at LinkedIn. Prior to that, he was a founding member of A9 and led the development and launch of Amazon CloudSearch. Daniel Tunkelang leads LinkedIn's efforts around query understanding. Before that, he led LinkedIn's product data science team. He previously led a local search quality team at Google.
Accelerate Report Migrations from Cognos to Power BI & Tableau (Senturus)
Learn how the Senturus Report Insights app decodes Cognos reports and models, accelerating report migrations to Power BI and Tableau. Save time. Save money. View this on-demand webinar with demos: https://senturus.com/resources/accelerate-report-migrations-from-cognos-to-power-bi-tableau/.
Senturus offers a full spectrum of services in business intelligence and training on Tableau, Power BI and Cognos. Our resource library has hundreds of free live and recorded webinars, blog posts, demos and unbiased product reviews available on our website at: http://www.senturus.com/senturus-resources/.
In this session, Melissa Sussmann, Lead Technical Evangelist at Sumo Logic, explores the company's contributions to open source projects. Sumo has made a serious commitment to OpenTelemetry (OTel), OpenSLO, and open core solutions. Melissa also discusses data collection and how open source tooling (such as Kubernetes, Prometheus, Fluentbit, and Fluentd) is used with Sumo Logic products.
Speakers:
Melissa Sussmann
This presentation covers project proposals, from the formal definition through worked examples, and explains the recommended format for writing an effective project proposal.
An informative guide to writing a solid project proposal, including a complete format for the proposal document, with enough detail to enable readers to write a good proposal of their own.
SAP has released a number of updates to their LMS Admin UI. In this slideshare, we will cover what is new and how you can prepare.
Interested in learning more? Watch this on-demand webinar. https://www.gpstrategies.com/archived-webinar/discovering-the-new-successfactors-lms-admin-features/
Lean Kanban India 2017 | Case study - Hybrid Agile Implementation Model to En... (LeanKanbanIndia)
Session Title: Case study - Hybrid Agile Implementation Model to Enable Enterprise Agility in 9 months (with focus on Kanban and Flow Management)
Session Overview: In this session we present a detailed case study of implementing agile at scale in a unique way to enable business agility in a large software product group of ~1500 people that develops a large, integrated enterprise product. Our model places a strong focus on Kanban and flow management, from teams to program to portfolio. The hybrid implementation model was developed to handle challenges such as: dependencies across product groups, some working in agile and some not; coaching limitations that did not allow all groups to be coached at once; a legacy product (~15 years) with veteran managers (culture and support); distributed development (5 locations in India); and, finally, high business pressure to respond fast to changes. In 9 months, with a team of 5 agile coaches, we achieved most of our business goals (feature cycle time reduced by 50%, time to market by 60%, throughput up by 30%, and a dramatic improvement in quality). In our session we describe this hybrid approach to implementing agile at scale for fast results.
Smart Data Webinar: A semantic solution for financial regulatory compliance (DATAVERSITY)
In this webinar, Mike describes a practical semantics-based approach to regulatory compliance and reporting for the financial sector using a reference ontology such as the Financial Industry Business Ontology (FIBO). This approach links the reference ontology to existing data resources with minimal disruption to existing data assets. The webinar describes the kind of ontology needed for this kind of application, the principles for building or extending a reference ontology, and some of the challenges in mapping it to legacy data.
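The "minimal disruption" point above amounts to annotating legacy records with reference-ontology terms rather than restructuring them. A sketch of that idea, where the legacy column names and the FIBO-like term strings are hypothetical placeholders:

```python
# Sketch: re-key a legacy record with reference-ontology terms, leaving
# the values untouched. The column names and fibo-* CURIEs below are
# invented placeholders, not the actual FIBO vocabulary.
COLUMN_TO_ONTOLOGY = {
    "cpty_id": "fibo-fnd:Counterparty",
    "ntnl_amt": "fibo-der:NotionalAmount",
    "trd_dt": "fibo-fbc:TradeDate",
}

def annotate_row(row: dict) -> dict:
    """Map legacy columns onto ontology terms; unknown columns pass through."""
    return {COLUMN_TO_ONTOLOGY.get(col, col): val for col, val in row.items()}

legacy = {"cpty_id": "C-42", "ntnl_amt": 1_000_000, "trd_dt": "2015-03-01"}
print(annotate_row(legacy))
```

Because the mapping lives outside the source system, the legacy database keeps its schema while downstream reporting can reason in ontology terms.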
Large-scale Reasoning with a Complex Cultural Heritage Ontology (CIDOC CRM)... (Vladimir Alexiev, PhD, PMP)
Vladimir Alexiev, Dimitar Manov, Jana Parvanova and Svetoslav Petrov. In proceedings of the workshop Practical Experiences with CIDOC CRM and its Extensions (CRMEX 2013) at TPDL 2013, 26 Sep 2013, Valletta, Malta.
Similar to Realizing the GPRAMA using Government Linked Data
Implementing the Open Government Directive using the technologies of the Soci... (George Thomas)
This presentation demonstrates the use of Semantic Web technologies with social networking tools, treating metadata specifications as social media. Example ontologies and instance data from the Capital Planning and Investment Control (CPIC) and Business Motivation models are created that link 'what' (Agency IT investments) with 'why' (Agency goals and objectives) using a simple linking ontology. Knowledge workers use a Semantic MediaWiki with the Halo extension to curate the data.
This presentation is the culmination of my detail to the E-Government Office in the US Office of Management and Budget and the work I did to evolve and mature initiatives like recovery.gov and data.gov.
'Transparency, Participation, Collaboration'
Solution Architecture works in progress for recovery.gov
This is a presentation I gave at the Sunlight Foundation's http://transparencycamp.org/ on 2/28/09.
With respect to whether the ideas and approaches I've expressed and advocated here will ultimately be realized by those now responsible for managing and operating this initiative - Caveat Venditor/Emptor.
A presentation that captures some basic ideas about connecting planning data with spending data, part of my OMB detail in support of the Obama Administration transparency and open Government goals.
Realizing the GPRAMA using Government Linked Data
1. Realizing the GPRAMA
using Government Linked Data
George Thomas, HHS
DoD 2011 SOA & Semantic Technology Symposium
2011-07-13, 3:45-4:20pm, standards (green) track
2.
About me…
• HHS OCIO
– Office of Enterprise Architecture
– Working on a variety of (mostly ACA related) modernization projects
• Data.gov PMO Semantic Web / Linked Data lead
– TPC: open to all!
– Send me an email if interested in participating…
• Clinical Quality Linked Data (CQLD)
– With the Centers for Medicare and Medicaid Services (CMS)
– See CQLD blog post on health.data.gov
• W3C Government Linked Data Working Group (GLD)
– Co-chair, member only, see W3C GLD wiki
– Focused on SW standards and best practices for OGD
• Graduate School
– SOA Instructor
– Part of their EA Certificate program
3. This Presentation
• Gov Performance & Results Act Modernization Act
– GPRAMA overview
• IT Dashboard
– Capital Planning Investment Control (CPIC) data
• Object Management Group Standards
– Business Motivation Model (BMM)
• ‘Bizmo’ Linking Vocabulary
– CPIC + BMM
• Agency Data Creation and Publication
– Empowering content owners, exposing machine readable data
• Freebase Demo
– Finding IT investments that support Federal Gov Goals
• Data Syndication and Aggregation
– Architectures and tool examples
4. GPRAMA Brief Overview
• Strategic Planning
– Qualifying the Ends
• Performance Planning
– Quantifying the Means
• Reporting
– Annual to quarterly
• Federal Priority Goals
– Cross Agency, Government-wide
• Agency Priority Goals
– Intra Agency
• (Formalizing new/existing Organizational Roles)
• (Training…)
8. BMM Terms
• A desired result is a generalization of goals and objectives
– A goal is something an Org is trying to achieve
– An objective quantifies a goal, specifying timing and measurement
• A course of action is something an Org does to achieve a desired result
– A strategy is a broad, lasting course of action
– A tactic is a narrow, fleeting course of action
• An influencer is something that can affect the Org’s ability to achieve its goals or implement its strategies
– An actuator is an influencer that can be considered as a quantity that can increase or decrease over time
• An assessment is a judgment of an influencer’s effect on an Org
– Strengths, Weaknesses, Opportunities and Threats (SWOT) are common kinds of assessments
15. Key Interlinking /bizmo#properties
– #supports properties link CPIC Ex53/Ex300s
• to BMM Strategies, Tactics, Goals, Objectives, etc.
– extending the Ex53/300 specs – without changing them!
– #maintainsExhibit53/300 properties link OrganizationalUnits
• to CPIC investment information
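The linking pattern above can be sketched with plain RDF-style triples. The namespace and resource names below are made-up placeholders, not the actual bizmo vocabulary URIs.

```python
# RDF-style triples as (subject, predicate, object) tuples.
# All URIs and CURIEs below are illustrative stand-ins.
BIZMO = "http://example.org/bizmo#"   # assumed namespace

triples = [
    ("ex53:NHIN-row", BIZMO + "supportsFederalGoal", "goal:HealthCareReform"),
    ("ex53:DataCenter-row", BIZMO + "supportsStrategy", "bmm:Consolidation"),
    ("org:HHS", BIZMO + "maintainsExhibit53", "ex53:NHIN-row"),
]

def objects_of(triples, predicate):
    """Follow one linking property, like a 'connection' in a faceted browser."""
    return [o for s, p, o in triples if p == predicate]

print(objects_of(triples, BIZMO + "supportsFederalGoal"))
# ['goal:HealthCareReform']
```

Because the links live in separate #supports triples, the Ex53/Ex300 records themselves never change — the extension is purely additive.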
17. Ontology Classes/Properties = Tags
• Create semantic annotations (part of WYSIWYG editor tools)
– auto-completion suggests tags to reuse from ontologies that have been imported into the wiki (note existing tags from BMM ontology)
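The auto-completion step can be sketched as a simple prefix match over the imported ontology’s terms. The term list here is illustrative; a real wiki would load it from the imported BMM ontology.

```python
# Tags available for reuse, as if loaded from an imported ontology.
# This list is an illustrative assumption, not the real BMM term set.
ONTOLOGY_TAGS = ["Goal", "Objective", "Strategy", "Tactic",
                 "Influencer", "Assessment", "CourseOfAction"]

def suggest(prefix, tags=ONTOLOGY_TAGS):
    """Suggest ontology tags matching the typed prefix, case-insensitively."""
    p = prefix.lower()
    return sorted(t for t in tags if t.lower().startswith(p))

print(suggest("o"))   # ['Objective']
```

Suggesting only terms that already exist in the imported ontologies is what keeps SME annotations consistent with the published class and property specs.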
18. NHIN RDF/XML IEP: Export Excerpt
• Wiki instance data and metadata curation (SME edits) maintain the class and property specs of ontologies imported off the Web when exported or accessed by other sites/users/apps
19. Browsing Linked Datasets on SMW
• Filtering through properties defined by Bizmo ontology
• finds CPIC instance data that is linked to BMM instance data
20. SDW = SNS enabled LOD
• This presentation is summarized by the interactive data below
• And – it’s a Wiki – SMEs can easily add annotations and data!
21. Freebase Demo (1 of 4) – CPIC+BMM
• Using a modern browser, search for ‘Exhibit 53’
– On http://freebase.com/labs/parallax
• Select ‘Exhibit 53 collection (2 topics)’, which takes you to:
– http://www.freebase.com/labs/parallax/browse.html?type=%2Fbase%2Fbizmo%2Fe53
22. Freebase Demo (2 of 4) – CPIC+BMM
• Two ‘topics’ (instances of type Exhibit 53) are returned
– From the schemas defined in the bizmo ‘base’
• One HHS and one GSA Exhibit 53
– Data sourced from itdashboard.gov extracts (circa ~2009 – perseverance!)
• Click on ‘Contains’ (on the right) in the ‘Connections’ browser
– Faceted browsing, RDF properties show up as ‘connections’…
23. Freebase Demo (3 of 4) – CPIC+BMM
• Three ‘topics’ (instances of type Exhibit 53 Recordset) are returned
– All from HHS, these are Ex53 ‘row’ entries (note other facets on the left)
• Which of these supports an Administration Goal?
– Click on ‘more connections’, then type ‘goal’ to filter properties on the fly
• Click on the ‘Supports Federal Goal’ property link (on the right)
– in the ‘connections’ browser to filter the Exhibit 53 Recordsets
24. Freebase Demo (4 of 4) – CPIC+BMM
• One ‘topic’ (instance of type Goal) is returned
– The Ex53 Recordset entry for the National Health Information Network
• Which links to an Administration Goal
– ‘Health Care Reform’ (these are notional/exemplary instances…)
• You’ve just browsed from all Ex53 entries to a specific entry
– Via the connections in the RDF Schema (as described previously)
26. Google’s PubSubHubBub (PuSH)
• A feed URL (a "topic") declares its Hub server(s) in its Atom or RSS XML file, via <link rel="hub" ...>.
• The hub(s) can be run by the publisher of the feed, or can be a community hub that anybody can use (Atom and RSS feeds are supported).
• A subscriber (a server that's interested in a topic) initially fetches the Atom URL as normal. If the Atom file declares its hubs, the subscriber can then avoid inefficient, repeated polling of the URL and can instead register with the feed's hub(s) and subscribe to updates.
• The subscriber subscribes to the Topic URL at the Topic URL's declared Hub(s).
• When the Publisher next updates the Topic URL, the publisher software pings the Hub(s) saying that there's an update.
• The hub efficiently fetches the published feed and multicasts the new/changed content out to all registered subscribers.
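The first two subscriber-side steps above — discovering a topic’s hub and forming the subscription request — can be sketched with the standard library. The feed content and URLs are made up for illustration; the `hub.mode`/`hub.topic`/`hub.callback` form fields are the ones defined by the PubSubHubbub spec.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# Example Atom feed declaring its hub (URLs are illustrative).
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Agency Investment Feed</title>
  <link rel="hub" href="https://hub.example.org/"/>
  <link rel="self" href="https://agency.example.gov/feed.atom"/>
</feed>"""

def discover_hubs(atom_text):
    """Step 1: find the hub(s) a topic feed declares via <link rel="hub">."""
    root = ET.fromstring(atom_text)
    return [link.get("href")
            for link in root.findall(ATOM_NS + "link")
            if link.get("rel") == "hub"]

def subscription_request(topic, callback):
    """Step 2: the form fields a subscriber POSTs to the declared hub."""
    return {"hub.mode": "subscribe",
            "hub.topic": topic,
            "hub.callback": callback}

print(discover_hubs(feed_xml))   # ['https://hub.example.org/']
```

Once subscribed, the subscriber simply waits for the hub to push new entries to its callback URL instead of polling the feed.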
27. Sindice.com – SemWeb Index++
• Structured Data
– RDF crawler
• Register URLs
– Manually
– Automated, using the Ping Submission API
• Check out
– JS Widgets and the Sig.ma interface
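An automated ping registration might look like the sketch below. Sindice is no longer online, and the endpoint URL and plain-text payload format shown here are assumptions for illustration only; the request is built but deliberately not sent.

```python
from urllib.request import Request

# Assumed endpoint — Sindice's actual ping API URL may have differed.
PING_ENDPOINT = "http://api.sindice.com/v2/ping"

def build_ping(urls):
    """Build (but do not send) an HTTP POST announcing new RDF documents,
    one URL per line in the request body (assumed payload format)."""
    body = "\n".join(urls).encode("utf-8")
    return Request(PING_ENDPOINT, data=body,
                   headers={"Content-Type": "text/plain"})

req = build_ping(["http://example.gov/dataset/ex53.rdf"])
print(req.get_method())   # 'POST'
```

Pinging after each publish keeps the semantic index fresh without waiting for the crawler to rediscover the dataset.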